Opened 8 years ago
Closed 8 years ago
Last modified 8 years ago
#4074 closed bug (fixed)
Forking not possible with large processes
Description
If a Haskell program requires a lot of memory, trying to fork() fails because, due to the program size, the clone() syscall takes long enough to be interrupted by the GHC runtime timer; the syscall is then restarted, only to be interrupted again.
This happens repeatedly with GHC 6.12, which seems to require noticeably more memory than 6.10 when building large Haskell programs on slower arches, and causes some problems with Haskell in Debian.
The problem can also be observed by running a simple C program that malloc’s a lot of memory (in the range of 1G) and then tries to fork with profiling enabled.
In the corresponding Debian bug report against libc (), which also has the demo C code, it was suggested that it might be the program’s responsibility to disable such timers while clone() runs.
Do you agree with that? Is it something you can do? Might this be related to #1882 (which mentions timers and fork)?
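The demo program from the Debian report is not reproduced here, but a minimal sketch of that kind of reproducer might look like the following. The SIGVTALRM handler and the 20 ms tick are illustrative stand-ins for the RTS profiling timer, not the actual demo code.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

/* Empty handler: its only purpose is to interrupt a slow syscall. */
static void tick(int sig) { (void)sig; }

int main(void)
{
    /* Allocate and touch ~1 GB so the address space is really populated. */
    size_t size = 1024UL * 1024 * 1024;
    char *mem = malloc(size);
    if (!mem) { perror("malloc"); return 1; }
    memset(mem, 5, size);

    /* Arm a virtual interval timer, roughly like the RTS tick. */
    struct sigaction sa = { .sa_handler = tick };
    sigaction(SIGVTALRM, &sa, NULL);
    struct itimerval it = { .it_interval = { 0, 20000 }, .it_value = { 0, 20000 } };
    setitimer(ITIMER_VIRTUAL, &it, NULL);

    /* On a slow machine with a large address space, this fork()/clone()
       may be interrupted by SIGVTALRM and restarted over and over. */
    pid_t pid = fork();
    if (pid == 0)
        _exit(0);
    printf("forked child %d\n", (int)pid);
    return 0;
}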
Change History (8)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
I'm having trouble constructing a test case in Haskell that displays the problem. Can anyone else do this?
comment:3 Changed 8 years ago by
It is probably hard to find a machine that is both slow enough (CPU wise) and large enough (memory wise). Here is my test code:
import qualified Data.ByteString as BS
import Control.Concurrent
import System.Process

main = do
  let size = 1000 * 1024 * 1024
      bs = BS.replicate size (fromIntegral 5)
  BS.minimum bs `seq` return ()
  forkIO $ putStrLn "Forked Child"
  runCommand "echo hi"
  putStrLn "Parent"
  BS.minimum bs `seq` return ()
On my computer, I could trigger the behaviour with a size of 1500*1024*1024:
vfork() = ? ERESTARTNOINTR (To be restarted)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn(0x1a) = 56
clone(child_stack=0xbea, flags=CLONE_FILES|CLONE_PTRACE|CLONE_VFORK|CLONE_DETACHED|126) = ? ERESTARTNOINTR (To be restarted)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn(0x1a) = 56
[..]
What is interesting is that the program uses vfork() at first. If this works (because no timer interrupt happens), this call is used:
vfork(Process 3243 attached ) = 3243
So there seems to be fallback logic that tries clone() if vfork() does not work, and it kicks in even in the case of ERESTARTNOINTR – probably not what was intended. But I guess this is independent of our issue.
If you have problems reproducing the error, you can check whether strace at least shows one or two failed calls due to ERESTARTNOINTR. If so, the problem is present; the symptoms are just not as bad.
I could not observe this problem with the threaded RTS right now.
comment:4 Changed 8 years ago by
I'm not able to reproduce it, but I've applied the following fix anyway:
Tue May 18 04:32:14 PDT 2010 Simon Marlow <marlowsd@gmail.com> * Fix #4074 (I hope). 1. allow multiple threads to call startTimer()/stopTimer() pairs 2. disable the timer around fork() in forkProcess() A corresponding change to the process package is required.
and the process package patch:
Tue May 18 09:36:17 BST 2010 Simon Marlow <marlowsd@gmail.com> * Fix #4074: disable the timer signal around fork()
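Both patches amount to the same idea: stop the RTS interval timer before calling fork() and restart it afterwards, so the underlying clone()/vfork() can complete without being hit by SIGVTALRM. A minimal sketch of that approach, assuming a setitimer-based timer; stop_timer/start_timer are illustrative stand-ins for the RTS's stopTimer()/startTimer(), not the actual patch:

#include <sys/time.h>
#include <unistd.h>

static struct itimerval saved;

/* Disable the virtual timer, remembering its previous setting. */
static void stop_timer(void)
{
    struct itimerval off = { { 0, 0 }, { 0, 0 } };
    setitimer(ITIMER_VIRTUAL, &off, &saved);
}

/* Restore the timer to whatever it was before stop_timer(). */
static void start_timer(void)
{
    setitimer(ITIMER_VIRTUAL, &saved, NULL);
}

pid_t fork_without_timer(void)
{
    stop_timer();
    pid_t pid = fork();
    /* Runs in both parent and child: each re-enables its own timer. */
    start_timer();
    return pid;
}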
If you can test the fix that would be very helpful.
comment:5 Changed 8 years ago by
comment:6 Changed 8 years ago by
Both merged.
comment:7 Changed 8 years ago by
So far I have avoided having to build ghc6, so I'm reluctant to do it now. Maybe when I find some spare time for it. But I'm optimistic that this fixes the problem.
comment:8 Changed 8 years ago by
I could reproduce the bug with a GHC built just before the patch was applied, and I couldn't reproduce it with a GHC built just after the patch was applied, so I assume this patch fixes the bug.
I think we probably should disable timers in forkProcess; I'll look into it.
Incidentally, this timer/fork issue is the reason that #3998 is hard to fix. The workaround we use in the process library is to use vfork instead of fork. (#1882 is unrelated, I think)
https://ghc.haskell.org/trac/ghc/ticket/4074
FRONT of HOUSE
May 2013
FOH Publisher Acquires MMR, SBO, JAZZed and Choral Director
Gibson Amphitheatre Announces Plan to Close in September
Crew Member Dies after Fall at AT&T Center in San Antonio, TX
22  Show Report: Prolight + Sound, Hot Live Sound Products Debut in Frankfurt
34  Production Profile
44  Tech Preview: SSL Enters Live Sound Market
46  Tips & Tricks: Live Strings for the Rush Tour
48  The Biz: Weather and Other Touring Dangers
50  Sound Sanctuary: Tech Advice for H.O.W. Volunteers
From L-ACOUSTICS comes the classic KIVA Modular Line Source: KIVA's response to 40 Hz and an LF impact that brings down the house. A must-hear in 2013. More at
Talk is Cheap. From the makers of the world's most rider-friendly digital mixers, Yamaha's newest addition, the CL Series, comes from a strong heritage of touring consoles. Featuring the same output structure of the legendary PM5D with even more inputs along with phenomenal sound quality, the CL Series is a wise business investment for more reasons than we can list. But we don't expect you to take our word. We expect to give it to you. Here's how. For a limited time, take the CL5 for a 72 hour test drive from one of our participating dealers, free of charge.* A complete demo system including: a CL5 with 2 Rio3224-D I/O units in a stage rack and a 150 snake are yours to borrow for any gig you'd like for 72 hours. That's a very capable 72 input + 35 output system. Take it for a test drive and experience for yourself why the CL Series is worthy of your praise.**
Ohio Eighth Day Sound Highland Heights Maryland Washington Professional Systems Wheaton Oregon AGI, Inc. Eugene Michigan Thunder Audio, Inc. Livonia Pennsylvania Philadelphia Sound Productions, Inc. Minnesota Philadelphia Reach Communications Brooklyn Park South Carolina Missouri Audio Communications Systems, Inc. Cignal Systems St. Louis Columbia Sounds Great Springfield Paragon Productions, Inc. Rock Hill Mississippi Tennessee Millennium Music Center Hattiesburg CTS Audio Brentwood North Carolina Memphis Audio Memphis Draughon Brothers, Inc. Fayetteville Spectrum Sound, Inc. Nashville SE Systems, Inc. Greensboro World Class Acoustical Visual Elements (WAVE) Charlotte
*Dealer may require deposit, shipping charges, etc. **Offer valid in the U.S. and Canada only. Promotion expires on June 30th, 2013.
Yamaha Commercial Audio Systems, Inc. P . O. Box 6600, Buena Park, CA 90620-6600 2013 Yamaha Commercial Audio Systems, Inc.
CONTENTS
Columns
47  On the Digital Edge: David Morgan continues his discussion on using near-field studio speakers in certain sound reinforcement applications and uncovers some real-world solutions.
Vol. 11.08
fohonline.com
Features
22  Show Report: More than 113,000 industry pros made the trek to Frankfurt for Musikmesse / Prolight + Sound, which offered no shortage of cool new products for sound pros, and FOH was there to check out the action.
48  The Biz: Touring (or doing any outdoor concerts) is tough enough without having to face extreme weather and other dangers. Dan Daley offers some practical advice for helping you get through a tough season.
38  Buyer's Guide: Today's headworn mics can equal or best many handheld models. Here are 22 for serious pro use.
50  Sound Sanctuary: The Ten Commandments might be essential to the message, but if you work with volunteers at your H.O.W., then Jamie Rio's Ten Suggestions for great sound may be exactly what your crew needs to know.
Departments
4  Editor's Note
52  FOH-at-Large: Few of us have the time to contemplate heady thoughts, so check these tenets for a sound philosophy.
Circulation: Stark Services, P.O. Box 16147, North Hollywood, CA 91615
Business, Editorial & Advertising Office: 6000 South Eastern Ave., Suite 14J, Las Vegas, NV 89119. Ph: 702.932.5585, Fax: 702.554.5340
Front Of House (ISSN 1549-831X) Volume 11 Number 8 is published monthly by Timeless Communications Corp., 6000 South Eastern Ave., Suite 14J, Las Vegas, NV, 89119. Periodicals Postage Paid at Las Vegas, NV and additional mailing offices. Postmaster: Send address changes to Front Of House, P.O. Box 16655, North Hollywood, CA 91615-6147. Front Of House is distributed free to qualified individuals in the live sound industry in the United States and Canada. Mailed in Canada under Publications Mail Agreement Number 40033037, 1415 Janette Ave., Windsor, ON N8X 1Z1. Overseas subscriptions are available and can be obtained by calling 702.932.5585. Editorial submissions are encouraged, but will not be returned. All Rights Reserved. Duplication, transmission by any method of this publication is strictly prohibited without the permission of Front Of House. Publishers of...
Capture live multi-track audio without a computer Unbalanced/balanced analog, AES/EBU, Lightpipe, MADI and Dante digital versions available Can attach to any live console to provide virtual soundchecking Records onto a regular USB2 drive, or USB Flash drive Fail safe features for secure audio capture USB drive plugs straight into workstation for instant editing - no wasted time transferring files
Multiple units can be stacked for larger recordings Dedicated multi-channel playback option available
Distributed in the USA and Canada by FullScaleAV Tel: 615-833-1824 | | info@fullscaleav.com
EDITOR'S NOTE
By George Petersen. Maybe it's just because I live in an area populated with a lot of people, schools and colleges (San Francisco), but I can definitely recall years of hell weeks as local sound companies would try to juggle covering 25 or so overlapping graduation ceremonies over a few days. These are certainly not the most exciting gigs to mix; they rank right up there with political speeches, cardiology conventions and city council meetings, but they are definitely on the schedule of nearly every local sound company. And although anything but glamorous, these bread and butter things do help pay the bills and rarely have clients complaining about the monitor mix.
Catch George's commentary at fohonline.com/foh-tv, or click on the picture from your digital edition.
fers advice and some basics every H.O.W. volunteer should know. It's a must-read. But knowledge comes from all kinds of sources. Recently our ProAudioSpace.com forum had a lively discussion about topics like XLR shell wiring and limits on how loud a show should be. And our resident scientist Phil Graham, writing about subwoofer placement in this issue (pg. 42), definitely opened my eyes with some interesting approaches to LF arrays. Don't miss it. And there's a lot more in this month's FRONT of HOUSE, whether backstage at the Bon Jovi tour (pg. 34), on the show floor of Musikmesse/Prolight+Sound (pg. 22) or delving into the nuances of Solid State Logic's first console for live sound (pg. 44). Check it out! Email FRONT of HOUSE editor George Petersen at george@timelesscom.com.
President, Group Publisher, Entertainment Division Vice President, Editor, Managing Editor, Art Director
Terry Lowe tlowe@timelesscom.com Greg Gallardo gregg@timelesscom.com William Hamilton Vanyo wvanyo@timelesscom.com George Petersen george@timelesscom.com Frank Hammel fh@timelesscom.com Garret Petrov gpetrov@timelesscom.com Mike Street mstreet@timelesscom.com Kevin M. Mitchell kmitchell@timelesscom.com Phil Graham pgraham@timelesscom.com Dan Daley, Steve Jennings, Steve LaCerra, Baker Lee, David Morgan, Jamie Rio, Josh Harris jharris@timelesscom.com Matt Huber mh@timelesscom.com Mike Devine md@timelesscom.com Erin Schroeder erin@timelesscom.com
Image depicts components from 6.2 litre V8 engine from a 2009 Chevrolet Corvette ZR1. Not included with PLM Series, unsurprisingly.
Built for serious driving, the flagship PLM 20000Q Powered Loudspeaker Management system is a classic example of a whole that far exceeds the entirety of its parts. It starts with sheer muscle power combined with Lab.gruppen's latest efficient innovations in power supply topology. Four bridgeable 5000 W* output channels pack 20,000 W of flexible power into a 2U platform together with two Lake Processing modules. And everything is precisely managed by the industry renowned Lake Controller interface. Take to the road with PLM Series.
Web Master
Office Administrator
SOLOTECH EXPANDS
BACK IN BUSINESS
NEWS
FRONT of HOUSE Publisher Acquires the Assets of Symphony Publishing
LOS ANGELES For Rihanna's Diamonds world tour, which is traveling through the U.S. and Canada from March through May 2013 before heading overseas, Eighth Day Sound provided a d&b audiotechnik PA along with a pair of DiGiCo SD7 consoles, manned by Kyle Hamilton at FOH and Ed Ehrbar at monitors. Although Ehrbar was working on an SD7 for the whirlwind week, Hamilton had to make do with another digital desk and was happy to be back on the SD7. "When I was first introduced to DiGiCo, I was using a D5 on Mary J Blige and Lionel Richie. Then the SD7 came out; I used it with Lionel Richie, Janet Jackson, Prince and now Rihanna," said Hamilton. "The SD7 just feels right. It's the digital desk that feels analog to me; it sounds warm. I like the fact that the SD7 is transparent. What you put in is what you get out of it. Also, this console has tons of headroom." Each SD7 is in an Optocore fiber loop with two DiGiCo SD 192 kHz racks along with one mini SD rack to wrangle the mass of over 96 inputs for the four-piece band and backup singers. The majority of the inputs were taken up by three drum kits, two keyboard rigs and 24 channels of Pro Tools.
LAS VEGAS Timeless Communications Corp. (TCC), publisher of FRONT of HOUSE, PLSN and Stage Directions magazines, announced April 30 that it had acquired the assets of Symphony Publishing. Effective May 1, 2013, Timeless has assumed the role of publisher of Musical Merchandise Review (MMR), School Band & Orchestra (SBO), JAZZed and Choral Director magazines. "We're excited to receive the stewardship of four exceptional magazines," said TCC president/founder Terry Lowe. "It's a perfect fit." Lowe is a lifelong veteran of the publishing industry. He founded PLSN in 1999, launched FOH in 2001, and purchased Stage Directions in 2007. Prior to the acquisition, Timeless Communications' titles had a combined circulation of 60,000 subscribers, of which 6,500 are in 120 countries outside of North America. With the addition of the Symphony titles, TCC will have over 100,000 professionals in music and event production subscribing to its seven magazines. All of these titles, along with the current TCC magazines, have a full complement of digital media products associated with them. Each magazine has its own website, email newsletter and digital edition, and all are accessible on all mobile devices. TCC will be enhancing all of the newly acquired titles with a redesign within the next 12 months, along with a revitalization of the infrastructure of digital transmissions to provide subscribers with more options and ease of use. Lowe is also co-founder and executive producer of the Parnelli Awards (parnelliawards.com), which is in its 13th year of honoring live event professionals. This year's awards ceremony will take place at the Mirage Resort and Casino in Las Vegas on Nov. 23, 2013. For more information about the Parnellis, go to parnelliawards.com.
INDUSTRY NEWS
Boston Area Events Canceled After Bombing; Boston Strong Benefit Set for May 30
BOSTON A benefit concert to raise funds for Boston Marathon bombing victims has been set for the TD Garden Arena here May 30. Performers will include Aerosmith, NKOTB, James Taylor, Jason Aldean, Jimmy Buffett, Carole King, Boston and J. Geils Band, according to TD Garden and Live Nation New England. The show was announced by Boston's mayor and Massachusetts' governor shortly after the bombings. "From the promoters to the sound and lights, to the performers and the Garden staff, everyone involved has responded from the heart in a spontaneous and simultaneous desire to be there, and to do what we can for the city we love," said James Taylor. "I am honored to be a part of it." An earlier concert benefit, We (Heart) Boston, had been set to raise funds for The One Fund as well. Set to take place at the Royale Boston on April 19, it had to be canceled; that was the day of the Boston lockdown, manhunt and shootout with the bombing suspects. We (Heart) Boston was just one of numerous concert, theatre and sports events that were canceled, first on April 15, the day of the bombings, then again on April 19 as the city's manhunt intensified and the two bombing suspects were killed and captured. That lockdown included a shutdown of virtually all municipal and regional mass transit.
From left, Eric Bourgeois, Bill Lawlor, Denis Lefrancois, Larry Medwin and Brian Konechny
INDUSTRY NEWS
LAS VEGAS and LOS ANGELES 3G Productions Inc. recently acquired a Martin Audio MLA system to provide greater sound control at electronic dance music festivals and expand the company's touring capabilities. Founded by Eli Stearns and Jay Curiel in 2004, 3G's corporate clientele includes a vast array of Fortune 100 companies, as well as close relationships with Las Vegas hotel and casino groups. The company also specializes in touring and events such as the Beyond Wonderland Festival for Insomniac at San Bernardino's San Manuel Amphitheater, which served as the MLA system's test drive. Response to the system was uniformly positive in terms of coverage, loudness, accuracy and control, so 3G decided to initially acquire 32 MLA, four MLD downfills, 24 MLX subs and 24 MLA Compact enclosures. For 3G, an intrinsic part of MLA's appeal was based on its ability to control SPLs at festivals
On the roadFlexible, tough multipair-cables for permanent use Robust stagebox-systems User-friendly cable solutions Customer oriented manufacturing Big stock and fast delivery
NEW YORK Clair Brothers Audio Systems installed a comprehensive sound reinforcement system within The Theater at Madison Square Garden, a 2,000-to-5,600 capacity venue within the larger MSG complex that is used for concerts, stage shows, meetings and graduation ceremonies. The sound system centers on Harman's JBL VTX line arrays powered by Crown Audio I-Tech HD amplifiers. The Garden's production staff provided critical design and logistical support. The main loudspeaker system consists of
INDUSTRY NEWS
LONDON Edwin Shirley, co-founder of Edwin Shirley Trucking (EST Trucking) and the driving force for a variety of other businesses supporting staging, trucking, crew buses, freight forwarding, rehearsal space and film production, died April 16 after a battle with cancer. A member of Britain's National Youth Theatre (NYT) in 1965, Shirley later shifted from acting/directing to handling lighting tours (and inevitably driving a truck) for Paul McCartney, David Bowie, Ike and Tina Turner and others. By 1974, Edwin Shirley Trucking Ltd. (later known as EST Trucking) was born, a trucking company dedicated to the needs of concert touring. In the 1980s, Edwin
The new Margot V. Biermann Athletic Center is using D.A.S. Audio Aero Series 2, Artec, and Arco systems gear.
INDUSTRY NEWS
Florida's Parker Playhouse Gets Digital Console
FORT LAUDERDALE, FL In addition to his role as owner of Anton F. Audio Design, Anton Foresta serves as head audio engineer for Parker Playhouse, a 1,168-seat venue operating since 1967. Foresta recently acquired a Soundcraft Si Expression 3 digital mixer for use at Parker Playhouse, as well as other one-off events. Parker Playhouse hosts various productions for the Fort Lauderdale area and needed a digital mixer that would fit its needs. "We host everything from a single mic production, like a comedian, to a full band. The Parker Playhouse just recently hosted comedian Jim Breuer, and Alan Parsons before that," said Foresta, who is also a touring FOH engineer for Arrival: The Music of ABBA, Direct From Sweden. Since he began using the Si Expression 3, Foresta has appreciated its performance and unique features. "Everything has been solid. I like the I/O, the 32 inputs and 16 outputs. Having a graphic EQ on every single output is an advantage that's unique to this board as well. At this price for a Soundcraft console, nothing compares," Foresta concluded. "The preamps on the Expression sound fantastic. I've used all types of consoles and this is the best-sounding console I've found in the small-format category."
LANCASTER, PA For a March tour stop at the American Music Venue here, country artist Gary Allan performed in support of his latest CD, Set You Free. Bauder Audio provided a Nexo GEO S12 line array for the date with support from Bauder systems engineer Tom Hogle and Allan's A2, Sean Gary. Jason Spence specified a Yamaha PM5000 console for the entire Gary Allan tour, furnished by Sound Image of Nashville. The sound system included 24 Nexo GEO S12s over eight RS18 Ray Subs, all powered and processed with NXAMP4X4 amplifiers. "Besides the fact that the GEO S12 sounds great, it's easy to set up, which makes for a quick load-in," said Hogle. "Nexo continues to manufacture studio monitors for large venues!" remarked Chris "Sully" Sullivan after mixing front of house for the show. Although most major tours are supported by digital mixing these days, Yamaha's analog PM5000 was still deemed the right choice for the tour, Spence noted. "We had options and, in fact, listened to other consoles. However, after listening to the 5K, Gary and the band chose to go analog."
Gary Allan Tour Gets Assist from Bauder Audio and Sound
From left, Bauder Audio's Tom Hogle with Chris "Sully" Sullivan
Hometown Spotlight...
SD9: 96 channels of DSP, 48 Flexi Channels, 16 Flexi Busses (stereo), DiGiTuBe, 8 Matrix, 16 Graphic EQs, Multi Channel Folding
Making the right choice:
SD10: 96 Input Channels inc. 12 Flexi, 48 Configurable Busses Plus Master, 16 x 16 Matrix, 16 DiGiTuBes, 216 Dynamic EQs, Multi Channel Folding
Spectrum Sound, Inc., a well respected professional audio equipment and services company from Nashville, have made the right choice with DiGiCo's SD9 and SD10 digital live sound consoles. Ken Porter, President, comments: "From rentals to installs, the DiGiCo SD consoles have proven to be a valuable choice to offer our clients. The flexibility, superior detail, and the configuration of inputs and outputs within a smaller work platform make the SD9 and SD10 a perfect fit for Spectrum Sound, Inc." While Curtis Flatt, FOH Engineer for Michael McDonald and Wynonna Judd, enthuses:
"The SD10 blew me away on my first use. The image and sonic quality get your attention immediately. I knew the SD series was going to be the choice for me. Great company. Great consoles. The perfect match."
US distribution: Group One Ltd, Toll Free 877 292 1623
INDUSTRY NEWS
St. Augustine's Church Installs New Speakers, Processing
MINSTER, OH Stage Right Productions helped tackle the challenging acoustics within St. Augustine's Catholic Church with Community's Entasys 200 loudspeakers and a dSPEC loudspeaker processor. The interior is highly reverberant, well suited for its pipe organ and 78-voice choir, yet always a challenge for voice intelligibility. "The old sound system put the sound everywhere," said Stage Right's Steve Merrill. "The reverb and echoes were terrible and I had to overcome these problems." Merrill had used Community's original Entasys Column Line Array on another project but was attracted to several versatile models in the newer Entasys 200 family. He halfway back in the church and delayed them with the dSPEC. "The sound seems to come from the lector, not the loudspeakers. Now, the sound is great everywhere. The intelligibility is excellent; you can even hear
From left, Dave Christenson and David Ellis of Lift, David Croxton of KV2, Steve Palermo of Lift and George Krampera Jr. of KV2.
KV2 Audio Names Lift Distribution as Its Top Rep Firm in Pacific Region
Lift Distribution, based here, which distributes products from KV2 Audio in the U.S. and Canada, was named KV2's top distributor in the Asian Pacific region for 2012. The announcement was made at the KV2 factory near Prague, Czech Republic. Lift Distribution took on the KV2 brand in September 2011 and grew the profile of the brand substantially through 2012 by making systems available for major events and reconnecting with KV2 owners across the U.S. and Canada. Their efforts
have resulted in a steady stream of mobile and installed system sales throughout the region. "When looking for a new distributor in the U.S. and Canada, we really wanted a small, hands-on, highly technical company that could promote the benefits of KV2 and specify the range correctly into the right applications," said KV2 director of sales and marketing Dave Croxton. "Lift have exceeded our expectations in every way and we look forward to strong growth with them in the future."
breath noises!" Merrill replaced older lavaliers with new Audio-Technica headworn mics and added new Ashly amplifiers to power the Entasys 212s.
NASHVILLE Music City Center (MCC), set for a grand opening May 19-20, 2013, named LMG as its preferred AV supplier. Noting a multi-year contract to support the MCC's AV needs with dedicated onsite staff and equipment, LMG also recently named Curt Wallen accounts manager for the site. The Music City Center features 1.2 million square feet, a 350,000 square foot exhibit hall, a 57,000 square foot grand ballroom and 18,000 square foot junior ballroom, and about 1,800 parking spaces. It also offers 90,000 square feet of meeting room space, approximately 60 meeting rooms, and 32 loading docks that provide ultimate flexibility and ease of loading in and out for convention planners. On track to receive LEED Silver certification, the facility's green features include its curvilinear 175,000-square-foot roof, which is designed to collect rainwater for landscape irrigation and toilet facilities with a 360,000-gallon storage tank. "We are thrilled to announce our new relationship with the Music City Center, and believe this unique facility will elevate Nashville as the next great meeting destination," said LMG CEO and president Les Goldberg, citing the company's recent expansion and proximity to the MCC as a boon for AV clients. LMG's new 24,000 square foot Nashville office and warehouse, less than five miles from the Music City Center, opened in October 2012. Goldberg also noted that the LMG Design Studio, which offers a hands-on environment for clients to brainstorm and discuss creative ideas while immersed in the latest live entertainment technology, is walking distance from the MCC. Founded by Goldberg in 1984, LMG has three business segments: show technology, systems integration, and touring, and offices in Orlando, Tampa, Las Vegas, Dallas and Nashville. LMG has provided video, audio and lighting support for some of the world's largest conventions and meetings, nationally televised events and international concert tours.
PRG VP Bob Rendon and systems engineer Bill Daly
Lab.gruppen FP Series amplifiers delivered the necessary horsepower while a Yamaha M7CL console handled the mix. Daly elaborated, "For broadcast shoots like this it's always challenging to keep the house sound from bleeding into the broadcast feed. Between the XTA processor and the clean directionality of the VUE speakers we were able to achieve really good isolation with minimal effort."
Yamaha's CL-5 proved a good fit.
equipment and support was instrumental in helping production sound mixer Josh Reid, associate sound designer Nicholas Pope and myself achieve our vision. With the orchestra located on one side of the house, Schreier's main objective was to
NEW ORLEANS Sound designer Michael Paz turned to Outline's iMode technology for the networking backbone at the renovated Little Gem Jazz Saloon, a nightclub/restaurant in New Orleans' historic Jazz Alley. "There are two separate performance spaces one floor apart, and Outline's iMode can be used to monitor, control and run the same program material through both systems simultaneously," Paz noted. The upper and lower performance spaces of the Little Gem Saloon have almost identical front-of-house systems based on three Outline DVS12P-iSP self-powered, 12-inch, two-way trapezoidal cabinets arrayed in a left-center-right configuration. A total of five Outline iSM112-iSP cabinets, two on the ground floor and three in the upper level space dubbed The Ramp Room in homage to Rampart Street, act as floor monitors in the venue. In addition, one Outline DVS118SW-iSP single 18-inch subwoofer rounds out the installation. All iSP-designated speakers are iMode-capable. "In a historic venue like the Little Gem Saloon, the sound system should be pleasing to the eye, but invisible to the ear," said Tom Bensen of Outline North America.
The Richard Knox Trio performs at the Little Gem.
GLOBAL NEWS
MEXICO CITY Tecno Son Espectaculos has added the Nexo STM series to its rental inventory. The purchase includes 36 sets of STM cabinets, a total of 108 Main M46, Bass B112 and Sub S118 cabinets, and the Nexo Universal Amp Rack (NUAR) power and management system. One of the largest sound rental providers in the country, Tecno Son Espectaculos is led by Sergio Zenteno. With 30 years' experience in the industry, Zenteno has been a member of Nexo's STM project "since the beginning, when it was a complete secret," he noted. As a freelance engineer himself, and the first person to bring line array itself to Mexico, Zenteno has in-depth knowledge of Nexo systems, including the large-format GEO D and GEO T arrays. Zenteno said that while the new STM rig was a big investment, it is also a good business decision. He was also impressed by the STM system's exceptional SPL performance and its unique modularity, which allows him to split it into several small systems or combine it on huge festivals like Vive Latino, Mexico City's largest music event, in the Foro Sol, Azteca Stadium and other large-scale venues.
The Noa Beach Club on Pag is using d&b audiotechnik C3 line arrays.
ZRĆE, Croatia The owners at Noa Beach Club, located in this town on the island of Pag, may not have to worry about its amplified music rousting early-bird Italians from their slumbers across the Adriatic Sea. But there are other residents living on islands adjacent to Pag in the northern Dalmatian archipelago. To curb the likelihood of noise complaints, Tomislav Kuki Koran, from Rijeka-based Sunflower, steered the club toward C3 loudspeakers from d&b audiotechnik. Noa Beach Club owners Ivan Joki and Zlatko Balako were well aware of the need for a good sound system before they could hope to attract internationally known DJs. At first, the d&b C3 loudspeakers might seem a surprising choice for a beach location, but as Koran noted, the Noa Club's needs were unusual in that respect. Firing out to sea made noise
From left, Simon Bull, Martin Audio; Martin Connolly, Capital Sound; Anthony Taylor, Martin Audio; and Keith Davis, Ian Colville and Paul Timmins, Capital Sound.
LONDON Capital Sound said it would be providing a Martin Audio MLA system along with DiGiCo consoles for the main stage at the AEG-Barclaycard British Summer Time Concerts in London's Hyde Park this July. Acts set to perform during the July 5 to 14 outdoor concert series include the Rolling Stones and Bon Jovi. It is the first year that AEG has promoted the shows and that Capital Sound has been asked to undertake this high profile role. The company is understandably delighted by the appointment. "We are very pleased to have been asked by AEG to provide the sound for the Hyde Park main stage, using the Martin Audio MLA system and DiGiCo consoles," said Capital Sound project manager Martin Connolly. "Following extensive listening tests, the system was chosen thanks to its exceptional controllability and audio quality. We know it will perform extremely well and are really looking forward to the shows."
GLOBAL NEWS
This year's Carmen followed 2012's La Traviata
Riva Audio, XLR Complete Major Renovation for Opéra Royal de Wallonie
LIÈGE, Belgium After more than two years of restoration work, the 1875-built Théâtre Royal de Liège, home of the Opéra Royal de Wallonie (ORW, also known as the Royal Opera of Wallonia), has re-opened its doors to the public. The 31-million-Euro renovation project features a full L-Acoustics sound system, installed by Riva Audio and XLR sprl. The setup includes 20 KIVA cabinets (two six-KIVA arrays in front of the curtain and two four-KIVA arrays to serve the lower seats) plus eight SB18 and 14 8XTi as infills. Two four-KIVA arrays round out the new design for the 1,440-seat venue. "The big challenge was to persuade the city officials and opera management to place new audio gear in this historic 19th century building; they wanted the venue to be sound reinforcement-less," said Frédéric Vard, Riva Audio's managing director. The entire venue was cabled with Cat-6 fiber optics, replacing the traditional microphone snakes. "Installing Soundcraft stageboxes for the signal transfer from the stage to the Soundcraft Vi4 console was a crucial measure," said Vard. "In an opera environment, noiseless connections are essential throughout the system."
M and N chose One Systems 108IM and 112IM speakers for the 247,000-square-foot facility.
Panasonic System Japan handled the install. The ability to change the coverage pattern on the horns to 105 x 60 degrees was a huge plus for this application.
Sick of getting your subs blown by EDM? Can't turn up loud enough without damage?
fohonline.com/app
GLOBAL NEWS
Swedish Nightclub Doubles Up With Digital Consoles
STOCKHOLM Hamburger Börs offers live music and also hosts dinner shows and corporate events. Head of sound David Granditsky recently upgraded the venue's FOH and monitor consoles. The new setup includes a pair of Innovason Eclipse GT consoles and two DioCore racks (64 and 48 inputs) along with three SR16 mobile stageboxes to replace analog snakes. One of the Swedish capital's most illustrious venues, at the same locale for nearly three centuries, the original venue was demolished in the 1970s. The new facility built on the same site has hosted Sammy Davis Junior, Liza Minnelli, Tower of Power, Grace Jones, Rod Stewart and many more. Granditsky, who serves as FOH engineer for Hamburger Börs, was looking for a system based on an audio network that would minimize analog cable runs and facilitate system configuration. Onboard effects were also cited as an important factor, along with size and weight. "We sometimes need to move the monitor console down from the balcony where it is normally situated. This is now a considerably less daunting task than it was previously," said Granditsky. "The Eclipse's small footprint has enabled us to gain six more seats, another bonus. We also like the M.A.R.S. recorder; we used it the other day to record a live album that was released a couple of weeks after the opening of the current show and it worked like a dream."
The club added a pair of Innovason Eclipse GT consoles and two DioCore racks.
MPM, which has used Adamson gear for the Carcassonne Festival and Scorpions, is stocking up on E15s.
I have been using the i5 on snare (top and bottom) for five years, and it's become one of my favorites. This microphone has an incredible SPL response with a smooth low end, and is durable enough to stand up to all the abuse from touring. Stephen Shaw - Front of House Engineer - Buckcherry
The Audix i5 is a workhorse and is one of the most durable mics I own. It can adapt to most situations, but I prefer it on snare because it doesn't color the natural tone of the drum. Joe Amato - Front of House Engineer - The Gaslight Anthem
Thanks to the Audix i5, getting a great snare drum sound is something that I take for granted. The i5 is what style of music. It is equally outstanding on stage and in the studio. The i5 keeps everyone happy: drummers, engineers, producers, and the audience. Charles A. Martinez - Front of House Engineer - Steely Dan
MOSCOW Allen & Heath's Russian distributor, MixArt, provided a new GLD digital mixing system to manage FOH and monitors at the V.L. Durov Animal Theatre. The facility has been entertaining animal lovers of all ages since it was founded in 1912 by renowned clown and animal trainer Vladimir Leonidovich Durov. The revamped sound system includes a QSC line array, a GLD-80 console supplemented by GLD-AR2412 and GLD-AR84 I/O audio racks. "The GLD has a user-friendly interface similar to that of analog mixers, so it's easy for the theatre's sound engineers and technical staff to learn even if they had never used a digital mixer before," said MixArt's Igor Eremin. "Secondly, it is a very compact system, so we were able to install it at the back of the seating area, instead of putting it into a closed audio control room. Thirdly, considering the mixer's great functionality and quality of sound, it is modestly priced."
This mic is slammin'! If you're tired of a heavy stick hit blowing your snare mic cap to pieces, you'll love the Audix i5! Anthony Roberts - Monitor Engineer - Tower of Power
On the road you need three things: WD-40, gaffer tape, and an Audix i5. Use the first if it won't move, the second so it doesn't move, and use the i5 when it has to sound good. The Audix i5 is the thinking man's standard for an all-purpose snare mic. Howard Burke - Front of House Engineer - Little Feat
When JD Blair (Shania Twain) is out with us, I use only Audix mics on his kit. I have also used them for Derico Watson (Victor Wooten Band) for years. For full clarity, body, and accurate snare reproduction, I trust only the i5. Audix has never let me down! Jack Trifiro - Front of House Engineer - Shania Twain, Victor Wooten Band
I am quite familiar with the Audix i5, because I use it on both of Travis Barker's snares. The i5 handles the high SPLs of his fast and hard playing, as well as the subtle nuances of his delicate rolls, all without coloration or distortion. This mic helps me get a great mix! Jason Decter - Front of House Engineer - Blink 182
Audix is extremely proud of our award-winning i5 dynamic microphone, and of the many prestigious artists and audio pros who rely on it for live performances and studio sessions. The i5 accurately captures the backbeat of every drum kit: the snare drum.
Pictured with the DVICE, a patented rim mount clamp with flexible mini-gooseneck.
ON THE MOVE
Adamson Systems Engineering added Brian Fraser to its headquarters sales and support team. Fraser, who has worked in production management, at front of house and at monitors, has more than a decade of experience working in the live, touring and installation markets.
Audio-Technica U.S. named Gary Dixon sales engineer/installed sound. Prior to Audio-Technica, Dixon worked with CAD Audio, SR Marketing, Integra Enterprises and Hudson Cable Television.
EAW named Mega Audio GmbH, based in Bingen, Germany, as its distributor in Germany.
Harman Professional promoted Mark Gander, who first joined the company in 1976 as a transducer engineer, to director of JBL technology. Since joining the company he has served in various product engineering and marketing positions. Harman also named John Powell vice president of sales. Powell has been with Harman's sales team since 2001, having first served as director of sales at Harman Music Group and, more recently, as director of sales for Harman Professional. Harman also named Brian Divine director of marketing of its Loudspeaker Special Business Unit (SBU). Divine returns to Harman from Bosch Communications Systems, where he last served as business line manager, Professional Sound, for the brands Electro-Voice and Dynacord. Previously, Divine held key positions within Crown Audio for eight years, including marketing director of Installed Sound and Touring.
Hosa Technology named Kyle Lassegard web marketing manager. Lassegard, who had managed Hosa's marketing communications activities for the past two years, is now responsible for all online marketing plans and programs.
L-Acoustics named the Hills SVL Group as its distributor for the Australian and New Zealand market. Hills SVL, which operates seven sales offices and warehouse facilities, will handle L-Acoustics' full product range. Pictured here are L-Acoustics' Laure Guymont, Hills SVL Group's Don McConnell and L-Acoustics' Tim McCall at L-Acoustics headquarters in Marcoussis, France.
Martin Audio named James King director of marketing. King comes to Martin Audio from Motorola, where he served as EMEA brand marketing director. Prior to Motorola, he was with Dyson, among other companies. He will be based at Martin Audio's headquarters in High Wycombe, U.K.
Meyer Sound named Bob McCarthy director of system optimization, a new position. McCarthy will be a part of the R&D department, working on networked audio solutions. In addition, he will remain an active instructor in the Meyer Sound education program and support the design and commissioning of new Meyer Sound installations. Meyer Sound also named Marc Goossens business development manager, Installation Markets, also a new position. Prior to Meyer Sound, Goossens was senior vice president, CTO and CIO at FUNA International Inc. He is based in Cincinnati, OH.
Movek Corp LLC named Tommy Kim director of business development for Asia. He is based in Seoul, South Korea. Movek also named Promedia Innovative Solution (PIS) distributor for myMix in Indonesia.
Riedel Communications named Daniel Huard sales manager for Canada. He will work to increase awareness and adoption of the company's products in broadcast, entertainment, and sports event applications, and to continue the growth of Riedel's rental services.
SHOW REPORT
Product Hits of 2013 Musikmesse / Prolight + Sound
FOH Staff Report
Photos: Susy Lowe, Jochen Günther, Pietro Sutera
Every year, the arrival of spring brings the Musikmesse and companion Prolight + Sound music/pro audio tradeshows to Frankfurt, Germany. For anyone who's never attended, it's a huge affair, sort of like a combined NAMM / AES / InfoComm / LDI / DJ Expo, with 10 exhibition halls (plus outdoor displays) spread out across the city's expansive Messe fairgrounds. Yet despite some initial concerns about the financial health of the European Union, particularly given the difficult economic conditions in Greece and Portugal, Musikmesse / Prolight + Sound (which took place from April 10 to 13) was a huge success. In fact, the 2013 event set a new record of 113,000 visitors from 142 countries, up three percent from last year's attendance of 109,481 from 120 countries. This veritable flood of visitors exceeded
Solid State Logic's new Live console was the talk of the show.
The combined exhibitions set an all-time attendance record of more than 113,000 visitors from 142 countries.
Avid unveiled its latest generation Pro Tools 11 DAW platform at the show.
by a wide margin not only our expectations but also those of the exhibitors," said Messe Frankfurt's executive board member Detlef Braun. "This is a fantastic result." Other than a few light rain sprinkles, the main complaint about this year's event was that the first few days overlapped the NAB show in Las Vegas, making for some creative travel plans as some exhibitors and visitors scrambled all-night connections to attend (at least part of) both shows. No problem with that next year, although the same overlap will return again for the 2015 events, so be warned. But aside from these minor glitches, with 2,285 exhibiting companies from 54
countries there was plenty to see and hear at Musikmesse / Prolight + Sound 2013, including a lot of activity in new consoles and speaker systems. So let's get started!
Avid (avid.com) kicked off the show with the announcement of the Pro Tools 11 platform. While not specifically a live sound product, there are plenty of users who integrate PT into their systems, whether as show tracks, virtual sound checking or live recording/archiving, and who will be interested in this major DAW update that provides users with new, high-powered audio engines, 64-bit architecture, expanded metering and more. Shipping is slated to begin in Q2 2013, and despite rumors to the contrary, both Avid and third-party ASIO interfaces are supported.
In perhaps a more surprising move, studio console manufacturer Solid State Logic (solidstatelogic.com) launched Live, its first mixer for live sound production. Based on SSL's new Tempest DSP platform, Live offers 192 user-configurable, full-processing audio paths at 96 kHz, and features both local I/O and a full range of MADI-connected stageboxes, as well as fiber support for up to 256 channels of bi-directional audio and control. Extensive automation, onboard effects and high-grade preamps round out the package. Pricing ranges from $84,000 to $130,000, depending upon configuration, and it's due out in September. For more details, go to fohonline.com/news/8562ssl-launches-live-console.html.
The expansive courtyards between the exhibit halls of the Messe Frankfurt fairgrounds offered ample space for outdoor demos of large P.A. rigs.
Consoles
Not all new mixer debuts were in the megabuck class. Allen & Heath (allen-heath.com) turned a few heads with its Qu-16 rack-mountable digital mixer, a self-contained design with 16 mono onboard XLR/TRS analog inputs, three stereo inputs, four stereo FX returns, four FX engines, moving fader automation, four mute groups, 16 buses and an 800 x 480 color touch screen. But one slick trick it adds is a Cat-5 dSNAKE port
for connecting to optional AR2412 or AR84 stageboxes. The Qu-16 is also iPad controllable via the free Qu-Pad app and is fully compatible with Allen & Heath's ME Personal Mixing System for individual monitor mix control.
Behringer iX-16
Speaking of iPads, Behringer (behringer.com) bowed its iX16, a compact 16-input, 8-bus digital mixer with 16 programmable Midas preamps and a USB audio interface. The iX16 can be wirelessly controlled from anywhere in the venue using an iPad/iPad mini, tablet PC, laptop or Mac computer. Other features include six TRS aux sends; two XLR main outs; an 18 x 18 channel USB 2.0 audio interface; eight buses with inserts; six-band parametric EQs; full dynamics processing; and an onboard Virtual FX rack.
DiGiCo (digico.biz) unveiled SD Convert, a standalone file interchange software tool that lets engineers transfer their session files from one console in the company's SD range to any other SD series console. This makes it possible to move freely up and down the console range depending on space, budget and system requirements.
The latest update to Mackie's (mackie.com) Master Fader control app for its DL1608 and new DL806 digital live sound mixers is now available. Master Fader v1.4 launches for iPad with a host of new features, including new channel plug-ins, vintage-style EQ/dynamics and improved snapshot operation.
Midas (midasconsoles.com) was demoing Generation 2.1 software for its entire range of digital consoles. V2.1 adds a rack-full of new latency-compensated FX plug-ins and dynamics processing options to every console in the PRO series, as well as the flagship XL8. New FX plug-ins include a dual-channel Klark Teknik DN60 Real Time Analyzer, a Klark Teknik tape saturation effect, sub-harmonic generator, multi-channel input phase adjustment insert, a fifth input compressor option, ducker mode for noise gates, plus a new transient accent gate option and a variable presence control for all input channel compressors.
Roland Systems Group (rolandsystemsgroup.com) offered numerous upgrades for its entire digital console line, including now providing PC-based SONAR Essentials multi-channel REAC recording to all registered V-Mixer owners. Simply connect a Cat5/6e cable from a V-Mixer console or split port to your PC's network port to capture up to 40 audio channels. Roland Systems Group is also now shipping an iPad control app and V1.5 software update for its M-300 V-Mixer.
Available in three frame sizes, Soundcraft's (soundcraft.com) Si Expression 1, 2 and 3 digital consoles offer 16/24/32 faders and mic inputs respectively; all three are capable of up to a staggering 66 inputs to mix by connecting any Soundcraft stagebox, including the two new Mini Stagebox 16 and 32 (16 x 8 and 32 x 16) models, or by connecting additional inputs over MADI or AES/EBU. Other features include onboard BSS, dbx and Lexicon processing, central color touchscreen control, iPad ViSi remote control and FaderGlow, adopted from Soundcraft's Vi Series large-format flagship consoles.
Yamaha's MGP24X (shown here) and MGP32X combine digital convenience with an analog sound.
Yamaha (yamahaca.com) expanded its groundbreaking MGP mixer series with the MGP32X and MGP24X, which not only provide more input channels but also add three new digital features: USB device recording and playback, graphic EQ and a multiband compressor. All combine a premium analog sound (with all-discrete Class-A preamps) with comprehensive digital control capabilities. In addition, the new MG large-format consoles offer onboard VCM (Virtual Circuit Modeling) and two separate studio-grade effects processors, new 31-band graphic EQs on the stereo bus, 14-band GEQ and Flex9GEQ modes, and single-band and multi-band compressors. Yamaha also announced version 4 of its StageMix iPad app for its CL Series, M7CL and LS9 digital consoles. StageMix v4 includes new dynamics parameter editing, output port delay editing, output port level tweaking (gain/attenuation), PEQ copy/paste, phantom power switching, mix send pre/post switching, HPF slope parameter (CL V1.5 only), retina display support and other enhancements.
The L-Acoustics 5XT packs a high-SPL coaxial punch into a diminutive enclosure.
Loud-Speakers!
This year's Musikmesse / Prolight + Sound expanded the opportunities to actually hear speaker systems with the addition of demo rooms in the Forum building, as well as various outdoor stages set up in the courtyards adjacent to Halls 3/4/5 and Hall 8.0. And there was plenty of new product to suit every price, application and production requirement.
Adamson Systems (adamsonsystems.com) celebrated its 30th anniversary with an event at Frankfurt's famed Villa Leonhardi restaurant that attracted a number of A-list attendees including Greg Brunclik from Clearwing Productions and Peter Hendrickson of Tour Tech East. On the show floor, the company unveiled two new additions to its flagship Energia line: the new e12 full-range line array module and e218 subwoofer. Like the e15, the new e12 is built around the e-capsule, a surrounding module constructed in aircraft grades of lightweight aluminum. With its 12-inch Kevlar driver, the e12 was designed as a standalone mid-size line array but can also be used as a downfill or side fill in a larger e15 arena system. As the e12 and e15 share the mid and high components, they create a uniform and seamless ribbon of mid/high energy when the e12 is used in a downfill configuration. Also new from Adamson is the A-218, a workhorse double-18 subwoofer, and the Point Concentric Series of passive, coaxial loudspeakers. Ideal for under-balcony or stair fill, the new line is available in double-5-inch (PC5), single-6-inch (PC6) and single-8/10/12-inch (PC8/PC10/PC12) versions.
Alcons (alconsaudio.com) showed a production version of its LR24 mid-sized pro-ribbon line array, which the company refers to as its response to market demands for linear sound systems. The LR24 is said to deliver the same SPL as equivalent products in the mid-size line array category, but with 15 dB less distortion.
d&b audiotechnik (dbaudio.com) showed Vi, essentially installation-specific versions of its popular V-Series medium-to-large line array systems, designed to offer smooth and even frequency response over distance with high dynamic bandwidth and headroom. The series includes the Vi8, Vi12, Vi-SUB and complementary D12 DSP-controlled amplification.
ZLX is Electro-Voice's (electrovoice.com) new line of portable speakers in powered and passive versions, available as 12- and 15-inch two-way models that can be used for mains and monitors. A proprietary split-baffle design optimizes driver time alignment.
Adamson Systems announced a new e12 addition to its flagship Energia line.
Call 800-356-5844
or visit fullcompass.com
Leading The Industry For Over 35 Years
SHOW REPORT
JBL Pro's JRX200 series offers affordable passive systems for portable P.A. applications.
Equipson Arion 10
JBL also announced extended V5 preset support for additional models of its VTX Series and VerTec line array loudspeakers, which leverage the OmniDriveHD linear phase FIR processing capability of Crown's I-Tech HD DSP power amplifiers, while adding improvements in horizontal coverage.
Kling & Freitag Sequenza 5
Arion 10 from Equipson S.A. (equipson.es) is an ultra-compact line array system for small and medium size events and installations. Arion 10 combines a single 10-inch neodymium woofer with two 1.7-inch compression drivers on a flat front waveguide. A matching double-12 sub is also offered. Fohhn Audio (fohhn.com) is shipping its new LX-220 hybrid line source system, the latest addition to its popular Linea Series. The 2.2-meter tall, 4-way loudspeaker is equipped with 18 long-excursion, 4-inch neodymium speakers arranged in a column, plus three 1-inch compression drivers (horn-loaded on the Fohhn Waveguide system). The new JRX 200 series of portable passive P.A. speakers from JBL Professional (jblpro.com) includes the JRX212 12-inch 2-way loudspeaker/stage monitor, JRX215 15-inch 2-way speaker, JRX225 dual-15-inch 2-way speaker and JRX218S 18-inch compact subwoofer. Designed to provide pro-level performance at an entry-level price, the top cabinets feature JBL's 2414H-C 1-inch compression driver mated to JBL's Progressive Transition waveguide.
Kling & Freitag (kling-freitag.com) debuted Sequenza 5, a compact line array based on the larger Sequenza 10, but with four 5-inch mid drivers and three 1-inch neodymium HF drivers. A matching Sequenza 5B 12-inch flyable subwoofer extends LF performance, and the speakers easily adapt for flying or ground stack applications. Nicknamed the "cheese box," the Redline KRM33 from K-Array (k-array.com) is an ultra-compact and low-profile powered wedge speaker with a controlled horizontal pattern and extended frequency response. Ideal for
2013 MIPA Awards
DPA d:facto II vocal mic
Outline expanded its GTO series with the new C-12 line array.
One highlight of Musikmesse was the MIPA awards, held during the show on April 11. Here are some of the nominated and winning products most relevant for live sound pros. In the Portable Sound category, nominated products included Yamaha's Stagepas 400i/600i, HK Audio's Lucas Nano 300 and Line 6's StageSource L3t powered speaker. Line 6 won. In the PA System category, nominees included d&b audiotechnik's V-Series, JBL's VTX and NEXO's STM, with JBL taking the top honor. The Live Microphones/IEMs category included the DPA d:facto II vocal mic, Sennheiser's Digital 9000 wireless and AKG's D12 VR kick drum mic. DPA won. Nominated in the Live Mixing Desks category were DiGiCo's SD5, Soundcraft's Si Performer and Behringer's X32, with the award going to Behringer. MIPA's studio technology awards Recording Software category featured nominees MOTU Digital Performer 8, Steinberg Cubase 7 and PreSonus Studio One version 2.5; PreSonus won. The Mixing Desk (Project Studio) nominees were Allen & Heath's GS-R24, Behringer X32 and PreSonus StudioLive 16.0.2. PreSonus won in that category as well.
JBL VTX line array
under-balcony and specialty installs, it has three 3.15-inch cone drivers and one 6-inch passive radiator for a 70 Hz to 18 kHz response. It also features an onboard DSP-driven, two-channel amplifier and a direct USB connection. L-Acoustics (l-acoustics.com) is now shipping its 5XT ultra-compact coaxial speaker
and SB15m compact single-15 subwoofer. The smallest member of the company's XT coaxial series, the 5XT has a one-inch diaphragm compression driver coaxially loaded by a five-inch low-mid transducer mounted in a ported birch ply enclosure. The LA122 compact line array system from Next Audio (next-proaudio.com) incorporates a 12-inch neodymium woofer with two 1.4-inch exit neodymium compression drivers on a wave converter that transforms the spherical waves into cylindrical isophasic waves, coupling seamlessly with the high frequency transducers of the other cabinets in the array. Outline (outline.it) celebrated its 40th anniversary with the unveiling of its new GTO C-12 line source array, which contains dual 12-inch
of available IIR and FIR filters, and a range of powered subs are offered. Crown Audio (crownaudio.com) is now shipping its DriveCore Install (DCi) Series, with 2/4/8-channel analog amplifiers ranging from 300 to 600 watts into 4 and 8 ohms and 70/100-volt systems. All are 2U designs and feature a proprietary DriveCore amplifier IC chip that replaces more than 500 parts from a typical amplifier design with one single IC. L-Acoustics (l-acoustics.com) has added the LA4X amplified controller to its amplified controller series. The LA4X is based on a 4-input x 4-output architecture combining the benefits of self-powered loudspeaker packages with the flexibility of outboard DSP and amplification. Lab.gruppen's (labgruppen.com) Intelligent Power Drive amps offer advanced DSP features in an affordable, compact, single-rackspace package. Both the IPD 1200 (2 x 600 watts into 4 ohms) and IPD 2400 (2 x 1,200W) incorporate integrated DSP, networked monitoring and control via computer or iPad, a 4-channel input matrix, configurable front panel controls, analog and AES3 inputs, Lab.gruppen limiters and rugged build quality. Powersoft (powersoft-audio.com) adds three new 8-channel power amps for fixed installs. The Ottocanali 4K4/8K4/12K4 can
Amplifiers: Turning It Up
operate at low or high impedance, down to 2 ohms, and can power 70/100V distributed lines without external transformers. These Class-D amps provide up to 12,000 watts over eight channels for the largest model. Smart Rails Management technology maximizes system efficiency to drastically reduce power consumption at any load/usage condition.
Production Essentials
And to keep your system from self-destructing, Eminence (d-fend.net) is now shipping the stand-alone version of D-fend SA30, designed to protect passive loudspeakers from excessive power conditions. D-fend allows maximum driver performance while ensuring damage-free operation and eliminates worries about blown speakers, HF drivers or crossovers. The user simply sets the thresholds and D-fend monitors/limits the amount of input power it passes through to the loudspeaker. It's USB-compatible, and can be programmed to your specs from a desktop or laptop computer.
Eminence is now shipping a stand-alone version of its D-Fend speaker protection system.
woofers, four 6.5-inch cone midrange units and two 3-inch throat compression drivers. The C-12's footprint is exactly the same as all the other modules in the GTO range, allowing full mechanical compatibility with GTO, GTO-SUB, GTO-LOW and GTO-DF in terms of rigging, flying hardware and wheel boards. The GTO C-12 cabinet is 21.6% smaller in the vertical plane than the GTO, and the overall cabinet weight is 30% less than its larger sibling. Proel's (proel.com) new Axiom AX2010A is a passive line array with two back-loaded 10-inch woofers paired to dual 1.4-inch exit compression drivers mounted on transmission line wave-forming waveguides. RCF (rcf.it) was showing its new HDL 20-A, a two-way active line array module designed for
portable sound reinforcement and installation applications. It features two 10-inch woofers, a 3-inch titanium-diaphragm compression driver, 700W of digital amplification and a DSP-controlled input section with selectable presets. RCF also expanded its TT line with three two-way speakers that feature Class-D amps and 32-bit/96 kHz processing algorithms. The 800-watt TT1-A has a 10-inch LF driver; the 1,600W TT2-A and TT5-A have 12- or 15-inch woofers. All HF drivers are 2-inch exit titanium dome designs. Westlab Audio (westlab-audio.com) was demoing a full line of speakers, from its LabRat Six (6-inch coaxial) to the LabTop TwelveThree (three-way 12-inch) to the LabLine TwoEight (three-way 6-inch line array). All are powered and DSP-driven with a combination
Operating from a standard speaker-level signal, the D-fend SA300 requires no auxiliary power unless being used in low-power applications. The latest 6.1 version of the Lake Controller digital audio processing platform features an improved implementation of Audinate Dante, support for SysTune 1.3 real-time audio analysis software and a number of other key developments aimed at both live sound and large-scale fixed installs. Comprising both a new software installer (Windows PC) and a firmware update for Lab.gruppen PLM Series and LM Series products, Lake Controller v6.1 will be a free download from lab.gruppen.com. Riedel's (riedel.net) RN.344.SI card provides RockNet integration for any Soundcraft Si Compact console via the console's expansion slot.
Riedel's RN.344.SI card brings simple RockNet integration to Soundcraft's Si consoles.
In its first firmware release, the card supports 32 inputs/32 outputs to the RockNet system. Word clock out is available at the front panel. The RN.344.SI lets the Soundcraft console become a part of the RockNet digital audio network and enables remote control of any RockNet mic preamp. The RN.344.SI also supports RockNet's Independent Gain Feature.
More to Come
There was a lot more product action from Frankfurt, and we'll cover many of those on our web site at fohonline.com and in future print issues of FRONT of HOUSE. Meanwhile, Musikmesse and Prolight + Sound returns to Frankfurt next year from 12 to 15 March 2014. So mark those calendars now and until then, Auf Wiedersehen!
Powersoft offered a peek at its nascent M-Force technology in Frankfurt last month.
One of the best things about tradeshows is getting a glimpse into new technologies. At PL+S, Powersoft (powersoft.it) announced the culmination of a four-year research project that could forever change the future of woofer design. Here's an early peek at Powersoft's M-Force: Powered by proprietary switch-mode amplification, M-Force is an innovative motor design for moving a speaker cone. Rather than a conventional approach, where a voice coil moves within a magnet structure, M-Force employs moving magnets within a fixed-coil, push-pull linear motor driver under feedback loop control. The photo here shows a prototype attached to a cone on the show floor and an inset detail of the motor mechanism. Among other promising benefits, the system avoids the possibility of voice coil overheating, a common cause of speaker failure. The approach to converting a signal from electrical to acoustic has not fundamentally changed since the beginning of acoustic design, says Powersoft's R&D director Claudio Lastrucci. With M-Force, we have created an alternative method of acoustic transduction, by combining the latest technologies in the domains of power amplification, magnetic materials and advanced real-time digital signal processing. The result considerably improves the electrical-to-acoustic conversion efficiency at a system level, allowing the true exploitation of the native qualities of switch mode amplification. This new approach brings many advantages that will set a new standard in low-frequency speaker design. George Petersen
SHOWTIME
Lil Jon DJ Set, Sea of Dreams 2013
Transit/L'Art Pour L'Art
The event featured Plantain, the Dustbowl Revival and Fort King
Sound Co
High-Desert-Music
Venue
Venue Crew
Crew
FOH Engineer: Pat McKeowen Monitor Engineer: Matt Cornick Systems Engineer: Duane Klose Production Manager: Alex Moran Systems Tech: Emily Welmerink
FOH/Monitor Engineer: Kevin Kent Production Manager: Baha Danesh Systems Tech: Gordon Hill
Gear
FOH Console: Avid Venue D-Show (1) Speakers: Meyer Sound MICA (14), CSPG Tour Subs (10), Meyer Sound 600-HP subwoofers (2) Processing: Galileo Power Distro: Motion Labs MON Console: Avid Venue Profile (1) Speakers: Meyer Sound MIDA (4), CSPG Tour Subs (4), CSPG Tour Triple 12s (2) Amps: Full Fat Audio Mics: Shure SM58, UHF-R wireless (4)
FOH Console: DiGiCo SD8 Speakers: McCauley MLA6 Amps: Lab.gruppen PLM10000Q Processing: Lake Power Distro: Six2 Rigging: CM Breakout/Snake Assemblies: Ramtech MON Speakers: Adamson M15 Amps: Lab.gruppen PLM10000Q Processing: Lake Mics: Shure
FOH Console: Behringer X32 Speakers: Mackie HDA (4), HD1801 (2) Snake Assemblies: Behringer S16 (2) MON Speakers: Yamaha A12M (4) Amps: Behringer EP1500 (2) Mics: Shure/Audix/CAD
Soundgarden
American Floyd
Mary J. Blige
Warbabies Productions
George and Sharon Mabry Concert Hall, Austin Peay State University, Clarksville, TN
FOH Engineer: Ted Keedick Monitor Engineer: Martin Strayer Systems Techs: Jason Brandt, Casey McDaniel, Rico Domirti
FOH Engineer: Rob Lenz Monitor Engineer: Cory Winkler Systems Engineer: Mark Hawkins Production Manager: Rick Valles Systems Tech: Lucas Wyatt
FOH Engineer: James Martin Monitor Engineer: Michael Dunwoody Systems Engineer: Chad Cain Production Manager: Victor Reed Tour Manager: Michael Huggy Carter Systems Tech: Trae Sales
FOH Console: Avid Venue Profile Speakers: Martin Audio W8LC (28), W8LMD (4), WSX (16) Amps: Martin Audio MA4.2, MA12k Processing: Dolby Lake, Apogee Big Ben Power Distro: Motion Labs Rigging: CM Breakout Assemblies: Whirlwind MON Console: Avid Venue Profile Speakers: d&b M2 wedges; EAW MicroWedge 12, Microsub; L-Acoustics SB28, Kudo Amps: d&b D12 amp rack; L-Acoustics LA-RAK; Biamp Rack (4 ch); Chevin; Lab.gruppen Mics: Telefunken M80 (vocals), Shure SM 57, SM81, 98, 91, 52, KSM137, Beta 56, 57; AKG 451, 414, Audix D6, Sennheiser e935
FOH Console: Midas PRO2 Speakers: Electro-Voice X-Line (8), XLD (16) Amps: QSC PowerLight Series Processing: XTA Power Distro: Warbabies Custom Rigging: CM Snake Assemblies: Midas MON Console: Yamaha M7CL 48 Speakers: Clair 12AM Amps: Crown Processing: Clair IR Mics: Shure, Audix, Royer
FOH Consoles: Avid Venue Profile, Soundcraft Vi4 Speakers: d&b audiotechnik V8 (12), Q1, J-SUBs, V-SUBs, J-INFRA Amps: d&b audiotechnik D12 Power Distro: Lex Products Rigging: CM Snake Assemblies: Whirlwind MON Consoles: Avid Venue Profile, Yamaha PM5D-RH Speakers: d&b audiotechnik M4, Q-SUBs, Q7 Amps: d&b audiotechnik D12 Power Distro: Lex Products
PRODUCTION PROFILE
Mixing Because We Can The Tour
By George Petersen
Definitely one of the hardest working bands in the business, quintessential American rockers Bon Jovi kicked off their 2013 Because We Can tour to a long string of packed SRO arenas, starting Feb. 10 at the Verizon Center in Washington, D.C. Named for the main single pulled from the band's current What About Now CD, Because We Can is perhaps Bon Jovi's most ambitious live undertaking to date. The tour is appearing on five continents, including two early May dates in South Africa, as the band leaps up to an extended series of summer stadium shows, including a brief return to the States for five stadium shows in July before heading to Brazil and Australia later this year. That enough should be big news, but of course the Bon Jovi headlines in 2013 were dominated by gossip about the sudden departure of longtime lead guitarist Richie Sambora. In mid-tour, Sambora was replaced by Phil X (Theofilos Xenidis), a noted session rock guitarist who had previously stepped in for Sambora on Bon Jovi's 2011 tour. But as they say, the show must go on, and Because We Can The Tour continues unabated.
On the Road
At the helm at front of house is veteran Bill Sheppell, who applied his 28-plus years of experience to the task. Long associated as the FOH engineer with John Mellencamp, Sheppell is one of those guys who cannot be typecast into a single genre of music, having worked for artists such as ZZ Top, Prince, Michael Jackson, J-Lo, Ministry, Korn, Green Day and others. Like a lot of sound reinforcement pros, Sheppell started off
as a musician. I was a guitar player and ended up having a job and paying for all the band's audio gear, he says. So when the band broke up, I ended up having a sound company at 17. By the time I was 18, I figured out I was a better sound guy than a guitar player, and when I was young, I ended up mixing Jeff Beck and Billy Gibbons with ZZ Top. Sheppell hadn't toured with Bon Jovi before, although he had a pretty good idea of what to expect. I did all their promo stuff last year, but it was all on random sound systems every six weeks or so, yet I kept some continuity by using the same console, he explains. And from the time that rehearsals for this tour started in January, Sheppell felt right at home. It's a good gig, and I like working for the organization. I have some friends who've been with them for 20 years, and it feels good. I haven't been here a long time, but I've been buds
The Bon Jovi audio crew (L-R): monitor engineer Glen Collett; monitor tech Dustin Ponscheck; P.A. tech Thomas Morris; systems engineer Frank Principato; FOH engineer Bill Sheppell (in front); monitor engineer Andy Hill (in rear); and RF tech Ken Cubby McDowell.
Bon Jovi: Because We Can The Tour
Clair
with many of the crew for a long time, including some of the riggers I knew back from my ZZ Top days. I've also worked with the production manager, Jesse Sandler, who was out with me on Michael Jackson.
The System
Clair is the sound company on the tour, which, to no surprise, features an all-Clair system. The speakers are i5s: 14 in the main hang, 10 on the side hang, and we've got Clair i5Bs in the air as subs, adds Sheppell. I've used the d&b [audiotechnik] rig a lot, and I have a way of getting good low-end through the room with that. I can approximate that with the Clair rig, but we may switch to Clair
FOH Engineer: Bill Sheppell
Systems Engineer: Frank Principato
Monitor Engineers: Glen Collett, Andy Hill
RF Tech: Ken Cubby McDowell
Monitor Tech: Dustin Ponscheck
P.A. Tech: Thomas Morris
Production Manager: Jesse Sandler
BT-218s in the air, because it's a big rock show and there isn't much room beneath the stage; we can only put three subs per side on the ground. Also, there are some high-dollar money seats up close. I can't pummel those people, so I need to get the meat of the P.A. from the air. This is why we have the subs in the air between the mains and the side hang. The upstage and rear fills are comprised of four hangs, each with ten Clair i3s. All amplifiers are Lab.gruppen.
On the Board
From a console standpoint, Sheppell is definitely in the DiGiCo camp. I'm an SD7 fan at this point. I mixed on the Avid Profile when I was
PA SYSTEM
Main PA Hang: (14) Clair i5s/side
Side PA Hang: (10) Clair i5s/side
Subs: Clair i5Bs (two columns of 12/side)
Upstage/Rear Fills: Clair i3s
Amplification: Lab.gruppen
FOH GEAR
Console: DiGiCo SD7
Outboard: Summit TLA-100 tube compressors; Eventide H3000D/SE UltraHarmonizer; TC Electronic M-2000 reverbs
MONITOR GEAR
Consoles: Midas Heritage 3000; Avid VENUE
Outboard: TC Electronic M-5000 reverb; Summit TLA-100 tube compressors; Aphex Dominator limiter
Wireless: Shure Axient
Mics: see stage input list, page 36
PRODUCTION PROFILE
All photos by Steve Jennings
with Mellencamp, and I might switch to an SD10 when I go back to that. On this tour, I'm tracking all the inputs and audience mics because there is no separate recording rig, and I'm up to about 75 inputs. This seems like a lot for what's essentially a 5-piece rock band, but the studio guys like having everything for Jon's records, so I'm giving them everything they want, and everything they've used so far in post has worked out well for them. And this way I can also have the virtual sound check and make sure everything sounds great. The SD just sounds good.
FOH outboard rack
Heil PR 30 Shure KSM 313 Shure SM57 DI DI Shure SM57 DI Shure SM57 Shure SM57 DI DI Shure Beta 58A Axient Shure Beta 58A Axient Shure Beta 58A Shure Beta 58A Shure Beta 58A Shure Beta 58A Shure Beta 58A Shure SM91 Shure SM57
60  Video L        Direct
61  Video R        Direct
62  Audience SL 1  Shotgun
63  Audience SR 1  Shotgun
64  Audience SL 2  Cardioid
65  Audience SR 2  Cardioid
66  FOH L 1        Shure VP 88
67  FOH R 1        Shure VP 88
68  FOH L 2        Shure KSM 141
69  FOH R 2        Shure KSM 141
70  Mix L          Direct
71  Mix R          Direct
72  Jon T/B        SM58 w/CoughDrop
73  Richie T/B     SM58 w/CoughDrop
74  Hugh TB        SM58 w/CoughDrop
75  Bobby TB       SM58 w/CoughDrop
Mics
We've been using the Shure Axient for Jon's vocal, which in terms of RF, has been bulletproof. The wireless mics have Beta
58A capsules, and all the hardwired vocal mics are Beta 58As. We have the normal Shure kick drum setup, with an SM 91 and Beta 52. The snare has a Heil PR 31 on top and a PR 22 on the bottom, says Sheppell. The studio guys didn't want a condenser on hi-hat, so there's a PR 31 on that. The toms have Beta 98 AMPs; these are great with the new hardware, more solid gooseneck and no separate transformer. I'm using a Shure KSM 137 on ride and KSM 32s on the overhead channels. Jon and Bobby's [tour guitarist Bobby Bandiera] electric guitars are SM57s; and the Richie/Phil X iso cab has a Heil PR 30 and a Shure KSM 313 [ribbon mic]. It's not a Shure endorsement, but the stuff works well. Selection of a direct box for bass can be key to the sound on any tour. When we were in rehearsals, we listened to a lot of direct boxes, then finally settled on a Countryman Type 85. The bass sits in there well. He [Hugh McDonald] is a very smooth player and plays with fingers, so we chose the one that gave us the most presence to let it sit out a bit.
considerable hearing loss over the years, which makes on-stage monitoring much more of a challenge. Phil, having good ears, makes it much easier. Since Phil essentially parachuted into handling Richie's position, we started with wedges, adjusting the mix for him, but now he's entirely on in-ears and the wedges went away.
to constantly be mixing to and changing EQs for. It's a whole different requirement for an analog rock 'n' roll band, or even for Paul Simon, who I used to mix with two H3000s, which worked really well. On one of his tours, which I didn't do, the monitor engineer was using a digital board and was constantly looking down at the board, rather than at Paul, who was frustrated because he'd want to make quick changes in midstream and they'd take too long to implement. Monitor mixing is all about making the artist feel comfortable and secure when they're on stage. But analog or digital, they both still have their place. On the subject of analog, I also use Summit tube compressors on Jon's vocals and acoustic guitars, says Collett, and he loves the sound and warmth of that. I'm using a TC Electronic M-5000, which to me is one of
the best reverbs out there. There really is not a lot of outboard gear in the mixes, but I do use an Aphex Dominator broadcast limiter on the in-ears, just to tickle it a little and keep a hard-driving mix like Jon Bon Jovi's under control.
Wireless Issues
Handling wireless coordination, a job that's growing increasingly complex, is Ken McDowell, who's new to the Bon Jovi crew but most recently was out with Van Halen. For me, not having to worry about RF is heaven sent, says Collett, especially after the FCC sold all the 700 MHz band to Verizon, and wireless coordination became more and more difficult. Having a good, dedicated person handling RF used to be a luxury, but these days, it's a necessity; you're lost without it.
Loud or Louder?
Unlike a lot of tours, Sheppell tries to keep the SPLs under control. It's not really loud, he says. The fans are my age or older, and nobody wants to get beat up. The louder stuff is peaking around 103 dBA, and I try to sit around 99 to 101 dBA for most of the show. For the ballads, I try to bring it down and make the crowd have to listen in to make it feel more intimate. I try to mix with enough bottom-end so it seems like a big rock show, but without brutalizing anyone.
BUYER'S GUIDE
By George Petersen
Beyerdynamic TG H54c
DPA d:fine
Headworn Microphones
Over the years, headworn microphones have evolved from clunky, low-fi affairs into lightweight, nearly invisible transducers capable of serious audio quality that can equal or best many handheld mics. Once mainly relegated to singing drummers and keyboardists, today's headworn mics are more frequently becoming the choice of lead vocalists, classical soloists and live theatre performers, as well as for singers and spoken word in houses of worship applications. Besides interfacing with beltpack transmitters in wireless systems, these are also often employed in a hardwired configuration. Nearly all of these mics operate from a low voltage (in the 1.5 to 12 VDC range) that would be supplied via the beltpack, and would require a voltage adapter (available from most mic manufacturers) for direct connection to standard 48 VDC console phantom power. So in such applications, simply cutting off the miniature connector and replacing it with a full-size XLR will only result in damaging the mic. We checked out some headworn mics from a variety of manufacturers and encountered a wide selection in a variety of prices (all given in MSRP), terminations for various systems, and both headband and earworn styles to fit the individual preferences of any vocalist or presenter. It should be noted that most suppliers also carry headworn mic models in other styles and prices, so websites are listed if you require additional information.
Style: Headband. Type: Condenser. Pattern: Cardioid. Colors: Beige, black. Termination: Mini 4-pin XLR-F. Notes: Optional CV 18 for phantom powering; similar omni model also available. Price: $229
Style: Headband. Type: Condenser. Pattern: Omni and directional versions available. Colors: Black, brown, beige and lime green. Termination: Microdot; adapters available to XLR hardwire and all major wireless systems. Notes: Left/right switchable; also single-ear, short/long boom versions offered. Price: $620
beyerdynamic.com
Countryman H6
AKG HC577 L
Audix HT2
Style: Headband. Type: Condenser. Pattern: Omni. Colors: Beige. Termination: 3-pin Mini-XLR-F. Notes: Can mount on either side; push-on presence cap boosts 12 kHz range by 3 dB; optional phantom adapter for hardwire applications. Price: $449
Style: Headband. Type: Condenser. Pattern: Supercardioid. Colors: Black. Termination: Mini-XLR-F. Notes: Wired 48V phantom XLR (HT2P) version also offered. Price (MSRP): $215
Style: Headband. Type: Condenser. Pattern: Omni; directional model also offered. Colors: Light beige, tan, cocoa, black. Termination: Available unterminated or for all major pro wireless systems. Notes: High-strength detachable cables can easily swap to fit different wireless models; left/right side wearable boom; removable caps can vary HF response to individual needs. Price: $670-$695 (depends on termination)
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige, brown or black. Termination: TA4F wired for E-V/Telex beltpacks. Notes: 135 dB handling. Price: $495
countryman.com
Crown CM-311AE
Style: Earworn. Type: Condenser. Pattern: Omni. Colors: Black, beige. Termination: Available unterminated or for all major pro wireless systems. Notes: Onboard 80 Hz rolloff switch. Price: $339
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Black, beige or cocoa brown. Termination: Ships with Mipro adapter; optional adapters available for most major wireless beltpacks and wired XLR use. Notes: Includes two screw-on replaceable cables. Price: $275
Style: Headband. Type: Condenser. Pattern: Cardioid. Colors: Black. Termination: Unterminated. Notes: CM-311A (hard-wired version) operates with battery/phantom-powered belt pack. Price: $499
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige. Termination: TA5F wired for Lectrosonics UHF beltpacks. Notes: Removable HF peak cap adds control of excessive high frequencies. Price: $395
audio-technica.com
avlex.com
crownaudio.com
lectrosonics.com
Line 6 HS 70
Sony ECM-322BC
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Tan, black. Termination: TA4F mini XLR. Notes: Designed for Line 6 V75-BP, V70-BP, V55-BP, Relay G90 and G50 bodypacks. Price: $199
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige. Termination: TA4F, lockable 3.5mm, Hirose. Notes: Waterproof and sweat/makeup-proof design. Price: $335
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Beige, cocoa. Termination: Available unterminated or with connectors for major wireless systems. Notes: Same capsule as COS-11D lavalier. Price: $695-$795 (depends on termination)
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Black. Termination: Sony 4-pin SMC9-4P for use with Sony WL800 Series bodypacks. Notes: Can be worn on left or right ear. Price: $200
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige or black. Termination: Available for Shure, AKG, Sennheiser or Audio-Technica bodypacks. Notes: Interchangeable cables with attached plugs allow easily switching between several models of transmitters. Price: $400
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige, black. Termination: Optional adapters fit all major wireless systems. Notes: Left/right side wearable; Petite model also available for smaller heads. Price: $339 (plus $29.95 adapter)
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Beige, black. Termination: Unterminated or with connectors for major wireless systems. Notes: Integrated MKE2 Gold capsule; Umbrella Diaphragm protects diaphragm from sweat. Price: $389-$449 (depends on termination)
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige. Termination: Available unterminated or with connectors for most wireless transmitters. Notes: Wearable on left or right side. Price: $169-$231 (depends on termination)
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Black. Termination: Mini 3-pin XLR-F. Notes: Boom adapts for left or right side. Price: $27.99
Style: Headband. Type: Condenser. Pattern: Omnidirectional. Colors: Pink, black. Termination: RODE MiCon, with optional adapters to fit all major wireless systems. Notes: Detachable Kevlar-reinforced cable. Price: $499 (plus adapter, $29-$43)
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Black, tan and cocoa. Termination: TA4F connector for connecting to Shure bodypacks. Notes: Wearable on left or right side; Kevlar-reinforced soft flex cable. Price: $249
Style: Earworn. Type: Condenser. Pattern: Omnidirectional. Colors: Beige. Termination: Includes twist-on adapters for AKG, A-T, Sony and Sennheiser beltpacks. Notes: Removable, replaceable cable. Included accessories convert mic from earworn to lavalier or instrument mount. Price: $319
peavey.com
rodemic.com
shure.com
widigitalsystems.com
REGIONAL SLANTS
KiAN supported Great Big Sea's recent tour with Meyer LEO-M (30), 1100-LFC (12) and Galileo Callisto (4) components.
By Kevin M. Mitchell
Long a regional player in Western Canada, KiAN Concert Sound Services has stepped up to play ball by investing in a Meyer LEO system. This because nothing other than a Meyer system will do, according to company president Mark Reimann. Heck, he didn't even have to listen to the LEO system before he bought it. I've been dealing with Meyer since the 1980s, and I trust those guys, he says. I never have a question about their quality. John Meyer believes that what goes into the speaker should be what comes out. Their speakers don't have their own sound like others do. He also likes that their speakers come with built-in amplifiers, which is the only way to go these days. But really, without even hearing it? Once you go to the Meyer factory, you see the quality control is unreal, says GM Derek Mahaffey, who joined the organization in 2009. When we took delivery of our LEO, we had less than four days to install it in a show. I would not have the confidence in any other manufacturer
The SS KiAN (left). Above, from left, Derek Mahaffey and Mark Reimann
to take a system like this out of the box and put it immediately up in the air. The maiden voyage of the system was a high-profile event indeed: they used it first on the Winter Festival held on Dec. 26, 2012 in BC Place, which was the first electronic festival held at the venue, and was headlined by Deadmau5. Then there's that new speaker smell. For our existing clients, those with the big festivals that
we've been doing for four or five years, it's nice to offer the newest and best thing, Mahaffey adds. For them, it's enticing to promoters, and also they know that something like this can take their festival to the next level.
that typically attracts more than 40,000 people. Especially dear to KiAN's heart is the Commodore Ballroom, the ballyhooed nightclub in Vancouver, which they've been servicing pretty much from the beginning. KiAN does a fair amount of audio installations, too, most recently at Great Canadian Gaming's River Rock Show Theatre in Richmond, BC, and their Red Robinson Show Theatre in Coquitlam, BC. The KiAN company was founded in 1974 by Frank Jeltes. He named the company after a boat he had, where he had placed some Altec
A-7 Voice of the Theatre speakers, says Reimann. Someone asked to rent those speakers, and so this company literally started on a boat. Reimann came on board in 1985 from another sound company. When Jeltes retired in 1999, Reimann became president and sole owner. Also in 1985, the company started what would be a lifelong relationship with Meyer speakers. I still have the original speakers! he laughs. They are still in service and still sound great. Jeltes had done well servicing the club scene, but that was dying out by the time Reimann joined, so there was some retooling that needed to happen. It worked, because by the next year, Vancouver hosted Expo 86, a four-and-a-half-month World's Fair, and the company got involved with a variety of different installs. In the years that followed, KiAN handled larger tours and they bought more Meyer gear. By 1995, they were supporting opera great Luciano Pavarotti, which they would do for eight years as his preferred supplier in North America. We probably did 100 concerts with him and The Three Tenors, he adds, referring to Pavarotti, Plácido Domingo and José Carreras. Along the way, the company also served touring
acts including k.d. lang, the Irish Rovers and Kenny Rogers. Today KiAN handles it all with six full-time employees and many freelancers. Our main events today are in the region, mostly in British Columbia, explains Mahaffey. Although our business has become a broad spectrum of services over the years, from festivals to large corporate one-offs, and now award shows. Recently, the company had one of its biggest: Regarded as the Oscars of India, the Times of India Film Awards (TOIFA) 2013 was staged in April at BC Place in Vancouver. It was seen by a live audience of close to 40,000 plus millions more who watched it on TV around the world. (Because of the global popularity of Bollywood entertainment, the award show now moves to different locations around the world; this was its first trip outside of India.) The three-day extravaganza featured live music, movie screenings and the red carpet award show itself.
event and some of our techs will turn around and go, wow, I forgot how good these sound! And some of these guys aren't even old enough to remember when they came out, he laughs. For the LEO, the two worked closely with Bob Snelgrove of GerrAudio, the Canadian distributor for Meyer. We were actually waiting for the new system to come out, and Bob was very helpful, Reimann says. He's been a Meyer distributor for over 30 years and helped facilitate all that had to happen to make the deal work; it's a lot of money, and that's difficult to do for a small regional sound company. Another large event they just did with the LEO was supporting Great Big Sea, a Celtic rock band from the eastern Canadian provinces. The system will be part of KiAN's summer lineup of festivals, including the Squamish Valley Music Festival. It'll be an active summer with the new LEO, Mahaffey assures. While they take their responsibility seriously, as they handle most of the market in British Columbia, now that they have the new system, KiAN is aiming to do more touring and arena shows, Reimann says. With the touring, it's not always coming down to the [lowest] dollar so much as it often does with one-offs. We'd like to get back into more of the larger-scale touring, and that's why we buy the best. As for KiAN, the company continues to grow, both from the rental and installation sides. I've noticed at one time it was feast or famine; there'd be tumbleweeds rolling through the shop on some months, Mahaffey says. Now it's more consistency, keeping us working steadily through the year.
Live audio engineer salaries can range from $30K to $100K per year.
KiAN started extending the considerable reach of opera singer Luciano Pavarotti's voice in 1995.
By Phil Graham
Fig. 1: The central cluster is one of the simplest approaches to subwoofer placement, and is predictable, yet non-directional. Photo courtesy of Bennett Prescott
Fig. 4: An endfire subwoofer array configuration.
Funny Looks
My first attempt at directional control of subwoofers came in 1999, with the goal of reducing the spill between two stages. What I ended up trying out would today be called an endfire configuration. Pro audio people can be nervous to move out of the left/right sub stack configuration into something more adventurous. If your production company, band, or venue has been looking to try some of these more advanced configurations, let this article be a comfort: others have blazed the trail, had successful gigs, and lived to mix another day.
Fig. 2: Reversing the direction of one sub box in a ground stack (in this case, the middle enclosure) can result in a cardioid response.
Fig. 3: A cardioid array created by reversing the direction of the bottom enclosure in a ground stack configuration.
Guiding Principles
Before diving into directional bass arrays, here are some essential principles for understanding how the arrays operate: Subwoofers aren't very directional, so whether you're standing in front of one or behind one, their response is similar. Turning a sub around backwards is simply a straightforward way to produce physical separation in space between the drivers.
minimum and sometimes also include changes in level and polarity. Control of the array can be influenced in each of the three dimensions by different array sizes and placements in each dimension. For instance, if an array is large horizontally, but small vertically, it will have narrower coverage in the horizontal and broader coverage in the vertical. Design of these arrays necessitates the availability of additional DSP and amplifier channels, as each component of the array requires specific processing for the directional control to work properly. Directional control is most effective in the far field, which means at a distance far enough where the loudspeakers are similar in level to each other. If you stand very near the array, so the volume is dominated by a specific loudspeaker, then the array's directivity control will be found less effective. Simple arrays, like cardioid and endfire, can be set up (and visualized) without array modeling software, but more complicated arrays, with
directivity in multiple directions, are best predicted in software before implementing them in the field. The field examples shown here were provided to me courtesy of some of the participants on soundforums.net.
TECH FEATURE
Over the past two years of FRONT of HOUSE, we have covered substantial ground with respect to subwoofers. Whether on the specifics of setting up a basic cardioid array or on how to design your own vented box, we have presented several nuts-and-bolts articles on the ever-important bottom-end that keeps sound reinforcement exciting. This installment is very much in keeping with the previous practical articles on subwoofers. I am suppressing my inner egghead tendencies towards discussing topics like phasor summation in the far-field, or radiation impedance, and instead will look at a number of subwoofer configurations as they were implemented in the field on real gigs. We will briefly overview each configuration and discuss the implementation, and the goals behind the implementation. It is my hope that readers will come away with a sense that many different clever subwoofer arrays have been implemented on real gigs, provide effective low frequency control and can be useful arrows in the sonic quiver when circumstances direct that way. Interested readers are encouraged to reach out if they'd like more technical references for creating these arrays.
Shown in Fig. 1, the central cluster is the first effective improvement in low-end evenness for those small gigs. This array does not provide any directional control behind the boxes, so the amount of bass leaking behind this mobile stage is about the same as what is being projected into the audience. Caprio is Cee Lo's monitor engineer; the cardioid ground stack in Fig. 3 helps Prescott manage the on-stage bass level in the theater.
Prescott also provided Fig. 4, which is a picture of the stage right endfire sub array from an outdoor show in New Jersey. This particular endfire array is three rows deep, and provides at least 12 to 15 dB (or more) of rear-facing isolation across the sub band. The endfire configuration provides a broad horizontal coverage pattern out in the audience area, while still providing useful reduction in sub level on the stage behind the arrays. Endfire arrays offer more isolation as you increase the number of rows in the array, and require a fair amount of depth. Subjectively, endfire arrays are typically slightly preferred in overall sonics to the cardioid array. Also visible in this photo is the cardioid side-fill subwoofer, and a supplemental center sub cluster to help even out the horizontal coverage in the audience.
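To make the endfire timing concrete, here is a minimal sketch (in Python) of the delay math involved. The row spacing, row count and rounded speed-of-sound figure below are illustrative assumptions, not values taken from the rigs pictured; the principle is simply that each row closer to the audience is delayed by the time sound needs to cross one row spacing, so the rows sum toward the house and cancel toward the stage.

    # Minimal sketch: per-row delays for an endfire subwoofer array.
    # Assumes roughly 1130 ft/s for the speed of sound and equal row spacing.
    SPEED_OF_SOUND_FT_PER_S = 1130.0

    def endfire_delays_ms(row_spacing_ft, num_rows):
        # Delay (ms) per row; the rearmost row fires first (zero delay),
        # and each row nearer the audience is delayed by one spacing's
        # travel time so the rows add in phase toward the audience.
        step_ms = row_spacing_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0
        return [round(i * step_ms, 2) for i in range(num_rows)]

    # Example: a three-row array with 4.5 ft between rows (hypothetical values).
    print(endfire_delays_ms(4.5, 3))  # [0.0, 3.98, 7.96]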
Classic Endfire
Fig. 2, which comes courtesy of Robert Caprio, shows Cee Lo Green's side-fill rig for a series of performance dates in Las Vegas. Here the
Center Endfire
David Sturzenbecher from Absolute Productions provides an interesting take on the endfire array for an event in South Dakota, as shown in Fig. 5. Here the 16 subwoofer
Fig. 6: The innovative ICAD approach creates a directional response simply by moving the center sub enclosure(s) back by 1/4 wavelength (at 80 Hz), or approximately 3.5 feet.
Fig. 7: Another hybrid technique involves combining two cardioid subwoofers: one front- and one rear-facing.
boxes were placed in a two-deep endfire configuration, influenced by a large boundary towards the audience. They achieved 10 to 12 dB of isolation on stage behind the subwoofer cluster, and also achieved a narrower horizontal bass distribution due to the width of the cluster. This is a nice configuration for the situation where you want isolation on the stage and a narrow main coverage lobe. It would be useful in the context of two adjacent stages at a festival.
The configurations spotlighted in this article may open a few eyes in terms of what can be possible for low frequency control. Whether in the simple, no-extra-processing center array or simultaneous control of directivity in multiple directions like the ICAD array, there are many creative and useful ways to lick the problem of low frequency control.
Phil Graham is FOH's regular technical contributor and resident scientist. Email him at: pgraham@fohonline.com.
Hybrid Arrays
The last two figures are examples of what skilled systems techs can produce when they move beyond simple cardioid and endfire configurations. Fig. 6 comes from Brandon Romanowski. He has dubbed this the ICAD array, for the Indigo Concert Audio Department array. This array likely started as a basic endfire configuration, but by spacing the subwoofers both vertically and horizontally, he can control the horizontal directivity and rear rejection simultaneously. This is a clever design that modeling software can really help confirm before implementing it in the field. More advanced hybrid arrays are typically tweaked in the modeling software to nail down all the processing levels and delay times. The hybrid approach in Fig. 7 was first configured using the EASE software package before being implemented in the field. It comes courtesy of Michael Smithers at Eclipse Audio in Australia. In the photo are two cardioid subwoofer pairs (one forward-facing SB1000 and one rear-facing). The two pairs are then configured so that the spacing between them influences the width of the horizontal coverage. Thus this array shapes both the rejection behind, and the width of, the horizontal coverage. With the local residential area in clear view within the photo, this is a nice win in regards to noise complaints for the production company.
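As a quick sanity check on the quarter-wavelength offset mentioned in the Fig. 6 caption, the arithmetic is simply wavelength = speed of sound / frequency. The short Python sketch below uses an assumed round figure of 1130 ft/s for the speed of sound; it is only an illustration of the caption's math, not part of Romanowski's design work.

    # Quarter-wavelength spacing check for the ICAD-style offset at 80 Hz.
    speed_of_sound_ft_per_s = 1130.0   # assumed round figure
    frequency_hz = 80.0

    wavelength_ft = speed_of_sound_ft_per_s / frequency_hz   # about 14.1 ft
    quarter_wavelength_ft = wavelength_ft / 4.0              # about 3.5 ft
    print(round(quarter_wavelength_ft, 1))                   # 3.5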
TECH PREVIEW
The word is out. At last month's Musikmesse/Prolight+Sound show in Frankfurt (see report, page 22), Solid State Logic, a leading manufacturer of studio consoles and processing tools, has launched Live, its first console designed for sound reinforcement applications. This is definitely big news, but hardly the first time a studio-oriented manufacturer has entered the live console market, with notable examples being the Avid VENUE and PreSonus StudioLive series. Interestingly, Solid State Logic did make an unofficial entry into the world of live sound a decade ago. In March of 2003, an SSL MTP 4848 board (designed primarily for film sound mixing) was specified by FOH engineer Denis Savage as the house console installed into the then-new, 4,100-seat Colosseum at Caesars Palace in Las Vegas for a five-year extended run of the residency show featuring Celine Dion, A New Day. According to SSL managing director Antony David, the new Live console was a couple of years in the making, because we like to get things right at SSL, and we are very confident that we have created a console engineers will fall in love with. SSL consoles have never been designed for the budget crowd, and Live ranges from $84k to $130k, depending on configuration. Intended for FOH or monitor work on tours or installations, Live is based on SSL's new Tempest processing platform and offers 192 audio paths at 96 kHz. All processing is built into the console surface, with 64-bit internal processing throughout.
At a Glance
Ready to Roll
Getting in Touch
A central high-res 19-inch touch screen offers visual feedback and access to the effects rack and configuration/setup menus. A separate system monitor screen (mountable on an optional spring-loaded boom arm) shows all channels, VCAs, stem groups, auxes, etc. Twelve quick controls (each with a rotary and three buttons) beneath the touchscreen are assignable as detail controls for EQs, effect parameters, etc. Color coding is used to tie together anything that's displayed on the screen with the controls, faders and control tiles. A dedicated channel control tile has a smaller touchscreen with controls for rapid access to EQ, dynamics and inserts on any selected channel path. Live also uses SSL's Eyeconix display, which allows bitmap images (drums, guitars, etc.) to appear with each channel, for quickly identifying sources at a glance. In any case, operators are free to set up and use any combination of touchscreen and/or hardware controls that suits their own preferences. Each of Live's fader tiles can display up to five scrollable layers, with up to five vertical banks in each. A dedicated call button brings up each bank, and both layers and banks are identified with user text and color-coding. Within this structure, users are free to arrange channels in any order desired.
be configured in mono, stereo or LCR and can even be routed into another Stem Group to simplify the handling of complex mixes. Essential to sound reinforcement mixing, Live offers 32 VCA and ten mute groups. Additionally, the main fader has its own metering, solo/mute and Query buttons and can be assigned to any channel, Stem Group, aux, VCA, master or matrix output.
Renowned studio console company Solid State Logic enters the live console market with Live, a large-format digital sound reinforcement mixer that offers 192 I/O paths, flexible routing, powerful automation and onboard effects.
Onboard Processing
Channels can be full, with complete processing, or dry, consuming less DSP resources. Full channels offer high/low-pass filters with selectable slopes, four-band parametric EQ (switchable between a precise constant-Q mode and SSL Legacy EQ), SSL dynamics compressor with analog-style tube emulation, expander/gate, delay line, panning and an all-pass filter. An effects rack offers additional EQ options: 32-band graphic EQ; a 10-band parametric with selectable filter characteristics; and the unique G-EQ, a program-shaping EQ based on node selection operated with a familiar graphic EQ user interface. Effects can be patched into the board's multiple insert points for channels, buses or I/Os within the router. Selections within an included suite of 30 effects and tools include reverbs, delays (standard and multi-tap), modulation effects, EQ and Solid State Logic's acclaimed Stereo Bus Compressor, with up to 96 effects available simultaneously. Beyond channel dynamics, the rack also offers de-essing, gating, dynamic EQ, Transient Shaping and SSL's popular Listen Mic Compressor. The reverb tool kit is based on the company's X-Verb plug-in, with gated verbs, plates, a specific vocal processor, SSL's D-Gen and more. Modulation choices range from Band-Split Flanger, classic flange, envelope flanging, phaser, chorus and guitar chorus. Other useful add-ons include a denoiser, enhancer, pitch shifting and the VHD Saturator, which emulates SSL's Variable Harmonic Drive (VHD), adding amounts of second- or third-harmonic distortion to provide an edgy transistor sound or tube warmth.
SSL Live
Shipping
solidstatelogic.com
Scene groups enable editing of all selected scenes in a single operation, and automation filters can be applied to enable storing or making changes/updates to all or any part of show files. And entire show files can be conveniently stored/loaded to/from a USB drive via a front panel port.
And More...
In developing Live, Solid State Logic seems to have done its homework and listened to the needs of the industry. Attached to its steel chassis are handles placed for a two-man lift. The touchscreens are designed to provide clear viewing, even in daylight. A concealed light strip along the top edge illuminates the work surface. A wide variety of stageboxes and I/O options are offered. MADI I/O connects the SSL Live-Recorder option, a single-rackspace unit that can record 64 tracks at 96 kHz directly from the console's input stage and play back through the channels in Soundcheck mode. And to ease setup, Live's onboard system tools include a tone/noise generator, a precision SPL meter and an FFT analyzer with fixed points-per-octave frequency spectrum analysis. So far, SSL has hosted a few private console demos of prototype systems in North America and worldwide. The first deliveries of production models are due to start in September, 2013.
Anyone who likes the hands-on hardware approach to parameter access will appreciate the Channel Control Tile. Located on the right side of the console, this section has 15 rotary controls surrounding a 7.5-inch touch screen and dedicated hardware controls for EQ, dynamics, pan and insert effects. The Channel Control Tile also offers access to control of delay parameters as well as configuring auxes, Stem Groups, VCAs and mute groups.
Automation
Onboard automation affords access to an unlimited number of automation scenes, which can be triggered manually or via external triggers.
TECH FEATURE
In the early 1980s, there were no real solutions for musicians hearing onstage. With the support of the Who's Pete Townshend, H.E.A.R. (Hearing Education and Awareness for Rockers), a San Francisco-based non-profit organization, was instrumental in conducting the first public information campaigns on music hearing conservation throughout North American and worldwide media with MTV, PBS, BBC, Time magazine, Rolling Stone and many others. The Steve Miller Band and The Grateful Dead co-headlined the first full in-ear, wedge-less national stadium tour using Future Sonics Ear Monitors, creating a whole revolution: no floor monitors, speakers or equipment on stage, just performers and their musical instruments. Clean stages significantly cut down on overall stage sound levels, and artists and sound crew were no longer at the mercy of being blasted by loud stage wedges and auxiliary amplification on stage, as they were now in control of their own stage volume and mix. Today, sales of custom-fitted in-ear monitors (CIEM) have grown exponentially, benefiting music professionals in every field, including touring musicians, club and dance bands, DJs, sound, monitor and FOH engineers, musical theatre orchestras, symphony and opera orchestras, houses of worship bands and audiophiles, as well as the general public. The pro sound market for CIEMs is reported to be 30 percent of audiology clientele and growing fast. There are many excellent pro companies supplying in-ear systems with drivers that appeal to nearly anyone's taste or budget (see boxed list, above right). Universal-fit IEMs supplied with a variety of eartip styles and sizes are often a good way for musicians to test the waters of using in-ear devices, but for the best possible performance in terms of achieving isolation and eliminating sound leakage, custom-fitted systems are the way to go.
different technical requirements in terms of the type of dam used (see below) to make the impression or how much outer ear coverage is needed, so check with them before moving on to the ear impressions phase. The process then requires going to a hearing specialist who is experienced in making custom in-ear music products. Making an accurate ear impression is an essential factor in the formula for making the best possible CIEM earpiece. The art of crafting ear impressions is much like making a three-dimensional sculpture of the inside of your ears. Precise measurements and knowledge of the interior dimensions of your ear are key. An ear impression is essentially a cast of your ear. Hearing professionals such as audiologists have the tools and know-how for taking accurate impressions. Handcrafted custom impressions that uniquely fit your ears are not something off the shelf. It takes a true artisan. And something that you purchase in a kit and attempt to do by yourself could be a very dangerous experience for your ears if you are untrained.
There are some do-it-yourself impression kits on the market of varying levels of quality and efficiency, and these provide a possible alternative for someone who absolutely (due to geographic proximity) cannot access a trained impressionist or audiologist. However, given the affordability and availability of having professional impressions done (some companies even do them free toward purchase during industry tradeshows such as NAMM and InfoComm), the convenience and economic considerations of getting pro impressions make a lot of sense. And aside from the health considerations of inexperienced hands properly placing the dam and molding material into your ears, impressions from an experienced practitioner are far more likely to be properly made, with a higher degree of ear sealing efficiency. If you're about to invest in a quality set of CIEMs, starting out with a great set of pro impressions is definitely worth the extra expense or inconvenience. Our H.E.A.R. partner and friend, audiologist Jami Tanihana (Los Angeles), would agree that attempting to make an ear impression yourself could end up costing you more, with bad results and safety issues, than if you went to your local audiologist to begin with.
through the gaps and may reduce the sound quality of the monitor. In order to have more of a full bass response, you will need to have a good seal. Fit and comfort directly correlate to how good your monitors will sound. The better the fit, the better the feel and sound. In-ear monitors need to block 26 dB of ambient noise and fit snugly. Finding the right ear impressionist saves you time and problems down the line. Impressions for in-ear monitors are different than impressions for hearing aids. Hearing aids are much smaller than in-ear monitors, and their impression process doesn't need to capture a full-shelled earpiece with a completely-in-the-canal (CIC) earmold, which is all-important for best results with CIEMs. It's crucial that you feel confident and comfortable with your choice of an ear impressionist. Call around. Find an audiologist who is music savvy and has made impressions for in-ear monitors before.
Kathy Peck is the co-founder and executive director of H.E.A.R. (Hearing Education and Awareness for Rockers), a non-profit organization that strives to prevent hearing loss and tinnitus among musicians, industry pros and music fans. Contact her at hearnet.com.
Onstage strings, meaning a real string section, are nothing new to rock shows, although this addition to the current Rush outing is a definite change. Another change for the continuing Clockwork Angels tour (the second leg of which kicked off on April 23, 2013, with Rush delighting a sold-out house of more than 16,000 fans in Austin's Frank Erwin Center) was the band's return to a number of markets not visited during the tour's first leg, which began last September. At the same time, some things (thankfully) don't change, with the lineup of vocalist/bassist/keyboardist Geddy Lee, guitarist Alex Lifeson and drummer Neil Peart continuing to churn out their own brand of intelligent, intricate and invigorating hard rock. So far, so good.
The Problem
But this first time that Rush had a string section might have been fraught with difficulties. More typically, in such instances, the string players are conveniently located off to the side, away from the amps, drum fills and Peart's elaborate 75-piece drum kit. So when Brad Madix signed on for his fifth full tour as FOH engineer for Rush this past year, he was faced with a sizeable dilemma. Not only would this iconic Canadian prog-rock power trio have a string section, but the players would be performing right on stage with Rush. I've worked with strings in the past, but
John Arrowsmith
The Solution
The first challenge was to find a solution that simply clamped on (as opposed to replacing a bridge or gluing something to the instrument). There were a few different solutions available, all of which amounted to some version of a piezoelectric pickup mounted to the instrument in one fashion or another. Then came the challenge of impedance matching and preamplification. That's when things got a little dicey. All of the piezoelectric pickups are very Hi-Z. In fact, our first choice topped the list at 10 million ohms! Obviously, we were going to need a DI for these, and it was probably going to have to be an active one, and even then not
cludes a variety of Radial DI models used for keyboards, drum electronics, samplers and guitar effects, as well as the company's. It's nice that there are passionate designers and engineers out there paying attention to these details.
Last month, I wrote about the role played by using near-field studio monitors during band rehearsals. This month I will concentrate on show applications for these small but very useful speaker systems. During rehearsals, I ran these speakers off the console's monitor bus as one would in a studio. Once we are out on the road and working, I use a different output configuration to drive the near-fields. Because I don't want the output signal interrupted whenever I PFL an input channel or AFL a VCA group or output bus, I put the near-fields on their own stereo matrix output, instead of using the console's Monitor Out. The monitor function is reserved solely for headphones.
This up-front console placement during the setup for the James Taylor show was later relegated to a less-than-ideal position where near-field monitors were a must.
Signal Flow
The Avid VENUE console has eight stereo output matrices available. I use one each for Main Left/Right Out, Near-fields, Record, and Video. That still leaves four free for auxiliary uses. If I am using the Lake processing equipment I customarily employ on tour, I create the following groups on the main page of the tablet computer. On subsequent pages, these large groups are then subdivided in descending steps down to the most basic individual control elements. The following are the main control groups I use for a theater show: All PA System, All PA Main Left/Right, Flown Main Left/Right, Ground Mono Subs, All Frontfills, Mono Inner Frontfills, Mono Outer Frontfills, Mono Under Balcony, Mono House Delay and Near-Fields. This configuration gives me maximum control over gain, crossover drive levels, EQ and delays for the various speaker systems deployed in the performance space. It also allows me to mute the entire sound system with one click if I need to do a bit of private listening on the near-fields. The Lake system's AES digital inputs are fed by a combination of stereo and mono matrix busses soft-patched to the VENUE's eight AES outputs.
a camera shop or sporting goods store. I often use a good quality laser rangefinder to determine the various distances from the main array to any of the locations at which I am listening. A device that computes ground plane linear distance, height and hypotenuse distance is the most versatile and will yield the most accurate results. Shooting the distance and then dividing by 1.1 gives me a good ballpark number from which to start lining up the various audio elements I may be hearing. Personally, I like tuning and aligning the entire system setup by mind, eye and ear, but computer-assisted systems such as SMAART or SIM are perfectly valid tools as well. I always set up the near-field delay and level last, after I am certain that the arrival times of all the other audio sources already line up in the most advantageous way. As a general rule for determining when all the delays are properly set, the point of reference should always be the stage. Audience members should be minimally conscious of the fact that they are listening to distributed sound sources. For under-balcony or upper balcony arrays, I always set delay times three or four milliseconds long to ensure that aural focus is directed toward the performance and not the ceiling. I next set the volume level right at the point where it cannot be determined if the delayed speakers are actually in use unless they are turned off. I then configure the delay on the near-fields in exactly the same way. However, I run the near-fields at a hotter level because a mix position located deep under a balcony is adversely affected by several acoustic problems. By far, the two most difficult hurdles one must overcome are the loss of high frequency content and the overall attenuation/compression created by the architectural structure. Most FOH mixers have a volume and frequency comfort zone in which the show just sounds right. I use the near-fields to make up for the volume and high-end loss and strive to best simulate the sound of an acoustic environment in free space. This keeps me from mixing too loud for the rest of the room or making the system too bright.
rectangular room that is approximately three times wider than it is deep. This less-than-perfect circumstance is immensely ameliorated by the use of near-fields. We have all encountered this problem, especially at corporate events that consider sound and lighting control areas as unwelcome incursions into the decorating scheme. As soon as I am made aware that a FOH mixing position has been located where it's impossible to hear one side of the PA, I always require that near-fields be provided. That way I don't have to trash the detailed stereo mix that I've carefully created and revert to mono just so I can fully hear the balance of all the instruments. At this particular event, there are two main left/right four-deep arrays of Nexo Geo systems. On each side, there are additional two-deep arrays hung about 20 to 30 feet further offstage. My greatest challenge has been manipulating the various arrival times to a location that is way outside the audio sweet spot. I will also use near-fields in a particularly
reverberant space so that I can resist the temptation to turn up the PA in a futile attempt to hear better articulation. More volume from the sound system has never defeated a bad room. It's a far better idea to adjust the setup to best accommodate a negative acoustic space and then trust your system engineer to perform all of the on-the-fly tweaks during the show. I use the near-fields to get into a comfort zone and I try to stay there during the show. Excessive reflectivity and reverberation are generally problems one encounters during an arena or shed tour. In most of these larger venues, there is adequate setup space and distance isolation from the audience to use larger near-field systems. I therefore carry the 12-inch Tannoy T12 Dual Concentric monitors on most arena/shed tours. Judicious use of good, reliable near-fields will help to keep you focused and out of the negative headspace that bad sounding rooms can all too easily impart. One final reason for regularly having near-fields at FOH is the artistic requirement for constantly updating a mix. When I have the time and space in an arena or shed, I will set up the near-fields and run a bit of the previous night's show so that I can have instant feedback on the progress of the overall mix, the correctness of snapshots and the effectiveness of effects, inserts and equalization. I revert to the system EQ I created at rehearsal on the 1031s and the sub, if those are what we are carrying, or the full-range rehearsal EQ on the Tannoys. One should never stop crafting the presentation or sit on one's laurels, even mid-tour. I often marvel at how the luxury of carrying a multitrack recording system has now become a day-to-day necessity. What used to require its own truck now fits easily in a 17-space shock-mounted rack. What an age we live in! Safe travels, everyone! Catch David at dmorgan@fohonline.com
The Setup
Not surprisingly, one of the most valuable pieces of gear in my FOH workbox comes from
David Morgan
THE BIZ
notification, and public education related to hazardous weather events. Linkin Park's current world tour was the first to have been deemed Storm Ready, as have production staging systems vendors Brown United and Mountain Productions. Jim Digby, Linkin Park's production manager, tells me that he's heard that several other touring artists have begun the process of accreditation for that status for this year.
Weather Concerns
Weather is always an important safety variable, and it's the one factor that seems the most difficult to predict. The storms that led to structural collapses at the Big Valley Jamboree in Alberta, Canada (Aug. 1, 2009; one fatality, 75 injuries); Ottawa Bluesfest (July 17, 2011; three injuries); Indiana State Fair (Aug. 13, 2011; seven fatalities, 58 injuries) and the Pukkelpop festival in Belgium (Aug. 18, 2011; five fatalities, 140 injuries) all packed winds that took organizers by surprise. Whether it was through heightened caution or improvements in safety procedures, training, rigging, storm monitoring or simple luck, there were far fewer fatalities and injuries from weather-related accidents during the 2012 touring season. Winds strong enough to tear down scaffolding erected for advertising banners in Cape Town, South Africa (Nov. 7, 2012; one fatality; 19 injuries) did not affect Linkin Park's stadium show; the band, unaware of the chaos unfolding near the stadium's remote parking areas, performed as planned. More recently, winds were blowing in Miami's Bayfront Park on March 14, 2013 and
may have been a factor the night LED panels fell some 30 feet from the main stage structure during the setup for the Ultra Music Festival, seriously injuring two crew members. But the worst staging accident of 2012, involving the collapse of part of the massive staging setup for Radiohead's concert in Toronto's Downsview Park (one fatality, three injuries), occurred on a warm, sunny day, with no more than a light breeze. Other events have sidestepped high wind risk either through an abundance of caution, such as Insomniac Events' decision to postpone, then cancel, performances on Night Two of the three-day Electric Dai-
THEORY & PRACTICE
One of the more serious issues facing engineers who travel without production is consistency of audio quality, or lack thereof. When your entire audio chain is different every night, it's tough to maintain a high standard of sound, and that's not taking into consideration the drastic variations in acoustics from venue to venue. In an ideal world we'd all be able to use our preferred gear every night, but the reality is that when you're relying upon promoter-provided production, it's sometimes peanuts, sometimes shells. Even if you can't pack your favorite 48-input desk into a carry-on bag, you can carry (or check as baggage) a small rack with a single high-quality channel designated for your star performer. A huge selection of vocal channels is offered from a wide variety of manufacturers. Live sound engineers can thank the recording guys for this, since it's become the norm for most studios to have a few premium channels for tracking, whether or not they can afford a megabuck mixing console.
By Steve LaCerra
from the mic to the preamp, which is always a good thing. But several issues crop up: 1. Gain control of the vocal channel is now strictly in the hands of the monitor engineer. This can present difficulty in situations where the FOH desk and the monitor desk are very different (e.g., one is a modern digital desk and the other is a vintage analog desk). Perhaps one desk has less headroom than the other and requires a lower signal level from the output of the vocal channel. 2. Compression and EQ requirements for the monitor mix may be drastically different from what the FOH engineer desires, especially when the band is using in-ears. 3. Unless the vocal channel has multiple outputs, the mic splitter will now be operating at line level. Transformer-isolated splitters in particular may not have the
PreSonus Eureka, which employs a Saturate control for emulating the sound of analog tape. The TC-Helicon VoiceWorksPlus boasts voice modeling as well as modulation and harmony effects. Typically, output from the vocal channel is patched to the line input of a channel on the mixing desk, bypassing the mic pre, which is often the weak link on inexpensive (and sometimes, even expensive) mixers. You may be able to bypass even more of the console's channel circuitry by using the channel insert return instead of the line input. Desks that provide separate jacks for insert send and insert return make this easy: simply use the insert return jack as a line input. Consoles using a single TRS insert with tip = send, ring = return (or vice-versa) will require fabrication of a cable that leaves the send contact on the TRS disconnected. That being the case, keep the cable short: TRS send/return insert jacks are by nature unbalanced.
SOUND SANCTUARY
If you go to church regularly, you probably know The Ten Commandments. Even if you don't difficult to learn. And if we break down all the tasks individually, the entire process becomes simple and fun. Okay, let's get on with it.
The first rule is to turn your sound system on from upstream to downstream and to always turn your system off from downstream to upstream. Or, in simpler terms, turn your mixer (and any outboard gear) on first, then your power amps or powered speakers. When shutting down the sound system, turn your power amps or powered speakers off first, then your mixer. Always turn things on and off in this order. You will avoid speaker pop and speaker damage. To take this rule a little further, when you turn power amps off, wait a few seconds (10 to 20) before turning off the mixer. Many power amps have internal capacitors that will store a voltage charge (the source of the pop) for a few seconds. So, don't turn off that mixer too soon. Second, whenever you plug a microphone or any cable into your snake or mixer, mute the channel of the mixer that mic or cable is being plugged into. A mic (or cable) plugged into an open mixer channel can cause a nasty pop that's capable of damaging your speakers. If your mixer does not have mute buttons, just pull the fader completely down. There is one more thing to remember in the realm of pops. (I'll call this three.) Pull down faders when engaging phantom power. If your mixer has individual phantom power for each channel, then pull the fader back before you give it the phantom power. Should your mixer have a global phantom
FOLO: First-Off-Last-On
Walter Bush (at the console) explains some fine points of digital mixing on the Yamaha M7CL with volunteers at Destiny Christian Church in Yuma, AZ.
es out there only use one length of XLR cables. If that is your church, more power to you. However if your house of worship uses cables of varying length, then color coding is invaluable. Wrapping cables properly will be my eighth suggestion today. There is only one way to wrap a cable so that it will unwrap smoothly and last longer. It is called the inside twist or over/under method. This technique involves holding the cable end in one hand (XLR or other) and creating a single round loop with the other hand. Then create another loop in the opposite direction by rolling the cable in your fingers. This is a bit difficult to explain without some sort of video clip, but you will create a figure-8 on top of itself. A properly wound cable will ultimately become trained to go into that shape more easily every time you wind it. This cable will be easier to unwind and last longer. Do not wrap a cable using
your forearm as a guide and don't wrap it like it was an electrical cable. Number nine is very simple but extremely important. Arrive at church early for every service. Even if you mix the same preacher and the same worship band day in and day out, you need to check out your mic placement, your mixer settings and outboard gear settings, etc. Chances are they will be the same, but better to find a problem before a service than during one. The tenth suggestion is to keep your gear clean. This is the only rule that should be a commandment, because clean gear sounds better longer. Use a vacuum or compressed air to clean your mixer, outboard gear, power amps and the immediate space that this gear occupies. Remember, cleanliness is next to Godliness. Have a comment? Contact Jamie Rio at jrio@fohonline.com.
FOH-AT-LARGE
ng, electric or some other technical profession. Audio engineering happens to be a great technical profession for poets, musicians and truth-seekers, as it seems to lend itself better to the artistic and idealistic type than some of the other technical careers available to emerging philosophers. That's not to say that philosophy is too heady for the practitioners of other technical disciplines. However, we artistic types may have a little more time to be philosophical and to wax poetic as we ride the bus from gig to gig, or as we wait for the other philosophers on the crew to finish the task of focusing their lights. The great thing about philosophy? While it can be a noble, intellectual pursuit, it is not exclusive to merely the lofty and intellectual people of this world. Everyone can have a philosophy, and most people usually have more than one to which they adhere and can expound upon; as witnessed by anyone who has taken a cab in New York City before the advent of the back seat television.
By Baker Lee
A Sound Philosophy
Illustration by Andy Au
Philosophy, Defined
From the Online Dictionary, philosophy is defined as:
1. A set of ideas or beliefs relating to a particular field or activity; an underlying theory; (i.e., an original philosophy of advertising).
2. A system of values by which one lives; (i.e., an unusual philosophy of life).
3. Any system of belief, values or tenets.
4. A personal outlook or viewpoint.
5. The rational investigation of the truths and principles of being, knowledge or conduct.
6. A system of philosophical doctrine: (i.e., the philosophy of Spinoza).
7. The critical study of the basic principles and concepts of a particular branch of knowledge; (i.e., the philosophy of science).
8. A system of principles for guidance in practical affairs; (i.e., a philosophy of life).
Perhaps the reason it's so difficult to make a living as a philosopher is because it's so hard to define. By the way, the Spinoza of item #6 refers to Baruch Spinoza, who lived in the 1600s and is considered one of Western philosophy's most important thinkers. It should also be noted that other than being a well-regarded philosopher, he was also the recipient of a large inheritance. He also had a steady job as a lens grinder and was known to be quite talented at his trade.
Applied Philosophies
As a trade, philosophy itself is not a high-paying vocation (it's ranked just slightly above audio engineer), although if one is successful at their chosen profession, they can then write books and appear on talk shows to explain the philosophy behind their success. There is an abundance of self-help literature available these days regarding sex, love, diet, health and money; and many of these writings espouse not only a practical means to achieving happiness in regard to the aforementioned topics but also a philosophical way to vanquish the demons that keep one from being fulfilled in their endeavors. These way-of-life solutions are not just whimsical thoughts to ponder, but rather are applied philosophies that (if followed correctly) will lead to positive results in one's life. Other lofty ideals such as democracy, capitalism and socialism, to name a few, are also applied philosophies, which, if adhered to correctly, should provide excellent results. The same positive results should also apply if one should attach themselves to a particular set of religious beliefs. Philosophically speaking, these theoretical suppositions work just fine, at least on paper, yet in their everyday application, perfection is hard to attain. Also, considering the fast-paced life we all lead, who really has time to read and disseminate all the intricacies of the deep philosophical writings that accompany these intellectual and spiritual concepts?
Philosophical Slogans
We are all inundated with sound bytes and advertisements, which often appear to be statements purporting to be a given philosophy. A slogan is elegant in its brevity and is often easier to remember than a whole treatise on the alleged subject and, just because it is concise, does not denote that it is less significant than the book on the same subject. Do unto others as you would have them do to you comes to mind as one of the more notable philosophies in a nutshell. Eric Hoffer, a well-respected 20th century philosopher, said, "Every great cause begins as a movement, becomes a business, and eventually degenerates into a racket." I assume he was speaking about many applied philosophies. Possibly it might be best not to rely too heavily upon the untried philosophies and rather go with those that have been proven from a lifetime of effort, although sometimes a high ideal works out even if it's not precisely as written. Margaret Thatcher said, "Europe was created by history. America was created by philosophy." A bit snide, I would say, but not altogether untrue. Another quote, "Rule number one, never lose money. Rule number two, never forget rule number one," coming from anyone but Warren Buffett would seem mundane, but since Mr. Buffett has such a well-proven track record, his tongue-in-cheek remark carries some philosophical weight. Alice Roosevelt Longworth said, "I have a simple philosophy: Fill what's empty. Empty what's full. Scratch where it itches." Philosophically concise, to the point, and eloquently stated, although I wouldn't be surprised if Ms. Longworth was also guided by a more complicated philosophy as well.
A Sound Philosophy
This brings me around to the point, once again, that everybody lives their lives according to one or more philosophical viewpoints. Sometimes we choose a philosophy that we strive to achieve, and other times we cite a philosophy that explains our actions. That said, I started thinking about the philosophy of sound. After all, there is a philosophy of politics, a philosophy of mathematics, and so on. I have read some of these very heady dissertations and all I can say is that I see why certain philosophies are written by either cave-living gurus or trust fund recipients. Most of us need to make a living and don't have the time or energy to contemplate such heady thoughts as whether or not sound exists on its own plane outside the physical realm, but after putting in some time in the audio business, I do have a few slogans that I think can be called a sound philosophy. These are a few of the sound slogans by which I work and live:
1. Control your environment and do not let your environment control you.
2. Keep a low input and a high output.
3. Follow signal flow: input never goes into output.
4. What happens on tour stays on tour.
5. Know your limits.
6. Open the channels, keep it simple, and know when to stop mixing.
7. Feedback is not an option.
8. I'm a technician, not a magician.
9. Neatness counts.
10. Stupid gigs should be compensated with stupid money!
11. All gigs are stupid.
And my friend Joe Biegel adds this philosophy to live by: Never become the ground. Good advice!
Contact Baker Lee at blee@fohonline.com.
Web Tools Platform 2.0 Plan
Eclipse Web Tools Project DRAFT 2.0 Plan
Revised draft April 25, 2006 (by Tim Wagner / based on WTP Project 1.5 Plan and Eclipse Project DRAFT 3.3 Plan)
Please send comments about this draft plan to the wtp-pmc@eclipse.org mailing list.
This document lays out the feature and API set for the next feature release of Eclipse Web Tools after 1.5, designated release 2.0. This document is a draft and is subject to change; we welcome all feedback.
Web Tools 2.0 will be compatible with / based on the Eclipse 3.3 Release, including projects such as JSF and Dali that are currently part of the "Callisto '07" update site.
Release milestones and release candidates
Release milestones occur at roughly 6 week intervals in synchronisation with the Eclipse Platform milestone releases (starting with M1) and will be compatible with Eclipse 3.3 builds. See Callisto Simultaneous Release for the Eclipse Platform schedule from 2006, on which the 2007 calendar will be based. (Note that this is not yet available, and that the dates below are simply a placeholder carried over from last year.)
The milestones and release candidates are:
- M4 - January 13, 2006
- M5 - March 3, 2006 (planned API freeze)
- M6/RC0 - April 14, 2006 (This is Feature Freeze except for JSF and Dali; this 4/14 date is also called both RC0 and RC1 in the Callisto plan.)
- RC1 - April 21, 2006
- RC2 - May 5, 2006 (tiny grace period where any safe fix can be made)
- RC3 - May 19, 2006 (component lead approval required for all fixes)
- RC4 - May 31, 2006 (planned code freeze, PMC approval required for all fixes)
- RC5 - June 20, 2006 (PMC approval and adopter sign-off required for all fixes)
- RC6 - June 28, 2006 - Golden Master
- Release - June 30, 2006 - WTP 1.5 Release (part of the "Callisto" joint release)
Target Operating Environments
Eclipse WTP is built on the Eclipse platform itself.
Most of the Eclipse WTP is "pure" Java™ code and has no direct dependence on the underlying operating system. The chief dependence is therefore on Eclipse. The 2.0 release of the Eclipse WTP Project is written and compiled against version 5 of the Java Platform APIs, and targeted to run on version 5 (aka 1.5) of the Java Runtime Environment, Standard Edition.
Eclipse WTP is mainly tested and validated on Windows platforms, but it should run on all platforms validated by the Eclipse Platform project (note that the link below is outdated and a new version is not yet available):
- Binary compatibility with 1.5 and 1.0 projects - users should need to take no special actions to use projects
from either of these WTP releases with the 2.0 version
API Compatibility:
- WTP will preserve (public, declared) API compatibility with the 1.5 release
Eclipse WTP Framework 2.0 release.
Common goals
Adopter readiness
- productization support [125751]
- improve feature split Subsystem and Features document
- improve API coverage (convert additional provisional areas to APIs)
Improved Provisioning of Third Party Content
- Code folding
- Improved formatting, e.g. whitespace handling, WYSIWYG (for DITA, DocBook, etc.)
- Run in Eclipse RCP
- Large document performance enhancements
Architectural harmonization
DTP
Adopt DTP. Remove all Data access tools from WTP and instead use the corresponding components from DTP. We should take a two-phase approach.
- Phase 1 - Achieve functional parity with WTP 1.5, i.e. get back to where we currently are, but using DTP.
- Phase 2 - Exploit new DTP capabilities. DTP has much broader scope than the WTP 1.5 Data tools, e.g. in connection management and server exploration. We need to do an architectural assessment of current WTP capabilities and adopt superior alternatives available in DTP.
Web Services Support
- New WS-I Profiles, e.g. RAMP
- WS Security
- WS Policy
- Axis 2.0
- SOAP 1.2
- WSDL 2.0 - adopt Apache Woden 1.0
WSDL 2.0
WSDL is a key Web service artifact. WSDL 2.0 is the W3C standard and has many new features that align Web services with Web architecture, especially in the area of HTTP support and REST style Web services. All WTP tools that currently process WSDL 1.1 should be upgraded to handle WSDL 2.0. The integration should be seamless in the sense that the tools should understand both WSDL 1.1 and WSDL 2.0. All wsdl files will use the same file extension, i.e. *.wsdl. These tools include:
- WSDL editor - editor should detect namespace
- WSDL validator - validator should detect namespace
- Web service explorer
- Web service test tools - the WS-I test tools should be extended to support services described by WSDL 2.0 based on the W3C standard, e.g. verify the message log conforms to a WSDL 2.0 description
- Web service wizard - e.g. via integration with Axis 2.0 to support top-down, bottom-up, and client code generation based on WSDL 2.0
J2EE Standard Tools
Java EE 5
- JPA/Dali
- JSF
- JSF 1.2
- JSP 2.1
camera_take_burst()
Take multiple photos in burst mode.
Synopsis:
#include <camera/camera_api.h>
camera_error_t camera_take_burst(camera_handle_t handle,
                                 int burst_count,
                                 void (*shutter_callback)(camera_handle_t, void *),
                                 void (*raw_callback)(camera_handle_t, camera_buffer_t *, void *),
                                 void (*postview_callback)(camera_handle_t, camera_buffer_t *, void *),
                                 void (*image_callback)(camera_handle_t, camera_buffer_t *, void *),
                                 void *arg,
                                 bool wait)
Arguments:
- handle
The handle returned by a call to the camera_open() function.
- burst_count
The number of frames to take in a single burst.
- shutter_callback
A function pointer to a function with the following signature: void function_name(camera_handle_t, void *). The callback gets invoked when the shutter fires. You can use NULL if no function needs to be called.
- raw_callback
A function pointer to a function with the following signature: void function_name(camera_handle_t, camera_buffer_t *, void *). You can use NULL if no function needs to be called.
- postview_callback
A function pointer to a function with the following signature: void function_name(camera_handle_t, camera_buffer_t *, void *). The callback gets invoked when the postview (review) image data is available. This callback is used to provide a preview-sized copy of the photo. Typically, the preview-sized photo is used to provide visual feedback before the final image is available. You can use NULL if no function needs to be called.
- image_callback
A function pointer to a function with the following signature: void function_name(camera_handle_t, camera_buffer_t *, void *). The callback gets invoked when the final image data is available; the image buffer is provided as the second argument (see the Description below). You can use NULL if no function needs to be called.
- arg
A pointer that is passed as the last argument to each of the callback functions.
- wait
A boolean value that indicates whether the function blocks or not. If set to true, this function is blocking and will return once all specified callbacks have returned. If set to false, this function call is non-blocking and returns before all specified callbacks have returned.
Library:libcamapi
Description:
Before you take a photo, ensure that you have configured the viewfinder, set the output properties for the photo, and started the viewfinder. This function can only be called if the CAMERA_FEATURE_BURST is available. You can determine whether the feature is available by calling the camera_can_feature() function.
If you want an application to save the photo to a file, then this function should be invoked with the image_callback argument set. When the image_callback is set, the image buffer is provided as the second argument to the callback function. Then, in the invoked image_callback function, you can save the buffer to a file. If an application wants to save burst photos to disk, then the image_callback argument should be set. When this callback is invoked, if the image cannot be saved to disk before the next frame arrives, then the user should buffer frames instead and save the buffers to disk after the entire burst is complete.
The callbacks that you set for this function are called for each photo and therefore, are called multiple times.
Since burst mode captures images in rapid succession, you should choose an appropriate moment to play the shutter sound rather than play the shutter sound repeatedly.
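To make the callback plumbing concrete, here is a minimal sketch of a burst capture. It is not part of the reference page: the callback bodies, the helper function name and the burst count are illustrative, it assumes the handle came from camera_open() with the viewfinder and photo output already configured and started as described above, and it assumes camera_can_feature() takes the handle and the feature constant (the page does not spell out that signature).

#include <stdio.h>
#include <stdbool.h>
#include <camera/camera_api.h>

/* Invoked once per frame when the shutter fires; a reasonable place to
 * trigger a single shutter sound for the whole burst. */
static void shutter_cb(camera_handle_t handle, void *arg)
{
    printf("shutter fired\n");
}

/* Invoked once per frame with the final image data in buf; buffer the
 * frames here and write them to disk after the burst completes. */
static void image_cb(camera_handle_t handle, camera_buffer_t *buf, void *arg)
{
    printf("image frame received\n");
}

/* Hypothetical helper: capture a five-frame burst and block until done. */
static void take_burst_of_five(camera_handle_t handle)
{
    /* CAMERA_FEATURE_BURST must be available for camera_take_burst(). */
    if (!camera_can_feature(handle, CAMERA_FEATURE_BURST)) {
        printf("burst mode is not supported on this camera\n");
        return;
    }

    camera_error_t err = camera_take_burst(handle,
                                           5,          /* burst_count       */
                                           shutter_cb, /* shutter_callback  */
                                           NULL,       /* raw_callback      */
                                           NULL,       /* postview_callback */
                                           image_cb,   /* image_callback    */
                                           NULL,       /* arg               */
                                           true);      /* block until all callbacks return */
    printf("camera_take_burst returned %d\n", (int)err);
}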
Now, the ints x, y and z all make it here, but I can't make them go to the arrays. Code:
#include "accounts.h"
class Bank:Account{
sav_acct *Ps;
chk_acct *Pc;
time_acct *Pt;
char name[20];
char addy[50];
public:
Bank(int x, int y, int z);{
Ps = new sav_acct[x];
Pc = new chk_acct[y];
Pt = new time_acct[z];
}
For example, when I do Bank B1(200, 400, 200),
it doesn't pass 200 to the sav_acct[] array.
**Note:** sav_acct and the other account classes are defined in accounts.h.
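One likely culprit is the stray semicolon between the constructor's parameter list and its body (`Bank(int x, int y, int z);{`), and the snippet as posted also never closes the class or frees the arrays. A minimal, self-contained sketch of what the constructor probably should look like is below; the three account structs are hypothetical stand-ins for the ones in accounts.h (not shown here), and the Account base class is omitted for brevity.

#include <iostream>

struct sav_acct  {};   // placeholder for the real class in accounts.h
struct chk_acct  {};   // placeholder
struct time_acct {};   // placeholder

class Bank {
    sav_acct  *Ps;
    chk_acct  *Pc;
    time_acct *Pt;
    int nSav, nChk, nTime;
public:
    // Note: no semicolon between the parameter list and the body.
    Bank(int x, int y, int z)
        : Ps(new sav_acct[x]), Pc(new chk_acct[y]), Pt(new time_acct[z]),
          nSav(x), nChk(y), nTime(z) {}

    // The arrays are owned by Bank, so release them when it goes away.
    // (A real class would also deal with copying or forbid it.)
    ~Bank() {
        delete[] Ps;
        delete[] Pc;
        delete[] Pt;
    }

    int savings_count() const { return nSav; }
};

int main() {
    Bank B1(200, 400, 200);
    std::cout << "savings accounts allocated: " << B1.savings_count() << "\n";
    return 0;
}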
How to Make a Website
A Comprehensive Guide to Building a Website From Scratch, Born Out of My Frustration Trying to Answer the Simple Question: How Do I Make a Non-Stupid Website? Includes Notes about How This Website Was Built and Source Code
by Oliver; Jan. 13, 2014
Introduction
This article is no longer up to date
Since just about everything in the world is getting sucked—Matrix-like—online, knowing web-design is an increasingly useful skill to have. Although you can find a zillion well-written tutorials teaching HTML, CSS, and javascript down to almost infinite granularity (e.g., W3Schools), from my perspective surveying the scene as a total beginner there seemed to be a lack of comprehensive guides showing you how to put everything together—the big picture. Given the innumerable approaches to this problem, each supported by a different faction in Google's search results, which path do you take? This article was born out of my frustration trying to answer the simple question: how do I make a non-stupid website? In the flood of information online, I had a lot of questions that I couldn't find the answer to in any one place. For example,
- Where do you begin?
- What are the best-practices?
- HTML is merely a markup language, so how do you start programming your site?
- What's a static versus a dynamic website?
- How do you structure your site, its code, and the directories that underlie it?
- Should you use a fancy content management system, like Drupal or Wordpress, a web framework, like Rails or Django, or can you make something just as good going it alone?
- How can you host your website?
What Technology Do I Use Again?
Technology's always on the move and hard to pin down but, surveying the scene as of 2014, I see Google Blogger, Tumblr, WordPress, Amazon Web Services, Drupal, Rails, Django, Flask, Nodejs, Angularjs, Backbonejs, Emberjs, Bootstrap, Heroku, jQuery. WTF! Appropriate reaction: crawl into a Saddam-style spider hole, belly up, and die. The items in this list are not in a single category but, still, how do you pick your battle in this overwhelming and overpopulated landscape? If you simply want a blog, as we remarked in the last section, just go with WordPress or Google Blogger and dispense with any coding. If you want to program a web application, one important consideration we'll discuss below is whether your code should run on the client-side or the server-side.
If you can get away with it, running things on the client-side—with "the lingua franca of the web" Javascript/jQuery—is very appealing because it's independent of how your backend server looks and any package dependencies or installations. The scripts are simply downloaded onto the client's computer and his web browser executes the code for you (two interesting consequences of this fact are that (1) the app will work sans internet connectivity and (2) the client receives the entire source code). However, this is not always feasible. If your page needs to, say, access a large database of articles or product descriptions or anything else, you can't download this onto the client's computer everytime he visits your webpage. In this case, a web framework like Rails or Django is what you should use, which is the storyline this article develops. These frameworks run code on the server-side, which means you, the client, are controlling programs running on a remote computer somewhere. Contrast this app, which calculates the distance between airports, with this one, which performs calculations for a particular HIV assay based on user input. Can you guess which app is all (client-side) javascript and which app is running (server-side) Django? Hint: which one needs access to a lot of data the user isn't supplying? Read more about the airport app.
Let's introduce another layer of complexity. What if you're an entrepreneur looking to make an app that will, you hope, have a large user base? An important question beyond the scope of this article is, how will your website scale? Will it handle traffic from 1,000 users just as well as it handles traffic from 100 users? I haven't used Heroku myself, but I know that many start-ups favor it for its reputedly hassle-free scaling and git-based code deployment. This is the approach explored in Coursera's excellent Startup Engineering (Stanford) by Balaji S. Srinivasan and Vijay S. Pande (I have a short post about it here).
There's no single answer to the What Technology? question, but I hope this provides a little bit of orientation in this deluge of choices—back to the basics now!
A Problem with Many Facets
Making a website is a problem that contains many distinct sub-problems. One of these hydra heads is how it comes into existence in the first place. That's something along the lines of:
Server (hardware) --> HTTP Server (software) --> Your Website

If you're going to do this 100% from scratch, you can't even think about making a site until you've gotten over the first two hurdles. In practice, a server is a computer that's always on, and thus able to serve up your website any time someone requests it. A wise choice is outsourcing this task to a company you trust can handle it, like Amazon Web Services. You can read a thorough account of how I set up mine at How to Host a Website on the Amazon Cloud (AWS). An HTTP Server is a program, such as Apache, that transforms that machine that's always on—your hardware server—into something able to host pages. You can read about one approach to this problem at Setting Up Nginx, uWSGI, and Django on Amazon's EC2. I'll assume you've already done these two things and focus on the last piece, your website.
It's useful to consider a website in terms of three independent categories:
- The Code
- The Graphic Design
- The Content
HTML, CSS, JS, and the Long Ingredients List of a Website
A difficulty for new website builders is the same one we ran into last section: multifacetedness. The simplest websites use only HTML. If all you need is a glorified text file—think of a placard with words and pictures floating in cyberspace—it will suit your purposes very well. However, most basic websites use a combination of three languages which you need to learn merely to get your foot in the door:
HTML
HTML is the main actor and the other two are helpers. As we've said before, it's a relatively simple markup language. This means an HTML file is basically just a text file, with special tags around certain words to mark them up—say, make them italic or reference a web link. A supremely important point is that HTML is not a programming language. This is frustrating if you're used to basic human-rights like variables, conditional logic, loops, and functions. What would be elementary in other languages, like the principle "Thou Shalt Not Repeat Code", suddenly becomes hard in HTML. It's an accident of the web's evolution that HTML still has the central role that it does, and a lot of web programming seems to be overly complicated because it's trying to turn this simple tag language into a dancing marionette. If the web had been invented last year, it would definitely not have been entrusted to such a primitive standard-bearer. How else to explain why you need three languages for a garden-variety website? The lack of self-sufficiency demonstrates that it was not designed from first principles with efficiency in mind. For sophisticated websites, "real code" whirs away in the background and translating everything into HTML is the last step. Yet the lack of variables, logic, and loops is still so problematic that modern web designers have added these things into HTML by creating templating languages, as you can see with the Django template language or Embedded Ruby (ERB) in Rails. HTML is excellent as a pure markup language, and you can't not know it. Just be aware of exactly where it sits in the web programming ecosystem. It does not and cannot hold the whole web on its shoulders. I've included a short HTML primer below.
CSS
CSS is the language of aesthetics. You can use it to set the layout of your page as well as to set the font, line-spacing, background color, etc. of various content blocks. You can assign a unique id or a non-unique class to any HTML tag and this is CSS's entry point into the picture. CSS has a lot of quirks and is unlike most other languages. It gets tricky surprisingly fast. You can read more about CSS in the primer below.
Javascript
Javascript—or jQuery, a pre-digested, self-sufficient Javascript library—is like pixie dust. Sprinkle it on your website and magic happens: menus can pop up, text can shrink, pictures can animate, and so on. We'll talk more about dynamic websites below, but javascript is one way of making your page dynamic. Javascript is client-side scripting, as opposed to server-side scripting, which is to say your javascript runs on the user's computer, not your server's. Javascript excels at programming stuff on a given webpage—manipulating HTML tags by type, id, or class. But it's not useful for the other thing people associate with the word dynamic, site-wide structural programming. There's a very small jQuery primer below.
Only three languages, you say—that's not bad. However, to make a more advanced website you have to know still more—at least one conventional programming language, like Python, and a database language, like MySQL. That's five!
HTML Primer
If you're new to HTML, W3Schools is a good place to start, as always:
HTML tags example
Mark up is surrounding text with tags to give it additional properties. It's easy. For example, the following code:
<p>This line is within paragraph tags.</p>
<p><i>This</i> is italic.</p>
<p><b>This</b> is bold.</p>
<h1 id="myHeader">This is a header</h1>
<p><a href="">This is a link to The New York Times</a>.</p>
<p><a href="#myHeader">This is an <i>internal page link</i> to the tag with id == myHeader</a>.</p>

will produce:
This line is within paragraph tags.
This is italic.
This is bold.
This is a header
This is a link to The New York Times.
This is an internal page link to the tag with id == myHeader.
- <p> paragraph
- <i> italics
- <b> bold
- <h1> header 1
- <a> link
There are on the order of 100 other HTML tags. You can see a complete list here:
Class, id, and the <div> tag
The <div> tag is particularly useful. It doesn't do much on its own (div stands for "division") and is therefore perfect to use as a wildcard to do what we like with. It really comes into its own when you give it a class or an id. What are these? Tags can be assigned to a class, which groups a bunch of distinct tags into the same category, and given an id—used to mark elements uniquely. A <div> tag might look something like this:
<div id="myid" class="myclass"></div>

Both CSS and javascript can manipulate elements by class and id, as well as by tag type. For example, some HTML in our page's body might look like this:
<div class="text1"> We can style this text one way </div>
<div class="text2"> and style this text another way </div>
<div class="text3"> because they belong to diff classes </div>

class and id are an all-important bridge from the world of simple mark-up to the world of programming.
HTML page structure
The bare-bones structure of an HTML webpage looks like the following:
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
  </body>
</html>

Inside the <head> tag is where you set the title of your page, link javascripts and CSS stylesheets, and so on. The <body> tag is where the code for the page the world will see resides.
HTML reserved characters
HTML has a set of reserved characters you cannot use, most notably:
- & (escape it as &amp;)
- < (escape it as &lt;)
- > (escape it as &gt;)
HTML whitespace
By default, HTML doesn't see carriage returns and, if you want more than one whitespace, you have to use the special code &nbsp;. For carriage returns use the unpaired tag <br>. A tiny illustration of both points is below.
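This snippet is my own example rather than the article's: the first paragraph collapses its extra spaces, while the second keeps them with &nbsp; and forces a new line with <br>.

<p>These    spaces    collapse    down    to one each.</p>
<p>These&nbsp;&nbsp;&nbsp;&nbsp;spaces are preserved.<br>And this sentence starts on a new line.</p>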
Where do the HTML files which create your webpage reside on your computer?
One final point: you've written an HTML file, but who decides if this file will be rendered into a website? This is a matter for your HTTP server—e.g., Apache—to decide. In a basic setup, you'll choose a folder out of which to serve web content. This information will be in your HTTP server's configuration file. Then, by longstanding convention, when you go to your web address, it will look in this folder for a file called index.html and that's what it will serve.
CSS Introduction
You can make a stylesheet anywhere in your directory structure—say, in a folder called styles—and link to it in your <head> tag:
<link rel="stylesheet" type="text/css" href="/path/styles/mystyle.css" />

In CSS, the symbol # refers to an HTML tag's id, while the symbol . refers to its class. Referring again to our <div> tag:
<div id="myid" class="myclass"></div>

in mystyle.css we can say:
#myid {
  /* my styles for the element of id == myid */
}
.myclass {
  /* my styles for any element of class == myclass */
}

The unique div tag with id="myid" will be styled according to the first block of instructions. And any div tag that has class="myclass" will be styled according to the second block. To make this concrete, suppose we want three different font styles in our document. We might do something like this:
.text1 { /* page header style */
  font: 72px bold arial,sans-serif;
  padding: 15px;
}
.text2 { /* left sidebar style */
  font: 10px arial,sans-serif;
  padding: 5px;
  color: blue;
  border: 1px solid rgb(200,200,200);
  background: yellow;
}
.text3 { /* main text style */
  font: 12px times;
  line-height: 24px;
}

We can also style HTML tags directly, even if they don't belong to a class or have an id. For example, if we wanted to style all <p> tags:
p {
  /* my styles for all HTML p tags */
}

It's more common to work with class and id, however, because styling tags directly doesn't allow us much flexibility.
CSS supports inheritance. We can style particular HTML tags that appear in our classes, adding to the style they already get by being members of the class. For example, we could add specific style rules for all the paragraph elements (<p>) in myclass.
.myclass {
  /* my styles */
}
.myclass p {
  /* extra style for paragraph elts in myclass */
}

We can also nest multiple user-defined classes. The syntax is:
.myclass.myclass2 {
  /* Here we've defined multiple classes. This style will apply
     to anything with class="myclass myclass2" */
}

For a more extensive introduction, see
CSS Layout Primer
Now for the hard part: page layout. Here are some important CSS concepts governing this.
Display
Display is a property with three especially useful values:
display: inline
display: block
display: none

inline and block are roughly what they sound like: inline elements go in a line, as if you're adding words to the end of a sentence. A block element, on the other hand, will force a carriage return. Compare inline display:
a a a

to block display:

a
a
a

By default, the <div>, <p>, and <h1> tags display as blocks, while the <i>, <b>, and <span> tags display inline.
Dimensionality and the box model
Once you've created a content block, you can set its dimensions explicitly, as in:
height: 500px;
width: 200px;

In most circumstances, if you don't specify this the block will stretch around the child elements it contains, automatically sizing itself.
As you consider your content block, note the Box Model, which explains how CSS interprets properties like margin, border, and padding. This picture from W3Schools is key:
A point that sometimes causes confusion: as the picture shows, margin defines the space outside your element.
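Since the illustration itself is not reproduced here, a small CSS sketch of my own of the same idea (the .card class name is just an example): the element's total footprint is its content box plus padding, border and margin, with the margin sitting outside the border.

.card {
  width: 200px;             /* width of the content box  */
  padding: 10px;            /* space inside the border   */
  border: 2px solid #333;   /* the border itself         */
  margin: 15px;             /* space outside the element */
}
/* Total horizontal footprint: 200 + 2*10 + 2*2 + 2*15 = 254px */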
Normal document flow
Normal document flow is the natural state of affairs in which elements are aware of each other's existence and behave accordingly. If you type an a, for example, followed by a b, the letters will be next to each other rather than superimposed on one another. In CSS, however, giving elements certain attributes will take the elements out of normal document flow, which means other elements will behave as if they don't exist and things can overlap. For example, assigning an element the attribute position: absolute will take it out of document flow.
Position
The CSS position property can take on values like:
position: static
position: relative
position: absolute
position: fixed

position: static is the status quo, while position: fixed fixes the position of an element relative to the browser window. The menu on the left sidebar of this webpage (with HOME, Full Sitemap, etc.) is using the fixed effect. position: relative gives the element an offset relative to where it would normally be, and position: absolute sets the offset relative to the containing element. You use these position attributes in combination with specifications about the offset:
top: 5px
bottom: 5px
left: 5px
right: 5px

I found an excellent discussion of relative and absolute positioning in an article called "Web Design 101: Positioning" in Digital Web Magazine. Since neither the magazine nor the article seems to exist anymore, I'll quote it here:
Relative Positioning
Reading this discussion, you can start to appreciate why CSS is hard: it's full of subtleties and the interplay between attributes is complex.
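To make the distinction concrete, here is a small sketch of my own (the class names are arbitrary, not from the quoted article): a relatively positioned box is nudged from where it would normally sit but still reserves its original space, while an absolutely positioned box leaves normal document flow entirely and is placed against its nearest positioned ancestor.

.nudged {
  position: relative;  /* stays in normal flow */
  top: 10px;           /* offset from its normal spot */
  left: 20px;
}

.pinned {
  position: absolute;  /* removed from normal flow */
  top: 0;              /* offsets measured against the nearest
                          positioned ancestor (or the page) */
  right: 0;
}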
Float and clear
CSS float is an attribute that usually takes on the values:
float: left
float: right

Floating an element will either kick it as far left or right as it can go within its containing block. The floating attribute does not take elements out of normal document flow, which makes it very useful. About the clear attribute, W3Schools says: "Elements after the floating element will flow around it. To avoid this, use the clear property." It has a great example, which I'll copy here verbatim:
img {
  float: left;
}
p.clear {
  clear: both;
}
<img src="logocss.gif" width="95" height="84" />
<p>
This is some text. This is some text. This is some text.
This is some text. This is some text. This is some text.
</p>
<p class="clear">
This is also some text. This is also some text. This is also some text.
This is also some text. This is also some text. This is also some text.
</p>

produces:
Getting started - making container blocks
The first step is to block out your page. There's a nice set of templates here:
You can see we've made an outer container block, which circumscribes our universe: everything outside it is background and everything inside it is our page. It's:
#container {
  margin: 2px auto 2px auto;
  width: 1000px;
  background: #fff;
}

By adding a width property, we've made our page fixed-width—that is to say, the internal content blocks won't accordion if the user changes the window size. I like fixed-width pages—nytimes.com is a good example—because you have complete control over the appearance. Remember: if we don't specify width our container will auto-size to fit its child elements. Adding the margin property (it goes in the order top, right, bottom, left) with auto on both sides means our page will always be centered in the browser window. Inside our container, you can see numerous blocks forming sub-divisions. There's a header, a left sidebar, a content container, a right sidebar, and so on.
Examples of CSS weirdness
To a beginner like me, the business of positioning in CSS often seems finicky. Let me give you two examples of random CSS problems that you might not anticipate, via Stackoverflow. The first is that if you use absolute positioning for a container block, you'll run into the issue that it won't re-size to fit the child elements you put inside it. See the discussion here:
Absolutely positioned elements do not count towards the container's contents in terms of flow and sizing. Once you position something absolutely, it will be as if it didn't exist as far as the container's concerned, so there's no way for the container to "get information" from the child through CSS. If you must allow for your scroller to have a height determined by its child elements without Javascript, your only choice may be to use relative positioning.

My solution is to never use absolute position for a container block—it's not worth the headache.
Another problem that could blindside you is that, if a container block without explicit size has child elements which all float, the container will "collapse". There's a discussion about it here, which notes:
Although elements like <div>s normally grow to fit their contents, using the float property can cause a startling problem for CSS newbies: if floated elements have non-floated parent elements, the parent will collapse.

These and other tripwires are awaiting you in CSS.
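One common workaround for that collapsing parent, offered here as my own aside rather than part of the quoted discussion, is to make the parent establish its own block formatting context so it wraps its floated children:

/* Parent of floated children: without this, its height can collapse to 0. */
.float-container {
  overflow: hidden;  /* forces the parent to contain its floats */
}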
jQuery Primer
jQuery, a user-friendly incarnation of Javascript, is more like a conventional programming language than either CSS or HTML and it has a great online API. Thus, I'll keep this discussion really short. As with your CSS stylesheet, you link to your javascript in the <head> tag. To import jQuery use:
<script type="text/javascript" src=""></script>
$(document).ready()
One of the most useful parts of the jQuery suite is the ready() function, often invoked as:
$(document).ready(function(){ })
As the manual says,
A page can't be manipulated safely until the document is "ready." jQuery detects this state of readiness for you.
It's typical to put all the functions you, the programmer, write inside ready() so the top of your webpage might look something like this:
<script type="text/javascript" src=""></script>
<script type="text/javascript" src="/path/myscript.js"></script>
<script>
$(document).ready(function() {
    // these are functions defined in myscript.js
    // to be executed once the document is ready:
    function1();
    function2();
    // and so on
});
</script>
while your myscript.js would look like this:
function function1() {
    // my javascript
}

function function2() {
    // my javascript
}
Selecting Elements
As we've remarked, jQuery is a full-fledged language with variables, logic, functions, and all the rest, too big to cover here. I just want to state a couple other basic things you should know about it. The first is, you can select elements based on id and class. For example, this code snippet would select the element of id == someid and all the elements of class == someclass and invoke the hide() function on them:
$('#someid').hide()
$('.someclass').hide()
Thus jQuery gives you great power to manipulate the elements on your page. Be sure to check out the indispensable library of event functions in the jQuery API documentation.
Debugging
Something else you want in your bag of tricks is the ability to debug by printing the value of variables, or whatever you like, to your web browser's Error Console. To see your console in Safari, go to the menu Develop > Show Error Console:
Here's the jQuery to print hello to the console:
console.log("hello");
DOM
jQuery sometimes refers to the Document Object Model (W3Schools Entry), or DOM. This is a way of thinking about your page as a tree: the <html> tag is the root, and probably contains the two branches <head> and <body>, which contain other branches, and so on.
You can see a more involved example of jQuery in the source code included at the end of this article.
An Example Webpage Using HTML, CSS, and JS
Without an example to crystallize what we've just covered, this article would be impotent. If you have a Macintosh, you can mock up a web page in any folder you like. You don't even need internet connectivity—just create an .html file and drag it onto Safari to see it. I've created three sample files, test.html, test.css, and my_functions.js, and I use one 100 pixel-width image file I stole from Google Images called hellokitty.jpeg:
The following are from a website I tried to make a year ago and, reviewing it today, I see a lot of room for improvement, especially in the coding of the CSS layout. The point of including it here is so you can see how HTML, CSS, and JS play together. Here are the files:
test.html
<!DOCTYPE html> <html> <head> <title>My Homepage</title> <!-- define character set --> <meta http- <!-- favicon --> <link rel="icon" type="image/ico" href="images/favicon.ico"/> <!-- my styles --> <link rel="stylesheet" type="text/css" href="test.css" /> <!-- jQuery --> <script type="text/javascript" src="" charset="utf-8"></script> <!-- my scripts --> <script type="text/javascript" src="my_functions.js"></script> <script> $(document).ready(function() { animateText(); }); </script> </head> <body id="body_background"> <!-- The point of this overarching content container is to get the fixed width effect and get this block centered --> <div id="container_outer"> <div id="block0"> <div class="text1">My Homepage</div> <div id="block0-1"> <i>About:</i> <br> <div class="text2"> About me: I work as an astronaut ... </div> </div> </div> <!-- block0 --> <div id="block1"> <div id="block1_column1"> <div class="text3">Hobby 1</div> <div class="text3in">ABCD<br>EFG<br>HIJK</div> <div class="text3">Hobby 2</div> <div class="text3in">LMNOP<br>QRS<br>TUV</div> </div> <div id="block1_column2"> <div class="text3">Hobby 3</div> <div class="text3in">WX<br>Y<br>Z</div> <div class="text3">Hobby 4</div> <div class="text3in">123<br>456<br>789</div> </div> <div id="block1_column3"> <div class="text3in"> <a href="#">work page</a>, <a href="#">blog</a>, <a href="#">contact</a> </div> </div> <!-- for main picture block --> <div id="block1-1"> <!-- images --> <div id="block0-2"></div> </div> </div> <!-- block1 --> </div> <!-- container_outer --> </body> </html>This is straightforward. In the head, we're giving paths or addresses to our CSS and JS scripts, and calling one homemade jQuery function animateText() we'll see below. In the body, there are two major content blocks <div id="block0">, the header, and <div id="block1">, which contains a number of columns and a large space for a picture.
test.css
#body_background { background: rgb(50,50,50); } #container_outer { /* container for all content */ /* to get the fixed width effect and center block centered */ border: 5px solid white; width: 900px; /* if you don't include height, it will not be fixed and accommodate however many elts you have */ height: 930px; background: white; /* margin defines the space OUTSIDE the border of the elt */ /* setting the left and right margin to auto will automatically center the elt */ margin: 4px auto 50px auto; } #block0 { /* title bar block */ display: block; border:1px solid rgb(200,200,200); width: 850px; height: 200px; margin-top:10px; margin-right: auto; margin-left: auto; /* block0-1 is position:absolute - looks to nearest positioned parent, which is this block. Use position: relative but don't shift any direction. The point is, it has a position. */ position:relative; } #block0-1 { /* About me block */ border:1px solid rgb(200,200,200); width: 290px; height: 182px; position:absolute; top:8px; left:550px; } #block0-2 { width: 225px; height: 200px; /* absolute pos takes out of doc flow */ position:absolute; top: 70px; left: 340px; background-image: url(hellokitty.jpeg); background-repeat:no-repeat; } #block1 { /* main block */ border:1px solid rgb(200,200,200); width: 850px; height: 680px; position:relative; margin-right: auto; margin-left: auto; margin-top: 8px; } #block1_column1 { /* column 1 in main block */ border:1px solid rgb(200,200,200); width: 180px; height: 663px; position:absolute; top:8px; left:8px; } #block1_column2 { /* column 2 in main block */ border:1px solid rgb(200,200,200); width: 180px; height: 663px; position:absolute; top:8px; left:200px; } #block1_column3 { /* column 3 in main block */ width: 200px; height: 20px; position:absolute; top:624px; left:660px; } #block1-1 { /* for large picture embedded within main block */ border:1px solid rgb(200,200,200); width: 450px; height: 583px; position:absolute; top:8px; left:390px; } .text1 { /* a class for styling text for main heading of the web page! 
*/ display: block; margin-top: 10px; margin-left: 10px; font-size: 48px; font-weight:500; letter-spacing:10px; color: black; font-family:"Times New Roman",Georgia,Serif; } .text2 { /* a class for styling text in the About-Me block */ display: block; margin: 6px; font-size: 13px; color: black; font-family:"Times"; letter-spacing:0px; } .text3 { /* a class for styling text for topic headers */ display: block; margin-top: 4px; margin-left: 4px; font-size: 14px; font-weight:bold; letter-spacing:2px; color: rgb(32,62,132); font-family: Verdana, Arial, Helvetica, sans-serif; } .text3 a {color: rgb(32,62,132); text-decoration:none;} /* unvisited link */ .text3 a:visited {color: rgb(32,62,132);} /* visited link */ .text3 a:hover {color: rgb(32,62,132); text-decoration:underline;} /* mouse over link */ .text3 a:active {color: rgb(32,62,132);} /* selected link */ .text3in { /* a class for styling text for topic sub-headers */ /* "in" stands for "indent" */ display: block; margin-top: 4px; margin-left: 8px; line-height: 1.6; font-size: 10px; font-weight:700; letter-spacing:1px; color: rgb(0,171,255); font-family: Verdana, Arial, Helvetica, sans-serif; } .text3in a {color: rgb(0,171,255); text-decoration:none;} /* unvisited link */ .text3in a:visited {color: rgb(0,171,255);} /* visited link */ .text3in a:hover {color: rgb(0,171,255); text-decoration:underline;} /* mouse over link */ .text3in a:active {color: rgb(0,171,255);} /* selected link */This stylesheet gets the job done, but it doesn't do it particularly well. Remember when we discussed CSS positioning and remarked that it's unwise to overuse absolute positioning to place container blocks? Well, that's exactly the sin I'm committing here. It's fine if I'm only including a couple of things in each column. But if I have a lot of text—say, a list of 25 hobbies—the columns won't auto-size to accommodate this because they're out of document flow. It is therefore smarter to use floats to position blocks because it keeps them in document flow. Beginner that I am, I didn't grasp this when I wrote this CSS.
my_functions.js
function animateText() {
    console.log("Testing ...");
    $("#block1_column2").click(function() {
        $(".text3").css("font-size", "32px");
    });
}
This jQuery function increases the font-size of our Hobby headers when the user clicks on the second column. It's purely pedagogical—there's no reason you would want to do such a thing.
Together, these three files produce:
If we click on the second column, the jQuery expands all text of class == text3 as follows:
If you copy these files onto your computer, they should produce the same result.
Making Your Page Non-Stupid: The DRY Principle, Server Side Scripting, and CGI
DRY
After you've drunk deep from the cups of HTML, CSS, and Javascript, you'd think you'd be ready to go. So I thought, until I got down to business and hit a monumental logjam. When you visit some website, you'll often notice simple features that are unchanging across all pages—things like headers, footers, and sidebars. If you think about making 10 pages within a given site, you will very quickly become acquainted with the DRY Principle, which stands for "Don't Repeat Yourself". This is pure common sense: if you later decide to change your header, would you rather re-code it 10 times or re-code it once? In every walk of programming, this same reasoning causes people to package re-usable swaths of code into functions or modules. So, congratulations! You've had your first crisis, all stemming from the trivial question: How can I avoid repeating my HTML header code on each page?
If you have any programming experience, this is the easiest problem in the world: just store the header in a string as some kind of site-wide global variable and spit it out on every page. However, we've seen that HTML is not a programming language, so this isn't an option. What to do then? We could solve the problem with javascript/jQuery and create a function that inserts a bunch of HTML into a specific <div> tag on each page. This works, but it's a bad solution. First, you don't really want half of your HTML to be scattered inside javascripts. Second, there's a tiny chance the visitor to your site might have javascript disabled—although such a visitor wouldn't deserve much sympathy. And third, perhaps because javascript's niche is pop-up menus, animations, etc., it just feels awkward writing this in javascript.
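To see why this is "the easiest problem in the world" in a real programming language, here is a minimal sketch in Python of the header-in-a-variable idea; the header, footer, and page content are invented for illustration.

# A minimal sketch of the DRY idea: keep shared chunks of HTML in one place
# and assemble each page programmatically. Names and content are hypothetical.

HEADER = "<div id='header'><h1>My Site</h1></div>"
FOOTER = "<div id='footer'><small>(c) 2014</small></div>"

def render_page(body_html):
    # Every page gets the same header and footer; change them once, change them everywhere.
    return "<html><body>" + HEADER + body_html + FOOTER + "</body></html>"

if __name__ == "__main__":
    print(render_page("<p>About me: I work as an astronaut ...</p>"))
    print(render_page("<p>Contact me here.</p>"))

Change HEADER once and every page produced by render_page() picks up the change, which is the whole point of DRY.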
Server-side scripting and CGI
Let's talk about a couple of distinctions related to the problem at hand. The first is static vs dynamic websites. A static website is just a marked up text file floating inanimately in cyberspace. A dynamic website, on the other hand, is one that has been programmatically generated in some way. For example, if your website has a series of magazine articles, and you want to generate a page on the fly that has all the articles with a given search-word, there's no way to do this statically. In our case, we want some very simple programming to automate attaching the header, sidebar, and footer. If we're going to write a program, where is it going to run? Will it run on the computer of the person who's visiting our page, the client, or will it run on our computer, the server? This is the distinction between client-side scripting and server-side scripting. Javascript does the former, but we can also do the latter.
We all know our HTTP server can serve up boring old HTML files. But you should also appreciate an important fact: it can oversee scripts that we run on our server as well. This means we can extricate ourselves from the HTML straitjacket and return to the land of sane programming. With server-side scripting, a visitor to your website from anywhere in the world can click some button and cause a script to run on your computer. Massive bombshell!
Ponder the implications of this major boon. Instead of requesting a plain HTML file, you can have the visitor request that a script be run which programmatically creates the HTML, automating headers included. But it could lead to problems if, for example, you allow the visitor to run a program that takes some input and subsequently makes a system call, because he could remove some of your files or worse. Hence, if your computer is going to permit some random person on the internet to run a script on it, it has to be done with care. One way of doing this is putting any such executable in a designated directory, often named cgi-bin. This isn't any great security measure, but at least it will restrict the scripts you have to worry about to one place. Who decides what and where this directory is and what the specific rules are? This is the job of your HTTP server, and you'll have to tailor its configuration file accordingly. With Apache, for example, you can read about how to do it in the Apache documentation. A CGI (Common Gateway Interface) script refers to any script that can be run by, or is under the control of, your webpage. You can write it in any language you like, and it's sometimes given the extension .cgi. If you're using a web framework, it will handle the CGI stuff for you and you shouldn't have to worry about security. If you're DIY-ing it, be careful to sanitize user inputs as this comic shows.
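The article's own CGI example further down is in Perl, but the idea is language-agnostic. Here is a minimal sketch of a CGI script in Python 3 (hypothetical, not part of the original site) that prints the required Content-type header and echoes back an escaped query-string parameter, in the spirit of the sanitize-your-inputs warning above.

#!/usr/bin/env python3
# A minimal CGI sketch (hypothetical; not the site's actual script).
# Dropped into cgi-bin/, it is executed by the web server on each request.
import os
import html
from urllib.parse import parse_qs

# The Content-type header plus a blank line must precede any HTML output.
print("Content-type: text/html")
print()

# Grab a parameter from the query string, e.g. ...?page=test
params = parse_qs(os.environ.get("QUERY_STRING", ""))
page = params.get("page", ["home"])[0]

# Escape user input before echoing it back into the page.
print("<html><body>")
print("<p>You asked for the page: %s</p>" % html.escape(page))
print("</body></html>")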
If you look around the web and pay attention to URLs, you can spot programs running in the cgi-bin directory all over the place. For instance, I paid someone with PayPal the other day and captured this screenshot:
You can also see many scripts on the web with the .cgi extension. For example, in bioinformatics Blast is a program maintained by NCBI which aligns nucleotide or protein sequences based on homology. Look at this screen shot:
A user can upload his own sequences and run blast on NCBI's servers via the script Blast.cgi. This would be impossible without server-side scripting.
The URL as a Path; The URL as a Function Input
As we make the jump from simple static pages to more advanced dynamic pages, we're recapitulating the history of the web itself, which started primarily with one and progressed primarily to the other. The role of the URL is a nice illustration of this transition. Let's suppose your root HTML directory (the directory your HTTP server is set to serve pages out of) is:
/path/testwebpage
and that this is linked to your site's web address. Now suppose your site's directory structure looks like this:
$ tree testwebpage/
testwebpage/
├── index.html
└── pages
    ├── about.html
    ├── contact.html
    ├── people
    │   ├── janedoe.html
    │   └── johndoe.html
    └── people.html
If you want to see janedoe's page on the web, the address is your domain followed by pages/people/janedoe.html; if you want to find the file janedoe.html on your computer, it's at:
/path/testwebpage/pages/people/janedoe.html
What you notice is that the URL, with its telltale forward slashes, is really just a glorified unix path. When you're browsing through your computer on the command-line, you go to the path you desire. On the internet, you go to a domain plus a path, almost as if you're inside the unix file system of the whole world's computer. In simple setups, your site's structure on the web is determined by its directory structure on your server. With modern websites, this is not necessarily the case. Instead, the URL is treated as a string which is the input to some function. When you call a website, a server-side script parses this string via regular expression and conditional logic decides what to do from there. We'll see an example of this below when we look at Django's urls.py. You can lose sight of some of these nuances if you jump into a modern web framework right away without appreciating what came before.
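To make the "URL as a function input" idea concrete, here is a toy router sketch in Python; it is not Django, and the routes and handlers are invented, but it shows the parse-then-dispatch pattern that urls.py formalizes.

import re

# A toy URL router: the URL path is just a string fed to a function.
# The routes and handlers here are hypothetical.
def show_person(name):
    return "Page for person: " + name

def show_about():
    return "About page"

ROUTES = [
    (r'^/pages/people/(\w+)\.html$', lambda m: show_person(m.group(1))),
    (r'^/pages/about\.html$',        lambda m: show_about()),
]

def dispatch(path):
    for pattern, handler in ROUTES:
        m = re.match(pattern, path)
        if m:
            return handler(m)
    return "404: no route matched " + path

print(dispatch("/pages/people/janedoe.html"))  # Page for person: janedoe
print(dispatch("/pages/missing.html"))         # 404 ...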
Making Your Own Dynamic Website with Apache and 1990s Perl CGI
If you were crazy, you could try to make your own dynamic website from scratch. But it would be foolhardy to try to re-invent the wheel. There are great web frameworks that do exactly this and are better than anything you can make in a reasonable time investment. Before I knew any better, I took a crack at it with "1990s" Perl and Apache, so I'll throw it up here purely as a curiosity. You can skip this section with impunity.
Change your Apache .conf file and parse your URL with mod_rewrite
The first step is to change your Apache configuration to allow the execution of .cgi scripts, and to parse the incoming URL. Something like this:
ErrorLog /path/root_html/error_log
<Directory "/path/root_html/">
    Options FollowSymLinks Indexes MultiViews Includes ExecCGI
    AllowOverride All
    Order allow,deny
    Allow from all
    RewriteEngine on
    RewriteBase /
    # Require all granted
    AddHandler cgi-script .cgi
    # this is working! -
    # RewriteCond %{REQUEST_URI} !^/~username/cgi-bin [NC]
    # RewriteRule foo.html
    RewriteRule ^(.*)\.html$
</Directory>
What's going on here is that I'm using Apache's mod_rewrite to remap incoming URIs: when a visitor thinks he's going to visit the page test.html, what actually happens is that this URL is remapped to run the program cgi-bin/main.cgi, and we can grab what's after the ? character—test in this example—and use it in our script.
Accessing our parsed URL from a perl CGI script
Now we need a main program, main.cgi, that stitches strings together to make an HTML document based on how it parses the URL. I'll give a sketch of how you might do this, without doing the whole thing:
#!/usr/bin/env perl use strict; use warnings; use CGI; use Cwd 'abs_path'; use FindBin qw($RealBin); ############################################# #### DEFINE VARS #### # path to site directory root my $sitedir="../mysite"; # main blocks of html content my $block0html; my $block11html; my $block12html; ############################################# #### MAIN #### # Print the CGI response header, # required for all HTML output # (must have this line or it will crash!) # Note the extra \n, to send the blank line print "Content-type: text/html\n\n" ; print "<!DOCTYPE html>\n"; print "<html>\n"; my $page_name = parse_uri(); if ( -e "$sitedir/pages/$page_name" ) { $block0html = `cat $sitedir/pages/$page_name/html/block0.html`; $block11html = `cat $sitedir/pages/$page_name/html/block1-1.html`; $block12html = `cat $sitedir/pages/$page_name/html/block1-2.html`; print_html($block0html, $block11html, $block12html); } else { print "page not found\n"; } print "</html>\n"; exit; ############################################# #### SUBROUTINES #### # parse URI, for URI routing sub parse_uri { # use Apache mod_rewrite to change incoming URIs like, e.g., # # to # # this is going to return what's after the "?" my $uri = $ENV{"REQUEST_URI"}; # split on "?" my @tmp=split(/\?/, $uri); return $tmp[1]; } # print useful CGI stuff sub print_cgi_info { my $str = ""; $str=$str."Versions:\n"; $str=$str."perl: $]\n"; $str=$str."CGI: $CGI::VERSION\n"; my $q = CGI::Vars(); $str=$str."\nCGI Values:\n"; foreach my $k ( sort keys %$q ) { $str=$str."$k [$q->{$k}]\n"; } $str=$str."\nEnvironment Variables:\n"; foreach my $k ( sort keys %ENV ) { $str=$str."$k [$ENV{$k}]\n"; } return $str; }This doesn't accomplish anything great, but you can see the lay of the land. We're going to read some HTML files into Perl variables as strings and we'll stitch them together in the method print_html(), which isn't shown. What a lot of work! Since everything is done programmatically, we can solve our problem about repeating the header. However, this solution is obsolete.
The Way Forward: A CMS, like Drupal, vs. a Web Framework, like Django
Summary thus far:
- You should use server-side scripting.
- You should not try to write a web framework by yourself.
Should you use a web framework, like Rails or Django? For me, the answer was yes. They're a lot more technical than Drupal or Wordpress but that's precisely why I like them better. I chose Django for this site because it's Python-based, but mostly out of ignorance. I like it so far! It gives you great tools for parsing URLs and hooking your site up to a database. There's a small primer below.
To reemphasize the point I made in the introduction, this solution fit my needs but it might not fit yours. See What Technology Do I Use Again? Don't take my word for anything!
Django Primer
When you get down to it, many web sites can be thought of as miniature magazines. After all, the web is composed of pages. There are exceptions—some of those pages may have little TV screens or video games embedded in them—but for the most part the magazine metaphor works, especially for the most obvious form of personal site, the blog. Django, originally designed for a Kansas news-site, fits well into this paradigm. Django is based on Python modules or packages that you create. It's meant to be used together with a database to hold your data (the default is Sqlite). Let's see how we would build an article module. In Django, there are a few standard scripts that go in every user-created package:
- models.py - defines the representation of your data
- urls.py - controls URL routing
- views.py - does logic processing
models.py
models.py is the script responsible for establishing the model of your data and Django will use it to create a table in your database. It's easy to understand models.py in terms of object-oriented programming: think of it as where you define what an article class is and supply any methods you want to act on article objects (i.e., instances of this class). Consider a magazine article for a moment. Aside from the text itself, there's a lot of metadata associated with it—the author's name, the date of publication, the format in which it's to be rendered, how many "likes" it has, and so on. All of this stuff goes into your article object. models.py might look like this:
from django.db import models

class Article(models.Model):
    author = models.CharField(max_length=500)
    title = models.CharField(max_length=500)
    pub_date = models.DateField('date published')
    body = models.TextField()

    # example method:
    def get_author(self):
        return self.author
As soon as we've set this up, running
$ ./manage.py syncdb
creates a table in our database corresponding to the model. (Django comes with the script manage.py, which takes care of a lot of miscellaneous tasks.) It is now our job to populate this database with content, e.g.:
sqlite> SELECT * FROM article_data LIMIT 1;
id|author|title|pub_date|body
1|joe|first_article|2014-01-13|my first article
For an example wrapper script to load data into an sqlite database, see Wiki: MySQL and SQLite. This is all necessary leg work, even though we haven't said anything about our actual website yet.
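For what it's worth, a wrapper script along these lines could do the loading. This is only a sketch: the table name and columns come from the query above, while the database filename db.sqlite3 is an assumption (use whatever your Django settings point to).

import sqlite3

# Sketch of loading one row into the article table shown above.
# "db.sqlite3" is an assumed filename; adjust to your project's settings.
conn = sqlite3.connect("db.sqlite3")
cur = conn.cursor()
cur.execute(
    "INSERT INTO article_data (author, title, pub_date, body) VALUES (?, ?, ?, ?)",
    ("joe", "first_article", "2014-01-13", "my first article"),
)
conn.commit()
conn.close()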
urls.py
The next step is to determine how we want to route a given URL to a given page in our site. Think of the URL as a string which is the input to a function. The function is going to take this string, parse it, and then send the user along to the appropriate page. This is the job of urls.py, which might look like this:
from django.conf.urls import patterns, include, url

urlpatterns = patterns('',
    url(r'^news/(?P<article_name>\w+)/$', 'article.views.my_news_article_function'),
    url(r'^blog/(?P<article_name>\w+)/$', 'article.views.my_blog_article_function'),
)
This parses the URL via regular expression (see the python regex docs if this is unfamiliar to you). Any URL that follows the pattern blog/some_article_name will call the function my_blog_article_function in views.py, passing the variable article_name to it as input. On the other hand, URLs of the pattern news/some_article_name will call a different function. So—what's views.py?
views.py
views.py is the logical brain of your operation. It takes the variables passed by urls.py, does whatever scripting it needs to do, chooses the appropriate page template, and creates your HTML. For example:
from django.shortcuts import render, render_to_response
from django.http import HttpResponse
from django.template.loader import get_template
from django.template import Context
from django.views.generic.base import TemplateView
from article.models import Article

def my_news_article_function(request, article_name):
    name = "Dear Reader"
    html = "<html><body>Hi %s</body></html>" % name
    return HttpResponse(html)

def my_blog_article_function(request, article_name):
    name = "Dear Reader"
    t = get_template('article/blog_template.html')
    html = t.render(Context({'name': name, 'article': Article.objects.get(title=article_name)}))
    return HttpResponse(html)
The function my_blog_article_function passes a Context object to the template blog_template.html, which has access to the python dict passed as an argument. We also have access to our article object—note the line:
from article.models import Article
and Django provides convenient methods on it, like the one we're using to get a specific article by title. Still, we don't know how the HTML looks until we see our template.
Templates
Django has a templating language, which is HTML augmented with simple variables, loops, and logic. This is a big subject you can read about here:
{{ variable }}
Our template blog_template.html might look like this:
<html>
<body>
Hi {{ name }}
Welcome. You are reading {{ article.title }} by {{ article.author }} published on {{ article.pub_date }}
<br>
<br>
{{ article.body }}
</body>
</html>
Thus we see how data which originated in our database makes its way, via parsing in urls.py and conditional logic in views.py, into our HTML.
More about Django
I have some more notes about Django here.
Debugging Your Website and Lifting Code from Pages You Like
Now that you're merrily in the midst of building your website, you may find yourself doing a lot of debugging. We all know how to debug an ordinary program, but how do you debug a website? If you're on a Macintosh, Safari has a nice suite of developer tools. We've already mentioned the Error Console. Another thing you want to use is the Web Inspector in the menu:
Develop > Show Web Inspector
It looks like this:
The web inspector lets you view your source code and get a DOM's-eye-view of your page. Crucial for debugging, you can click on an HTML tag and see what CSS styling it's getting. Most other browsers have similar capabilities—e.g., Chrome's is at:
View > Developer
(You may have heard of the world famous Command Option J).
Anytime you view a website, all of the HTML, javascript, CSS, and images are downloaded to your computer. You can see all of them with the web inspector. Do you like something from somebody else? Take it then (but give credit, as common sense demands).
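If you want to see this for yourself outside the browser, a couple of lines of Python will fetch the same raw HTML the web inspector shows you; the URL below is just a placeholder.

from urllib.request import urlopen

# Download the raw HTML of a page, which is exactly what the browser receives.
html = urlopen("http://www.example.com/").read().decode("utf-8")
print(html[:500])  # peek at the first 500 characters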
Misc Topics: Favicons and robots.txt
A couple of miscellaneous topics that don't fit anywhere in particular. As you put the finishing touches on your website, you'll want to make a favicon, which is the little graphic next to the URL. Two useful sites for this are:
My favicon features an ape motif. To make it, I used a 16 x 16 pixel canvas in Photoshop—the actual size of a favicon—and then converted the picture into an .ico file. A word of advice: don't try to do too much with it artistically. The tiny dimensions force you to stay bold and simple.
Changing gears completely, search engine spiders (or robots) crawl over the web all the time, making page indexes. By convention, these robots should obey instructions you give them in a robots.txt file in your website's root directory. For example, an instruction might forbid preview information about your site to be displayed on Google's search results. Read more about robots.txt here:
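As an aside on how that file actually gets used, here is a small sketch with Python's standard library: a well-behaved crawler reads robots.txt and asks it whether a given URL may be fetched. The URLs are placeholders.

from urllib.robotparser import RobotFileParser

# How a polite robot consults robots.txt before crawling a page.
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "http://www.example.com/some-page.html"))  # True or False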
Working with and Optimizing Images for Your Site
As emphasized in the intro, it mustn't be forgotten that content and visual comeliness are the most important aspects of any site—programming is merely the necessary mechanical part. When you build a website from scratch you're facing, first and foremost, a graphic design problem with a secondary computer science problem piggy-backing on its shoulders. Or is it the other way around? In any case, without nice graphic design everything suffers. Here are a couple of tips about the technical aspects of image manipulation.
When you work with images, knowing and adjusting the pixel dimensions is often necessary. There's a very easy way to do this on a Mac. Using Preview, go to Tools > Adjust Size in the menu. Even if you don't adjust the size, this will tell you the dimensions. Another way to do this is with the nonpareil (and free) ImageMagick toolkit. You can re-size and re-position pictures or change their formats in bulk from the command-line. Of course, Adobe Photoshop is basically indispensable, as all graphics professionals know. Buy it! Your page will be much better for it.
Because image files can be large, they have the potential to slow down your site. You should put some thought into how much burden you're willing to put on the visitor to your site who has to download them. There's a wonderful discussion about this in Chapter 10: Optimizing Images from an O'Reilly book called Even Faster Websites. One easy step which goes a long way: play with ImageMagick to experiment with different compression formats for your pictures. Try .gif, .jpg, and .png and use the one that produces the smallest file size (I review some basic ImageMagick commands here). The authors have other good suggestions, like stripping metadata from your images before you post.
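As a sketch of that experiment, assuming ImageMagick's convert command is installed and using a made-up input filename, you could compare formats like this:

import os
import subprocess

# Convert one source image to several formats with ImageMagick's convert
# and report the resulting file sizes, so you can keep the smallest one.
src = "hero.png"  # hypothetical input file
for ext in ("gif", "jpg", "png"):
    out = "hero_test." + ext
    subprocess.run(["convert", src, out], check=True)
    print(out, os.path.getsize(out), "bytes")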
How I Made This Website: Example Source Code (Django + jQuery)
You could read a tome on web design, but there's no substitute for example code. Let's look at an early draft of the source code for this website. I'm using the Django web framework. It would be impractical to write out all of the code, so I'll focus on the most important bit, the article module. Instead of trying to explain every line of code below, I'll just give some helter-skelter observations following each script.
article/models.py
My article/models.py looks like this:
from django.db import models

class Article(models.Model):
    shortname = models.CharField(max_length=50)
    title = models.CharField(max_length=500)
    subtitle = models.CharField(max_length=500)
    author = models.CharField(max_length=500)
    body = models.TextField()
    pub_date = models.DateField('date published')
    likes = models.IntegerField()
    tags = models.TextField()
    abstract = models.TextField()
    category = models.CharField(max_length=200)
    format = models.IntegerField()
    polish = models.IntegerField()

    def tags_as_list(self):
        return self.tags.split(',')

    def get_id(self):
        return self.id

    def get_cat(self):
        # return the first (primary) category in the comma-delimited list
        my_category_list = self.category.split(',')
        return my_category_list[0]

    def get_full_cat(self):
        # return the full category
        return self.category
I included a lot of attributes here, probably more than I need. category and shortname are going to comprise the last two fields of the URL. For this article, category == computing and shortname == tut_web, so the URL ends in article/computing/tut_web. I have a body attribute, but I ultimately decided to keep the HTML body content in static files because it's easier to edit than having to fish in and out of a database all the time—although perhaps you couldn't get away with this for a large-scale site. So there's a directory for each article that contains a content.html file along with whatever images go along with it. The content.html file contains the body of the article and is completely decoupled from the page layout.
Articles can be in more than one category, so the method get_cat returns the primary one, while get_full_cat gives the whole list in a comma-delimited string.
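A quick, hypothetical illustration from the Django shell (python manage.py shell) of how those helpers behave for an article filed under two categories:

# Hypothetical session; the category values are invented for illustration.
from article.models import Article

a = Article.objects.get(shortname="tut_web")
print(a.get_full_cat())   # e.g. "computing,programming" -- the full comma-delimited list
print(a.get_cat())        # "computing" -- the primary category, used in the URL
print(a.tags_as_list())   # the tags field split into a Python list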
article/urls.py
My article/urls.py looks like this:
from django.conf.urls import patterns, include, url

urlpatterns = patterns('',
    url(r'^all/$', 'article.views.all_articles_better'),
    url(r'^(?P<article_category>\w+)/$', 'article.views.all_category_articles'),
    url(r'^(?P<article_category>\w+)/(?P<article_name>\w+)/$', 'article.views.one_named_article'),
)
This routes URLs to the three different page types related to articles:
- a full site page of contents, taken care of by the function all_articles_better
- a page of contents for articles of a given category, taken care of by the function all_category_articles
- the article itself, taken care of by the function one_named_article
article/views.py
Let's look at article/views.py:
from django.shortcuts import render, render_to_response from django.http import HttpResponse from django.template.loader import get_template from django.template import Context from django.views.generic.base import TemplateView from article.models import Article from django.core import serializers import os, re CW_DIR = os.path.dirname(__file__) # NOTE: you have to re-start uwsgi for changes in views to take effect! ######################### def all_articles_better(request): d={} # dict where keys = categories, values = list of article objects of that category for i in Article.objects.all(): myid = i.get_id(); mycat_list = i.get_full_cat(); # comma-delimited list of categories for mycat in mycat_list.split(','): if not (mycat in d): d[mycat] = [Article.objects.get(id=myid)] else: d[mycat].append(Article.objects.get(id=myid)) return render_to_response( 'article/articles_contents.html', {'my_cat_article': d} ) def all_category_articles(request, article_category=1): # use case insensitive contains: icontains # see # "Basic lookups keyword arguments take the form field__lookuptype=value. (Thats a double-underscore)." return render_to_response( 'article/articles_category_contents.html', {'myarticle': Article.objects.filter(category__icontains=article_category)} ) def one_named_article(request, article_category=1, article_name=1): t = get_template('article/one_named_article.html') # get file path mypath = CW_DIR + "/../static_page_data/" + article_name + "/html/content.html" htmlcontent = "" with open(mypath, "r") as myfile: htmlcontent=myfile.read() html = t.render( Context({'myarticle': Article.objects.get(shortname=article_name), 'htmlcontent': htmlcontent}) ) return HttpResponse(html)The function all_articles_better makes the table of contents for the full site. I create a dict d where each key is a category, and each value is a list of article objects of that category. This data structure gets passed to the template article/articles_contents.html, which we'll see below. Note that by design the same article can be in two different categories.
all_category_articles filters my article objects to make a category-specific table of contents, and one_named_article passes raw HTML to a template we won't discuss called one_named_article.html, which is the view you see when you read an article.
Templates
A key part of Django templating is inheritance. When a child template extends a parent template, it looks exactly like the parent except where it explicitly overwrites it. We're going to look at the template that has the full table of contents for this site, article/articles_contents.html. But first, let's look at its parent, base.html:
{% load static from staticfiles %} <!DOCTYPE html> <html> <head> <!-- title --> {% block titleblock %} <title>Oliver</title> {% endblock %} <!-- meta --> {% block metablock %} <meta charset="UTF-8"> <meta name="description" content="Oliver's Homepage"> <meta name="keywords" content="Oliver, Oliver Artwork"> <meta name="author" content="Oliver"> {% endblock %} <!-- favicon --> <link rel="icon" type="image/ico" href="{% static "img/favicon.ico" %}" /> <!-- styles --> <link rel="stylesheet" type="text/css" href="{% static "css/default.css" %}" /> <link rel="stylesheet" type="text/css" href="{% static "css/style0.css" %}" /> <link rel="stylesheet" type="text/css" href="{% static "css/style1.css" %}" /> <link rel="stylesheet" type="text/css" href="{% static "css/style2.css" %}" /> <link rel="stylesheet" type="text/css" href="{% static "css/style3.css" %}" /> <link rel="stylesheet" type="text/css" href="{% static "css/style4.css" %}" /> <!-- scripts --> <script type="text/javascript" src="" charset="utf-8"></script> <script type="text/javascript" src="{% static "js/my_functions.js" %}"></script> <script type="text/javascript" src="{% static "js/modernizr-latest.js" %}"></script> <script> $(document).ready(function() { createToc(); unhideContainer(); // add page specific jquery {% block jqblock %} {% endblock %} }); </script> </head> <body> {% block progblock %} {% endblock %} <noscript> <div class="text_warn"> <h3><i>Please Enable JavaScript!</i></h3> </div> </noscript> <div id="container"> <div id="section-navigation"> {% block sidebar %} <div class="text2"> <b><a href='/'>HOME</a></b> <br> <br> <b><a href='/article/all/'>Full Sitemap</a></b> <br> </div> {% endblock %} </div> <div id="content-container"> <div id="header"> {% block header %} {% endblock %} </div> <div id="content"> {% block content %} Content {% endblock %} </div> <div id="aside"> {% block aside %} {% endblock %} </div> </div> <div id="footer"> {% block footer %} <small>© 2014 Oliver</small> {% endblock %} </div> </div> </body> </html>In the <head>, there are the usual links to stylesheets and a few javascript functions. The rest of the page is mostly template blocks which the child templates can write into.
Here's article/articles_contents.html:
{% extends "base.html" %} {% block titleblock %} <title>Oliver - Full Sitemap</title> {% endblock %} {% block content %} <div class="text3close"> <h3><a href='/art/fns'>Art</a></h3> <!-- key is category, value is list of articles in that category --> <!-- loop thro categories --> {% for key,value in my_cat_article.items %} {% if key != "unlinked" and value|length > 1 %} <h3><a href='/article/{{ key }}'>{{ key|capfirst }}</a></h3> <!-- loop thro list of values (it shud contain article objs) --> {% for j in value %} <!-- only print articles with polish --> {% if j.polish > 10 %} <a href='/article/{{ key }}/{{ j.shortname }}'>{{ j.title }}</a> <br> {% endif %} {% endfor %} {% endif %} {% endfor %} </div> {% endblock %} {% block footer %} <small>© 2014 Oliver</small> <a href='/article/unlinked/art_hero'> <div class="image1_c"> <img src="/static/article/img/hero2_purple_w100.png" alt="image" width="100" /> </div> </a> {% endblock %}The line:
{% extends "base.html" %}
tells Django this template is building off of the groundwork laid in base.html. Remember my_cat_article is the dict mapping each article category to a list of articles. In this template, we're looping over each category and then all the articles within it. The point is to make links connecting to the rest of the site. There's a little bit of conditional logic: the special category unlinked is for articles I don't want to appear in the table of contents. I also won't print a category heading if the category has one or fewer articles in it. The contents will only contain articles with a polish attribute of greater than 10. With this dial, I can work on writing an article but keep it unlisted until it's duly polished. You can see what this template produces here.
CSS
Let's leave the world of Django, and talk CSS for a minute. I have about 1000 lines of it so far. For the layout, I started with a script from maxdesign.com and modified it. Here's a tiny peek at my CSS containing the font information for the main text categories. I opted for a clean-cut and functional look using a sans-serif font, Arial:
.text1 {
    /* page header style */
    font: 30px bold arial,sans-serif;
    padding: 15px;
}
.text2 {
    /* left sidebar style */
    font: 10px arial,sans-serif;
    padding: 5px;
}
.text3 {
    /* main text style */
    font: 16px arial,sans-serif;
    line-height: 26px;
    padding: 15px;
}
The class I'm using for the body of the text is primarily text3. Remember, if you want to view or steal more CSS, you can do it through Safari's web inspector.
jQuery
To give you some j-flavor, here's one of the jQuery functions I wrote for this site. We didn't see the article template, but each article is divided into sections with <h2> header tags. At the top of every page is a page-wide (not site-wide) table of contents. It would be a pain to write this manually for every page and have to tweak it when something changes. So I wrote a jQuery function, createToc(), that automatically generates a table of contents from the <h2> tags. It's going to write into an empty div tag I include in the article template:
<div id="toc_main"></div>
Here's createToc():
function createToc() { /* This creates a table of contents automatically by looping through h2 tag elements */ var my_url_path = window.location.pathname // console.log( my_url_path ); // testing: /* $( "h2" ).each(function( index ) { console.log( index + ": " + $( this ).text() ); }); */ $( "#toc_main" ).append( '<i>Table of Contents</i><br>'); $( "#toc_main" ).append( '<small>'); // loop thro all h2 tags $( "h2" ).each(function( index ) { // Add id to header $( this ).attr('id', 'h_id_' + index); // testing: was the id added successfully? // var $myid = $( this ).attr('id'); // var $myclass = $( this ).attr('class'); // console.log($myclass + ' ' + $myid); var $mytext = $(this).text(); // remove whitespace and special chars and truncate to 40 chars: var $mycleantext = $mytext.replace(/\s+|\W+/g, "").substring(0,40); // add internal page link var $internal_link = '<a id="' + $mycleantext + '"></a>'; $( this ).before($internal_link); // testing: // console.log($mytext.replace(/\s+|\W+/g, "")); // make a bit of html which is a link, has an id (sidebar_id_index), // and contains text from the h2 element var $onebased = index + 1; var $htmlstr = '<a id="sb_id_' + index + '" href="#' + $mycleantext + '">'; $htmlstr += $onebased + '. ' + $mytext; $htmlstr += '</a>'; $htmlstr += ' <small><small><small><small><a href="' + my_url_path + $onebased; $htmlstr += '">VIEW_AS_PAGE</a></small></small></small></small>'; // append this html to the elt with id == #toc_main (this will be in the main body) $( "#toc_main" ).append( $htmlstr + '<br>'); // foreach element with sidebar id == sb_id_index, scroll to element with header id = h_id_index // from: $("#sb_id_" + index).click(function() { $('html, body').animate({ scrollTop: $("#h_id_" + index).offset().top }, 1000); }); // add a little TOP link to take you back to the top of the document $( this ).append(' <small><small><a href="#">TOP</a></small></small> '); $( this ).append('<small><small><small><a href="' + my_url_path + $onebased + '">VIEW_AS_PAGE</a>'); $( this ).append('</small></small></small>'); }); $( "#toc_main" ).append( '</small>'); }The first thing this script does is to find the ID == toc_main div tag and write "Table of Contents" into it. Then it loops through each <h2> tag and gives it a unique ID, h_id_index, based on the index of the loop. It also adds a TOP and VIEW AS PAGE link to each tag. The variable $mytext grabs the text out of the header and $mycleantext is that same text with whitespace and special characters removed. Just before each <h2> tag, the script adds an <a> tag, whose ID is mycleantext. Internal page links, which get appended to the toc_main div and have IDs of the form sb_id_index, are going to jump to the elements of ID h_id_index—the <h2> headers. They look like this:
<a id="sb_id_index" href="#mycleantext">number. mytext</a>
This is a little bit subtle: when you click these internal page links, where you go is determined by the click, animate, and scroll code, which supersedes where the href source would take you. However, because of the href part, the URL still becomes "pretty": we added <a> tags of ID mycleantext just above each <h2> tag:
<a id="mycleantext"></a>
So, if we enter this URL de novo, we still get to the right place.
03-30-2019 08:08 AM
Hello,
I am trying to add some utilities to my PetaLinux build (v2018.2) on a Zedboard, including gdb, zip, and vim. I go into the configuration menu (petalinux-config -c rootfs) and select the utilities to add, save the configuration, and run petalinux-build. When I copy the resulting BOOT.BIN and image.ub files to an SD card and attempt to boot, I get a kernel panic. Xilinx UG1144 (pp. 51-52) states that if the image size changes (which I believe it must after adding all of those utilities), you should change the CONFIG_SYS_BOOTM_LEN parameter in platform-top.h to a size larger than the image size, and undefine CONFIG_SYS_BOOTMAPSZ in the same file. I didn't notice CONFIG_SYS_BOOTMAPSZ actually being defined in platform-top.h, but I threw in a #undef for it anyway. None of this worked, and the kernel still panics on boot. Below is the boot output:
U-Boot 2018.01 (Mar 30 2019 - 10:39:10 -0400) Model: Zynq Zed Development Board Board: Xilinx Zynq Silicon: v3.1 DRAM: ECC disabled 512 MiB MMC: sdhci@e0100000: 0 (SD) SF: Detected s25fl256s_64k with page size 256 Bytes, erase size 64 KiB, total 32 MiB *** Warning - bad CRC, using default environment In: serial@e0001000 Out: serial@e0001000 Err: serial@e0001000 Model: Zynq Zed Development Board Board: Xilinx Zynq Silicon: v3.1 Net: ZYNQ GEM: e000b000, phyaddr 0, interface rgmii-id eth0: ethernet@e000b000 U-BOOT for avnet-digilent-zedboard-2018_2 ethernet@e000b000 Waiting for PHY auto negotiation to complete......... TIMEOUT ! Hit any key to stop autoboot: 0 Device: sdhci@e0100000 Manufacturer ID: 3 OEM: 5344 Name: SS32G Tran Speed: 50000000 Rd Block Len: 512 SD version 3.0 High Capacity: Yes Capacity: 29.7 GiB Bus Width: 4-bit Erase Group Size: 512 Bytes reading image.ub 138610792 bytes read in 7524 ms (17.6 MiB/s) ## Loading kernel from FIT Image at 10000000 ... Using 'conf@system-top.dtb' configuration Verifying Hash Integrity ... OK Trying 'kernel@1' kernel subimage Description: Linux kernel Type: Kernel Image Compression: gzip compressed Data Start: 0x10000108 Data Size: 3825017 Bytes = 3.6 MiB Architecture: ARM OS: Linux Load Address: 0x00008000 Entry Point: 0x00008000 Hash algo: sha1 Hash value: bcd4ab7a45df06f49d0d19a6a410bf10e5f84556103a99d8 Data Size: 134768897 Bytes = 128.5 MiB Architecture: ARM OS: Linux Load Address: unavailable Entry Point: unavailable Hash algo: sha1 Hash value: f478f57bb77a16deee88c3bbbf73db0ff8ed9d81103a5f84 Data Size: 14739 Bytes = 14.4 KiB Architecture: ARM Hash algo: sha1 Hash value: 861ab5ee88e4dd3680d9941a0012c8f73a270fa5 Verifying Hash Integrity ... sha1+ OK Booting using the fdt blob at 0x103a5f84 Uncompressing Kernel Image ... OK Loading Ramdisk to 17293000, end 1f319901 ... OK Loading Device Tree to 1728c000, end 17292992 ... OK Starting kernel ... Booting Linux on physical CPU 0x0 Linux version 4.14.0-xilinx-v2018.2 (oe-user@oe-host) (gcc version 7.2.0 (GCC)) #1 PREEMPT Sat Mar 30 10:48:17 EDT 2019 CPU: ARMv7 Processor [413fc090] revision 0 (ARMv7), cr=18c52c79 CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache OF: fdt: Machine model: Zynq Zed Development Board bootconsole [earlycon0] enabled Memory policy: Data cache writeback cma: Reserved 16 MiB at 0x16000000 CPU: All CPU(s) started in SVC mode. Built 1 zonelists, mobility grouping on. Total pages: 130048 Kernel command line: console=ttyPS0,115200 earlyprintk PID hash table entries: 2048 (order: 1, 8192 bytes) Dentry cache hash table entries: 65536 (order: 6, 262144 bytes) Inode-cache hash table entries: 32768 (order: 5, 131072 bytes) Memory: 361536K/524288K available (6144K kernel code, 255K rwdata, 1536K rodata, 1024K init, 134K bss, 146368K reserved, 16384K cma-reserved, 0K highmem) Virtual kernel memory layout: vector : 0xffff0000 - 0xffff1000 ( 4 kB) fixmap : 0xffc00000 - 0xfff00000 (3072 kB) vmalloc : 0xe0800000 - 0xff800000 ( 496 MB) lowmem : 0xc0000000 - 0xe0000000 ( 512 MB) pkmap : 0xbfe00000 - 0xc0000000 ( 2 MB) modules : 0xbf000000 - 0xbfe00000 ( 14 MB) .text : 0xc0008000 - 0xc0700000 (7136 kB) .init : 0xc0900000 - 0xc0a00000 (1024 kB) .data : 0xc0a00000 - 0xc0a3fd20 ( 256 kB) .bss : 0xc0a3fd20 - 0xc0a616bc ( 135 kB) Preemptible hierarchical RCU implementation. Tasks RCU enabled. 
NR_IRQS: 16, nr_irqs: 16, preallocated irqs: 16 efuse mapped to e0800000 slcr mapped to e0802000 zynq_clock_init: clkc starts at Switching to timer-based delay loop, resolution 3ns clocksource: ttc_clocksource: mask: 0xffff max_cycles: 0xffff, max_idle_ns: 537538477 ns timer #0 at e0808000, irq=17 Console: colour dummy device 80x30 Calibrating delay loop (skipped), value calculated using timer frequency.. 666.66 BogoMIPS (lpj=3333333) pid_max: default: 32768 minimum: 301 Mount-cache hash table entries: 1024 (order: 0, 4096 bytes) Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes) CPU: Testing write buffer coherency: ok Setting up static identity map for 0x100000 - 0x100060 Hierarchical SRCU implementation. devtmpfs: initialized random: get_random_u32 called from bucket_table_alloc+0x12c/0x178 with crng_init=0 VFP support v0.3: implementor 41 architecture 3 part 30 variant 9 rev 4 clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns futex hash table entries: 256 (order: -1, 3072 bytes) pinctrl core: initialized pinctrl subsystem random: fast init done NET: Registered protocol family 16 DMA: preallocated 256 KiB pool for atomic coherent allocations cpuidle: using governor menu hw-breakpoint: found 5 (+1 reserved) breakpoint and 1 watchpoint registers. hw-breakpoint: maximum watchpoint size is 4 bytes. zynq-ocm f800c000.ocmc: ZYNQ OCM pool: 256 KiB @ 0xe0840000 zynq-pinctrl 700.pinctrl: zynq pinctrl initialized e0001000.serial: ttyPS0 at MMIO 0xe0001000 (irq = 24, base_baud = 3125000) is a xuartps console [ttyPS0] enabled console [ttyPS0] enabled bootconsole [earlycon0] disabled bootconsole [earlycon0] disabled <giometti@linux.it> PTP clock support registered EDAC MC: Ver: 3.0.0 FPGA manager framework fpga-region fpga-full: FPGA Region probed Advanced Linux Sound Architecture Driver Initialized. clocksource: Switched to clocksource arm_global_timer NET: Registered protocol family 2 TCP established hash table entries: 4096 (order: 2, 16384 bytes) TCP bind hash table entries: 4096 (order: 2, 16384 bytes) TCP: Hash tables configured (established 4096 bind 4096) UDP hash table entries: 256 (order: 0, 4096 bytes) UDP-Lite hash table entries: 256 (order: 0, 4096 bytes) NET: Registered protocol family 1 (write error); looks like an initrd /initrd.image: incomplete write (-28 != 134768897) Freeing initrd memory: 131612K hw perfevents: no interrupt-affinity property for /pmu@f8891000, guessing. hw perfevents: enabled with armv7_cortex_a9 PMU driver, 7 counters available workingset: timestamp_bits=30 max_order=17 bucket_order=0 jffs2: version 2.2. (NAND) (SUMMARY) © 2001-2006 Red Hat, Inc. 
io scheduler noop registered io scheduler deadline registered io scheduler cfq registered (default) io scheduler mq-deadline registered io scheduler kyber registered dma-pl330 f8003000.dmac: Loaded driver for PL330 DMAC-241330 dma-pl330 f8003000.dmac: DBUFF-128x8bytes Num_Chans-8 Num_Peri-4 Num_Events-16 brd: module loaded loop: module loaded m25p80 spi0.0: found s25fl256s1, expected n25q512a m25p80 spi0.0: s25fl256s1 (32768 Kbytes) 4 ofpart partitions found on MTD device spi0.0 Creating 4 MTD partitions on "spi0.0": 0x000000000000-0x000000500000 : "boot" 0x000000500000-0x000000520000 : "bootenv" 0x000000520000-0x000000fa0000 : "kernel" 0x000000fa0000-0x000002000000 : "spare" libphy: Fixed MDIO Bus: probed CAN device driver interface libphy: MACB_mii_bus: probed macb e000b000.ethernet eth0: Cadence GEM rev 0x00020118 at 0xe000b000 irq 26 (00:0a:35:00:1e:53) Marvell 88E1510 e000b000.ethernet-ffffffff:00: attached PHY driver [Marvell 88E1510] (mii_bus:phy_addr=e000b000.ethernet-ffffffff:00, irq=POLL) e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k e1000e: Copyright(c) 1999 - 2015 Intel Corporation. ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver ehci-pci: EHCI PCI platform driver usbcore: registered new interface driver usb-storage chipidea-usb2 e0002000.usb: e0002000.usb supply vbus not found, using dummy regulator ULPI transceiver vendor/product ID 0x0451/0x1507 Found TI TUSB1210 ULPI transceiver. ULPI integrity check: passed. i2c /dev entries driver IR NEC protocol handler initialized IR RC5(x/sz) protocol handler initialized IR RC6 protocol handler initialized IR JVC protocol handler initialized IR Sony protocol handler initialized IR SANYO protocol handler initialized IR Sharp protocol handler initialized IR MCE Keyboard/mouse protocol handler initialized IR XMP protocol handler initialized cdns-wdt f8005000.watchdog: Xilinx Watchdog Timer at e0942000 with timeout 10s EDAC MC: ECC not enabled Xilinx Zynq CpuIdle Driver started sdhci: Secure Digital Host Controller Interface driver sdhci: Copyright(c) Pierre Ossman sdhci-pltfm: SDHCI platform and OF driver helper mmc0: SDHCI controller on e0100000.sdhci [e0100000.sdhci] using ADMA ledtrig-cpu: registered to indicate activity on CPUs usbcore: registered new interface driver usbhid usbhid: USB HID core driver fpga_manager fpga0: Xilinx Zynq FPGA Manager registered mmc0: new high speed SDHC card at address aaaa mmcblk0: mmc0:aaaa SS32G 29.7 GiB mmcblk0: p1 p2 NET: Registered protocol family 10 Segment Routing with IPv6 sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver NET: Registered protocol family 17 can: controller area network core (rev 20170425 abi 9) NET: Registered protocol family 29 can: raw protocol (rev 20170425) can: broadcast manager protocol (rev 20170425 t) can: netlink gateway (rev 20170425) max_hops=1 hctosys: unable to open rtc device (rtc0) of_cfs_init of_cfs_init: OK ALSA device list: No soundcards found. RAMDISK: Couldn't find valid RAM disk image starting at 0. VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6 Please append a correct "root=" boot option; here are the available partitions: 0100 16384 ram0 (driver?) 0101 16384 ram1 (driver?) 0102 16384 ram2 (driver?) 0103 16384 ram3 (driver?) 0104 16384 ram4 (driver?) 0105 16384 ram5 (driver?) 0106 16384 ram6 (driver?) 0107 16384 ram7 (driver?) 0108 16384 ram8 (driver?) 0109 16384 ram9 (driver?) 010a 16384 ram10 (driver?) 010b 16384 ram11 (driver?) 010c 16384 ram12 (driver?) 010d 16384 ram13 (driver?) 010e 16384 ram14 (driver?) 
010f 16384 ram15 (driver?) 1f00 5120 mtdblock0 (driver?) 1f01 128 mtdblock1 (driver?) 1f02 10752 mtdblock2 (driver?) 1f03 16768 mtdblock3 (driver?) b300 31166976 mmcblk0 driver: mmcblk b301 8388608 mmcblk0p1 7d296fa6-01 b302 22775808 mmcblk0p2 7d296fa6-02 Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
I should note that if I remove all of these utilities, Petalinux boots as expected and there are no issues. It's only when I add these utilities through the petalinux-config -c rootfs command that this issue arises.
Any help in resolving this issue would be very much appreciated as I would potentially like to add even more utilities to the rootfs in the future, and this issue is preventing that.
Thank you.
03-31-2019 11:24 PM
You need to make the changes below; it should work.
Note: If kernel or rootfs size increases and is greater than 128 MB, you need to do the following:
1. Mention the Bootm length in platform-top.h
#define CONFIG_SYS_BOOTM_LEN <value greater than image size>
2. Undef the CONFIG_SYS_BOOTMAPSZ in platform-top.h
Thanks & regards
Aravind
04-01-2019 08:12 AM
@aravindb I noticed that line in UG1144 and did those things before making my initial post. I mentioned that there was no CONFIG_SYS_BOOTMAPSZ defined in platform-top.h, but I did a #undef for it anyway. I also increased the CONFIG_SYS_BOOTM_LEN parameter to a value greater than what I believe is the image size (I think I set it to 512 MB to be safe), but I don't know how to determine the actual image size. Is it the size of the image.ub file? The size of that file is 132 MB.
To clarify, with those changes above, I get the output copied in my initial post. Is there some other step that I'm missing? Do I need to include files other than BOOT.BIN and image.ub on the SD card?
Thanks for your response.
04-03-2019 09:05 AM
05-08-2019 06:06 PM
I am seeing the same issue after adding OpenCV. Has anyone fixed this yet?
05-08-2019 06:18 PM
05-08-2019 06:21 PM
@johnfrye11 Shoot me a message if you need any additional help with that. The summary I posted here is also part of a tutorial I’m writing for my research lab, so if something is unclear, I’d like to update the instructions.
05-08-2019 06:26 PM
Hi @johnfrye11 , @ngc6027 ,
If the INITRAMFS image size is larger, then it is recommended to use an ext4 filesystem rather than a FIT image (image.ub).
From this, it is visible that most likely there is not enough space in shmem, which is why it is failing.
For smaller rootfs sizes it can fit, which is why no message is shown.
In the failure case it is using tmpfs, and in the working case it is using ramfs.
This means it works without root= as long as the root filesystem can fit in tmpfs. When you increase the filesystem size so that it can't fit in tmpfs, the root=/dev/ram option is needed in order to let Linux use ramfs instead of tmpfs.
You can try adding this option in device-tree
chosen {
    bootargs = "console=ttyPS0,115200 earlyprintk root=/dev/ram";
    stdout-path = "serial0:115200n8";
};
Note: This is not a perfect solution; do not use it in production.
05-08-2019 07:31 PM
I am still getting a very similar error message.
[ 4.025642] VFS: Cannot open root device "mmcblk0p2" or unknown-block(179,2): error -30 [ 4.033639] Please append a correct "root=" boot option; here are the available partitions: [ 4.041996] 0100 65536 ram0 [ 4.041998] (driver?) [ 4.048090] 0101 65536 ram1 [ 4.048091] (driver?) [ 4.054179] 0102 65536 ram2 [ 4.054180] (driver?) [ 4.060259] 0103 65536 ram3 [ 4.060260] (driver?) [ 4.066347] 0104 65536 ram4 [ 4.066348] (driver?) [ 4.072432] 0105 65536 ram5 [ 4.072434] (driver?) [ 4.078517] 0106 65536 ram6 [ 4.078519] (driver?) [ 4.084602] 0107 65536 ram7 [ 4.084603] (driver?) [ 4.090684] 0108 65536 ram8 [ 4.090685] (driver?) [ 4.096773] 0109 65536 ram9 [ 4.096774] (driver?) [ 4.102851] 010a 65536 ram10 [ 4.102853] (driver?) [ 4.109027] 010b 65536 ram11 [ 4.109028] (driver?) [ 4.115194] 010c 65536 ram12 [ 4.115196] (driver?) [ 4.121370] 010d 65536 ram13 [ 4.121372] (driver?) [ 4.127537] 010e 65536 ram14 [ 4.127539] (driver?) [ 4.133713] 010f 65536 ram15 [ 4.133715] (driver?) [ 4.139883] 1f00 1024 mtdblock0 [ 4.139884] (driver?) [ 4.146403] 1f01 256 mtdblock1 [ 4.146405] (driver?) [ 4.152925] 1f02 22528 mtdblock2 [ 4.152927] (driver?) [ 4.159436] b300 7639040 mmcblk0 [ 4.159438] driver: mmcblk [ 4.166221] b301 1048576 mmcblk0p1 7539477a-01 [ 4.166222] [ 4.172999] b302 6589440 mmcblk0p2 7539477a-02 [ 4.173001] [ 4.179773] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(179,2) [ 4.188194] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-xilinx-v2018.3 #8 [ 4.195395] Hardware name: xlnx,zynqmp (DT) [ 4.199562] Call trace: [ 4.202001] [<ffffff8008088c58>] dump_backtrace+0x0/0x368 [ 4.207385] [<ffffff8008088fd4>] show_stack+0x14/0x20 [ 4.212421] [<ffffff8008a3f7f8>] dump_stack+0x9c/0xbc [ 4.217463] [<ffffff800809bdc0>] panic+0x11c/0x274 [ 4.222246] [<ffffff8008d91124>] mount_block_root+0x1a8/0x27c [ 4.227973] [<ffffff8008d91430>] mount_root+0x11c/0x134 [ 4.233181] [<ffffff8008d915b4>] prepare_namespace+0x16c/0x1b4 [ 4.238996] [<ffffff8008d90d44>] kernel_init_freeable+0x1b8/0x1d8 [ 4.245073] [<ffffff8008a51b48>] kernel_init+0x10/0x100 [ 4.250281] [<ffffff8008084a90>] ret_from_fork+0x10/0x18 [ 4.255576] SMP: stopping secondary CPUs [ 4.259482] Kernel Offset: disabled [ 4.262952] CPU features: 0x002004
I guess it is not reading the rootfs off the SD properly?
What are your configs under
misc/config System Config→ Subsystem AUTO Hardware Settings → Advanced bootable images storage Settings
05-08-2019 07:52 PM
Hi @johnfrye11 ,
Do you see the command line bootargs in boot logs?
I'm not using FIT due to this issue in the kernel, hence we go with an ext4 filesystem on SD.
05-08-2019 08:02 PM
I sent you my whole bootlogs but I think this is what you want.
[ 0.000000] Kernel command line: earlycon console=ttyPS0,115200 clk_ignore_unused root=/dev/mmcblk0p2 rw rootwait
I'm not sure where I would need to change PetaLinux or quite how it relates to this log, but let me know of any suggestions and I can try them. Should I use a different PetaLinux-generated file instead of image.ub?
05-09-2019 01:37 PM
I also tried what the guide says to do
Problem Description: This message indicates that the Linux kernel is unable to mount EXT4 File System and unable to find working init.
Solution: Extract RootFS in rootfs partition of SD card. For more information, see the Copying Image Files.
I still get the same error on boot.
https://forums.xilinx.com/t5/Embedded-Linux/Adding-Utilities-to-PetaLinux-Build-Causes-Boot-Failure/m-p/956119
I have finished editing the program but I still have a couple of errors that I can't figure out, please help
Errors:
Error 1 error C2601: 'split' : local function definitions are illegal \\ilabss\home$\D03279277\Documents\Visual Studio 2005\Projects\err\err\err.cpp 97
Error 2 fatal error C1075: end of file found before the left brace '{' at '.\err.cpp(82)' was matched \\ilabss\home$\D03279277\Documents\Visual Studio 2005\Projects\err\err\err.cpp 137
/* Specification: Gilberto Sotomayor-Candelaria Lab 7 Exercise 3 Append and display records in a address database*/ #include <iostream> #include <fstream> #include <string> using namespace std; void menu(void); void writeData(void); void readData(void); string * split(string, char); const char FileName[] = "c:/TestAddress.txt"; int main () { menu(); return 0; } //end main void menu(void) { char choice = ' '; cout << "\nWhat Would You Like To Do With These Records: \n\n"; cout << "Append Records (A), Show Records (S), or Exit (E)\n"; cin >> choice; while(choice == 'A' || choice == 'S'); { switch(choice) { case 'a': case 'A': writeData(); break; case 's': case 'S': readData(); break; } cout << "What Else Would You Like To Do?\n"; cout << "Append Records (A), Show Records (S), or Exit (E)\n"; cin >> choice; } }//end menu void writeData(void) { char choice = 'Y'; string name = ""; string street = ""; string city = ""; string state = ""; string zipCode = ""; ofstream outMyStream(FileName, ios::app); do { cout << "\nEnter The Name: "; getline(cin, name); cout << "\nEnter The Street: "; getline(cin, street); cout << "\nEnter The City: "; getline(cin, city); cout << "\nEnter The State: "; getline(cin, state); cout << "\nEnter The Code: "; cin >> zipCode; outMyStream << name << "," << street << "," << city << "," << state << "," << zipCode; cout << "\nEnter another Record? (Y/N) "; cin >> choice; } while (choice == 'Y' || choice == 'Y' ); outMyStream.close(); }//end write data void readData(void) { ifstream inMyStream (FileName); string lineBuffer; while (!inMyStream.eof() ) { getline (inMyStream, lineBuffer, '\n'); string *theFields = split(lineBuffer, ','); cout << "Name...... " << theFields[0] << endl; cout << "Street.... " << theFields[1] << endl; cout << "City...... " << theFields[2] << endl; cout << "State..... " << theFields[3] << endl; cout << "Zip code.. " << theFields[4] << endl; }/
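For what it's worth, the two errors are related: C1075 means a "{" opened around line 82 (the while loop in readData) is never closed, and because that brace is still open when the compiler reaches split, the function looks as if it is being defined inside readData, which is exactly what C2601 ("local function definitions are illegal") complains about. A sketch of how the end of the file could look follows -- the body of split below is only a guess at the intended behaviour, since the posted listing is cut off:
        // ... inside readData() ...
    }                     // closes the while (!inMyStream.eof()) loop
    inMyStream.close();
}                         // closes readData(); this is the brace C1075 reports as missing

// split() must be defined at file scope, not inside another function
string * split(string theLine, char theDelimiter)
{
    // count the fields so the array can be sized
    int count = 1;
    for (size_t i = 0; i < theLine.size(); i++)
        if (theLine[i] == theDelimiter) count++;

    string * theFields = new string[count];
    int field = 0;
    for (size_t i = 0; i < theLine.size(); i++) {
        if (theLine[i] == theDelimiter)
            field++;
        else
            theFields[field] += theLine[i];
    }
    return theFields;
}
(Separately, the stray semicolon after while(choice == 'A' || choice == 'S') in menu() and the duplicated 'Y' in the do/while condition of writeData() look like logic slips, but they are not what the compiler is reporting.)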
https://www.daniweb.com/programming/software-development/threads/306117/append-and-display-records-in-a-address-database-help
This class is used as a namespace to group several global properties of Panda.
More...
#include "pandaSystem.h"
List of all members.
This class is used as a namespace to group several global properties of Panda.
Application developers can use this class to query the runtime version or capabilities of the current Panda environment.
Definition at line 29 of file pandaSystem.h.
[protected]
Don't try to construct a PandaSystem object; there is only one of these, and it constructs itself.
Use get_global_ptr() to get a pointer to the one PandaSystem.
Definition at line 31 of file pandaSystem.cxx.
References add_system(), and set_system_tag().
Referenced by get_global_ptr().
Don't try to destruct the global PandaSystem object.
Definition at line 69 of file pandaSystem.cxx.
Intended for use by each subsystem to register itself at startup.
Definition at line 392 of file pandaSystem.cxx.
Referenced by PandaSystem().
[static]
Returns a string representing the date and time at which this version of Panda (or at least dtool) was compiled, if available.
Definition at line 291 of file pandaSystem.cxx.
Returns a string representing the compiler that was used to generate this version of Panda, if it is available, or "unknown" if it is not.
Definition at line 251 of file pandaSystem.cxx.
Returns the string defined by the distributor of this version of Panda, or "homebuilt" if this version was built directly from the sources by the end-user.
This is a completely arbitrary string.
Definition at line 239 of file pandaSystem.cxx.
Returns the global PandaSystem object.
Definition at line 482 of file pandaSystem.cxx.
References PandaSystem().
Referenced by get_package_host_url(), and get_package_version_string().
Returns the major version number of the current version of Panda.
This is the first number of the dotted triple returned by get_version_string(). It changes very rarely.
Definition at line 176 of file pandaSystem.cxx.
Returns the minor version number of the current version of Panda.
This is the second number of the dotted triple returned by get_version_string(). It changes with each release that introduces new features.
Definition at line 190 of file pandaSystem.cxx.
Returns the number of Panda subsystems that have registered themselves.
This can be used with get_system() to iterate through the entire list of available Panda subsystems.
Definition at line 332 of file pandaSystem.cxx..
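As a minimal sketch of how these calls fit together (the loop bounds and printing are illustrative, not taken from the reference):
#include "pandaSystem.h"
#include <iostream>

int main() {
  PandaSystem *sys = PandaSystem::get_global_ptr();
  std::cout << "Panda version: " << sys->get_version_string() << "\n";

  // get_num_systems()/get_system() walk the registered subsystems,
  // which are kept in alphabetical order.
  for (size_t i = 0; i < sys->get_num_systems(); ++i) {
    std::cout << "  subsystem: " << sys->get_system(i) << "\n";
  }
  return 0;
}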
Definition at line 159 of file pandaSystem.cxx..
Definition at line 134 of file pandaSystem.cxx.
References get_global_ptr()..
Definition at line 109 of file pandaSystem.cxx.
Returns a string representing the runtime platform that we are currently running on.
This will be something like "win32" or "osx.i386" or "linux.amd64".
Definition at line 304 of file pandaSystem.cxx.
Returns the sequence version number of the current version of Panda.
This is the third number of the dotted triple returned by get_version_string(). It changes with bugfix updates and very minor feature updates.
Definition at line 204 of file pandaSystem.cxx.
Returns the nth Panda subsystem that has registered itself.
This list will be sorted in alphabetical order.
Definition at line 344 of file pandaSystem.cxx..
Definition at line 370 of file pandaSystem.cxx.
Returns the current version of Panda, expressed as a string, e.g.
"1.0.0". The string will end in the letter "c" if this build does not represent an official version.
Definition at line 81 of file pandaSystem.cxx.
Returns true if the current version of Panda claims to have the indicated subsystem installed, false otherwise.
The set of available subsystems is implementation defined.
Definition at line 317 of file pandaSystem.cxx..
Definition at line 433 of file pandaSystem.cxx.
References MemoryHook::heap_trim()..
Definition at line 222 of file pandaSystem.cxx.
Intended for use by each subsystem to register its set of capabilities at startup.
Definition at line 406 of file pandaSystem.cxx.
http://www.panda3d.org/reference/1.8.0/cxx/classPandaSystem.php
All Loops Are a Code Smell
The death of for, while, and their ilk.
Loops are a fundamental part of programming. We need to do something for each item in a list. We need to read input until input is exhausted. We need to put n number of boxes on the screen. But every time I see a loop being added to code in a PR, my eyebrows go up. Now I have to examine the code closely to ensure the loop will always terminate. Sometimes it’s very easy to tell, but I’d just as soon not have to make that determination. I want to see all loops disappear into some well-tested library. But I still see them creeping in, so I thought I’d show just how to eliminate them in cases that might tempt you to use them.
The key to making loops disappear is functional programming. All you should supply is the code to execute in the loop and the parameters of the loop (what it should loop on). I’ll be using Java as an example language, but a lot of languages support this style of functional programming which can help to eliminate loops in your code.
The simplest case is doing something for each element in a list.
List<Integer> list = List.of(1, 2, 3);
// bare for loop.
for(int i : list) {
System.out.println("int = " + i);
}
// controlled for each
list.forEach(i -> System.out.println("int = " + i));
In this simplest case, there’s not a lot of benefit either way. But the second way gets us into the habit of not using the bare
for loop and, to my eye, has a slightly cleaner syntax.
The
forEach for me is also problematic and should only be used for methods that contain safe side-effects. By safe side-effects, I mean they don’t alter program state. In the above example, we are just logging, so it’s fine to use. Other examples of safe side-effects are writing to a file, database, or message queue.
Unsafe side-effects alter program state. Here’s an example, and how to fix it:
// bad side-effect, the loop alters sum
int sum = 0;
for(int i : list) {
sum += i;
}
System.out.println("sum = " + sum);// no side-effect, sum is calculated by loop
sum = list
.stream()
.mapToInt(i -> i)
.sum();
System.out.println("sum = " + sum);
Another example I see all too often:
// bad side-effect, the loop alters list2
List<Integer> list2 = new ArrayList<>();
for(int i : list) {
list2.add(i);
}
list2.forEach(i -> System.out.println("int = " + i));// no side effect, the second list is built by the loop
list2 = list
.stream()
.collect(Collectors.toList());
list2.forEach(i -> System.out.println("int = " + i));
One problem that occurs is when you need the index within the method that processes list items, but that can be solved as well:
// bare for loop with index:
for(int i = 0; i < list.size(); i++) {
System.out.println("item at index "
+ i
+ " = "
+ list.get(i));
}
// controlled loop with index:
IntStream.range(0, list.size())
.forEach(i -> System.out.println("item at index "
+ i
+ " = "
+ list.get(i)));
How about the age-old problem of reading each line in a file until the file is exhausted.
BufferedReader reader = new BufferedReader(
new InputStreamReader(
LoopElimination.class.getResourceAsStream("/testfile.txt")));
// while loop with clumsy looking syntax
String line;
while((line = reader.readLine()) != null) {
System.out.println(line);
}
reader = new BufferedReader(
new InputStreamReader(
LoopElimination.class.getResourceAsStream("/testfile.txt")));
// less clumsy syntax
reader.lines()
.forEach(l -> System.out.println(l));
In the above case, we had a very convenient
lines method that returned a
Stream type.
But what if you’re reading character-by-character? There’s no method of the
InputStream class that returns a
Stream<Character>. We’ll have to make our own
Stream:
InputStream is =
LoopElimination.class.getResourceAsStream("/testfile.txt");
// while loop with clumsy looking syntax
int c;
while((c = is.read()) != -1) {
System.out.print((char)c);
}
// But this is even uglier
InputStream nis =
LoopElimination.class.getResourceAsStream("/testfile.txt");
// Exception handling makes functional programming ugly
Stream.generate(() -> {
try {
return nis.read();
} catch (IOException ex) {
throw new RuntimeException("Error reading from file", ex);
}
})
.takeWhile(ch -> ch != -1)
.forEach(ch -> System.out.print((char)(int)ch));
This is one instance where the while loop looks better. Also the
Stream version uses the
generate function which returns an infinite stream of items, so I have to inspect further to make sure the generation terminates, which it does because of the
takeWhile method. The
InputStream class is problematic because it doesn’t have a
peek method which we would need to use to create an
Iterator that could easily be turned into a
Stream. It also throws a checked exception which really uglifies functional programming. I might give a PR a pass with the while statement in this case.
To make the above problem cleaner, you could create a new type
InputStreamIterable like this:
static class InputStreamIterable implements Iterable<Character> {
private final InputStream is;
public InputStreamIterable(InputStream is) {
this.is = is;
}
public Iterator<Character> iterator() {
return new Iterator<Character>() {
public boolean hasNext() {
try {
// poor man's peek:
is.mark(1);
boolean ret = is.read() != -1;
is.reset();
return ret;
} catch (IOException ex) {
throw new RuntimeException(
"Error reading input stream", ex);
}
}
public Character next() {
try {
return (char)is.read();
} catch (IOException ex) {
throw new RuntimeException(
"Error reading input stream", ex);
}
}
};
}
}
Then the looping problem is greatly simplified:
// use a predefined inputstream iterator:
InputStreamIterable it = new InputStreamIterable(
LoopElimination.class.getResourceAsStream("/testfile.txt"));
StreamSupport.stream(it.spliterator(), false)
.forEach(ch -> System.out.print(ch));
If you encounter this type of
while loop a lot, you might invest in creating and using a specialized
Iterable class. But if it’s a one-off, it’s probably not worth it, and just an example of old Java being incompatible with new Java.
So the next time you’re writing
for or
while in your code, stop and think for a moment how that could be better accomplished with a
forEach or a
Stream.
The code for this article can be found in my GitHub repository
https://medium.com/swlh/all-loops-are-a-code-smell-6416ac4865d6
we encourage it, as having an account provides a number of additional benefits. These benefits include:
Instant benefits
Registered users instantly acquire a userspace. For starters, this means:
- A user page: a page with your name and the prefix
User:, which you can use to write whatever introduction to other Uncyclopedians you feel inclined to give,
- A talk page: a page with your name and the prefix
User talk:, in which some Admin or Welcomer will amble around and give you a formal, file-based welcome,
- A potentially infinite number of userspace pages in which you can put stuff that isn't ready for the main encyclopedia but might be, some day.
Registered users also get:
- One vote (rather than half a vote) on what articles appear on Uncyclopedia's main page,
- The ability to use the Upload page to upload illustrations to use in your articles. These can be any pictures you have on your computer. (No shock images or illegal content, please.)
- Most importantly, your chosen username lets you strut around as an Uncyclopedian.
Balky, straggling benefits
After a few days, depending on whether the servers work, you will acquire even more benefits as restrictions against "new and anonymous users" cease to apply to you, on account of your sudden lack of newness:
- Editing such pages as we have protected against those new and anonymous users, giving you sudden unexpected ability to influence the workings of the website
- The ability to move pages
- You will have a greater voice in site happenings (Please see the dump)
Clandestine benefits
You might worry that we get greater ability to track you. In fact, we get less. Any edit you make as an anonymous user is logged to your IP address, which lets everyone see what city and probably what house you are calling from. When you pick a user name, we'd have to ask permission of the Suits to get such information, and we'd only do so if we thought you were using multiple names to game us.
How do I create an account?
It's really simple! See that shiny Login / Create account button in the top right corner of your screen?
- Click it.
- Next, switch over to the create account screen.
- Fill in all the fields with the appropriate information.
- Click confirm.
- PROFIT!
I can't create an account. HELP!
A number of problems may arise, mostly to test your mettle and prove your worthiness.
Uncyclopedia has a different namespace from the rest of Wikia. This means that not only can you not use a name that has been taken, but you cannot use a name that was taken by you. We regard this as the perfect system "to avoid confusion"! So you will have to pick another, and live out your life with your username on Uncyclopedia being different from your username on the rest of Wikia. Most users append numbers onto the end, as though it were a password, to make it perversely difficult for the rest of us to address you. Others pursue difficulty of being addressed in different ways.
With luck, you will find a username that has not already been used by someone who logged on in 2006 and never made an actual edit. If ten attempts fail, try something else, such as dicking with capital letters.
Why should I make an account?
Were you even listening to me before?
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Create_an_account?redirect=yes
module Sound.ALSA.Sequencer.Concurrent (
   threadWaitInput
 , threadWaitOutput
 , threadWaitDuplex
 , input
 , output
 , drainOutput
 ) where

import qualified Sound.ALSA.Sequencer.Poll as AlsaPoll
import qualified Sound.ALSA.Sequencer.Marshal.Sequencer as Seq
import qualified Sound.ALSA.Sequencer.Event as Event
import Sound.ALSA.Exception (code, )

import Control.Concurrent (yield, threadWaitRead, threadWaitWrite, forkIO, killThread, )
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar, )
import Control.Exception (catchJust, catch, )
import Control.Monad (guard, when, )
import Data.Function (fix, )
import Data.Word (Word, )

import Foreign.C.Error (eINTR, )
import System.IO.Error (isFullError, )
import System.Posix.Types (Fd, )
import qualified System.Posix.Poll as Poll
import qualified Data.EnumSet as EnumSet

import Prelude hiding (catch, )

data WaitFd = WaitRead Fd | WaitWrite Fd

pollWaits :: Poll.Fd -> [WaitFd]
pollWaits (Poll.Fd f e _) =
   (if EnumSet.subset Poll.inp e then [WaitRead f] else []) ++
   (if EnumSet.subset Poll.out e then [WaitWrite f] else [])

-- | Wait for any of the given events, like poll, and return the one that is ready
threadWaitPolls :: [WaitFd] -> IO WaitFd
threadWaitPolls [] = yield >> return undefined
threadWaitPolls [p@(WaitRead f)] = threadWaitRead f >> return p
threadWaitPolls [p@(WaitWrite f)] = threadWaitWrite f >> return p
threadWaitPolls l = do
   w <- newEmptyMVar
   let poll1 p =
          catch (threadWaitPolls [p] >>= putMVar w . Right) (putMVar w . Left)
   t <- mapM (forkIO . poll1) l
   r <- takeMVar w
   mapM_ killThread t
   either ioError return r

threadWaitEvents :: Poll.Events -> Seq.T mode -> IO ()
threadWaitEvents e sh =
   AlsaPoll.descriptors sh e >>= threadWaitPolls . concatMap pollWaits >> return ()

-- | Wait for new input to be available from the sequencer (even if there is already input in the buffer)
threadWaitInput :: Seq.AllowInput mode => Seq.T mode -> IO ()
threadWaitInput = threadWaitEvents Poll.inp

-- | Wait until new output may be drained from the buffer to the sequencer (even if the output buffer is already empty)
threadWaitOutput :: Seq.AllowOutput mode => Seq.T mode -> IO ()
threadWaitOutput = threadWaitEvents Poll.out

-- | Wait until new input is available or new output may be drained
threadWaitDuplex :: (Seq.AllowInput mode, Seq.AllowOutput mode) => Seq.T mode -> IO ()
threadWaitDuplex = threadWaitEvents (Poll.inp EnumSet..|. Poll.out)

catchFull :: IO a -> IO a -> IO a
catchFull f e = catchJust (guard . isFullError) f (\() -> e)

catchIntr :: IO a -> IO a
catchIntr f = catchJust (guard . (eINTR ==) . code) f (\() -> catchIntr f)

-- | A thread-compatible version of @Sound.ALSA.Sequencer.Event.input@.
-- This call is always blocking (unless there are already event in the input
-- buffer) but will not block other threads. The sequencer, however, must be
-- set non-blocking or this will not work as expected.
input :: Seq.AllowInput mode => Seq.T mode -> IO Event.T
input sh = do
   n <- catchIntr $ Event.inputPending sh True
   when (n == 0) $ threadWaitInput sh
   fix $ catchFull (Event.input sh) . (threadWaitInput sh >>)

-- | A thread-compatible version of @Sound.ALSA.Sequencer.Event.output@.
-- This call is always blocking (unless there is space in the output
-- buffer) but will not block other threads. The sequencer, however, must be
-- set non-blocking or this will not work as expected.
output :: Seq.AllowOutput mode => Seq.T mode -> Event.T -> IO Word
output sh ev =
   Event.outputBuffer sh ev `catchFull` do
      threadWaitOutput sh
      _ <- Event.drainOutput sh `catchFull` return (-1)
      output sh ev

-- | A thread-compatible version of @Sound.ALSA.Sequencer.Event.drainBuffer@.
-- This call is always blocking but will not block other threads. The
-- sequencer, however, must be set non-blocking or this will not work as
-- expected.
drainOutput :: Seq.AllowOutput mode => Seq.T mode -> IO ()
drainOutput sh = do
   n <- Event.drainOutput sh `catchFull` return (-1)
   when (n /= 0) $ do
      threadWaitOutput sh
      drainOutput sh
http://hackage.haskell.org/package/alsa-seq-0.6.0.1/docs/src/Sound-ALSA-Sequencer-Concurrent.html
NAME
mlock, mlock2, munlock, mlockall, munlockall - lock and unlock memory
SYNOPSIS
#include <sys/mman.h>
int mlock(const void *addr, size_t len);
int mlock2(const void *addr, size_t len, unsigned int flags);
int munlock(const void *addr, size_t len);
int mlockall(int flags);
int munlockall(void);
DESCRIPTION
mlock(), mlock2(), and mlockall() lock part or all of the calling process's virtual address space into RAM, preventing that memory from being paged to the swap area. munlock() and munlockall() perform the converse operation, unlocking part or all of the calling process's virtual address space.
RETURN VALUE
On success these system calls return 0. On error, -1 is returned, errno is set to indicate the error, and no changes are made to any locks in the address space of the process.
ERRORS
- EAGAIN
- (mlock(), mlock2(), and munlock()) Some or all of the specified address range could not be locked.
- EINVAL
- (mlock(), mlock2(), and munlock()) The result of the addition addr+len was less than addr (e.g., the addition may have resulted in an overflow).
- EINVAL
- (mlock2()) Unknown flags were specified.
- EINVAL
- (mlockall()) Unknown flags were specified or MCL_ONFAULT was specified without either MCL_FUTURE or MCL_CURRENT.
- EINVAL
- (Not on Linux) addr was not a multiple of the page size.
- ENOMEM
- (mlock(), mlock2(), and munlock()) Some of the specified address range does not correspond to mapped pages in the address space of the process.
- ENOMEM
- (mlock(), mlock2(), and munlock()).)
- ENOMEM
- (Linux 2.6.9 and later) The caller had a nonzero RLIMIT_MEMLOCK soft resource limit but tried to lock more memory than the limit permitted. This limit is not enforced if the process is privileged (CAP_IPC_LOCK).
- EPERM
- (munlockall()) (Linux 2.6.8 and earlier) The caller was not privileged (CAP_IPC_LOCK).
VERSIONS
mlock2() is available since Linux 4.4; glibc support was added in version 2.27.
CONFORMING TO
mlock(), munlock(), mlockall(), and munlockall(): POSIX.1-2001, POSIX.1-2008, SVr4.
mlock2() is Linux specific.
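As a short illustration (not part of the manual page; the buffer size and error handling are arbitrary), locking and later unlocking a buffer looks like this:
#include <cstdio>
#include <sys/mman.h>
#include <vector>

int main() {
    std::vector<char> secret(4096, 0);              // e.g. a buffer for key material

    // Pin the buffer so it cannot be paged out to the swap area.
    if (mlock(secret.data(), secret.size()) != 0) {
        perror("mlock");
        return 1;
    }

    // ... use the locked memory ...

    if (munlock(secret.data(), secret.size()) != 0) {
        perror("munlock");
        return 1;
    }
    return 0;
}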
https://man.archlinux.org/man/mlock.2.en
Created on 2017-12-13 15:48 by barry, last changed 2018-03-05 19:23 by barry. This issue is now closed.
Along the lines of Issue32303 there's another inconsistency in namespace package metadata. Let's say I have a namespace package:
>>> importlib_resources.tests.data03.namespace
<module 'importlib_resources.tests.data03.namespace' (namespace)>
The package has no __file__ attribute, and it has a misleading __spec__.origin
>>> importlib_resources.tests.data03.namespace.__spec__.origin
'namespace'
>>> importlib_resources.tests.data03.namespace.__file__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'importlib_resources.tests.data03.namespace' has no attribute '__file__'
This is especially bad because the documentation for __spec__.origin implies a correlation to __file__, and says:
"Name of the place from which the module is loaded, e.g. “builtin” for built-in modules and the filename for modules loaded from source. Normally “origin” should be set, but it may be None (the default) which indicates it is unspecified."
I don't particularly like that its origin is "namespace". That's an odd special case that's unhelpful to test against (what if you import a non-namespace package from the directory "namespace"?)
What would break if __spec__.origin were (missing? or) None?
3.5 is in security fix only mode, and this is not a security issue.
Note that this change was originally also backported to 3.6 in PR 5504 but, due to third-party package regressions discovered in pre-release testing, the 3.6 change was reverted in PR 5591 prior to release of 3.6.5rc1.
Should this get an entry in the What's New?
I guess it depends on whether you think this is a new feature or a bug fix. Or, OTOH, since we had to revert for 3.6, maybe it makes sense either way since some code will be affected.
As is usual for me, I am here because some coverage.py code broke due to this change. A diff between b1 and b2 found me the code change (thanks for the comment, btw!), but a What's New doesn't seem out of place.
On Mar 5, 2018, at 10:33, Ned Batchelder <report@bugs.python.org> wrote:
> As is usual for me, I am here because some coverage.py code broke due to this change. A diff between b1 and b2 found me the code change (thanks for the comment, btw!), but a What's New doesn't seem out of place.
Sounds good; I’ll work up a PR
https://bugs.python.org/issue32305
SPLOIT: How to Make a Python Port Scanner
NOTICE: Ciuffy will be answering questions related to my articles on my behalf as I am very busy. Hope You Have Fun !!!
Hello Guys,
Welcome to my first tutorial and in this tutorial we are basically going to create a port scanner in python ( I guess without external libraries ).
Before we starting build the project, I would first like to thank the null-byte community for been such a great help in my life: The fascinating website, Loving and always willing to help/learn members, selfless admin. You guys are awesome.
Aside my gratitudes, I would like to brief a little bit on networking.
INTRODUCTION TO THE WORLD OF NETWORKING
I am only going to treat what is needed to build this project.
From our friends at: WikiPedia
A port is a place where information goes into and out of a computer.. ( Read More ).
To portsweep is to scan multiple hosts for a specific listening port. The latter is typically used to search for a specific service, ( For example, an SQL-based computer worm may portsweep looking for hosts listening on TCP port 1433 eg. SQL Slammer ).
WHAT WE MOSTLY USE TODAY ( TCP/IP ).
Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host.
TYPES OF SCANNING PROTOCOLS
Lets quickly brush through some types of network scanning protocols.
TCP Scanning
The simplest port scanners use the operating system's network functions and are generally the next option to go to when SYN is not a feasible option ( described next ). Nmap calls this mode connect scan, named after the Unix connect() system call. If a port is open, the operating system completes the TCP three-way handshake, and the port scanner immediately closes the connection to avoid performing a Denial-of-service attack. Otherwise an error code is returned. This scan mode has the advantage that the user does not require special privileges. However, using the OS network functions prevents low-level control, so this scan type is less common. This method is "noisy", particularly if it is a "portsweep": the services can log the sender IP address and Intrusion detection systems can raise an alarm. an RST packet, closing the connection before the handshake is completed. If the port is closed but unfiltered, the target will instantly respond with a RST packet.. However, the RST during the handshake can cause problems for some network stacks, in particular simple devices like printers. There are no conclusive arguments either way.
UDP Scanning
UDP scanning is also possible, although there are technical challenges. UDP is a connectionless protocol so there is no equivalent to a TCP SYN packet...
ACK Scanning
ACK scanning is one of the more unusual. Using this scanning technique with systems that no longer support this implementation returns 0's for the window field, labeling open ports as closed.
FIN Scanning
Since SYN scans are not surreptitious enough, firewalls are, in general, scanning for and blocking packets in the form of SYN packets. FIN packets can bypass firewalls without modification. Closed ports reply to a FIN packet with the appropriate RST packet, whereas open ports ignore the packet on hand. This is typical behavior due to the nature of TCP, and is in some ways an inescapable downfall.
Other Scan Types
Some more unusual scan types exist. These have various limitations and are not widely used. Nmap supports most of these.
X-mas and Null Scan - Similar to FIN scanning, but:
X-mas sends packets with FIN, URG and PUSH flags turned on like a Christmas tree.
Null sends a packet with no TCP flags set.
Protocol Scan - Determines what IP level protocols (TCP, UDP, GRE, etc.) are enabled.
Proxy Scan - A proxy ( SOCKS or HTTP ) is used to perform the scan. The target will see the proxy's IP address as the source. This can also be done using some FTP servers.
Idle Scan - Another method of scanning without revealing one's IP address, taking advantage of the predictable IP ID flaw.
Cat Scan - Checks ports for erroneous packets.
ICMP Scan - determines if a host responds to ICMP requests, such as echo (ping), netmask, etc.
What we basically need to know is the TCP/IP Protocol.
SCREENSHOT OF FINAL WORK
SCRIPT
BUILDING THE PORT SCANNER
Our port scanner is going to be a simple one, Less than 50 lines and the codes used are basic.
Let's begin.
SECTION 1: IMPORTING MODULES
Socket - Important ( 10 / 10 )
Datetime - Optional ( 1 / 10 )
Sys - Maybe Important ( 5 / 10 )
Time - Optional ( 1 / 10 )
from socket import * - From the socket module, import everything
import sys, time - Import the sys module, Import the time module ( We can import as many modules as we want so far as we put a comma in between them )
from datetime import datetime - From the datetime module, import function datetime and leave the rest
SECTION 2: DECLARING PROGRAM SETTINGS
Basically, the settings we need are the host address , start and stop port ( Can be embedded in the script, But we wanna give others the chance to change it with ease should our script get out. )
Our script has been limited to port 5000 for quick demonstration.
SECTION 3: INITIATING PROGRAM
Lets ask the user for the target host address, This could be a url address of the host or the direct numberic ip address.
Except KeyboardInterrupt
This handles keyboard interrupts ( Ctrl + C ) should the user want to close the script for whatever purposes. This prevents python from stopping our program execution and spilling its Keyboard Interrupt exception code to the screen. ( Kind-off seems unprofessional ). Like this ...
Let's see what our program does when a user hits the interrupt command.
Nicely done !!!
SECTION 4: GETHOSTBYNAME
This function simply returns the ip numberic values of a host address or url.
When we call this function, we add our host address or url in open brackets and maybe associate it with a print like the screenshot which should echo the ip address of the host url or hostname.
It prints out the ip address of Google.
SECTION 5: DATETIME
This function simply returns the current time value of the OS.
SECTION 6: SCANNING
min_port - 1
max_port - 5000
range - It generates a list of numbers, which is generally used to iterate over with for loops. It takes two args, min number and max number. ( Read more on Range()
for port in range(min_port, max_port): For each number alias port in range, do something.
Now the try and except statment ensures the program execution is not stopped by an error ( except statement catches the error ).
In the try statement, response is calling a function scan_host and this function is taking two arguements: host and port ( We will explain the function in the next section). Response is a variable that holds the returned value from the scan_host function.
If the received value from scan_host is 0, print out Port <port number>: Open ....
When an exception or error occurs, pass simply allows the script to continue its execution.
SECTION 7: SCAN_HOST FUNCTION
This basically utilizes the OS socket function to connect to the intended target. It accepts two arguements ( Actually 3 but r_code is optional and has been set to 1 ).
s - Initiates the sockets
code - Executes the connect function and captures the connection result ( Whether successful, failed )
if the code value is 0 which is what most linux programs use when there is successful execution, r_code is set to code which is 0.
The socket is then closed, Should an error occur: The except function is executed and pass allows the program to continue execution. ( Remember: r_code will still be set to 1, when the response variable receives it: It won't do anything since it has no command for 1 )
return - Simply returns r_code to its caller
NOTE: For the script to print out an open port, when r_code is set to 0 and the response variable receives it then we can have an open port printed out.
SECTION 8: CONCLUSION
I've successfully explained the needed part of the script and now it's time for testing ...
Let's test nmap to see its result
I think we created our own nmap ( Don't let this get into your head, Nmap is way more advanced than our script ).
NOTE TO ALL is not a domain, I modified my /etc/hosts file to demonstrate the gethostbyname function. As you can see, its ip is 10.0.2.11 which is part of my internal NAT Network Adresses. Please don't go scanning ghost domains. :)
IN A NUT SHELL
Hope we had fun and notify me of any misinformation, typing errors or anything that needs correction. Have a nice day !!!!
# Sergeant
18 Comments
So many references and terms, hope the community will benefit.
Very detailed, well done.
Ciuffy:
Thanks Sir, thats a lot coming from you
# Sergeant
Great compendium of information it is. Less fluff and more stuff just like it should be.
CyberHitchHiker:
Thanks for the compliment ...
# Sergeant
Nice work, very in-depth and detailed.
ghost_
Dr. Ghost:
Thanks and this also means alot coming from you
# Sergeant
I give Kudos, well thought out.
Cracker:
Thanks for your Kudos and your time of reading the article. ( Means alot ), My gratitudes.
# Sergeant
This is one of the best tutorials I have seen so far!!
Cameron:
Thanks ... ( I have to say: I love your articles too, Hope to read more ).
Really appreciate your words.
# Sergeant
Great work. Very informative.
Lemon:
Thanks ... Really appreciate your feedback.
# Sergeant
Your tutorials are really cool, Did your scan take long: Mine is really taking long
It depends.
# Sergeant
Yeah, My scan also took like 8 minutes to scan from 1 to 65535. Is there a fix for this or should i increase my RAM size.
awesome !, Man i have been meaning to write a port scanner but never had the idea or basics ... thanks
wont this type of scan log your ip address...
In line 42 I have an error with "," what should I change at the script?
Share Your Thoughts
https://null-byte.wonderhowto.com/how-to/sploit-make-python-port-scanner-0161074/
Last week I asked about this so I tried to write another example to try it out. I have a Line class which needs to have and members of type Point and member functions that return type Point. The Point class needs to have member functions that return type Line (this has been omitted from the example because the solution will be the same).
Last week I was informed that if I have a situation like this, then in class Line I can only have members of type *Point. Clearly, I would return them to the main program with return &MyPoint. However, with member functions, since they can only return *Point, there is obviously no way to get the value to main without in main doing Point P=&Line.getP1(). That seems like it would get a bit old.
1) is this all correct?
2) is there not a better way to do it?
Thanks!
David
Line.h
#ifndef LINE_H #define LINE_H #include "Point.h" class Point; class Line { // Ax+By+C = 0 double A,B,C; Point *p1, *p2; public: Line(); Line(Point P1, Point P2); ~Line() {} double getA() { return A;} double getB() { return B;} double getC() { return C;} //Point getP1() {return *p1;} //Point getP2() {return *p2;} }; #endif
Line.cpp
#include "Line.h" Line::Line() { A=0; B=0; C=0; } Line::Line(Point P1, Point P2) { p1=&P1; p2=&P2; double x1,x2,y1,y2; x1=P1.getX(); x2=P2.getX(); y1=P1.getY(); y2=P2.getY(); //derived from slope(m) intercept(b) form double m,b; m = (y2-y1)/(x2-x1); b = y1 - ((y2-y1)/(x2-x1))*x1; C = 1;//arbitrary, so set to 1 B = -1/b; A = -B*m; }
Point.h
#ifndef POINT_H #define POINT_H #include "Line.h" class Line; class Point { double x, y, z; public: Point(); Point(double x_create, double y_create, double z_create); ~Point() {} double getX() { return x; } double getY() { return y; } double getZ() { return z; } void setX(double Xin) { x=Xin; } void setY(double Yin) { y=Yin; } void setZ(double Zin) { z=Zin; } }; #endif
Point.cpp
#include "Point.h" Point::Point() { x=0; y=0; z=0; } Point::Point(double x_create, double y_create, double z_create) { x=x_create; y=y_create; z=z_create; }
test.cpp
#include "Line.h" #include "Point.h" #include <iostream> using namespace std; int main() { Point MyPoint(1,2,0); cout << MyPoint.getX() << " " << MyPoint.getY() << " " << MyPoint.getZ() << endl; Point MyPoint2(2,5,0); cout << MyPoint2.getX() << " " << MyPoint2.getY() << " " << MyPoint2.getZ() << endl; Line MyLine(MyPoint, MyPoint2); cout << MyLine.getA() << " " << MyLine.getB() << " " << MyLine.getC() << endl; //cout << MyLine.getP1() << " " << MyLine.getP2() << endl; int i; cin >> i; return 0; }
https://www.daniweb.com/programming/software-development/threads/114375/classes-which-depend-on-each-other
Dropping privileges in a SUID binary
I required an application to run as a SetUID application under FreeBSD, requiring the application to run under a different user and set of groups after completing a task. I chose the term 'set of groups' because some systems (like FreeBSD :) ) may associate a process with a number of groups besides the standard POSIX group the process is running under. To drop the privileges my application had to accomplish the following steps:
- Locate the user and groups the process wished to become
- Change the groups
- Change the user
- Issue an exec
Locating the user and group is optional, as you may hardwire the numbers (not recommended); however, steps two and three must be done in order. These tasks were easy enough to complete, but I decided to share this as general knowledge because I didn't find many resources detailing the process :).
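Sketched in code, the ordering looks like this (a bare-bones illustration only: error handling is reduced to aborting, and the uid/gid values and target program are whatever you resolved in step one):
#include <grp.h>        // setgroups()
#include <unistd.h>     // setegid(), setgid(), seteuid(), setuid(), execv()
#include <cstdlib>      // std::abort()

// Assumes the process is currently running as root; the order matters.
void dropAndExec(uid_t uid, gid_t gid, char *const argv[]) {
    if (setgroups(1, &gid) != 0) std::abort();   // access groups, while still root
    if (setegid(gid) != 0)       std::abort();   // effective group
    if (setgid(gid) != 0)        std::abort();   // real group
    if (seteuid(uid) != 0)       std::abort();   // effective user
    if (setuid(uid) != 0)        std::abort();   // real user -- no way back now
    execv(argv[0], argv);                        // exec so the saved IDs cannot be reused
    std::abort();                                // only reached if execv() fails
}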
Locating the user and groups we wish to become
Of course the target user and group are dependent on your goals, however you need at least one group identifier (GID) and one user identifier (UID). These may be obtained in a number of ways, including hardwiring your numbers that correlate to your /etc/passwd and /etc/group entries. I'll show two methods of retrieving both identifiers: from a file's user and group, and another through the system's user and group database module.
Obtaining a file's {GID,UID}
stat and friends are your friend in this case as the
struct stat provides the information. The two fields you are looking for are
st_uid and
st_gid within the output of type
struct stat. Be careful when using lstat as you will be referring to the symbolic link itself instead of the target of the link.
Example
The following is an example of locating the {GID,UID} from a file using
stat
#include <errno.h>
#include <iostream>
#include <string.h>     // strerror() -- missing from the original listing
#include <sys/types.h>
#include <sys/stat.h>

using namespace std;

int main(int argc, char** argv){
    if(argc < 2){
        cout << "Requires a file as an argument" << endl;
    }else{
        char* fileName = argv[1];
        cout << "Owner and group of " << fileName << ":" << endl;

        struct stat info;
        if(stat(fileName, &info) != 0){
            cout << "Unable to retrieve file information because " << strerror(errno) << endl;
        }else{
            cout << "Owner: " << info.st_uid << endl;
            cout << "Group: " << info.st_gid << endl;
        }
    }
    return 0;
}
Locating a {GID,UID} pair from the system databases by name
Locating the {GID,UID} pairs requires accessing two different system databases to obtain the information. The databases are usually stored in /etc/passwd and /etc/group in simple deployments and may be stored in a remote LDAP database in large enterprises. The easiest method of accessing the data is to utilize the functions which are a part of the standard C library. I'll detail a simple example of both user name to UID and group name to GID.
User name to UID
To convert a user name to UID you utilize the
getpwnam function, taking the user name as input and providing a structure as a result. The memory returned in the pointer is owned by the
getpwnam module. This function is not thread-safe, although there are thread-safe functions providing the same functionality. The resulting struct is of type
struct passwd and provides all of the fields in the password database. The
pw_uid field will provide the user identifier. As an added bonus if you are only searching for the primary group you could also pickup the
pw_gid.
Group name to GID
Conversion from a group name to GID is just as simple, utilizing the
getgrnam function. The returned
struct group contains a
gr_gid field for your group.
Example of {GID,UID}
The following example is taken from the library which you may download from below
/**
 * Retrieves the GID number for the given group name.
 *
 * @arg groupName is the name of the group to retrieve the GID for
 * @return the GID for the given group
 * @throws std::invalid_argument if the group doesn't exist or otherwise can't be located
 */
gid_t getGroup(const char* groupName) throw(PUGException){
    struct group* tuple;
    errno = 0;
    //Grab the group name from the group database
    tuple = getgrnam(groupName);
    if(tuple == NULL){
        if(errno == 0){
            std::stringstream buffer;
            buffer << "Unable to find a group by the name '"<< groupName << "'";
            throw PUGException(buffer);
        }else{
            std::stringstream buffer;
            buffer << "The following error occured while attempting to lookup an entry for group '"<< groupName << "': " << strerror(errno);
            throw PUGException(buffer);
        }
    }
    return tuple->gr_gid;
}

/**
 * Retrieves the UID number for the given user name.
 *
 * @arg userName is the user name to retreive the UID for
 * @return the UID for the given user
 * @throws std::invalid_argument if the user doesn't exist or otherwise can't be located
 */
uid_t getUser(const char* userName) throw (PUGException){
    struct passwd* tuple;
    errno = 0;
    tuple = getpwnam(userName);
    if(tuple == NULL){
        if(errno == 0){
            std::stringstream buffer;
            buffer << "Unable to find user '"<< userName << "'";
            throw PUGException(buffer);
        }else{
            std::stringstream buffer;
            buffer << "The following error occured while attempting to lookup an entry for user '"<< userName << "': " << strerror(errno);
            throw PUGException(buffer);
        }
    }
    return tuple->pw_uid;
}
Modifying the process's group
You have three fields related to the process group which must be modified. From my understanding, in many configurations root may assign arbitrary values to the fields, however non-superusers may only assign specific values. The effective group ID and real group ID must be set in that order. They are set using the functions setegid and setgid respectively. If working with an operating system other than FreeBSD there may be additional fields you may consider setting, like FSGID on Linux. Under FreeBSD a process running as root may also set the access groups associated with the current process using the setgroups function. Under FreeBSD this is an important step, as the list will allow the process access to the resources owned by the groups within the list.
Modifying the process's user
Conveniently enough there are seteuid and setuid functions to change the UID. This one is straightforward and easy without much fuss. As with the group, set the effective UID first with the seteuid function, then the actual UID with setuid.
Issuing an exec
To lock the effective and current identifiers you should issue a call to one of the many exec functions. If you do not, then the operating system may allow the user to revert their user ID back using a feature known as the "saved-user-id". Tragic if you are attempting to isolate a process from the rest of the system which was running under root.
Example
Download PUG and an example driving application. This is an excerpt from the attached code base and is written in C++. Why C++ for an otherwise C application? Exceptions for error handling :).
/* * POSIX includes */ #include <errno .h> #include <grp .h> #include <pwd .h> [..] #include <stdexcept> #include <string> #include <sstream> [...] /** * This method attempts to determine if the current user of the process is a super user. * The current implementation of this method is rather naive as the implemenetation * just checks to see if the current user id is zero. Should probably work under a * majority of cases. * * @returns true if the current user is a super user, otherwise false */ bool isSuperUser() throw(); /** * Changes teh associated groups for the current process, include the current, * effective, and access groups. Under most opreating systems if the current * user is not a super user, then you may only set the process group to the * current group. * * @param group is the primary group to associate with this process * @param setGroups will also set the access groups to the given group * @throws PUGException if a problem occurs in an underlying functions */ void setGroup(const gid_t group, const bool setGroups = isSuperUser()) throw(PUGException); /** * Drops teh privileges into the current user and group. If the current user of the * process is not the super user, then the only valid parameters are teh current * user and group. Otherwise if the process user is the super user, then any valid * value for the system will result in a change to the specified user. * * @param uid is the user to drop into * @param gid is the group to drop into * @throws PUGException if a POSIX function fails */ void dropPrivileges(const uid_t uid, const gid_t gid) throw (PUGException); [...] bool isSuperUser(){ return getuid() == 0; } void setGroup(const gid_t group, const bool setGroups){ if(setegid(group) != 0 ){ std::stringstream msg; msg << "unable to set effective group becaues " << strerror(errno); throw PUGException(msg); } if(setgid(group) != 0){ std::stringstream msg; msg << "unable to set real group because " << strerror(errno); throw PUGException(msg); } if(setGroups){ if(setgroups(1,&group) < 0) { std::stringstream msg; msg << "Unable to the associated groups because " << strerror(errno); throw PUGException(msg); } } } void dropPrivileges(const uid_t uid, const gid_t gid){ setGroup(gid); if(seteuid(uid) != 0 ){ std::stringstream msg; msg << "Unable to change the effective user because " << strerror(errno); throw PUGException(msg); } if(setuid(uid) != 0 ){ std::stringstream msg; msg << "Unable to change the real user because " << strerror(errno); throw PUGException(msg); } }
http://meschbach.com/kb/posix-suid-drop-privileges.html
Prabhu Ramachandran posted the following note to the enthought-dev mailing list. The result is that TVTK Scene in Pyface is now GUI-toolkit-independent, and Mayavi2 is independent of Envisage.
Just a quick note. I’ve refactored the pyface.tvtk scene in the
branches. I’ve “abstracted” the core TVTK functionality of a generic
scene in enthought.pyface.tvtk.tvtk_scene.TVTKScene. This scene basically uses
a TVTK renderwindow interactor and is usable by itself. It does not
really use anything from Pyface or wxPython. The
enthought.pyface.tvtk.scene.Scene class derives from it and creates a wxPython
specific widget that the pyface wx backend can use. So it should be
easy to create a QT backend.
I’ve changed mayavi a little bit to separate out the core Engine from
the EnvisageEngine and also use the new refactored pyface code. With
this, if you run examples/standalone.py you won’t see any envisage
workbench messages. There should be no envisage imports at all. I
even checked with python -v. Apart from some setuptools namespace
stuff which I don’t control there are no envisage imports.
In addition I’ve checked in an offscreen.py in the examples that shows
how you can do offscreen rendering with mayavi and the TVTKScene. The
key here is that you write mayavi scripts that can be rendered off
screen. In theory it should be possible to now do a visualization
on the full UI version, save (possibly a specialized) visualization
and then have that all rendered out offscreen. I am not going to
implement that until the persistence issues are sorted out. The thing
I am happy about is that this is now definitely possible to implement.
There should be no major API breakage unless you are using the mayavi
engine directly.
When updating mayavi2 from SVN you will have to install pyface from
branches as well.
http://blog.enthought.com/python/improved-code-separation-in-tvtk-and-mayavi2/
Have you seen Stackbit’s new “Code” & “Content” editors? I’m not sure what more a person needs in life to learn or teach static site generator HTML templating & front-matter data modeling practices.
I wish this had been around a year ago when I started my Jamstack journey.
Templating
Check this out: once I spin it up, I can see that in Stackbit’s new Agency theme, the body text of a hero section is created with the following code, and that it lives in a file called
hero_section.html:
{% assign content_is_not_empty = section.content | is_not_empty %}
{% if content_is_not_empty %}
  <div class="hero__body text-block">
    {{ section.content | markdownify }}
  </div>
{% endif %}
…just by clicking “code” when hovering over the text within my visual preview/editing panel.
Data model
I can also tell that the data is stored in front matter like this:
...
sections:
  - type: hero_section
    ...
    content: >-
      We are a brand and design practice. We work closely with you, your team
      to deliver inspiring work, which enables your organization to grow.
      [Let's talk](/contact/).
...
…just by clicking “content” when hovering over the text within my visual preview/editing panel.
Time saver
I can’t begin to tell you how many months of weekends I lost combing through the Git codebases of all of Stackbit’s themes to figure out that this is how good page builder themes are designed and implemented in data models & templates, trying to connect the dots about what back-end code made what front-end visual effects happen.
(Tip: Stackbit writes really well-architected themes that are worth poring over.)
- Being able to just click “code” or “content” and jump straight to part of an experienced web developer’s work that you’d like to dissect (so as to better understand it) is incredibly educational if you’re learning web development in the Jamstack.
- It’s a great tool for teaching , too, if you want to show others how you made something.
I’m super impressed, Stackbit. Keep it coming.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/katiekodes/stackbit-can-teach-you-web-development-3blh
A plot item, that represents a series of points. More...
#include <qwt_plot_curve.h>
A plot item, that represents a series of points.
A curve is the representation of a series of points in the x-y plane. It supports different display styles, interpolation ( f.e. spline ) and symbols.
Attribute for drawing the curve
Curve styles.
Attributes how to represent the curve on the legend
Attributes to modify the drawing algorithm. The default setting enables ClipPolygons | FilterPoints
Constructor
Constructor
Complete a polygon to be a closed polygon including the area between the original polygon and the baseline.
Find the closest curve point for a specific position
Get the curve fitter. If curve fitting is disabled NULL is returned.
Draw the line part (without symbols) of a curve interval.
Draw dots
Draw lines.
If the CurveAttribute Fitted is enabled a QwtCurveFitter tries to interpolate/smooth the curve, before it is painted.
Draw an interval of the curve
Implements QwtPlotSeriesItem.
Draw step function
The direction of the steps depends on Inverted attribute.
Draw sticks
Draw symbols
Fill the area between the curve and the baseline with the curve brush
Reimplemented from QwtPlotItem.
Reimplemented from QwtPlotItem.
Set the value of the baseline.
The baseline is needed for filling the curve with a brush or the Sticks drawing style.
The interpretation of the baseline depends on the orientation(). With Qt::Horizontal, the baseline is interpreted as a horizontal line at y = baseline(), with Qt::Vertical, it is interpreted as a vertical line at x = baseline().
The default value is 0.0.
Assign a brush.
In case of brush.style() != QBrush::NoBrush and style() != QwtPlotCurve::Sticks the area between the curve and the baseline will be filled.
In case !brush.color().isValid() the area will be filled by pen.color(). The fill algorithm simply connects the first and the last curve point to the baseline. So the curve data has to be sorted (ascending or descending).
Specify an attribute for drawing the curve
See also: testCurveAttribute(), setCurveFitter()
Assign a curve fitter
The curve fitter "smooths" the curve points, when the Fitted CurveAttribute is set. setCurveFitter(NULL) also disables curve fitting.
The curve fitter operates on the translated points ( = widget coordinates) to be functional for logarithmic scales. Obviously this is less performant for fitting algorithms, that reduce the number of points.
For situations, where curve fitting is used to improve the performance of painting huge series of points it might be better to execute the fitter on the curve points once and to cache the result in the QwtSeriesData object.
Specify an attribute how to draw the legend icon
Specify an attribute how to draw the curve
Initialize the data by pointing to memory blocks which are not managed by QwtPlotCurve.
setRawSamples is provided for efficiency. It is important to keep the pointers during the lifetime of the underlying QwtCPointerData class.
Set data by copying x- and y-values from specified memory blocks. Contrary to setRawSamples(), this function makes a 'deep copy' of the data.
Initialize data with x- and y-arrays (explicitly shared)
Initialize data with an array of points.
Assign a series of points
setSamples() is just a wrapper for setData() without any additional value - beside that it is easier to find for the developer.
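Putting the pieces above together, a typical use of the class might look like this sketch (it assumes an existing QwtPlot named plot and the Qwt 6 style setSamples() overloads):
#include <qwt_plot.h>
#include <qwt_plot_curve.h>

// Minimal sketch: plot y = x^2 as a line curve filled down to the baseline.
void addCurve( QwtPlot *plot )
{
    QVector<double> xs, ys;
    for ( int i = 0; i < 50; i++ )
    {
        xs += i * 0.1;
        ys += ( i * 0.1 ) * ( i * 0.1 );
    }

    QwtPlotCurve *curve = new QwtPlotCurve( "x squared" );
    curve->setSamples( xs, ys );               // deep copy of the points
    curve->setPen( QPen( Qt::blue ) );
    curve->setBrush( QBrush( Qt::cyan ) );     // fill between curve and baseline
    curve->setBaseline( 0.0 );
    curve->setCurveAttribute( QwtPlotCurve::Fitted, false );
    curve->attach( plot );

    plot->replot();
}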
|
http://qwt.sourceforge.net/class_qwt_plot_curve.html
|
CC-MAIN-2015-22
|
refinedweb
| 539
| 58.48
|
This is your resource to discuss support topics with your peers, and learn from each other.
07-25-2011 02:03 AM
I have made an application that uses BBM 6 features and uploaded it to the app world.
However, because I have used the BBM 6 classes, this application will only run on devices that have BBM 6.
On all other devices I am getting the error "Module net_rim_bb_qm_platform" not found.
Now the problem is that BBM 6 has not even been released yet.
I tried to solve this problem by wrapping all BBM features(including register) around this code
public boolean hasBBM6() {
    int moduleHandle = CodeModuleManager.getModuleHandle("net_rim_bb_qm_platform");
    return (moduleHandle != 0);
}
However the problem still persists.
I think the problem is because of the import statements.
can someone please help me with this?
My app is live on the App World and it is basically useless
PLEASE HELP!!!
07-25-2011 12:53 PM
07-26-2011 12:42 AM
hey
thanks for your help
the problem is that i need my app to have those BBM features so removing the code would not help
also preprocessor directives would only change the code at compile time.....i need something that changes the imports depending on the presence of BBM 6 in those devices
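One workaround that is sometimes suggested for optional APIs on BlackBerry Java is to confine every import of the optional module to a single class, so that the module is only resolved when that class is first loaded. A rough sketch (class names are illustrative, not from the thread):
// Bbm6Check.java - safe to load on any device; no BBM 6 imports in this class.
import net.rim.device.api.system.CodeModuleManager;

public class Bbm6Check {
    public static boolean hasBBM6() {
        return CodeModuleManager.getModuleHandle("net_rim_bb_qm_platform") != 0;
    }
}

// Bbm6Features.java - the ONLY class that imports the BBM 6 packages
// (net.rim.blackberry.api.bbm.platform.*). Instantiate it only after the
// check succeeds, for example:
//
//     if (Bbm6Check.hasBBM6()) {
//         new Bbm6Features().register();
//     }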
07-29-2011 04:29 PM
Not posible AFAIK. I think you will to produce a BBM6 and non BBM 6 version.
07-29-2011 07:41 PM
yeah thats what i have done right now....i hope blackberry comes up with a solution for this
thanks for the help peter
07-30-2011 01:41 PM
07-31-2011 05:49 PM
Is there a way to implement something like the dependency checker that doesn't suck?
|
http://supportforums.blackberry.com/t5/Java-Development/BBM-6-help-please/m-p/1232435
|
CC-MAIN-2014-23
|
refinedweb
| 297
| 72.36
|
my main program must accept the names of input and output text files as command line arguments. it should create a pipe and then create 2 other children processes. the pipe is for communication between the children. after the children are created the parent must wait for children to complete before exiting with wait() function.
the first child process should replace its process image (with execl() function) with a read program that will open the specified input file, read it using the read() function, and write the contents to a pipe.
the second child process should replace its image with a write program that will read the data from pipe and then write to specified output file.
the read program should read in 512 characters at a time from the file. the read program should terminate after processing the file. the write program should exit once all of the data is written to the output file.
I am unsure where the second fork() function should go for one thing. when i run my program my output file is overwritten with just blank space which leads me to believe my write buffer is blank, so it is never getting the data from the pipe. also, how does the execl() function work exactly? in my messing around it seems like nothing executes after the execl() function call in a given if block.
Code:
//parent
#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char *argv[])
{
    int rc, rc2, ptest;
    int aipipe[2];
    cout << "before child" << endl;
    ptest = pipe(aipipe);
    rc = fork();
    cout << rc << endl;
    cout << ptest << endl;
    if (rc == 0)
    {
        char readbuffer[512];
        sprintf(readbuffer, "%d", aipipe[1]);
        cout << "in first child" << endl;
        execl("./reader", "reader", argv[1], readbuffer, NULL);
    }
    else if (rc == -1)
    {
        cout << "fork call error" << endl;
    }
    cout << "hey" << endl;
    rc2 = fork();
    char writebuffer[512];
    if (rc2 == -1)
    {
        cout << "fork call error" << endl;
    }
    else if (rc2 == 0)
    {
        cout << "in second child" << endl;
        execl("./writer", "writer", argv[2], writebuffer, NULL);
    }
    close(aipipe[0]);
    close(aipipe[1]);
    wait();
    wait();
    return 0;
}

//reader
#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char *argv[])
{
    cout << "in reader" << endl;
    char readbuffer[512];
    int ipipefd = atoi(argv[2]);
    ifstream ifs(argv[1]);
    ifs.read(readbuffer, sizeof(readbuffer));
    write(ipipefd, readbuffer, ifs.gcount());
    return 0;
}

//writer
#include <iostream>
#include <fstream>
using namespace std;

int main(int argc, char *argv[])
{
    cout << "in writer" << endl;
    char writebuffer[512];
    int rc;
    int ipipefd = atoi(argv[2]);
    ofstream ofs(argv[1]);
    rc = read(ipipefd, writebuffer, sizeof(writebuffer));
    ofs.write(writebuffer, rc);
    return 0;
}
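Judging from the listing above, one likely problem is that the writer child receives the uninitialized writebuffer instead of the pipe's read descriptor, so atoi(argv[2]) never yields a valid file descriptor; the reader and writer would also need to loop if files can exceed 512 bytes. As for execl(), it replaces the process image on success, so code after the call only runs if execl() itself fails. A sketch of how the parent might pass the read end instead (names follow the post, error handling omitted):
// Hypothetical fix sketch: hand the writer child the pipe's read end.
else if (rc2 == 0)
{
    char fdbuf[16];
    sprintf(fdbuf, "%d", aipipe[0]);   // read end of the pipe
    close(aipipe[1]);                  // the writer never writes to the pipe
    execl("./writer", "writer", argv[2], fdbuf, NULL);
}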
|
http://cboard.cprogramming.com/cplusplus-programming/130201-please-help-fork-execl-linux-functions-cplusplus-code.html
|
CC-MAIN-2014-35
|
refinedweb
| 427
| 67.59
|
import custom module from plugin
On 17/10/2016 at 07:09, xxxxxxxx wrote:
Hi all,
This seems like a basic question but I couldn't find any answers here on the forum (correct me if I'm wrong).
I'm writing a shader plugin that will reference another module with all the shader formulas. Is there a way to access this module (formulas.py) when it's in the same folder as the plugin? If I put it there, cinema complains: "ImportError: No module named formulas".
If I put it in the packages dir it does find it, but I would like to keep em together. Here's an example of the directory structure I would like to achieve:
+plugins
+myshader
myshader.pyp
formulas.py (also tried .pyp, no succes)
I've tried adding an __init__.py, but then cinema doesn't even find the plugin itself anymore.
Any help would be appreciated!
Kind regards,
Hermen
On 17/10/2016 at 08:52, xxxxxxxx wrote:
Hi, Hermen.
I'm not really sure about your question, but have you seen this thread?
On 17/10/2016 at 12:18, xxxxxxxx wrote:
Hi dmitry,
To be honest, no, I didn't find that one. But if it's that complicated, I'll guess I leave it just where it is.
But thanks for the find, anyway!
regards,
Hermen
On 25/10/2016 at 05:04, xxxxxxxx wrote:
Hello All,
I tried to grasp Niklas' code, but I am afraid it is beyond me. But MAXON's Python SDK tells us we can use a simple:
dir, file = os.path.split(__file__)
which is certainly more pythonic. And pure python. And only one line...
So I thought I'd post this here, more so because the post above this one is about the same topic, as dmitry pointed out
Regards,
Hermen
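For reference, the simple approach mentioned above might look like the following sketch, assuming formulas.py sits in the same folder as the .pyp file (and bearing in mind the caveat in the next reply about polluting sys.path):
import os
import sys

# Hypothetical sketch: make the plugin folder importable so formulas.py can be found.
plugin_dir, _ = os.path.split(__file__)
if plugin_dir not in sys.path:
    sys.path.insert(0, plugin_dir)

import formulas  # assumed module shipped alongside the plugin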
On 25/10/2016 at 07:33, xxxxxxxx wrote:
Hi Hermenator, what's hard to grasp on the code? You don't need to understand what's happening
in the localimport class if you don't want to.
I advise against manually appending paths to sys.path unless you want to make the module available
to the whole Python ecosystem.
On 25/10/2016 at 10:09, xxxxxxxx wrote:
Hi Niklas,
Well, I had a particular hard time on the second line:
"eJydGctu20bwrq8gkAPJmKXjBr0IZVCkSIGiRQ5B0UMFgqDIpbw1RRK7q9SykX/vzOyTItU4vVjL3ZnZeT/W/DiNQkXNOJ2zQz/"
(Just kidding)
But I must say I was mistaken. The page in the SDK refers to a resource folder, ie for bitmaps and other resources. Actually importing a module this way is not possible.
Reason I considered it too much effort is because while still in development I am only going to be using this on my own computer. If time comes to publish this plugin, I think your approach makes sense. And you have even taken the effort of making pre-minified versions, so thank you for that
Regards,
Hermen
|
https://plugincafe.maxon.net/topic/9758/13122_import-custom-module-from-plugin
|
CC-MAIN-2019-22
|
refinedweb
| 486
| 73.98
|
A simple packet to read and parse messages from AXESS TMC X3 devices.
Project description
Axessy is a simple package to read and parse messages from AXESS TMC X3 devices.
ChangeLog
- 0.4: Added gate error messages and events (directions); added system error messages
- 0.3: Fixed message() method
- 0.1: First release
Installation
There are two ways to install the package:
Using pip with the following command:
pip install axessy
Start setup.py file from this repository:
python setup.py install
Usage
You can import the module in the following way:
import axessy.axessy
In this package there is the “AxessPackage” class defined with the following methods:
- parsePacket(params): “params” includes the GET parameters sent via an “/online” or “/batch” command from the device, and stores the data inside the class variables;
- message(msg, beep=100, show=2): builds a string that you can put inside an HttpResponse to send back to the device;
- sendKeepAlive(url, username, password): asks the device to send a keepalive message.
Also the “AxessPackage().ack” and “AxessPackage().keepalive” variables are defined for string responses to “/batch” and “/keepalive” commands.
A dictionary with all possible errors saved into a transaction, “AxessPackage().error_dict”, is defined and automatically used by “parsePacket” method.
Finally there are two new Error classes used by the “checkPacket()” method (this method is used by “parsePacket” automatically):
- MacAddressError
- CardError
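Based on the description above, handling an /online request might look roughly like this sketch; the request object is a placeholder and the exact signatures should be checked against the package source:
from axessy.axessy import AxessPackage

def handle_online(request):
    pkt = AxessPackage()
    # request.GET stands in for the parameters sent by the device's /online command
    pkt.parsePacket(request.GET)
    # Build a reply string for the device: show the text for 2 seconds with a short beep
    return pkt.message("Access granted", beep=100, show=2)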
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/axessy/
|
CC-MAIN-2018-47
|
refinedweb
| 253
| 55.34
|
I'm trying to use Plotnine library, which is essentially a faithful reproduction of ggplot2. Underneath, it uses matplotlib.
However, what I found is that the example code from here looks very different when run in Databricks vs in a local Jupyter Notebook (freshly-installed). Have a look at the screenshots.
Jupyter:
Databricks:
As you can see, Databricks' plot is very different and unusable!
This is the example code I'm using:
```
from plotnine import *
from plotnine.data import diamonds  # the example's diamonds dataset lives in plotnine.data

graph = ggplot(diamonds, aes(x='depth')) + geom_density() + \
    facet_grid("cut~color", scales="free_y")

# in jupyter, this will display inline
graph

# in Databricks, this will display inline
display(graph.draw())
```
I made sure to check the version numbers for pandas, matplotlib, mizani - they're all identical.
The question therefore is: does Databricks apply any customisations to matplotlib? Because I think this is the only thing that can be causing the difference.
Answer by sdaza · May 28, 2018 at 05:13 PM
Any insights about this @gregoltsov? Thanks!
I am very interested in the answer to this question as I am having the same problem (this time with seaborn). Does anyone have any insights on this?
Answer by jwthomas · May 27 at 05:24 AM
This code now displays correctly in Databricks as of the latest runtime DBR 7.0 (Beta). It was likely an issue with transparency, as it is possible to replicate the earlier Databricks behavior in Jupyter by saving the figure using matplotlib.pyplot.savefig with the option transparent=True.
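For what it's worth, reproducing that behavior outside Databricks might look like this sketch, reusing the graph object from the question (the file name is arbitrary):
# Reproduce the transparent-background rendering (a sketch)
fig = graph.draw()                      # plotnine ggplot -> matplotlib Figure
fig.savefig("diamonds_density.png", transparent=True)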
|
https://forums.databricks.com/questions/13472/databricks-inconsistent-look-for-matplotlib-graphs.html
|
CC-MAIN-2020-50
|
refinedweb
| 308
| 62.88
|
import java.util.*;
import java.io.*;

public class Unscrambler {
    //-----------------------------------------------------------------------------------------
    // This program reads a random word from a file created separately called words.txt that
    // has been provided. The program then scrambles the word by swapping random letters a
    // random number of times. The scrambled word is displayed with character indexes on top.
    // The user selects 1 to swap a pair of letters, 2 to print the word unscrambled and quit,
    // and 3 to simply quit. When the word is scrambled correctly, the computer congratulates
    // the user, repeats the unscrambled word, and the number of steps it took to solve. An
    // error message appears if invalid indexes are input by the user.
    //-----------------------------------------------------------------------------------------
    public static void main (String[] args) throws IOException {
        // Setting up variables and Scanners to read from user and words.txt
        String words;
        Scanner fileScan, wordsScan, optionScan;
        int option;
        boolean done = false;
        Random generator = new Random();
        fileScan = new Scanner (new File("words.txt"));
        words = filescan.nextLine(generator.nextInt(22);
        optionScan = new Scanner (System.in);
        while (!done) {
            // Menu
            System.out.prinln ("Welcome Computational Unscrambling!");
            // read from user
            option = optionScan.nextInt(3) + 1;
            if (option == 1) {
                // Allow user to switch letters between index;
                // If user unscrambles word, done == true;
            } else {
                if (option == 2) {
                    // Print the unscrambled word and quit
                }
            } else {
                if (option == 3) {
                    // Just quit the program
                }
            }
        }
    }
}
Basically I think I have the structure set up....but I don't know how to scramble the actual words....I know I should use the random generator to switch the characters but I dont really have any clues as what else to do. I included the comments to enhance readability. If anyone could offer some suggestions that would be appreciated.Thanks everyone!
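Since the scrambling itself is the sticking point, one possible approach is to swap two random positions in a char array a random number of times; a minimal sketch (names are illustrative, not from the original post):
import java.util.Random;

public class ScrambleSketch {
    // Scramble a word by swapping random pairs of characters a random number of times.
    static String scramble(String word, Random generator) {
        char[] letters = word.toCharArray();
        int swaps = 5 + generator.nextInt(10);   // arbitrary: 5 to 14 swaps
        for (int n = 0; n < swaps; n++) {
            int i = generator.nextInt(letters.length);
            int j = generator.nextInt(letters.length);
            char temp = letters[i];              // classic three-step swap
            letters[i] = letters[j];
            letters[j] = temp;
        }
        return new String(letters);
    }

    public static void main(String[] args) {
        System.out.println(scramble("computational", new Random()));
    }
}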
|
http://www.dreamincode.net/forums/topic/314891-word-unscrambler-gamefiguring-out-the-statements/
|
CC-MAIN-2016-26
|
refinedweb
| 295
| 57.47
|
As a general rule, I avoid using any class libraries in my sample code. This isn't because I'm opposed to class libraries, but rather because I don't want to narrow my audience to "people who use MFC" (to choose one popular class library). If I were to start using MFC for all of my samples, I'd probably lose all the people who don't use MFC.
"Oh, but those people can just translate the MFC code into whatever class library they use."
Well, sure, they could do that, but first they would have to learn MFC. I wouldn't be talking about HWNDs and HDCs any more but rather CWnds and CDCs. I would write "Add this to your OnDropEx handler", and all the non-MFC people would say, "What are you talking about? I'm not using MFC. What is the Win32 equivalent to OnDropEx?" (Suppose my article on using accessibility to read the text under the mouse cursor were titled "How to use MFC to retrieve text under the mouse cursor." Would you have read it?)
"Well, fine, don't use MFC, but still it wouldn't kill you to use a smart pointer library."
But which one? There's MFC's CIP, ATL's CComPtr, STL^H^H^Hthe C++ standard library's auto_ptr, the Microsoft compiler's built-in _com_ptr_t (which you get automatically if you use the nonstandard #import directive), and boost's grab bag of smart pointer classes scoped_ptr, shared_ptr, weak_ptr, intrusive_ptr... And they all behave differently. Sometimes subtly incompatibly. For example, MFC's CIP::CreateObject method uses CLSCTX_INPROC_SERVER, whereas ATL's CComPtr::CreateInstance method uses CLSCTX_ALL. When you're chasing down a nasty COM marshalling problem, these tiny details matter, and if you're an ATL programmer looking at MFC code, these tiny details are also something you're going to miss simply due to lack of familiarity. (And woe unto you if your preferred language is VB or C# or some other popular non-C++ language. Now you have double the translation work ahead of you.)
Instead of hiding the subtleties behind a class library, I put them right out on the table. Those of you who have a favorite class library can convert the boring error-prone plain C++ code into your beautiful class library.
In fact, I almost expect you to do it.
(On a related note, some people are horrified at the rather dense code presentation I use here. I don't write code like that in real life; I'd be just as horrified as you if I saw that code in a real program. I just use that style here because of the nature of the medium. A great way to lose people's interest is to make them plow through 100 lines of boring code before they reach the good stuff.)
I would think that most folks who read your blog realize that the style of your code here and your not using class libraries is to reach the widest possible audience with high quality examples that get right to the point with no mucking around. I don’t think 99% of your readership cares how you code it just so long as you show the how and the why of it being done. From there we take and wrap it into whatever we need to use it in.
Sort of like learning to program in Lisp. I wouldn’t use it in production code but it is great for teaching certain concepts.
I’m pretty sure from reading the comments on your blog that most of your readers understand this and just hope you keep writing such good work.
I could be wrong though and there could be a vocal majority of code police out there that are pestering you to code differently, but if there are please ignore them and keep on doing exactly what you are doing.
Thank you for posting your information in the base API. I hadn’t even noticed that, as I learned the API way back when.
I rarely use the API directly anymore, but I can always translate an API call into MFC, which is what I use predominantly these days, while translating the other direction is sometimes hard.
It’s unfortunate you are compelled to use your blog time to explain this stuff. I understand why, it’s still unfortunate.
It’s a pleasure to read some clean, "bare to the metal" code still coming from Microsoft.
Good work!
Thanks for everything, Raymond. This web site is a breath of reason and sanity in a world of CWhiffyWidget and System.Net.Bibllebabble. I learned Win32 when MFC was still struggling to prove itself and programmers feared it. I may be a masochist, but I’d rather deal directly with the API so I know what really is happening. Lord knows what your class library is doing "for" you inside the thin wrapper — sometimes you can go read the source, sometimes you can’t.
I appreciate the clarity and universal adaptability of your teachings. I’ve recently "done time" trying to get Windows to draw a set of properly sized, positioned and themed minimize-style buttons in the Caption bar. Wow, what an adventure — the few existing examples are all very MFC-centric.
Please don’t lose the Win32 stuff in favour of the current big thing.
Well, it’s worth noting that all four of boost’s smart pointers have been adopted into C++ TR1, which most compilers – in MSVS starting with 2k5 – implement. Granted, it’s a little annoying that Microsoft has deployed them in stdext:: instead of std::, but you can get around the resultant maintenance headache by collapsing the namespace for the smart pointers explicitly (normally a god-awfully badbear thing to do, but hey, it’ll prevent bugs here, so do it.)
One of the advantages of the approach Boost and now C++ take is that, beyond being portable and near-zero overhead, they give you the option of selecting between behaviors – strong pointers, weak pointers and so on – rather than condemning you to a standard. Those subtle incompatibilities you mention are the result of other approaches just picking one for you and sticking you to it. C++ doesn’t dance to that groove.
There’s a standard behavior now. You’re wise enough to not use MFC when an API approach would reach a broader audience. What of using pure C++ approaches? Here, it’s (only very recently) a clear option. Basically every major compiler implements at least that part of TR1, given that there’s such an available reference implementation.
The new smart pointers only want to love you. Won’t you love them back?
I also thank you, as some of us don’t use C at all, and by not making your examples more obscure by forcing us to decode C++, ATL, MFC, etc it is greatly appreciated.
First part… nice explanation… second part that reads like a disclaimer… duh.
Unfortunate it comes to explaining everything you say to exhaustion. I hope you find you don’t have to do that much longer.
In any case, I appreciate that you teach us how things work down at the barest metal and under the darkest hood.
It means when I go back to work, I have an understanding of what’s happening deeper that my work surface, and it’s easier to figure out problems.
It’s easy to understand why you need to explain your moves. I mean, two months ago you wrote about publishers and their ideas about what kind of books are beeing sold these days. Some of them told you that "nobody buys Win32 books any more."
Well, today I spend some tima at Amazon and I’ve noticed that your book has reached Sales Rank of 8,632. Is that good? Or is it bad? How bad? So, I compared you to some rather established author: Charles Petzold. He has this book in three versions: api, C# and VB.Net. As much as I know, all of them cover very similar topics with very similar examples in three different languages.
So, these are the numbers – Amazon Sales Rank:
#17,960 – Programming Windows, Fifth Edition,
#94,473 – Programming Windows with C#
#515,469 – Programming Microsoft Windows with Microsoft Visual Basic .NET
Then I decided to compare languages and few books by language designers:
#1,062 – The C Programming Language, by K&R
#248,286 – The C# Programming Language, by Anders Hejlsberg
(BTW, sales ranks at Amazon seem to vary almost every minute but not enough to change my point here…)
Anyone can draw all kinds of conclusions from these numbers, but I can see here that rumors about death of Win32 API development in C have been greatly exaggerated!
Is that within a particular category or across the US? Either way, it’s pretty nice – amazon sells a lot of books.
I like your code as it is. If I want ATL or MFC I can convert it to that.
Some of the smart pointers might be standard (and I think stdext:: is the correct namespace for now :-), but why use them?
You are trying to show how Windows works "in the belly," why things are the way they are.
I think using just enough code to make the point is the right thing.
I’m going to chime in here and let you know that I really appreciate the Win32 perspective on things as well. Keep up the awesome work!
I think it’s also a matter of accepting the Win32 abstraction model.
Win32 is a class library, especially when it comes to UI code. You derive from the basic window by handling the messages, and adding new custom messages. Window classes let you use the class you have defined as a black box.
I admit to implementing my own abstraction layers on top of the operating systems ones as often as the next guy. Sometimes it might be ‘objectively’ better, but often I think it is because I’ve never really internalized the operating system’s model.
Whereas Raymon has internalized the Win32 abstractions and can use them to structure his programs, rather than something else. I assume it’s in no small part due to the fact that they are actually his abstractions, not somebody else’s.
I PREFER to use the direct API for certain types of code. That comes from too many experiences with "library changed", "library no longer supported", …
@Igor Delovski: Of course it could just be that you *need* a book to learn the Win32 API, whereas the newer APIs are easier to learn.
I’m not saying that’s true or even my opinion, just putting forward another interpretation of your statistics.
"This web site is a breath of reason and sanity in a world of CWhiffyWidget and System.Net.Bibllebabble."
Quoted for truth.
I must admit that I am embarrassed because I do not speak the latest and greatest developer jargon. I mean, wtf are smart pointers, weak pointers, strong pointers?!?
If someone takes time to explain, please do it via example which shows their real world usage. Statistics of how often those features are needed would be most welcome too.
I’m not going to provide in-depth examples, but basically:
Smart pointers are essentially just a class that holds a single pointer and will delete it in the destructor. The point being that whenever you allocate some memory on the heap you immediately wrap a smart pointer object around it and then it will automatically be deleted when that object goes out of scope, regardless of whether that was through normal exit or throwing an exception. Basically just a generic way to use RAII on arbitrary blocks of memory.
Weak pointers are basically references to another smart pointer, with the property that they will not prevent the smart pointer being destroyed — they simply null out when that happens. They’re handy for putting into lookup tables & caches etc where you don’t want to extend the lifetime of an object but you still need to avoid accessing an invalid pointer, since something else is controlling the lifetime of the object being pointed to.
A strong pointer is the opposite of a weak pointer. Most types of smart pointer are actually strong pointers. A more useful type of strong pointer is a shared pointer, where multiple smart pointers can refer to the same block of memory, and the memory will only get deleted when the last shared pointer pointing to that block is destroyed. Handy when the lifecycle of a piece of memory doesn’t match a given method or class, so you can’t scope it more simply.
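As a rough illustration of the distinction drawn in the comments above, using the types that later became standard C++ (rather than any particular vendor library):
#include <iostream>
#include <memory>

int main()
{
    // A strong (shared) pointer: the int lives as long as at least one shared_ptr does.
    std::shared_ptr<int> strong = std::make_shared<int>(42);

    // A weak pointer observes the same object without extending its lifetime.
    std::weak_ptr<int> weak = strong;

    if (std::shared_ptr<int> locked = weak.lock())
        std::cout << "still alive: " << *locked << '\n';

    strong.reset();  // last strong reference gone; the int is destroyed

    if (weak.expired())
        std::cout << "observed object has been destroyed\n";
}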
I think now that various C++ smart pointers are standardized, you should seriously consider using them in your code samples–assuming the sample isn’t about memory management. Using raw pointers means you’re cluttering sample code with "irrelevant" details.
As a primarily *nix developer who also targets Windows as a port platform, your weblog is really useful _because_ it focuses on the low level details and makes the content comprehensible for people not used to all the Windows arcana.
For me, keeping track of the Windows API typedefs is quite hard enough (*especially* the ones that obscure pointers – those are _nasty_) I’m very glad you’re not also using a class library.
Personally, I use Qt for almost all my Windows GUI development – because it has a clean API and is cross platform. But I wouldn’t expect you to know it, or write about it – and because your examples are general they’re informative whether the reader uses MFC, Qt, or makes Windows API calls directly in x86 assembler.
So – personally I like things just like they are.
I find your articles on platform-wide details like Windows DLLs & dynamic linking incredibly useful, and would personally love to see more in that vein. The discussion of calling conventions used in win32 was also extremely handy.
I’m with Sergio on this: an example using plain old malloc/free (or equivalent) will still make sense to someone who uses smart pointers: they just know to use their ‘smart’ counterpart, and that some of the free calls won’t be necessary thanks to that ‘smartness’. The other way round, however, would be harder: if Raymond posted an example which had a free call missing, is it because the memory gets freed by the OS later? Does it get freed by ‘smartness’, so if you’re using regular pointers you need to insert a call of your own? Is it just a mistake? Is it safe to free a buffer after the last reference you see, or does it need to be kept around until some later point the ‘smart’ compiler knows about but we don’t?
If Raymond keeps this stuff explicit, we can all follow it easily – start relying on some optional external understanding, you’ve added an unnecessary and potentially harmful dependency to the mix.
I sincerely vote against any smart pointers, boosts and other stuff of the kind. To paraphrase one popular statement:
‘If a guy doesn’t know how to program, he has one problem. If he says "I know I’ll use smart pointers" now he has two problems.’
(Instead of "smart pointers" any hype-of-the-day can be inserted.) Just because something comes in some new standard it doesn’t mean it should be *always* used.
In my experience, using smart wrappers in sample code or documenting them apart from their source code just invites people to misuse them.
My (least) favorite example is the comment on CComPtrBase::operator&. "The assert usually indicates a bug." What kind of bug? Ha ha, we won’t tell you. So people inevitably use the workaround and leak memory anyway.
I like you to use ansic & winapi. Please continue to do so.
I fear for how long we will be able to program in Native APIs in Windows because M.S Products gradually becomes .NET oriented.
Suppose I wanna write a Windows gadget, it will be easier (I don't know the native way) if we use the .NET framework, and the programming style is changing a lot.
But it’s really nice to know some internals and power of the native code. I’d like you to write sample code using Win32 APIs. For those who knows the basics of MFC, the code is really easy to convert.
Sarath: Most Windows programmers have probably never touched the Native API, only the Win32 one layered on top of it. Programming the Native API has always been fairly difficult, thanks to a shortage of documentation and compiler/linker support, although aspects of that have improved in recent years.
> In fact, I *almost* expect you to do it.
(emphasis mine)
Good, because I don’t =)
Bare Metal: AFAIK any C source file that includes <windows.h> does not compile with pedantic ansi errors turned on.
(I use win32 in C without a CRT lib)
|
https://blogs.msdn.microsoft.com/oldnewthing/20070216-03/?p=27983
|
CC-MAIN-2017-47
|
refinedweb
| 2,860
| 70.33
|
Iris with a GoPro in the daytime or use a FLIR camera to detect heat signatures at day/night. You would need an FPV system and two people; a pilot and a watcher.
Well, heck, I've done this myself here in the US.
If you had an operator on the actual train, they could release the multi-rotor while the train was in motion, inspect the length of the train, then return back to a landing spot on the train. Sounds like a fun job!
Check out my video on this... Now I did speed-up the video for purposes of YouTube, but the idea can be implemented.
If you want, I can post the raw video, it's about 8-10 minutes long.
Nah, one person is enough, just record high quality video 1080P or better, then review on board after inspection. I did this flight from the ground, but had I been on the train, it would be the same idea.
Just hope you don't have a crash, cause the train probably can't stop for a mile or so, LOL.
Thank you for your video. This is pretty close to what I'm planning to do.
The main differente is that my inspection would be done with a pilot on the motion train, not out.
So the challenge is to take off the IRIS+ to inspect the train cars at a point A and to land the quadcopter on the coordinates where it took off, which are going to be point A' = A + velocity * time.
What do you think?
Yes, that would be very doable - take off from the train, then return to the take-off location (on the moving train).
Here's my thought (assuming you are taking off from the front of the train). Take-off at the front, fly to a location perhaps 10m to the left or right side of the train. Put the Iris+ in a PosHold Mode, and remain in that stationary position with the Iris+ pointed at the train. The train will go by the Iris+ requiring very little pilot input (and this will provide the most stable video platform). Then, once the rear of the train has passed by, fly the Iris+ to the front, then position it on the opposite side, wait for the train to pass, then fly back to the front and land.
Here's the tricky part, what to do in an RTL situation? RTL won't work because your take-off position has moved. If you know the train will be traveling at a very steady pace, you could have the final waypoint in an AUTO mission be somewhere different than where you took-off from. The final waypoint could be programmed where the landing point "will be", then, instead of hitting RTL, put the Iris+ in AUTO, and have it fly to the final, pre-determined location.
This seems like a fun project! Good luck.
how fast will the train be moving?
How long is the train?
The problem might be fight time if you are hovering in one spot then trying catch up to the front of a moving train.
If the train speed was up over 45mph, with a head-wind, could definitely be a problem, getting back to the front.
My F450 can easily do 50mph carrying a GoPro H3 and H3-3D gimbal (no wind), I suspect the Iris+ might have similar speed capabilities???
Flight time shouldn't be a problem. A 1mile long train will pass a stationary point in 2 minutes traveling at 30MPH, the Iris+ traveling at a return (ground) speed of 45MPH, you should be able to reach the front of the train in about 4 minutes (there's a bit of algebra at work here, because the train is moving forward and the quad is moving forward too but at a relatively slower speed compared to the train (about +15MPH), so it will take much longer to return to the front.
Here's my algebra, let's see if I'm right?:
Train @ 30MPH = 44ft/s
Quad @ 45MPH = 66ft/s
equation: (44ft/s) x t + 5280ft = (66ft/s) x t
Solving for t, t = 240sec, divide by 60 sec, gives 4 minutes for the return.
I think its doable, 6 minutes per side.
Now if train is moving at 40MPH (58.7ft/s):
The stationary pass will be 5280ft / 58.7ft/s, or about 90secs
the return equation: (58.7ft/s) x t + 5280ft = (66ft/s) x t, or about 720secs
Total Time 90s + 720s, or about 13.5 minutes, per side.
Also, make sure your GEO_FENCE is turned off, otherwise the Pixhawk Flight Controller with go into RTL if you breach the fence distance... assuming you have this setting turned on as a FAILSAFE. In the 30MPH train example above, your total flight distance will be almost 16,000ft.
You might want to get a wind speed gauge (Anemometer) to determine actual head-wind speed (cumulative between the train and wind). I would put this on a stick far enough out so as to not measure the turbulence air near the train....
You could add this piece of information in your algebra equation to account for any head-wind (tail wind will be to your advantage).
You would definitely need to disable nearly every failsafe since they're all based on the drone environment being stationary. If you have a control failure, it's basically has to crash. No return to launch. No geofence. You'll need to accept the financial risk.
You'll need to consider the range of your transmitter too. You need to be able to control the UAV for the entire length of the train plus some margin of error. A mile or two is well outside the standard range of most stock control systems.
On the same token, your FPV system will need the same range.
@Pedals - Definitely good point on the transmitter / FPV range capabilities. My Futaba T14SG can reach out about to 5000' to 8000' max, so I was thinking a mile long train was doable.
If you defeat the transmitter failsafe and one were to use AUTO mode for most of process, you could have the quad wait for X amount of time at the "PosHold" monitoring point (location you fly to the side of the train for the starting point), then after the waiting period (manually calculated by taking train length divided by speed), the quad would then fly to a location where the front of the train will "be at" for the return (you could enter this location prior to taking off using DroidPlanner and a cached Map).
I don't know if the Iris+ has RSSI monitoring, but if it does, you could add that to your OSD to monitor status of your receiver's signal strength.
I think this would be a fun project. It is similar in nature to a distance test I was going to run on my hex. My hex can fly about 36 - 40 minutes on a 16,000mAh 6S battery averaging about 17A at 30MPH. My hex has a Pix F/C and I was going to see if it could fly 10 miles each way based on an AUTO program that was pre-loaded in the F/C. I found a location where I could do this with only a 50' elevation change. My plan was to just drive beside the hex while it was in flight (just to monitor it). Once it reached the 10 mile target waypoint, it would turn around a head back to the starting point. Not as complicated as the train task, but similar in determining all the parameters it takes to make it work. The challenge is what is so cool.
If you are thinking of using the APM system for your project, you have to keep
in mind that the software that runs the vehicles performs a series of checks
before arming for flight.
One of those is checking whether the velocity is higher than 50cm/s.
Search for "bad velocity" in this article to understand....
This does not mean it is impossible, only that you will have to modify a
critical part of the software that runs the vehicle's operating system.
It seems to me that it would be more useful to use a system that detects an electro/magnetic
change when a new load is added to the train. Using sensors on each car, you could even
identify in which car the change is being detected.
|
https://diydrones.com/group/iris/forum/topics/iris-for-moving-traing-cars-inspection?commentId=705844%3AComment%3A1924140&groupId=705844%3AGroup%3A1445744
|
CC-MAIN-2019-30
|
refinedweb
| 1,431
| 78.18
|
The following form allows you to view linux man pages.
#include <math.h>
double pow(double x, double y);
float powf(float x, float y);
long double powl(long double x, long double y);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
powf(), powl():
_BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 ||
_ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
On success, these functions return the value of x to the power of y.
If x is a finite value less than 0, and y is a finite noninteger, a
domain error occurs, and a NaN is returned.
If the result overflows, a range error occurs, and the functions return HUGE_VAL, HUGE_VALF, or HUGE_VALL, respectively, with the mathematically correct sign.
If result underflows, and is not representable, a range error occurs, and 0.0 is returned.
Except as specified below, if x or y is a NaN, the result is a NaN.
If x is +1, the result is 1.0 (even if y is a NaN).
If y is 0, the result is 1.0 (even if x is a NaN).
If x is +0 (-0), and y is an odd integer greater than 0, the result is +0 (-0).
If x is 0, and y greater than 0 and not an odd integer, the result is +0.
If x is -1, and y is positive infinity or negative infinity, the result is 1.0.
If the absolute value of x is less than 1, and y is negative infinity, the result is positive infinity.
If the absolute value of x is greater than 1, and y is negative infinity, the result is +0.
If x is negative infinity, and y is an odd integer greater than 0, the result is negative infinity.
If x is negative infinity, and y greater than 0 and not an odd integer, the result is positive infinity.
If x is positive infinity, and y less than 0, the result is +0.
If x is positive infinity, and y greater than 0, the result is positive infinity.
If x is +0 or -0, and y is an odd integer less than 0, a pole error occurs and HUGE_VAL, HUGE_VALF, or HUGE_VALL, is returned, with the same sign as x.
If x is +0 or -0, and y is less than 0 and not an odd integer, a pole error occurs and +HUGE_VAL, +HUGE_VALF, or +HUGE_VALL, is returned.
See math_error(7) for information on how to determine whether an error has occurred when calling these functions.
The following errors can occur:
Domain error: x is negative, and y is a finite noninteger
errno is set to EDOM. An invalid floating-point exception (FE_INVALID) is raised.
Pole error: x is zero, and y is negative
errno is set to ERANGE (but see BUGS). A divide-by-zero floating-point exception (FE_DIVBYZERO) is raised.
Range error: the result overflows
errno is set to ERANGE. An overflow floating-point exception (FE_OVERFLOW) is raised.
Range error: the result underflows
errno is set to ERANGE. An underflow floating-point exception (FE_UNDERFLOW) is raised.
C99, POSIX.1-2001. The variant returning double also conforms to SVr4, 4.3BSD, C89.
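A short program exercising the error cases above; note that the exact errno behavior for the pole case has varied between C library versions, as the BUGS reference hints. Link with -lm:
/* A small sketch of the error cases described above.
   Compile with: cc pow_example.c -o pow_example -lm */
#include <errno.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    errno = 0;
    double r = pow(-8.0, 1.0 / 3.0);   /* negative x, noninteger y: domain error */
    printf("pow(-8.0, 1/3)  = %f, errno = %d (EDOM = %d)\n", r, errno, EDOM);

    errno = 0;
    r = pow(0.0, -3.0);                /* x is zero, y negative: pole error */
    printf("pow(0.0, -3.0)  = %f, errno = %d (ERANGE = %d)\n", r, errno, ERANGE);

    printf("pow(2.0, 10.0)  = %f\n", pow(2.0, 10.0));
    return 0;
}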
|
http://www.linuxguruz.com/man-pages/powf/
|
CC-MAIN-2017-43
|
refinedweb
| 483
| 65.32
|
preg_match_all('/<.*?>/', $string, $matches)
One of the advantages of PCRE or POSIX is that some special constructs are supported. For instance, usually regular expressions are matched greedily. Take, for instance, this regular expression:
<.*>
When trying to match this in the following string:
<p>Text, html and <b>PHP</b>.</p>
what do you get? You get the complete string. Of course, the pattern also matches on <p>, but regular expressions try to match as much as possible. Therefore, you usually have to do a clumsy workaround, such as <[^>]*>. However, it can be done more easily. You can use the ? modifier after the * quantifier to activate nongreedy matching.
Finding All Tags Using Non-greedy PCRE
<?php
$string = '<p>Text, html and <b>PHP</b>.</p>';
preg_match_all('/<.*?>/', $string, $matches);
foreach ($matches[0] as $match) {
    echo htmlspecialchars("$match ");
}
?>
Which outputs:
<p> <b> </b> </p>
Validating Mandatory Input
function checkNotEmpty($s) { return (trim($s) !== ''); }
When validating form fields (see tutorial 4 for more about HTML forms), several checks can be done. However, you should test as little as possible. For instance, when recently trying to order concert tickets for a U.S. concert, I always failed because it expected a U.S. telephone number, which couldn't be provided.
The best check is to check whether there is any input at all. However, what is considered to be "any input"? If someone enters just whitespace (that is, space characters and other nontext characters), is the form field filled out correctly?
The best way is to use trim() before checking whether there is anything inside the variable or expression. The function trim() removes all kinds of whitespace characters, including the space character, horizontal and vertical tabs, carriage returns, and line feeds. If, after that, the string is not equal to an empty string, the (mandatory) field has been filled out.
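A minimal usage sketch of that check, with a hypothetical form field named username:
<?php
function checkNotEmpty($s) { return (trim($s) !== ''); }

// Hypothetical form field; whitespace-only input counts as "not filled out".
$username = isset($_POST['username']) ? $_POST['username'] : '';
if (checkNotEmpty($username)) {
    echo 'Thanks, ' . htmlspecialchars($username);
} else {
    echo 'Please fill out the username field.';
}
?>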
The file check.php contains sample calls, and all following calls to validation functions are in the file check.inc.php.
|
https://www.brainbell.com/tutorials/php/Finding_Tags_With_Regular_Expressions.htm
|
CC-MAIN-2020-05
|
refinedweb
| 324
| 68.87
|
Dev Journal: Detecting Tapping and Pressing inputs using Java.
Posted by tom_mai78101, 22 March 2014 · 1,328 views
inputtapping tap pressing press detection detect keylistener java
In Java, the fastest way of detecting key inputs is to use a class implementing the KeyListener interface, and then adding the listener to the Swing component. Fastest, but not exactly feasible for some others.
The key to detecting inputs to determine if it's tapping or pressing is by using threads. I use a thread pool for easier thread handling. Below shows the codes, while I try to explain how it works.
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NewInputHandler implements KeyListener {
    public Map<Key, Integer> mappings = new HashMap<Key, Integer>();
    private ExecutorService threadPool = Executors.newCachedThreadPool();
    private Keys keys;

First, we need to create a thread pool in order to manage threads easily. You can tell that I used a hashmap; this is for holding the key codes per key. If the Key is, for example, the "A" key, the Integer portion will contain the key code of A, which you can obtain via the KeyEvent.getKeyCode() method. I also created a new class object, Keys, which holds key states, "tapped" or "pressed". More on that later on.

    public NewInputHandler(Keys keys) {
        this.keys = keys;
        mappings.put(keys.up, KeyEvent.VK_UP);
        mappings.put(keys.down, KeyEvent.VK_DOWN);
        mappings.put(keys.left, KeyEvent.VK_LEFT);
        mappings.put(keys.right, KeyEvent.VK_RIGHT);
        mappings.put(keys.W, KeyEvent.VK_W);
        mappings.put(keys.S, KeyEvent.VK_S);
        mappings.put(keys.A, KeyEvent.VK_A);
        mappings.put(keys.D, KeyEvent.VK_D);
    }

In the code above, I pass in the Keys object, and then put all available key controls into the hashmap. This part is a bit self-explanatory, but basically the hashmap now contains the key codes for the corresponding keys.

    @Override
    public void keyPressed(KeyEvent event) {
        for (Key v : mappings.keySet()) {
            if (mappings.get(v) == event.getKeyCode()) {
                if (!v.keyStateDown) {
                    final Key key = v;
                    key.isTappedDown = true;
                    key.isPressedDown = false;
                    key.keyStateDown = true;
                    this.threadPool.execute(new Runnable() {
                        @Override
                        public void run() {
                            try {
                                Thread.sleep(100);
                            } catch (InterruptedException e) {
                            }
                            if (key.keyStateDown) {
                                key.isPressedDown = true;
                                key.isTappedDown = false;
                            }
                        }
                    });
                    break;
                } else
                    break;
            }
        }
    }

Now, this is one half of the core of detecting tapping and pressing keys. By navigating through the hashmap, then finding the key that has been pressed, we can then be sure to edit its key states. After that, we create a new thread worker that helps determine whether the user is actually tapping the keyboard or actually pressing it. I let it sleep for 100 milliseconds, as this given value is enough to detect and tell the difference between tapping and pressing. Finally, we edit the properties of the key that was selected and active.

    @Override
    public void keyReleased(KeyEvent event) {
        for (Key k : mappings.keySet()) {
            if (mappings.get(k) == event.getKeyCode()) {
                k.isPressedDown = false;
                k.isTappedDown = false;
                k.keyStateDown = false;
                break;
            }
        }
    }

    @Override
    public void keyTyped(KeyEvent arg0) {
        // Ignore. Used for sending a Unicode character mapped as a system input.
    }
}

This is the other half of the core. When a key has been released, we have to mark all occurrences of the key as false, in order to prevent input overlapping issues. Just be wary of the class's final closing bracket; without it the compiler will be anything but quiet.
So now, when you start the game, tapping (or quickly pressing) a key will report that you have tapped it, while holding it down will report that you have pressed it. That is it for today.
|
http://www.gamedev.net/blog/1771/entry-2259449-dev-journal-detecting-tapping-and-pressing-inputs-using-java/
|
CC-MAIN-2016-22
|
refinedweb
| 575
| 67.45
|
updated copyright years
\ search order wordset 14may93py \ Copyright (C) 1995,1996,1997,1998,2000,2003,2005,2007 struct.fs $10 Value maxvp \ current size of search order stack $400 Value maxvp-limit \ upper limit for resizing search order stack 0 AValue vp \ will be initialized later (dynamic) \ the first cell at vp contains the search order depth, the others \ contain the wordlists, starting with the last-searched one. : get-current ( -- wid ) \ search \G @i{wid} is the identifier of the current compilation word list. current @ ; : set-current ( wid -- ) \ search \G Set the compilation word list to the word list identified by @i{wid}. current ! ; :noname ( -- addr ) vp dup @ cells + ; is context : vp! ( u -- ) vp ! ; : definitions ( -- ) \ search \G Set the compilation word list to be the same as the word list \G that is currently at the top of the search order. context @ current ! ; \ wordlist Vocabulary also previous 14may93py Variable slowvoc 0 slowvoc ! \ Forth-wordlist AConstant Forth-wordlist : mappedwordlist ( map-struct -- wid ) \ gforth \G Create a wordlist with a special map-structure. align here swap A, 0 A, voclink @ A, 0 A, dup wordlist-link voclink ! dup initvoc ; : wordlist ( -- wid ) \ search \G Create a new, empty word list represented by @i{wid}. slowvoc @ IF \ this is now f83search because hashing may be loaded already \ jaw f83search ELSE Forth-wordlist wordlist-map @ THEN mappedwordlist ; : Vocabulary ( "name" -- ) \ gforth \G Create a definition "name" and associate a new word list with it. \G The run-time effect of "name" is to order. vp @ 1- dup 0= -50 and throw vp! ; \ vocabulary find 14may93py : (vocfind) ( addr count wid -- nfa|false ) \ !! generalize this to be independent of vp drop 0 vp @ -DO ( addr count ) \ note that the loop does not reach 0 2dup vp i cells + @ (search-wordlist) dup if ( addr count nt ) nip nip unloop exit then drop 1 , ' drop A, \ create dummy wordlist for kernel slowvoc on vocsearch mappedwordlist \ the wordlist structure ( -- wid ) \ we don't want the dummy wordlist in our linked list 0 Voclink ! slowvoc off \ Only root 14may93py Vocabulary Forth ( -- ) \ -- ) \ search \G If @var{n}=0, empty the search order. If @var{n}=-1, set the \G search order to the implementation-defined minimum search order \G (for Gforth, this is the word list @code{Root}). Otherwise, \G replace the existing search order with the @var{n} wid entries \G such that @var{wid1} represents the word list that will be \G searched first and @var{widn} represents the word list that will \G be searched last. dup -1 = IF drop only exit THEN dup check-maxvp dup vp! 0 swap -DO ( wid1 ... widi ) vp i cells + ! \ note that the loop does not reach 0 1 -loop ; : seal ( -- ) \ gforth \G Remove all word lists from the search order stack other than the word \G list that is currently on the top of the search order stack. context @ 1 set-order ; [IFUNDEF] .name : id. ( nt -- ) \ gforth i-d-dot \G Print the name of the word represented by @var{nt}. \ this name comes from fig-Forth name>string type space ; ' id. alias .id ( nt -- ) \ F83 dot-i-d \G F83 name for @code{id.}. ' id. alias .name ( nt -- ) \ gforth-obsolete dot-name \G Gforth <=0.5.0 name for @code{id.}. [THEN] : .voc ( wid -- ) \ gforth dot-voc \G print the name of the wordlist represented by @var{wid}. Can \G only print names defined with @code{vocabulary} or \G @code{wordlist constant}, otherwise prints @samp{???}. dup >r wordlist-struct %size + dup head? true = if ( wid nt ) dup name>int dup >code-address docon: = swap >body @ r@ = and if id. 
rdrop exit endif endif drop r> body> >head-noprim id. ; : order ( -- ) \ search-ext \G Print the search order and the compilation word list. The \G word lists are printed in the order in which they are searched \G (which is reversed with respect to the conventional way of \G displaying stacks). The compilation word list is displayed last. \ The standard requires that the word lists are printed in the order \ in which they are searched. Therefore, the output is reversed \ with respect to the conventional way of displaying stacks. get-order 0 ?DO .voc LOOP 4 spaces get-current .voc ; : vocs ( -- ) \ gforth \G List vocabularies and wordlists defined in the system. voclink BEGIN @ dup WHILE dup 0 wordlist-link - .voc REPEAT drop ; Root definitions ' words Alias words ( -- ) \ tools \G Display a list of all of the definitions in the word list at the top \G of the search order. ' Forth Alias Forth \ alias- search-ext ' forth-wordlist alias forth-wordlist ( -- wid ) \ search \G @code{Constant} -- @i{wid} identifies the word list that includes all of the standard words \G provided by Gforth. When Gforth is invoked, this word list is the compilation word \G list and is at the top of the search order. ' set-order alias set-order ( wid1 ... widu u -- ) \ alias- search ' order alias order ( -- ) \ alias- search-ext Forth definitions
|
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/search.fs?rev=1.34;sortby=log;only_with_tag=MAIN
|
CC-MAIN-2021-25
|
refinedweb
| 819
| 71.85
|
On Thu, Jun 21, 2012 at 12:22 AM, Mark Callow <callow_mark@hicorp.co.jp> wrote:
> David,
>
> Thank you very much for your reply.

You are very much welcome.

> On 21/06/2012 15:49, David Sheets wrote:
> > ...
> > After
> webgl/extensions$ grep -R additions *
> > Edited like
> > ...
> I was hoping for a way that does not break the build of existing extension
> spec's. How about if I add a <webgl-additions> node by cloning the
> <additions> node?

None of the official extensions appear to use this section and I know of no unofficial extensions that use this tag. Also, I don't think WebGL extensions should be adding any normative text to OpenGL ES 2.0 chapters. The present section title and associated meaning are entirely due to the direct transcription of the OpenGL ES 2.0 extension template. Unless there are other objections, I believe <additions> should change to WebGL spec additions. Here is the bug and patch: <>

> > extensions/template/extension.xml describes in comments the source
> > generating <>
> > using the major features of the system (notably missing the recently
> > added cross-specification references).
>
> I know. I've been following it. Speaking of extension.xml I get an instance
> of the following error for each extension processed by xsltproc each time I
> run make:
>
> warning: failed to load external entity "/extension.xml"

Does this error occur on

$ make registry.xml

? What concrete command is executed after that incantation? Does the error occur on

$ make OES_texture_float/index.html

?

> I can't find this file. Is it supposed to refer to the template file? I also
> can't see why I get multiple instances of the error message as it is only
> referred to in the makefile as a prerequisite for the generation of a single
> file: registry.xml.
>
> Any idea?

A copy of the stdout/stderr would be helpful. Something is failing in the map over the extension list, EXTS. What operating environment are you executing "make" under? I have submitted a minor patch at <> that may fix your problem if you have extensively edited the EXTS list in Makefile.

> What XML editor do you use? Does it understand any other XML schema
> languages?
>
> XMLMind. It understands DTDs, W3C XML Schema and RELAX NG Schema.
>
> > I am interested to know more about your use case. Which components
> > (e.g. API, state, GLSL) of the standard does your proposal extend?
> > Does the extension introduce any new namespaces or reference any
> > external namespaces?
>
> It extends the WebGL API. It also extends state and GLSL but those parts
> will be coming from an existing OES extension. Once I have the proposal in
> reasonable shape I'll post it to this list for discussion.

Great! One of the most useful applications of these specification documents is the automated extraction of GLSL symbols and types for shading language processors. If your extension proposal extends the GLSL global namespace with new macros, types, variables, or functions, I would like to work with you to declare this in extension.xml in a structured way.

Sincerely,

David

> Regards
>
> -Mark
|
https://www.khronos.org/webgl/public-mailing-list/archives/1206/msg00220.html
|
CC-MAIN-2015-22
|
refinedweb
| 541
| 67.96
|
Build Your Own ASP.NET Website Using C# And VB.NET, Chapter 2 – ASP.NET Basics. Note that you can download these chapters in PDF format if you’d rather print them out and read them offline.
So far, you’ve installed the necessary software to get going and have been introduced to some very simple form processing techniques. As the next few chapters unfold, we’ll introduce more advanced topics, including controls, programming techniques, and more. Before we can begin developing applications with ASP.NET, however, you’ll need to understand the inner workings of a typical ASP.NET page. This will help you identify the various parts of the ASP.NET page referenced by the many examples within the book. In this chapter, we’ll talk about some key mechanisms of an ASP.NET page, specifically:
- Page structure
- View state
- Namespaces
- Directives
We’ll also cover two of the "built-in" languages supported by the .NET Framework: VB.NET and C#. As this section begins to unfold, we’ll explore the differences, similarities, and power that the two languages provide in terms of creating ASP.NET applications.
So, what exactly makes up an ASP.NET page? The next few sections will give you an in-depth understanding of the constructs of a typical ASP.NET page.
ASP.NET Page Structure
ASP.NET pages are simply text files with the .aspx file name extension that can be placed on an IIS server equipped with ASP.NET. When a browser requests an ASP.NET page, the ASP.NET runtime (as a component of the .NET Framework’s Common Language Runtime, or CLR) parses and compiles the target file into a .NET Framework class. The application logic now contained within the new class is used in conjunction with the presentational HTML elements of the ASP.NET page to display dynamic content to the user. Sounds simple, right?
An ASP.NET page consists of the following elements:
- Directives
- Code declaration blocks
- Code render blocks
- ASP.NET server controls
- Server-side comments
- Server-side include directives
- Literal text and HTML tags
It’s important to remember that ASP.NET pages are just text files with an
.aspx extension that are processed by the runtime to create standard HTML, based on their contents. Presentational elements within the page are contained within the
<body> tag, while application logic or code can be placed inside
<script> tags. Remember this pattern from the sample at the end of the previous chapter?
Figure 2.1 illustrates the various parts of that page.
Figure 2.1. All the elements of an ASP.NET page are highlighted. Everything else is literal text and HTML tags.
As you can see, this ASP.NET page contains examples of all the above components (except server-side includes) that make up an ASP.NET page. You won’t often use every single element in a given page, but you should become familiar with these elements, the purpose that each serves, and how and when it’s appropriate to use them.
Directives
The directives section is one of the most important parts of an ASP.NET page. Directives control how a page is compiled, specify settings when navigating between pages, aid in debugging (error-fixing), and allow you to import classes to use within your page’s code. Directives start with the sequence
<%@, followed by the directive name, plus any attributes and their corresponding values, then end with
%>. Although there are many directives that you can use within your pages, the two most important are the
Import and
Page directives. We will discuss directives in greater detail later, but, for now, know that the
Import and
Page directives are the most useful for ASP.NET development. Looking at the sample ASP.NET page in Figure 2.1, you can see that a
Page directive was used at the top of the page as shown:
<%@ Page Language="VB" %>
<%@ Page Language="C#" %>
The
Page directive, in this case, specifies the language that’s to be used for the application logic by setting the
Language attribute appropriately. The value provided for this attribute, in quotes, specifies that we’re using either VB.NET or C#. There’s a whole range of different directives; we’ll see a few more later in this chapter.
Unlike ASP, in ASP.NET, directives can appear anywhere on a page, but are most commonly written as the very first lines.
Code Declaration Blocks
In Chapter 3, VB.NET and C# Programming Basics we’ll talk about code-behind pages and how they let us separate our application logic from an ASP.NET page’s HTML presentation code. If you’re not working with code-behind pages, however, code declaration blocks must be used to contain all the application logic of your ASP.NET page. This application logic defines variables, subroutines, functions, and more. In our page, we place the code inside
<script> tags, like so:
<script runat="server">
Sub mySub()
' Code here
End Sub
</script>
Here, the tags enclose some VB.NET code, but it could just as easily be C# if our page language were set thus:
<script runat="server">
void mySub() {
// Code here
}
</script>
Both of these code snippets contain comments – explanatory text that will be ignored by ASP.NET, but which serves to describe how the code works.
In VB.NET code, a single quote or apostrophe (
') indicates that the remainder of the line is to be ignored as a comment.
In C# code, two slashes (
//) does the same. C# code also lets you span a comment over multiple lines by beginning it with
/* and ending it with
*/.
Before .NET emerged, ASP also supported such script tags using a
runat="server" attribute, although they could only ever contain VBScript, and, for a variety of reasons, they failed to find favor among developers. Code declaration blocks are generally placed inside the
<head> tag of your ASP.NET page. The sample ASP.NET page shown in Figure 2.1, for instance, contained the following code declaration block:
<script runat="server">
Sub Page_Load()
lblMessage.Text = "Hello World"
End Sub
</script>
Perhaps you can work out what the equivalent C# code would be:
<script runat="server">
void Page_Load() {
lblMessage.Text = "Hello World";
}
</script>
The
<script runat="server"> tag accepts two other attributes, as well. You can set the language used in the block with the
language attribute:
<script runat="server" language="VB">
<script runat="server" language="C#">
If you don’t specify a language within the code declaration block, the ASP.NET page will use the language provided by the
language attribute of the
Page directive. Each page may only contain code in a single language; for instance, it is not possible to mix VB.NET and C# in the same page.
The second attribute available is
src, which lets you specify an external code file to use within your ASP.NET page:
<script runat="server" language="VB" src="mycodefile.vb">
<script runat="server" language="C#" src="mycodefile.cs">
Code Render Blocks
You can use code render blocks to define inline code or inline expressions that execute when a page is rendered, and you may recognize these blocks from traditional ASP. Code within a code render block is executed immediately as it is encountered, usually when the page is loaded or rendered for the first time, and every time the page is loaded subsequently. Code within a code declaration block, on the other hand, occurring within script tags, is only executed when it is called or triggered by user or page interactions. There are two types of code render blocks: inline code and inline expressions, both of which are typically written within the body of the ASP.NET page.
Inline code render blocks execute one or more statements and are placed directly inside a page’s HTML within
<% and
%> characters.
Inline expression render blocks can be compared to
Response.Write() in classic ASP. They start with
<%= and end with
%>, and are used to display values of the variables and methods on a page.
Looking back at Figure 2.1, you can see both types of code render blocks:
<% Dim Title As String = "Zak Ruvalcaba" %>
<%= Title %>
This equates to the following C#:
<% String Title = "Zak Ruvalcaba"; %>
<%= Title %>
The first line represents an inline code render block and must contain complete statements in the appropriate language. Here, we’re setting the value of the
Title variable to the string
Zak Ruvalcaba. The last line is an example of an inline expression render block used to write out the value of the
Title variable,
Zak Ruvalcaba, onto the page.
ASP.NET Server Controls
At the heart of ASP.NET pages lies the server controls, which represent dynamic elements that your users can interact with. There are four basic types of server control: ASP.NET controls, HTML controls, validation controls, and user controls.
All ASP.NET controls must reside within a
<form runat="server"> tag in order to function correctly. The only two exceptions to this rule are the
HtmlGenericControl and the
Label Web control.
Server controls offer the following advantages to ASP.NET developers:
- We can access HTML elements easily from within our code: we can change their characteristics, check their values, or even dynamically update them straight from our server-side programming language of choice.
- ASP.NET controls retain their properties even after the page has been processed. This process is known as view state. We’ll be covering view state later in this chapter. For now, just know that view state prevents the user from losing data that has already been entered into a form once it’s been sent to the server for processing. When the response comes back to the client’s browser, text box values, drop-down list selections, etc., are all retained through view state.
- With ASP.NET controls, developers are able to separate the presentational elements (everything the user sees) and application logic (dynamic portions of the ASP.NET page) of a page so that each can be considered separately.
Because ASP.NET is all about controls, we’ll be discussing them in greater detail as we move through this book. For instance, in the next few chapters, we’ll discuss HTML controls and Web controls (Chapter 4, Web Forms and Web Controls), Validation controls (Chapter 5, Validation Controls), Data controls (Chapter 9, The DataGrid and DataList Controls), and so on.
Server-Side Comments
Server-side comments allow you to include, within the page, comments or notes that will not be processed by ASP.NET. Traditional HTML uses the
<!-- and
--> character sequences to delimit comments; anything found within these will not be displayed to the user by the browser. ASP.NET comments look very similar, but use the sequences
<%-- and
--%>.
Our ASP.NET example contains the following server-side comment block:
<%-- Declare the title as string and set it --%>
The difference between ASP.NET comments and HTML comments is that ASP.NET comments are not sent to the client at all. Don’t use HTML comments to try and comment out ASP.NET code. Consider the following example:
<!--
<button runat="server" id="myButton" onServerClick="Click">Click
Me</button>
<% Title = "New Title" %>
-->
Here, it looks as if a developer has attempted to use an HTML comment to hide not only an HTML button control, but a code render block as well. Unfortunately, HTML comments will only hide things from the browser, not the ASP.NET runtime. So, in this case, while we won’t see anything in the browser that represents these two lines, they will, in fact, have been processed by ASP.NET, and the value of the variable
Title will be changed to
New Title. The code could be modified to use server-side comments very simply:
<%--
<button runat="server" id="myButton" onServerClick="Click">Click
Me</button>
<% Title = "New Title" %>
--%>
Now, the ASP.NET runtime will ignore the contents of this comment, and the value of the
Title variable will not be changed.
Server-Side Include Directives
Server-side include directives enable developers to insert the contents of an external file anywhere within an ASP.NET page. In the past, developers used server-side includes when inserting connection strings, constants, and other code that was generally repeated throughout the entire site.
There are two ways your server-side includes can indicate the external file to include: using either the
file or the
virtual attribute. If we use
file, we specify its filename as the physical path on the server, either as an absolute path starting from a drive letter, or as a path relative to the current file. Below, we see a
file server-side include with a relative path:
<!-- #INCLUDE file="myinclude.aspx" -->
virtual server-side includes, on the other hand, specify the file's location on the Website, either with an absolute path from the root of the site, or with a path relative to the current page. The example below uses an absolute virtual path:
<!-- #INCLUDE virtual="/directory1/myinclude.aspx" -->
Note that although server-side includes are still supported by ASP.NET, they have been replaced by a more robust and flexible model known as user controls. Discussed in Chapter 16, Rich Controls and User Controls, user controls allow for developers to create a separate page or module that can be inserted into any page within an ASP.NET application.
Literal Text and HTML Tags
The final element of an ASP.NET page is plain old text and HTML. Generally, you cannot do without these elements, and HTML is the means for displaying the information from your ASP.NET controls and code in a way that's suitable for the user. Returning to the example in Figure 2.1 one more time, let's focus on the literal text and HTML tags:
<%@ Page Language="VB" %>
<html>
<head>
<title>Sample Page</title>
<script runat="server">
Sub ShowMessage(s As Object, e As EventArgs)
lblMessage.Text = "Hello World"
End Sub
</script>
</head>
<body>
<form runat="server">
<%-- Declare the title as string and set it --%>
<asp:Label id="lblMessage" runat="server" />
<% Dim Title As String = "Zak Ruvalcaba's Book List" %>
<%= Title %>
</form>
</body>
</html>
As you can see in the bold code, literal text and HTML tags provide the structure for presenting our dynamic data. Without them, there would be no format to the page, and the browser would be unable to understand it.
Now you should understand what the structure of an ASP.NET page looks like. As you work through the examples in this book, you'll begin to realize that in many cases you won't need to use all these elements. For the most part, all of your development will be modularized within code declaration blocks. All of the dynamic portions of your pages will be contained within code render blocks or controls located inside a
<form runat="server">tag.
In the following sections, we'll outline the various languages used within ASP.NET, talk a little about view state, and look at working with directives in more detail.
View State
As I mentioned briefly in the previous section, ASP.NET controls automatically retain their data when a page is sent to the server by a user clicking a submit button. Microsoft calls this persistence of data view state. In the past, developers would have to hack a way to remember the item selected in a drop-down menu or keep the contents of a text box, typically using a hidden form field. This is no longer the case; ASP.NET pages, once submitted to the server for processing, automatically retain all information contained within text boxes, items selected within drop-down menus, radio buttons, and check boxes. Even better, they keep dynamically generated tags, controls, and text. Consider the following ASP page, called
sample.asp:
<html>
<head>
<title>Sample Page using VBScript</title>
</head>
<body>
<form method="post" action="sample.asp">
<input type="text" name="txtName"/>
<input type="Submit" name="btnSubmit" text="Click Me"/>
<%
If Request.Form("txtName") <> "" Then
Response.Write(Request.Form("txtName"))
End If
%>
</form>
</body>
</html>
If you save this example in the
WebDocs subdirectory of
wwwroot that you created in Chapter 1, Introduction to .NET and ASP.NET, you can open it in your browser to see that view state is not automatically preserved. When the user submits the form, the information that was previously typed into the text box is cleared, although it is still available in
Request.Form("txtName"). The equivalent page in ASP.NET,
ViewState.aspx, demonstrates data persistence using view state:
Example 2.1.
ViewState.aspx
<html>
<head>
<title>Sample Page using VB.NET</title>
<script runat="server" language="VB">
Sub Click(s As Object, e As EventArgs)
lblMessage.Text = txtName.Text
End Sub
</script>
</head>
<body>
<form runat="server">
<asp:TextBox id="txtName" runat="server" />
<asp:Button id="btnSubmit" runat="server" Text="Click Me" OnClick="Click" />
<asp:Label id="lblMessage" runat="server" />
</form>
</body>
</html>
Example 2.2.
ViewState.aspx
<html>
<head>
<title>Sample Page using C#</title>
<script runat="server" language="C#">
void Click(Object s, EventArgs e) {
lblMessage.Text = txtName.Text;
}
</script>
</head>
<body>
<form runat="server">
<asp:TextBox id="txtName" runat="server" />
<asp:Button id="btnSubmit" runat="server" Text="Click Me" OnClick="Click" />
<asp:Label id="lblMessage" runat="server" />
</form>
</body>
</html>
In this case, the code uses ASP.NET controls with the
runat="server"attribute. As you can see in Figure 2.2, the text from the box appears on the page when the button is clicked, but also notice that the data remains in the text box! The data in this example is preserved because of view state:
Figure 2.2. ASP.NET supports view state. When a page is submitted, the information within the controls is preserved.
You can see the benefits of view state already. But where is all that information stored? ASP.NET pages maintain view state by encrypting the data within a hidden form field. View the source of the page after you've submitted the form, and look for the following code:
<input type="hidden" name="__VIEWSTATE" value="dDwtMTcyOTAyO
DAwNzt0PDtsPGk8Mj47PjtsPHQ8O2w8aTwzPjs+O2w8dDxwPGw8aW5uZXJodG
1sOz47bDxIZWxsbyBXb3JsZDs+Pjs7Pjs+Pjs+Pjs+d2wl7GlhgweO9LlUihS
FaGxk6t4=" />
This is a standard HTML hidden form field with the value set to the encrypted data from the form element. As soon as you submit the form for processing, all information relevant to the view state of the page is stored within this hidden form field.
View state is enabled for every page by default. If you do not intend to use view state, you can turn it off, which will result in a slight performance gain in your pages. To do this, set the
EnableViewState property of the
Page directive to false:
<%@ Page EnableViewState="False" %>
Speaking of directives, it's time we took a closer look at these curious beasts!
Working With Directives
For the most part, ASP.NET pages resemble traditional HTML pages, with a few additions. In essence, just using an extension like
.aspx on an HTML file will make the .NET Framework process the page. However, before you can work with certain, more advanced features, you will need to know how to use directives.
We've already talked a little about directives and what they can do earlier in this chapter. You learned that directives control how a page is created, specify settings when navigating between pages, aid in finding errors, and allow you to import advanced functionality to use within your code. Three of the most commonly used directives are:
Page
Defines page-specific attributes for the ASP.NET page, such as the language used.
Import
Makes functionality defined elsewhere available in a page through the use of namespaces. You will become very familiar with this directive as you progress through this book.
Register
As you will see in Chapter 16, Rich Controls and User Controls, you would use this directive to link a user control to the ASP.NET page.
You will become very familiar with these three directives, as they're the ones that we'll be using the most in this book. You've already seen the
Page directive in use. The
Import directive imports extra functionality for use within your application logic. The following example, for instance, imports the System.Web.Mail namespace, which contains classes for sending email messages:
<%@ Import Namespace="System.Web.Mail" %>
The
Register directive allows you to register a user control for use on your page. We'll cover these in Chapter 16, Rich Controls and User Controls, but the directive looks something like this:
<%@ Register TagPrefix="uc" TagName="footer" Src="footer.ascx" %>
ASP.NET Languages
As we saw in the previous chapter, .NET currently supports many different languages and there is no limit to the number of languages that could be made available. If you're used to writing ASP, you may think the choice of VBScript would be obvious. With ASP.NET however, Microsoft has done away with VBScript and replaced it with a more robust and feature-rich alternative: VB.NET.
ASP.NET's support for C# is likely to find favor with developers from other backgrounds. This section will introduce you to both these new languages, which are used throughout the remainder of the book. By the end of this section, you will, I hope, agree that the similarities between the two are astonishing - any differences are minor and, in most cases, easy to figure out.
Traditional server technologies are much more constrained in the choice of development language they offer. For instance, old-style CGI scripts were typically written with Perl or C/C++, JSP uses Java, Coldfusion uses CFML, and PHP is a language in and of itself. .NET's support for many different languages lets developers choose based on what they're familiar with, and start from there. To keep things simple, in this book we'll consider the two most popular, VB.NET and C#, giving you a chance to choose which feels more comfortable to you, or stick with your current favorite if you have one.
VB.NET
Visual Basic.NET or VB.NET is the result of a dramatic overhaul of Microsoft's hugely popular Visual Basic language. With the inception of Rapid Application Development (RAD) in the nineties, Visual Basic became extremely popular, allowing inhouse teams and software development shops to bang out applications two-to-the-dozen. VB.NET has many new features over older versions of VB, most notably that it has now become a fully object-oriented language. At last, it can call itself a true programming language on a par with the likes of Java and C++. Despite the changes, VB.NET generally stays close to the structured, legible syntax that has always made it so easy to read, use, and maintain.
C#
The official line is that Microsoft created C# in an attempt to produce a programming language that coupled the simplicity of Visual Basic with the power and flexibility of C++. However, there's little doubt that its development was at least hurried along. Following legal disputes with Sun about Microsoft's treatment (some would say abuse) of Java, Microsoft was forced to stop developing its own version of Java, and instead developed C# and another language, which it calls J#. We're not going to worry about J# here, as C# is preferable. It's easy to read, use, and maintain, because it does away with much of the confusing syntax for which C++ became infamous.
Summary
In this chapter, we started out by introducing key aspects of an ASP.NET page including directives, code declaration blocks, code render blocks, includes, comments, and controls. As the chapter progressed, you were introduced to the two most popular languages that ASP.NET supports, which we'll use throughout the book.
In the next chapter, we'll create more ASP.NET pages to demonstrate some form processing techniques and programming basics, before we finally dive in and look at object oriented programming for the Web.
Look out for more chapters from Build Your Own ASP.NET Website Using C# And VB.NET in coming weeks. If you can't wait, download all the sample chapters, or order your very own copy now!
|
https://www.sitepoint.com/asp-dot-net-basics/
|
CC-MAIN-2016-44
|
refinedweb
| 3,974
| 64.41
|
Quandry with the following C code (Intermediate)
Discussion in 'C Programming' started by BMarsh, Jan 12, 2005.
|
http://www.thecodingforums.com/threads/quandry-with-the-following-c-code-intermediate.436482/
|
CC-MAIN-2015-48
|
refinedweb
| 157
| 66.37
|
Having a responsive UI is an essential element for a great app. While you may have taken this for granted in the apps you've built so far, as you start to add more advanced features, such as networking or database capabilities, it can be increasingly difficult to write code that's both functional and performant. The example below illustrates just what can happen if long running tasks, such as downloading images from the Internet, are not handled correctly. While the image functionality works, the scrolling is jumpy, making the UI look unresponsive (and unprofessional!).
To avoid the problems with the above app, you'll need to learn a bit about something called threads. A thread is a bit of an abstract concept, but you can think of it as a single path of execution for code in your app. Each line of code you write is an instruction that's to be executed in-order on the same thread.
You've already been working with threads in Android. Every Android app has a default "main" thread. This is (usually) the UI thread. All the code you've written so far is on the main thread. Each instruction (i.e. a line of code) waits for the previous one to finish before the next line executes.
However, in a running app, there are more threads in addition to the main thread. Behind the scenes, the processor doesn't actually work with separate threads, but rather, switches back and forth between the different series of instructions to give the appearance of multitasking. A thread is an abstraction that you can use when writing code to determine which path of execution each instruction should go. Working with threads other than the main thread, allows your app to perform complex tasks, such as downloading images, in the background while the app's user interface remains responsive. This is called concurrent code, or simply, concurrency.
In this codelab, you'll learn about threads, and how to use a Kotlin feature called coroutines to write clear, non-blocking concurrent code.
Prerequisites
- Knowledge of basic Kotlin programming concepts including loops and functions, taught in Pathway 1: Introduction to Kotlin
- How to use lambda functions in Kotlin, taught in Pathway 3: Collections in Kotlin
What you'll learn
- What concurrency is and why it's important
- How to use coroutines and Threads to write non-blocking concurrent code
- How to access the main thread to safely perform UI updates when performing tasks in the background
- How and when to use different concurrency patterns (Scope/Dispatchers/Deferred)
- How to write code that interacts with network resources
What you'll build
- In this codelab, you'll write some small programs to explore working with threads and coroutines in Kotlin
What you need
Multithreading and concurrency
So far we have treated an Android app as a program with a single path of execution. You can do a lot with that single path of execution, but as your app grows, you need to think about concurrency.
Concurrency allows multiple units of code to execute out of order or seemingly in parallel permitting more efficient use of resources. The operating system can use characteristics of the system, programming language, and concurrency unit to manage multitasking.
Why do you need to use concurrency? As your app gets more complex, it's important for your code to be non-blocking. This means that performing a long-running task, such as a network request, won't stop the execution of other things in your app. Not properly implementing concurrency can make your app appear unresponsive to users.
You'll take a look at several examples demonstrating concurrent programming in Kotlin. All the examples can be run in the Kotlin Playground:
A thread is the smallest unit of code that can be scheduled and run in the confines of a program. Here's a small example where we can run concurrent code.
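(The listing below is an illustrative stand-in rather than the codelab's exact code: it starts two threads that each print a few lines, so the output can interleave differently on each run.)

fun main() {
    // Two threads, each printing a few lines; how the output interleaves
    // depends on how the scheduler happens to run them.
    val threads = List(2) { n ->
        Thread {
            repeat(3) { i ->
                println("Thread $n: step $i on ${Thread.currentThread()}")
            }
        }
    }
    threads.forEach { it.start() }
}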
Run the code several times. You'll see varied output. Sometimes the threads will appear to run in sequence and other times the content will be interspersed.
Using threads is a simple way to start working with multiple tasks and concurrency, but they are not problem-free. A number of problems can arise when you use
Thread directly, especially where your app's UI is involved. The thread that runs the UI is often called the main thread or UI thread.
Because this thread is responsible for running your app's UI, it's important for the main thread to be performant so that the app will run smoothly. Any long-running tasks will block it until completion and cause your app to appear unresponsive. Beyond that, the order in which threads run is beyond your control, so you can't always expect predictable output when working with threads directly.
For example the following code uses a simple loop to count from 1 to 50, but in this case, a new thread is created for each time the count is incremented. Think about what you'd expect the output to look like and then run the code a few times.
fun main() {
    var count = 0
    for (i in 1..50) {
        Thread {
            count += 1
            println("Thread: $i count: $count")
        }.start()
    }
}
Was the output what you expected? If you look at the "count" for some of the iterations, you'll notice that it remains unchanged even though multiple threads have incremented it. Even more odd, the count reaches 50 at Thread 43 even though the output suggests this is only the second thread to execute. Judging from the output alone, it's impossible to know what the final value of
count is.
This is just one way threads can lead to unpredictable behavior. When working with multiple threads, you may also run into what's called a race condition. This is when multiple threads try to access the same value in memory at the same time. Race conditions can result in hard to reproduce, random looking bugs, which may cause your app to crash, often unpredictably.
Performance issues, race conditions, and hard-to-reproduce bugs are some of the reasons why we don't recommend working with threads directly. Instead, you'll learn about a feature in Kotlin called coroutines that will help you write concurrent code. The examples that follow launch coroutines with GlobalScope; for the reasons we talked about concerning the main thread, GlobalScope is not recommended outside of example code. When you use coroutines in your apps, you will use other scopes, so you will not be using GlobalScope often in typical Android code.
import kotlinx.coroutines.*
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

fun <T> CoroutineScope.async(
    context: CoroutineContext = EmptyCoroutineContext,
    start: CoroutineStart = CoroutineStart.DEFAULT,
    block: suspend CoroutineScope.() -> T
): Deferred<T>
The
async() function returns a value of type
Deferred. A
Deferred is a cancelable
Job that can hold a reference to a future value. By using a
Deferred, you can still call a function as if it immediately returns a value - a
Deferred just serves as a placeholder, since you can't be certain exactly when the asynchronous work will finish. Separately, you may have noticed that the
getValue() function is also defined with the
suspend keyword. The reason is that it calls
delay(), which is also a
suspend function. Whenever a function calls another
suspend function, then it should also be a
suspend function.
If this is the case, then why wouldn't the
main() function in our example be marked with
suspend? It does call
getValue(), after all.
Not necessarily.
getValue() is actually called in the function passed into
runBlocking(), which is a
suspend function, similar to the ones passed into
launch() and
async(). However,
getValue() is not called in
main() itself, nor is
runBlocking() a
suspend function, so
main() is not marked with
suspend. If a function does not call a
suspend function, then it does not need to be a
suspend function itself.
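As a minimal sketch of how these pieces fit together (the getValue() body below is a stand-in that just delays and returns a random number, not the codelab's exact listing):

import kotlinx.coroutines.*

// Stand-in for a long-running task: suspends briefly, then produces a value.
suspend fun getValue(): Double {
    delay(1000)
    return Math.random()
}

fun main() {
    runBlocking {
        // async() starts the work and immediately returns a Deferred placeholder.
        val num1 = async { getValue() }
        val num2 = async { getValue() }
        // await() suspends until each Deferred actually holds its value.
        println("result of num1 + num2 is ${num1.await() + num2.await()}")
    }
}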
At the beginning of this codelab, you saw the following example that used multiple threads. With your knowledge of coroutines, rewrite the code to use coroutines instead of
Thread.
Note: You don't have to edit the
println() statements, even though they reference
Thread.
fun main() {
    val states = arrayOf("Starting", "Doing Task 1", "Doing Task 2", "Ending")
    repeat(3) {
        Thread {
            println("${Thread.currentThread()} has started")
            for (i in states) {
                println("${Thread.currentThread()} - $i")
                Thread.sleep(50)
            }
        }.start()
    }
}
import kotlinx.coroutines.*

fun main() {
    val states = arrayOf("Starting", "Doing Task 1", "Doing Task 2", "Ending")
    repeat(3) {
        GlobalScope.launch {
            println("${Thread.currentThread()} has started")
            for (i in states) {
                println("${Thread.currentThread()} - $i")
                delay(5000)
            }
        }
    }
}
You've learned
- Why concurrency is needed
- What a thread is, and why threads are important for concurrency
- How to write concurrent code in Kotlin using coroutines
- When and when not to mark a function as "suspend"
- The roles of a CoroutineScope, Job, and Dispatcher
- The difference between Deferred and Await
|
https://developer.android.com/codelabs/basic-android-kotlin-training-introduction-coroutines
|
CC-MAIN-2021-21
|
refinedweb
| 1,424
| 61.77
|
Splitting up messages in a text file
Rich Charlton
Greenhorn
Joined: Nov 11, 2004
Posts: 19
posted
Dec 12, 2004 07:14:00
0
Hi,
Im working on a basic email filter, which will read in a text file of several hundred email messages, increment a counter for an individual message, tokenise it, count number of occurences of token in that email then store in a dictionary. I have built a couple of classes that do certain things and they work ok. Although i cant figure out how i can split up each individual message in the text file. At the moment i just read the whole text file in, with no method of splitting the text file up into individual messages. The messages in the text file are split by 10 dashes (----------). Here are my classes so far:
Tokeniser.java
import java.io.*;
import java.util.*;
import javax.swing.JOptionPane;
import java.util.ArrayList;

public class Tokeniser {
    public HashSet unCommonHS = new HashSet();
    public int numberOfTokens = 0;

    public Tokeniser() {
        String fileName = "SPAM.txt";
        String aLine;
        ComWords cm = new ComWords();
        HashSet cwSet = cm.getCommonWordsHS();
        if (cm==null)
            return;
        try {
            BufferedReader bR = new BufferedReader(new FileReader(fileName));
            while((aLine=bR.readLine())!= null) {
                checkForCommonWord (aLine, cwSet, unCommonHS);
            }
            bR.close();

            Object[] ucw = unCommonHS.toArray();
            for (int i=0; i<ucw.length; i++) {
                numberOfTokens++;
                System.out.print(ucw[i]+" ");
                if (i!=0 && (i%1)==0)
                    System.out.println();
            }

            Set sm = new TreeSet(unCommonHS);
            for (Iterator it = sm.iterator(); it.hasNext(); ) {
                numberOfTokens++;
                it.next();
            }
            System.out.println("Number of Tokens In File :" + numberOfTokens);
        } catch(IOException e) {
            System.out.println("Train Failed");
            e.printStackTrace();
            return;
        }
    }

    private void checkForCommonWord(String line, HashSet commonSet, HashSet uncommonHS) {
        StringTokenizer st = new StringTokenizer(line.toLowerCase(),
            "!-/.,@?><#~'��,'��� - �'�� �� :;][}{|�$%`'�/-^�&�*()1234567890-=_+*`�");
        while (st.hasMoreElements()) {
            Object ob = st.nextElement();
            if (!commonSet.contains(ob))
                uncommonHS.add(ob);
        }
    }

    public String returnString() {
        return unCommonHS.toString();
    }

    public static void main(String[] args) {
        Tokeniser t = new Tokeniser();
    }
}
Comwords.java (removes common words like 'the', 'a' etc)
import java.io.*;
import java.util.*;
import javax.swing.JOptionPane;
import java.util.ArrayList;
import java.util.Hashtable;

class ComWords {
    public ComWords() {}

    public HashSet getCommonWordsHS () {
        HashSet cwSet = new HashSet();
        String fileName = "cwords.txt";
        try {
            BufferedReader bR = new BufferedReader(new FileReader(fileName));
            String aLine=null;
            while((aLine=bR.readLine())!= null)
                storeWord(cwSet, aLine);
            bR.close();
            return cwSet;
        } catch(IOException e) {
            System.out.println("CommonWords Failed");
            e.printStackTrace();
            return null;
        }
    }

    private void storeWord(HashSet set, String line) {
        StringTokenizer st=new StringTokenizer(line," ");
        while (st.hasMoreElements()) {
            set.add(st.nextElement());
        }
    }
}
Does anyone have any ideas how i might solve this problem?
Thanks
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
Dec 12, 2004 15:48:00
0
Can you just
test
for a line of ten dashes immediately after the read - the first line inside your loop?
For a real change of direction, if you have JDK 5 look at Scanner. That could probably read one message at a time by using the dashes as a delimiter.
Any chance you'll be bitten by somebody including a line of dashes in their mail message?
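For example (an untested sketch, assuming the separator really is a line of exactly ten dashes), the Scanner approach could look roughly like this:

import java.io.File;
import java.io.IOException;
import java.util.Scanner;

public class MessageSplitter {
    public static void main(String[] args) throws IOException {
        Scanner scanner = new Scanner(new File("SPAM.txt"));
        // Use a line of ten dashes as the delimiter between messages.
        scanner.useDelimiter("(?m)^-{10}$");
        int messageCount = 0;
        while (scanner.hasNext()) {
            String message = scanner.next();
            messageCount++;
            // Each message can now be tokenised and counted on its own.
            System.out.println("Message " + messageCount + ": " + message.length() + " chars");
        }
        scanner.close();
    }
}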
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
I agree. Here's the link:
|
http://www.coderanch.com/t/375398/java/java/Splitting-messages-text-file
|
CC-MAIN-2015-22
|
refinedweb
| 616
| 51.44
|
We have this really cool tool that parses a python file into an AST and then morphs it according to some transformation and then spits out python code corresponding to the new AST. I've been playing around with it lately to come up with some transformations. One of the things I wanted to do was convert all return statements to yield expressions. That will give a good amount of coverage for yield. So this is the code I want to convert:
def f(a,b):
    return a + b

def g(f, *args):
    return f(*args)
f(3,42)
g(f, 3, 42)
Now, it's easy enough to change a "return <expr>" to "yield <expr>" but now I cannot just call the function directly as f(). That will return the generator. I need to do f().next(). Now one naive approach would be to go change all the invocations of the function to f().next(). That would work except for the cases where I pass the function as a parameter to another function (like g in the example). Now when g calls f, it could blow up because it will get a generator instead of f's return value. I could solve that also by replacing every f with lambda : f().next(). So the call to g becomes this:
g(lambda *args: f(*args).next(), 3, 42)
That almost solves the problem except that even fs which are not references to this function (like the locally scoped f variable inside the g method) will get replaced since in a parsed AST we don't have any idea of scope (that happens during the conversion of the Python AST to the DLR AST). Now come python decorators to the rescue. In python you can wrap a function with a decorator like so:
@dec
def f():
    pass
dec can be defined as a function. Then python will start treating f as dec(f) and any calls to f() will be semantically equivalent to dec(f)(). This is great because now python will take care of that little scoping problem for me. So now the morphed code is simply:
def yieldwrapper(func):
    f = lambda *args, **kwargs: func(*args, **kwargs).next()
    f.__name__ = func.__name__
    return f

@yieldwrapper
def f(a,b):
    yield a + b

@yieldwrapper
def g(f, *args):
    yield f(*args)

f(3,42)
g(f, 3, 42)
The name of the wrapper is also set to the function's name so that it can masquerade as the real function just in case anyone's looking. Except for a few weird issues, this worked for most of the tests.
|
http://blogs.msdn.com/b/srivatsn/archive/2008/03/17/decorators-to-convert-return-statements-to-yield-statements.aspx
|
CC-MAIN-2014-23
|
refinedweb
| 438
| 76.25
|
I would like to be able to draw a line between two subplots in Matplotlib. Currently, I use the method provided in this SO topic: Drawing lines between two plots in Matplotlib thus using transFigure and matplotlib.lines.Line2D
However, when I zoom on my figure (both subplots share the same x and y axes), the line does not update i.e. it keeps the same coordinate in the figure frame but not in my axes frames.
Is there a simple way to cope with this?
As the comment in the linked question (Drawing lines between two plots in Matplotlib) suggests, you should use a
ConnectionPatch to connect the plots. The good thing about this
ConnectionPatch is not only that it is easy to realize, but also it will move and zoom together with the data.
Here is an example of how to use it.
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch
import numpy as np

fig, (ax1, ax2) = plt.subplots(1,2, sharex=True, sharey=True)

x,y = np.arange(23), np.random.randint(0,10, size=23)
x=np.sort(x)
i = 10

ax1.plot(x,y, marker="s", linestyle="-.", c="r")
ax2.plot(x,y, marker="o", linestyle="", c="b")

con = ConnectionPatch(xyA=(x[i],y[i]), xyB=(x[i],y[i]), coordsA="data", coordsB="data",
                      axesA=ax2, axesB=ax1, arrowstyle="-")
ax2.add_artist(con)

plt.show()
|
https://codedump.io/share/X51n8Z5bQLkV/1/plot-a-line-between-two-subplots-while-handling-the-zoom-with-matplotlib
|
CC-MAIN-2017-22
|
refinedweb
| 235
| 67.25
|
I've got a legacy Perl CGI page running on Apache that processes a sizable Excel spreadsheet's worth of data, adding to the database as necessary. The data is processed in groups, and each group of data gets sent to the database.
After each call to the database, my system's available memory decreases considerably, to the point where there's no memory left. When I finally get the 'Premature end of script headers' error and HTTP code 500 is returned to the client, the memory is freed up again somewhere.
Searching through the (complicated) code, I can't find where the memory leak might be occurring. Is there some trick or tool I can use to find out where the memory is going?
The quick answer is that it sucks to be you. There isn't a nice, ready-to-use tool that you can run to get your answer. I'm sorry I couldn't be more help, but without seeing any code, etc, there really isn't much better advice that anyone can give.
I can't comment on your specific situation, but here are a few things I have done in the past. You need to find the general area that's causing the problem. It isn't much different from other debugging techniques. Usually I've found that there are no elegant solutions to this stuff. You simply roll up your sleeves and stick your arms elbow-deep in the muck no matter how bad it smells.
First, run the program outside of the web server. If you still see the problem from the command line, be happy: you've just (mostly) eliminated a problem with the web server. It may take some work to create a wrapper script to set up the web environment, but it ends up being much simpler since you don't have to mess with restarting the server, etc. to completely reset the environment.
If you cannot replicate the problem outside the server, you can still do what I recommend next; it's just more annoying. If it's a web server problem and not a problem from the command line, the task becomes discovering the difference between those two environments. I've been in situations like that.
If it's not a problem with the web server, start bisecting the script as you would for any debugging problem. If you have logging available, turn it on and watch the program run while recording its real memory use. When does it blow up? It sounds like you've narrowed it down to the database calls. If you are able to run this from the command line or in the debugger, I'd find a pair of suitable breakpoints before and after the memory increase and progressively bring them closer together. You can use modules such as Devel::Size to look at the memory sizes of data structures you suspect.
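For example, a rough Devel::Size check might look like this (the %row_cache hash below is made up; substitute whatever structure you actually suspect):

use strict;
use warnings;
use Devel::Size qw(total_size);

# %row_cache stands in for a structure you suspect of growing.
my %row_cache;

warn "before DB call: ", total_size(\%row_cache), " bytes\n";
# ... the database call under suspicion goes here ...
warn "after DB call:  ", total_size(\%row_cache), " bytes\n";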
After that, it's just narrowing down the suspects. Once you find the suspect, try to replicate it in a short example script. You want to eliminate as many possible contributing factors as you can.
When you think you've found the problem code, maybe you can ask another question that shows the code if you still don't understand what's happening.
If you wanted to get really fancy, you could write your own Perl debugger. It isn't very hard. You get a chance to run some subroutines in the
DB namespace at the start or end of statements. You'd have your debugging code list memory profiles for the things you suspect and look for jumps in memory size. I wouldn't do this unless everything else fails.
|
http://codeblow.com/questions/how-do-i-debug-a-potential-memory-leak-inside-a-perl-cgi/
|
CC-MAIN-2018-17
|
refinedweb
| 632
| 62.17
|
Creating an empty Pandas DataFrame, then filling it?
NEVER grow a DataFrame!
TLDR; (just read the bold text)
Most answers here will tell you how to create an empty DataFrame and fill it out, but no one will tell you that it is a bad thing to do.
Here is my advice: Accumulate data in a list, not a DataFrame.
Use a list to collect your data, then initialise a DataFrame when you are ready. Either a list-of-lists or list-of-dicts format will work,
pd.DataFrame accepts both.
data = []
for a, b, c in some_function_that_yields_data():
    data.append([a, b, c])

df = pd.DataFrame(data, columns=['A', 'B', 'C'])
Pros of this approach:
It is always cheaper to append to a list and create a DataFrame in one go than it is to create an empty DataFrame (or one of NaNs) and append to it over and over again.
Lists also take up less memory and are a much lighter data structure to work with, append, and remove (if needed).
dtypes are automatically inferred (rather than assigning
object to all of them).
A
RangeIndex is automatically created for your data, instead of you having to take care to assign the correct index to the row you are appending at each iteration.
If you aren't convinced yet, this is also mentioned in the documentation:
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
But what if my function returns smaller DataFrames that I need to combine into one large DataFrame?
That's fine, you can still do this in linear time by growing or creating a python list of smaller DataFrames, then calling
pd.concat.
small_dfs = []
for small_df in some_function_that_yields_dataframes():
    small_dfs.append(small_df)

large_df = pd.concat(small_dfs, ignore_index=True)
or, more concisely:
large_df = pd.concat(
    list(some_function_that_yields_dataframes()), ignore_index=True)
These options are horrible
append or
concat inside a loop
Here is the biggest mistake I've seen from beginners:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df = df.append({'A': i, 'B': b, 'C': c}, ignore_index=True)  # yuck
    # or similarly,
    # df = pd.concat([df, pd.Series({'A': i, 'B': b, 'C': c})], ignore_index=True)
Memory is re-allocated for every
append or
concat operation you have. Couple this with a loop and you have a quadratic complexity operation.
The other mistake associated with
df.append is that users tend to forget append is not an in-place function, so the result must be assigned back. You also have to worry about the dtypes:
df = pd.DataFrame(columns=['A', 'B', 'C'])
df = df.append({'A': 1, 'B': 12.3, 'C': 'xyz'}, ignore_index=True)

df.dtypes
A     object   # yuck!
B    float64
C     object
dtype: object
Dealing with object columns is never a good thing, because pandas cannot vectorize operations on those columns. You will need to do this to fix it:
df.infer_objects().dtypes
A      int64
B    float64
C     object
dtype: object
loc inside a loop
I have also seen
loc used to append to a DataFrame that was created empty:
df = pd.DataFrame(columns=['A', 'B', 'C'])
for a, b, c in some_function_that_yields_data():
    df.loc[len(df)] = [a, b, c]
As before, you have not pre-allocated the amount of memory you need each time, so the memory is re-grown each time you create a new row. It's just as bad as
append, and even more ugly.
Empty DataFrame of NaNs
And then, there's creating a DataFrame of NaNs, and all the caveats associated therewith.
df = pd.DataFrame(columns=['A', 'B', 'C'], index=range(5))
df
     A    B    C
0  NaN  NaN  NaN
1  NaN  NaN  NaN
2  NaN  NaN  NaN
3  NaN  NaN  NaN
4  NaN  NaN  NaN
It creates a DataFrame of object columns, like the others.
df.dtypes
A    object  # you DON'T want this
B    object
C    object
dtype: object
Appending still has all the issues as the methods above.
for i, (a, b, c) in enumerate(some_function_that_yields_data()):
    df.iloc[i] = [a, b, c]
The Proof is in the Pudding
Timing these methods is the fastest way to see just how much they differ in terms of their memory and utility.
Benchmarking code for reference.
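If you would rather time it yourself, a rough harness along these lines (not the exact benchmarking code referred to above) already shows the gap at a couple of thousand rows. Note that DataFrame.append was removed in pandas 2.0, so the second function only runs on older versions:

import timeit
import pandas as pd

def with_list(n=2000):
    # Accumulate rows in a plain list, build the DataFrame once.
    data = []
    for i in range(n):
        data.append([i, i * 2, i * 3])
    return pd.DataFrame(data, columns=['A', 'B', 'C'])

def with_append(n=2000):
    # Grow the DataFrame row by row (the anti-pattern described above).
    df = pd.DataFrame(columns=['A', 'B', 'C'])
    for i in range(n):
        df = df.append({'A': i, 'B': i * 2, 'C': i * 3}, ignore_index=True)
    return df

print('list accumulate:', timeit.timeit(with_list, number=3))
print('append in loop: ', timeit.timeit(with_append, number=3))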
Here's a couple of suggestions:
Use
date_range for the index:
import datetime
import pandas as pd
import numpy as np

todays_date = datetime.datetime.now().date()
index = pd.date_range(todays_date-datetime.timedelta(10), periods=10, freq='D')
columns = ['A','B', 'C']
Note: we could create an empty DataFrame (with
NaNs) simply by writing:
df_ = pd.DataFrame(index=index, columns=columns)
df_ = df_.fillna(0)  # with 0s rather than NaNs
To do these type of calculations for the data, use a numpy array:
data = np.array([np.arange(10)]*3).T
Hence we can create the DataFrame:
In [10]: df = pd.DataFrame(data, index=index, columns=columns)

In [11]: df
Out[11]:
            A  B  C
2012-11-29  0  0  0
2012-11-30  1  1  1
2012-12-01  2  2  2
2012-12-02  3  3  3
2012-12-03  4  4  4
2012-12-04  5  5  5
2012-12-05  6  6  6
2012-12-06  7  7  7
2012-12-07  8  8  8
2012-12-08  9  9  9
If you simply want to create an empty data frame and fill it with some incoming data frames later, try this:
newDF = pd.DataFrame()  # creates a new dataframe that's empty
newDF = newDF.append(oldDF, ignore_index = True)  # ignoring index is optional

# try printing some data from newDF
print newDF.head()  # again optional
In this example I am using this pandas doc to create a new data frame and then using append to write to the newDF with data from oldDF.
If I have to keep appending new data into this newDF from more than one oldDFs, I just use a for loop to iterate over pandas.DataFrame.append()
|
https://codehunter.cc/a/python/creating-an-empty-pandas-dataframe-then-filling-it
|
CC-MAIN-2022-21
|
refinedweb
| 972
| 63.9
|
A Simple CMS in Sinatra, Part III
A Simple CMS in Sinatra
- A Simple Content Management System in Sinatra
- A Simple CMS in Sinatra, Part II
- A Simple CMS in Sinatra, Part III
In part two of this tutorial series we finished putting together the basics of our Simple Content Management System. It’s now possible to create, edit, and delete pages with pretty URLs.
In this post, we’re going to look at adding an Administration section and defining which pages are public facing and which are private.
Logging In and Logging Out
First off, we’ll add some functionality to allow a user to log in and out. To do this, we need to configure our application to use sessions. Change the configure block near the top of main.rb to the following:
configure do
  Mongoid.load!("./mongoid.yml")
  enable :sessions
end
We can now access the session hash to keep track of whether a user is logged in from request to request.
Next, we will create a helper method called
admin? that is
true if the
session[:admin] hash is
true (this will be set as
true when a user logs in and set to
nil when the user logs out). Add the following line below the
configure block in main.rb:
helpers do
  def admin?
    session[:admin]
  end
end
We now have a convenient way to check if a user is logged in or not in views and route handlers. Let’s use this to add a login button to our application.
We want this to be on every page, so change the layout.slim file in the views folder to the following:
doctype html
html
  head
    title= @title || "Simple Sinatra CMS"
    link rel="stylesheet" href="/styles/main.css"
  body
    - if admin?
      == slim :admin
    - else
      a.rounded.button href="/login" Log In
    h1.logo
      a href="/pages" Simple Sinatra CMS
    == yield
The section after the body now checks to see if the user is logged in (by checking if the
admin? helper method that we just created is
true). If they are, then we will display the admin partial view (we’ll create this in a minute).
If the user is not logged in, then a link is displayed to allow them to log in. This link has the classes
.rounded.button that we created in the last tutorial so it appears as a button. It also contains the route
/login, which we’ll handle shortly.
Let’s get the admin view sorted – save the following code as admin.slim in the views folder:
#admin
  nav
    ul
      li
        a.round.button href='/pages' Pages
      li
        a.round.button href='/pages/new' Add a page
      - if @page
        li
          a.round.button href="/pages/#{@page.id}/edit" Edit this page
        li
          a.round.button href="/pages/delete/#{@page.id}" Delete this page
    a.rounded.button href="/logout" Log Out
This adds the “create new page” button as well as buttons for edit and delete that only show if you are looking at an actual page. These buttons should only be available if the user is logged in. In the previous version of our application, they were placed in other views, so we need to remove them now and limit them to only being in the admin view.
In index.slim, remove the following line of code:
a.button.round href='/pages/new' Add a new page
Now in show.slim, remove the following code:
a.button href="/pages/#{@page.id}/edit" Edit a.button href="/pages/delete/#{@page.id}"
To finish off, we need to create the actual route handlers that are used to log the user in and out. Add the following lines above the page route handlers in main.rb:
get('/login'){session[:admin]=true; redirect back} get('/logout'){session[:admin]=nil; redirect back}
These route handlers set the
session[:admin] hash to
true when the user logs in and to
nil when the user logs out by visiting the relevant URL. This information is retained in the session hash using cookies and is then used by the
admin? helper method that we just created to check whether the user is logged in or out. Both route handlers also use the handy
back helper method that Sinatra provides. This will take the user back to whichever page they were viewing before they logged in or out.
This should now all be working. Make sure you start the server running, then open the app in your browser and have a play around at logging in and out. You should only be able to see the buttons for adding, editing and deleting pages if you are logged in.
The Most Insecure Security Ever?
The eagle-eyed amongst you might have noticed that this authentication system is not particularly strong – there isn’t even a password! This is because having a strong auth procedure is an important business and beyond the scope of this tutorial. What we have done here is put the pieces in place so that a suitable auth solution can then be baked in to the relevant places.
You might want take a look at the sinatra-auth gem (or one of the many other gems that provide auth). You could use Twitter Authentication or you could even roll your own solution.
For the purposes of this tutorial though, I just wanted to demonstrate how the application would function differently depending on whether the user is logged in or not.
Protecting Admin Features
We have a number of routes in our application that we don’t particularly want all users to access. In fact, most of the routes fall under this category, we actually only want the ‘show’ URLs to be visible to everybody.
We have successfully hidden the buttons that link to the admin routes, but that might not stop a determined or curious user from simply typing the URLs directly into the browser address bar. For example, you can just navigate to '/pages/new' to add a new page.
We need to add some security to the routes themselves to actually only allow users to access these routes if they are logged in. This is done by creating a
protected! helper method.
Add the following method inside the
helpers block that we created earlier:
def protected!
  halt 401, "You are not authorized to see this page!" unless admin?
end
This uses Sinatra’s
halt helper method to stop the request in its tracks and issue a 401 ‘unauthorized’ error with a custom message (this can be a string or you could create a view for it). Notice that we use the
admin? helper method that we created at the beginning of this tutorial to check if the user is logged in, so the request will only be halted if the user is NOT logged in.
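As a small variation (assuming you add a views/unauthorized.slim template of your own), the same helper could render a proper error page instead of a bare string:

def protected!
  # Render the unauthorized view as the 401 response body.
  halt 401, slim(:unauthorized) unless admin?
end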
This method now needs to be added to all the route handlers that we want to protect. This means we need to change a lot of our route handlers to the following:
get '/pages/new' do
  protected!
  @page = Page.new
  slim :new
end

post '/pages' do
  protected!
  page = Page.create(params[:page])
  redirect to("/pages/#{page.id}")
end

put '/pages/:id' do
  protected!
  page = Page.find(params[:id])
  page.update_attributes(params[:page])
  redirect to("/pages/#{page.id}")
end

get '/pages/delete/:id' do
  protected!
  @page = Page.find(params[:id])
  slim :delete
end

delete '/pages/:id' do
  protected!
  Page.find(params[:id]).destroy
  redirect to('/pages')
end

get '/pages/:id/edit' do
  protected!
  @page = Page.find(params[:id])
  slim :edit
end
Now none of these routes will be accessible to any users that are not logged in, even if they type the route directly into the browser.
Different URLs for Different Users
In the last post, we created pretty URLs that are based on the title of the page. These are the URLs that we would like to use for the majority of users – they look better and we don’t want them to see the IDs of each page exposed in the URL.
When a user is logged in however, we want to use the URL that refers to the ID of the page. To do this we need to add a helper called url_for to our helpers block in main.rb:

def url_for(page)
  if admin?
    "/pages/#{page.id}"
  else
    "/" + page.permalink
  end
end
This basically uses an if ... else block to check if the user is logged in using our same admin? helper. If the user is logged in, then the URL will use the page’s ID, otherwise it will use the page’s pretty URL.
We can now just use this helper whenever we want to link to a page, confident that the correct URL will be displayed. Also, if we want to change the naming structure for the page URLs then we only have to do it in this one place.
The only place where we currently link to pages is in the page partial, so we need to change the page.slim view to the following:
li
  a href="#{url_for page}" = page.title
Add Some Style
Just to finish off, we can make the list of buttons in the admin section appear as a horizontal list by adding the following piece of Sass code to the bottom of styles.scss:
#admin ul {
  list-style: none;
  margin: 0;
  padding: 0;
  li {
    display: inline-block;
  }
}
Now restart the server and play around with using the different functionality – check that the links are different depending on whether you are logged in or out.
That’s All Folks!
In this post, we’ve added an administration section to our content management system, albeit with possibly the least secure authentication system around.
Despite the lack of security, the ability for users to log in has allowed us to separate the different views and functionality of the application. In the next post we’ll be looking at how to cache the pages as well as adding timestamps and versioning to the pages.
Please leave any comments below, as well as any requests for how you’d like to see this CMS develop.
- Anonymous
- David
|
http://www.sitepoint.com/a-simple-cms-in-sinatra-part-iii/
|
CC-MAIN-2014-35
|
refinedweb
| 1,689
| 72.05
|
I am reading in a text file and basically, when a line has the user defined string, it prints the line and counts each time that word occurred. However, according to Notepad++, the word "the" should occur 3,781 times, but when I run the program I only get 3,404 times... and I believe it has to do with the string member function find().
/********************************************
 * File: stringSearch.cpp
 * Purpose:
 * Write a program that asks for a user to
 * enter the name of a file and a string
 * to search for. The program will then
 * display all lines the string occurs in
 * and how many times it occurs.
 **********************************************/
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

/* the = 3404 times, should be 3,781 */

int main()
{
    string infile, stringBuffer, stringBufCopy, userString;
    size_t strFound;
    int stringCount = 0;
    int stringIt = 0;

    cout << "Enter file name: ";
    getline(cin, infile);

    fstream file(infile, ios::in);
    if(!file)
    {
        while(!file)
        {
            cout << "Invalid file entry!" << endl;
            cout << "Re-enter file: ";
            getline(cin, infile);
            fstream file(infile, ios::in);
        }
    }

    cout << "Enter string to search for: ";
    getline(cin, userString);

    while(!file.eof())
    {
        getline(file, stringBuffer, '.');   // Get line from file until a '.' occurs
        stringBufCopy = stringBuffer;

        strFound = stringBuffer.find(userString);
        if(strFound != string::npos)
        {
            while(strFound != string::npos)
            {
                stringCount++;
                cout << stringBuffer << endl;
                strFound = stringBuffer.find(userString, strFound+1);   // HERE maybe?
            }
        }
    }

    cout << "The string: \"" << userString << "\" was found " << stringCount << " times!" << endl;

    file.close();
    cin.clear();
    cin.sync();
    cin.get();

    return 0;
}
Also, one more question... What does string::npos mean?
EDIT:: I know there are some redundancies and I will change and modify, but I have just been moving things around and adding and removing stuff, so for now, it's just getting it working properly, and will then take out redundancies and useless code.
This post has been edited by IngeniousHax: 27 June 2012 - 04:35 PM
|
http://www.dreamincode.net/forums/topic/284139-quick-logic-question/
|
CC-MAIN-2016-22
|
refinedweb
| 305
| 73.07
|
The Python svglib package allows us to convert an svg file to png and pdf. In this tutorial, we will show how to convert an svg to a pdf file, which you can do by following our steps.
If you want to convert svg to png, you can read this tutorial.
Best Practice to Python Convert SVG to PNG with SvgLib – Python Tutorial
Import library
from svglib.svglib import svg2rlg from reportlab.graphics import renderPDF, renderPM
Load svg file
drawing = svg2rlg("home.svg")
Convert svg to pdf file
renderPDF.drawToFile(drawing, "file.pdf")
This is really just the example from the front page of the project. What about combining many svgs into one pdf?
There is an easy way to convert many svgs to one pdf:
Step 1: convert svgs to pngs one by one.
Here is a tutorial:
Step 2: convert pngs to pdf
We can use img2pdf package.
Here is a tutorial:
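Putting those two steps together, a minimal sketch might look like the following (this assumes svglib, reportlab and img2pdf are installed; the file names are just placeholders):

from svglib.svglib import svg2rlg
from reportlab.graphics import renderPM
import img2pdf

svg_files = ["home.svg", "about.svg"]  # placeholder file names
png_files = []

# Step 1: convert each svg to a png
for svg in svg_files:
    drawing = svg2rlg(svg)
    png = svg.replace(".svg", ".png")
    renderPM.drawToFile(drawing, png, fmt="PNG")
    png_files.append(png)

# Step 2: combine the pngs into a single pdf
with open("all_pages.pdf", "wb") as f:
    f.write(img2pdf.convert(png_files))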
Meanwhile, you also can convert many svgs to many pdf files and merge these pdf files to one.
Here is a tutorial:
|
https://www.tutorialexample.com/a-simple-guide-to-python-convert-svg-to-pdf-with-svglib-python-tutorial/
|
CC-MAIN-2021-31
|
refinedweb
| 172
| 74.39
|
sources / shogun / 0.6.3-1 / src / README
- avoid whitespaces at end of lines and never use them for indentation; only
ever use tabs for indentations
- semicolons and commas ;, should be placed directly after a variable/statement
x+=1;
set_cache_size(0);
for (int i=0; i<10; i++)
...
- brackets () and (greater/lower) equal signs ><= should not contain
unnecessary spaces, e.g:
int a=1;
int b=kernel->compute();
if (a==1)
{
}
exceptions are logical subunits
if ( (a==1) && (b==1) )
{
}
- breaking long lines and strings
limit yourselves to 80 columns
for (INT vec=params->start; vec<params->end &&
!CSignal::cancel_computations(); vec++)
{
//foo
}
however exceptions are OK if readability is increased (as in function definitions)
- don't put multiple assignments on a single line
- functions look like
INT* fun(INT* foo)
{
return foo;
}
and are separated by a newline, e.g:
INT* fun1(INT* foo1)
{
return foo;
}
INT* fun2(INT* foo2)
{
return foo2;
}
- same for if () else clauses, while/for loops
if (foo)
do_stuff();
if (foo)
{
do_stuff();
do_more();
}
MACROS & IFDEFS:
- use macros sparingly
- avoid defining constants using macros (bye bye typechecking), use
const variables instead
- guard optional interfaces with ifdefs, e.g.:
#ifdef HAVE_PYTHON
...
#else //HAVE_PYTHON
...
#endif //HAVE_PYTHON
TYPES:
- types (use only these!):
CHAR (8bit char(maybe signed or unsigned))
BYTE (8bit unsigned char)
WORD (16bit unsigned short)
UINT (32bit unsigned int)
INT (32bit int)
LONG (64bit int)
DREAL (double)
LONGREAL (long double)
- if classify() returns a new CLabels then the .i file
should contain %newobject CClassifier::classify();
- features
-featureclass Simple/Sparse
-featuretype Real/Byte/...
- preprocessors
-featureclass Simple/Sparse
-featuretype Real/Byte/...
- kernel
-featureclass Simple/Sparse
-featuretype Real/Byte/...
-kerneltype Linear/Gaussian/...
VERSIONING SCHEME:
- an automagic version will be created from the date of the last svn update.
if that is not enough make releases:
e.g.: svn cp trunk releases/shogun_0.1.0
|
https://sources.debian.org/src/shogun/0.6.3-1/src/README.developer/
|
CC-MAIN-2020-34
|
refinedweb
| 294
| 53.1
|
Just before Dreamforce, Dave Carroll and I hosted the Winter ’13 developer preview webinar talking about what’s new to the platform. One of the major features that became generally available was Visualforce Charting. Visualforce Charting is an easy way to create customized business charts, based on data sets you create directly from SOQL queries or by building the data set in JavaScript. Sandeep Bhanot posted an article a while back going over how to build layered line and bar charts with a controller, and in this article I am going to go over new chart types and techniques.
Visualforce charts are rendered client-side using JavaScript. This allows charts to be animated and, on top of that, chart data can load and reload asynchronously which can make the page feel more responsive. In this article, I want to highlight a few of the new chart types that were recently released, and demonstrate some of the advanced rendering capabilities.
You can build this data set similarly to how you would build a data set for a pie chart. The main difference is that there is only one list element to build the complete chart. In the example below, I am summarizing the total amount of opportunities closed this month related to an account.
public class GaugeChartController {
    public String acctId {get;set;}

    public GaugeChartController(ApexPages.StandardController controller){
        acctId = controller.getRecord().Id;
    }

    public List<gaugeData> getData() {
        Integer TotalOpptys = 0;
        Integer TotalAmount = 0;
        Integer thisMonth = date.Today().month();
        AggregateResult ClosedWonOpptys = [select SUM(Amount) totalRevenue,
            CALENDAR_MONTH(CloseDate) theMonth, COUNT(Name) numOpps
            from Opportunity
            where AccountId =: acctId and StageName = 'Closed Won'
            and CALENDAR_MONTH(CloseDate) =: thisMonth
            GROUP BY CALENDAR_MONTH(CloseDate) LIMIT 1];
        List<gaugeData> data = new List<gaugeData>();
        data.add(new gaugeData(Integer.valueOf(ClosedWonOpptys.get('numOpps')) + ' Opptys',
            Integer.valueOf(ClosedWonOpptys.get('totalRevenue'))));
        return data;
    }

    public class gaugeData {
        public String name { get; set; }
        public Integer size { get; set; }
        public gaugeData(String name, Integer data) {
            this.name = name;
            this.size = data;
        }
    }
}
The structure for building this chart is almost identical to what we’ve seen already with the previously existing chart types. To populate the chart with data, you need to build a list with an inner class (aka wrapper class). One thing to note, your wrapper class must have a ‘name’ field if you want to display a tooltip on hover. Once your data set is constructed, you can output it on your Visualforce page with a few simple components:
<apex:page <apex:chart <apex:axis <apex:gaugeSeries </apex:chart> </apex:page>
The chart to the left is what the output looks like initially. If you try to use CSS to manipulate the width and/or height frame, you will get to watch the frame dynamically continue to cut off your text. Thankfully the frame problem can be alleviated using a little bit of JavaScript.
First give your chart a name. This makes the component recognizable as a JavaScript object for additional configurations or dynamic operations. Note this name must be unique across all chart components, and if the encompassing top-level component (<apex:page> or <apex:component>) is namespaced, the chart name will be prefixed with it (ie. MyNamespace.MyChart).
In my page above I named the chart “MyChart.” I was able to manipulate all of the axes by using a simple on() method call. The code below is the snippet that I put into my page, and the picture below that displays the new results.
<script> MyChart.on('beforeconfig', function(config) { config.axes[0].margin=-10; }); </script>
The radar chart is a unique chart to build. In order to create this data set, you will need to create a list of maps rather than using a wrapper class. For my example, I am plotting customer satisfaction ratings related to an account. Each of these ratings are stored as number fields to plot on a circle. In Winter ’13 you can now query field sets, so I have put all of the rating fields from my account into a field set (named RadarSet) to build out my chart.
One thing to note, you don’t have to use a field set to build this chart. I used a field set because I thought it would be more elegant to dynamically query for and generate chart data. You could build your own hardcoded SOQL query if you wanted, but by using field sets you can easily change the fields in the query without having to change the code, or you could also take this code and make another radar chart quickly off of a different field set.
public class RadarDemo {
    public List<Map<Object,Object>> data = new List<Map<Object,Object>>();
    public String acctId {get;set;}

    public RadarDemo(ApexPages.StandardController controller){
        acctId = controller.getRecord().Id;
    }

    public List<Schema.FieldSetMember> getFields() {
        return SObjectType.Account.FieldSets.RadarSet.getFields();
    }

    public List<Map<Object,Object>> getData() {
        String query = 'SELECT ';
        List<String> fieldNames = new List<String>();
        for(Schema.FieldSetMember f : getFields()){
            query += f.getFieldPath() + ', ';
            fieldNames.add(f.getFieldPath());
        }
        query += 'Id, Name FROM Account where Id=\'' + acctId + '\' LIMIT 1';
        SObject myFieldResults = Database.Query(query);
        Schema.DescribeSObjectResult R = myFieldResults.getSObjectType().getDescribe();
        Map<String, Schema.SObjectField> fieldMap = R.fields.getmap();

        // creates a map of labels and api names
        Map<String,String> labelNameMap = new Map<String,String>();
        for(String key : fieldMap.keySet()){
            labelNameMap.put(fieldMap.get(key).getDescribe().getName(),
                fieldMap.get(key).getDescribe().getlabel());
        }

        // creates a map of labels and values
        for(String f : fieldNames){
            String fieldLabel = labelNameMap.get(f);
            String fieldValue = String.valueOf(myFieldResults.get(f));
            Map<Object, Object> m = new Map<Object,Object>();
            m.put('field', fieldLabel);
            m.put('value', fieldValue);
            data.add(m);
        }
        return data;
    }
}
To explain the code above, I have broken down the order of operations in my class as follows:
The Visualforce page is again pretty trivial. In addition, some of my field labels were a little long and were cut off by the frame again. The JavaScript hack above didn’t do the trick, but there is a static ID associated with the frame that cuts off the chart, so I was able to apply sizing to that explicitly so that the chart would render properly.
<apex:page <style> #vfext4-ext-gen1026 { width:800px !important; } </style> <apex:chart <apex:legend <apex:axis <apex:radarSeries </apex:chart> </apex:page>
The final graph I’m going to go over is the scatter series chart. In addition to feeding data directly into the chart from a standard getter method in your Apex class, you can provide the component with the name of a JavaScript function that generates the data. The actual JavaScript function is defined in or linked from your Visualforce page and has the opportunity to manipulate the results before passing it to your chart, or to perform other user interface or page updates.
In my example I am querying for all opportunities related to a campaign and plotting them on a graph with the x-axis displaying the expected revenue value and the y-axis displaying the actual amount. By default it will display all opportunities, but I am using an actionFunction on a selectList to rerender the chart to display records of a specific lead source.
First, let’s take a look at my JavaScript function on my Visualforce page constructing the chart data.
function getRemoteData(callback) {
    ScatterChartController.getRemoteScatterData(function(result, event) {
        var sourceType = $('[id*="leadSource"]').val();
        var newResultList = new Array();
        var index = 0;
        if(event.status && result && result.constructor === Array) {
            for(i = 0; i < result.length; i++){
                if(result[i].type == sourceType ){
                    newResultList[index] = result[i];
                    index++;
                } else if (sourceType == 'All') {
                    newResultList[index] = result[i];
                    index++;
                }
            }
            callback(newResultList);
        }
    });
}
In my function I first call a remote action, a method inside my Apex class, called getRemoteScatterData. After I check that the result returned from the remote action is valid, I loop through the results to either save everything if ‘All’ is selected or only the opportunities with the selected lead source. I used a jQuery selector to grab the selected value because Visualforce appends extra characters to the id of the selectList.
By default my selectList shows everything, but you could easily add in more JavaScript functionality like adding a show/hide to only show the chart after a value is selected. There is a lot of flexibility with how you want the chart to render in here.
My remote action (displayed below) is very simple in comparison. It runs the query on all opportunities related to the campaign and then saves the result to a list using my wrapper class. The values being sent to my wrapper class are stored to variables ‘name’, ‘type’, ‘expected’, and ‘amount’ respectively. This is why in my JavaScript function I can reference result[i].type.
@RemoteAction
public static List<scatterData> getRemoteScatterData() {
    List<scatterData> data = new List<scatterData>();
    List<Opportunity> opps = [select Name, Id, Amount, ExpectedRevenue, LeadSource
        from Opportunity where CampaignId =: campId];
    for(Opportunity opp : opps){
        data.add(new scatterData(opp.Name, opp.LeadSource,
            Integer.valueOf(opp.ExpectedRevenue), Integer.valueOf(opp.Amount)));
    }
    return data;
}
Back on my Visualforce page, I have two main sections other than the JavaScript that I want to breakdown. First, we have the chart. This pulls in the ‘newResultList’ returned from the callback in my getRemoteData function. The component does some behind-the-scenes magic on the back end, so for proper syntax you only need to reference the name of the function you are calling. Map the associated attributes with the appropriate variables in your wrapper class, and add an outputPanel around the chart for advanced rendering.
<apex:outputPanel <apex:chart <apex:scatterSeries <apex:axis <apex:chartLabel /> </apex:axis> <apex:axis <apex:chartLabel /> </apex:axis> </apex:chart> </apex:outputPanel>
In order to rerender this chart, I created an actionFunction that gets called onChange of the list value. I had to do this because there is no reRender attribute on the selectList tag, but the actionFunction action is a simple method in my class (public PageReference NoOp()) that does nothing except return null. When the NoOp method finishes, the chart will rerender and call the JavaScript function again using the new value in the list to sort the chart points.
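The action method itself can be as small as the following sketch (name it whatever your actionFunction references):

// Does nothing; it exists only so the actionFunction has an action to invoke,
// after which the outputPanel wrapping the chart is rerendered.
public PageReference NoOp() {
    return null;
}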
<apex:actionFunction <apex:selectList <apex:selectOptions </apex:selectList>
Visualforce charting enables you to quickly generate rich, animated charts, without having to use a 3rd party system for the meat of it. There are definitely some kinks in the process right now being that it just went GA in Winter ’13, but I’m sure we’ll be seeing updates and enhancements with upcoming releases in the future.
I have done a few examples here, and there are more examples elsewhere on developer.force.com, but there’s no better way to learn than to try it out yourself. In addition to the examples I have put here, I have also uploaded a full sample pack on GitHub including a complete Apex controller and Visualforce page for each chart type. Take a look at those examples, and feel free to reach out to me via twitter if you have any questions.
|
https://developer.salesforce.com/blogs/developer-relations/2012/10/animated-visualforce-charts.html
|
CC-MAIN-2018-34
|
refinedweb
| 1,828
| 53.81
|
Introduction
In Part 3 of this article series we looked at the general tree data structure. A tree is a data structure that consists of nodes, where each node has some value and an arbitrary number of children nodes. Trees are common data structures because many real-world problems exhibit tree-like behavior. For example, any sort of hierarchical relationship among people, things, or objects can be modeled as a tree.
A binary tree is a special kind of tree, one that limits each node to no more than two children. A binary search tree, or BST, is a binary tree whose nodes are arranged such that for every node n, all of the nodes in n's left subtree have a value less than n, and all nodes in n's right subtree have a value greater than n. As we discussed, in the average case BSTs offer log2 n asymptotic time for inserts, deletes, and searches. (log2 n is often referred to as sublinear because it outperforms linear asymptotic times.)
The disadvantage of BSTs is that in the worst-case their asymptotic running time is reduced to linear time. This happens if the items inserted into the BST are inserted in order or in near-order. In such a case, a BST performs no better than an array. As we discussed at the end of Part 3, there exist self-balancing binary search trees, ones that ensure that, regardless of the order of the data inserted, the tree maintains a log2 n running time. In this article, we'll briefly discuss two self-balancing binary search trees: AVL trees and red-black trees. Following that, we'll take an in-depth look at skip lists. Skip lists are a really neat data structure that is far simpler to implement than self-balancing trees, yet still provides sublinear running times. Recall from Part 3 that inserting into a BST is based on the comparison of the value of the current node and the node being inserted, until the path reaches a dead end. At this point, the newly inserted node is plugged into the tree at this reached dead end. Figure 1 illustrates the process of inserting a new node into a BST.
Figure 1. Inserting a new node into a BST
As Figure 1 shows, when making the comparison at the current node, the node to be inserted travels down the left path if its value is less than the current node, and down the right if its value is greater than the current's. Therefore, the structure of the BST is relative to the order with which the nodes are inserted. Figure 2 depicts a BST after nodes with values 20, 50, 90, 150, 175, and 200 have been added. Specifically, these nodes have been added in ascending order. The result is a BST with no breadth. That is, its topology consists of a single line of nodes rather than having the nodes fanned out.
Figure 2. A BST after nodes with values of 20, 50, 90, 150, 175, and 200 have been added
BSTs—which offer sublinear running time for insertions, deletions, and searches—perform optimally when their nodes are arranged in a fanned out manner. This is because when searching for a node in a BST, each single step down the tree reduces the number of nodes that need to be potentially checked by one half. However, when a BST has a topology similar to the one in Figure 2, the running time for the BST's operations are much closer to linear time because each step down the tree only reduces the number of nodes that need to be searched by one. To see why, consider what must happen when searching for a particular value, such as 175. Starting at the root, 20, we must navigate down through each right child until we hit 175. That is, there is no savings in nodes that need to be checked at each step. Searching a BST like the one in Figure 2 is identical to searching an array—each element must be checked one at a time. Therefore, such a structured BST will exhibit a linear search time.
It is important to realize that the running time of a BST's operations is related to the BST's height. The height of a tree is defined as the length of the longest path starting at the root. The height of a tree can be defined recursively as follows:
- The height of a node with no children is 0.
- The height of a node with one child is the height of that child plus one.
- The height of a node with two children is one plus the greater height of the two children.
To compute the height of a tree, start at its leaf nodes and assign them a height of 0. Then move up the tree using the three rules outlined to compute the height of each leaf node's parent. Continue in this manner until every node of the tree has been labeled. The height of the tree, then, is the height of the root node. Figure 3 shows a number of binary trees with their height computed at each node. For practice, take a second to compute the heights of the trees yourself to make sure your numbers match up with the numbers presented in the figure below.
Figure 3. Example binary trees with their height computed at each node
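Expressed in code, those three rules collapse into one short recursive method. Here is a sketch in C# (the node type is a bare-bones stand-in, not the tree classes from Part 3):

using System;

class BinaryTreeNode
{
    public BinaryTreeNode Left;     // null when there is no left child
    public BinaryTreeNode Right;    // null when there is no right child
}

static class TreeMath
{
    // Treating a missing child as height -1 makes a leaf come out to height 0,
    // and an internal node to one plus the greater height of its children.
    public static int Height(BinaryTreeNode node)
    {
        if (node == null)
            return -1;

        return 1 + Math.Max(Height(node.Left), Height(node.Right));
    }
}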
A BST exhibits log2 n running times when its height, when defined in terms of the number of nodes, n, in the tree, is near the floor of log2 n. (The floor of a number x is the greatest integer less than x. So, the floor of 5.38 would be 5 and the floor of 3.14159 would be 3. For positive numbers x, the floor of x can be found by simply truncating the decimal part of x, if any.) Of the three trees in Figure 3, tree (b) has the best height to number of nodes ratio, as the height is 3 and the number of nodes present in the tree is 8. As we discussed in Part 1 of this article series, log_a b = y is another way of writing a^y = b. log2 8, then, equals 3, because 2^3 = 8. Tree (a) has 10 nodes and a height of 4. log2 10 equals 3.3219 and change, the floor of that being 3. So, 4 is not the ideal height. Notice that by rearranging the topology of tree (a)—by moving the far-bottom right node to the child of one of the non-leaf nodes with only one child—we could reduce the tree's height by one, thereby giving the tree an optimal height to node ratio. Finally, tree (c) has the worst height to node ratio. With its 5 nodes it could have an optimal height of 2, but due to its linear topology it has a height of 4.
The challenge we are faced with, then, is ensuring that the topology of the resulting BST exhibits an optimal ratio of height to the number of nodes. Because the topology of a BST is based upon the order in which the nodes are inserted, intuitively you might opt to solve this problem by ensuring that the data that's added to a BST is not added in near-sorted order. While this is possible if you know the data that will be added to the BST beforehand, it might not be practical. If you are not aware of the data that will be added—like if it's added based on user input, or is added as it's read from a sensor—then there is no hope of guaranteeing the data is not inserted in near-sorted order. The solution, then, is not to try to dictate the order with which the data is inserted, but to ensure that after each insertion the BST remains balanced. Data structures that are designed to maintain balance are referred to as self-balancing binary search trees.
A balanced tree is a tree that maintains some predefined ratio between its height and breadth. Different data structures define their own ratios for balance, but all have it close to log2 n. A self-balancing BST, then, exhibits log2 n asymptotic running time. There are numerous self-balancing BST data structures in existence, such as AVL trees, red-black trees, 2-3 trees, 2-3-4 trees, splay trees, B-trees, and others. In the next two sections, we'll take a brief look at two of these self-balancing trees—AVL trees and red-black trees.
Examining AVL Trees
In 1962 Russian mathematicians G. M. Andel'son-Vel-skii and E. M. Landis invented the first self-balancing BST, called an AVL tree. AVL trees must maintain the following balance property—for every node n, the height of n's left and right subtrees can differ by at most 1. The height of a node's left or right subtree is the height computed for its left or right node using the technique discussed in the previous section. If a node has only one child, then the height of childless subtree is defined to be -1.
Figure 4 shows, conceptually, the height-relationship each node in an AVL tree must maintain. Figure 5 provides three examples of BSTs. The numbers in the nodes represent the nodes' values; the numbers to the right and left of each node represent the height of the nodes' left and right subtrees. In Figure 5, trees (a) and (b) are valid AVL trees, but trees (c) and (d) are not, since not all nodes adhere to the AVL balance property.
Figure 4. The height of left and right subtrees in an AVL tree cannot differ by more than one.
Figure 5. Example trees, where (a) and (b) are valid AVL trees, but (c) and d are not.
Note Realize that AVL trees are binary search trees, so in addition to maintaining a balance property, an AVL tree must also maintain the binary search tree property.
When creating an AVL tree data structure, the challenge is to ensure that the AVL balance remains regardless of the operations performed on the tree. That is, as nodes are added or deleted, it is vital that the balance property remains. AVL trees maintain the balance through rotations. A rotation slightly reshapes the tree's topology such that the AVL balance property is restored and, just as importantly, the binary search tree property is maintained.
Inserting a new node into an AVL tree is a two-stage process. First, the node is inserted into the tree using the same algorithm for adding a new node to a BST. That is, the new node is added as a leaf node in the appropriate location to maintain the BST property. After adding a new node, it might be the case that adding this new node caused the AVL balance property to be violated at some node along the path traveled down from the root to where the newly inserted node was added. To fix any violations, stage two involves traversing back up the access path, checking the height of the left and right subtrees for each node along this return path. If the heights of the subtrees differs by more than 1, a rotation is performed to fix the anomaly.
Figure 6 illustrates the steps for a rotation on node 3. Notice that after stage 1 of the insertion routine, the AVL tree property was violated at node 5, because node 5's left subtree's height was two greater than its right subtree's height. To remedy this, a rotation was performed on node 3, the root of node 5's left subtree. This rotation fixed the balance inconsistency and also maintained the BST property.
Figure 6. AVL trees stay balanced through rotations
In addition to the simple, single rotation shown in Figure 6, there are more involved rotations than are sometimes required. A thorough discussion of the set of rotations potentially needed by an AVL tree is beyond the scope of this article. What is important to realize is that both insertions and deletions can disturb the balance property to which that AVL trees must adhere. To fix any perturbations, rotations are used.
Note To familiarize yourself with insertions, deletions, and rotations from an AVL tree, check out the AVL tree applet at. This Java applet illustrates how the topology of an AVL tree changes with additions and deletions.
By ensuring that all nodes' subtrees' heights differ by at most 1, AVL trees guarantee a logarithmic running time for their operations. Red-black trees take a different approach to self-balancing: every node is assigned a color, red or black. Red-black trees are complicated further by the concept of a specialized class of node referred to as NIL nodes. NIL nodes are pseudo-nodes that exist as the leaves of the red-black tree. That is, all regular nodes—those with some data associated with them—are internal nodes. Rather than having a NULL pointer for a childless regular node, the node is assumed to have a NIL node in place of that NULL value. This concept can be understandably confusing. Hopefully the diagram in Figure 7 will clear up any confusion.
Figure 7. Red-black trees add the concept of a NIL node.
Red-black trees are trees that have the following four properties:
- Every node is colored either red or black.
- Every NIL node is black.
- If a node is red, then both of its children are black.
- Every path from a node to a descendant leaf contains the same number of black nodes.
The first three properties are pretty self-explanatory. The fourth property, which is the most important of the four, simply states that starting from any node in the tree, the number of black nodes from that node to any leaf (NIL), must be the same. In Figure 7, take the root node as an example. Starting from 41 and going to any NIL, you will encounter the same number of black nodes—3. For example, taking a path from 41 to the left-most NIL node, we start on 41, a black node. We then travel down to node 9, then node 2, which is also black, then node 1, and finally the left-most NIL node. In this journey we encountered three black nodes—41, 2, and the final NIL node. In fact, if we travel from 41 to any NIL node, we'll always encounter precisely three black nodes.
Like the AVL tree, red-black trees are another form of self-balancing binary search tree. Whereas the balance property of an AVL tree was explicitly stated as a relationship between the heights of each node's left and right subtrees, red-black trees guarantee their balance in a more conspicuous manner. It can be shown that a tree that implements the four red-black tree properties has a height that is always less than 2 * log2(n+1), where n is the total number of nodes in the tree. For this reason, red-black trees ensure that all operations can be performed within an asymptotic running time of log2 n.
Like AVL trees, any time a red-black tree has nodes inserted or deleted, it is important to verify that the red-black tree properties have not been violated. With AVL trees, the balance property was restored through rotations; with red-black trees, the properties are restored through recolorings and rotations.
Recall that with a binary tree, each node in the tree contains some bit of data and a reference to its left and right children. A linked list can be thought of as a unary tree. That is, each element in a linked list has some data associated with it, and a single reference to its neighbor. As Figure 8 illustrates, each element in a linked list forms a link in the chain. Each link is tied to its neighboring node, the node on its right.
Figure 8. A four-element linked list
When we created a binary tree data structure in Part 3, the binary tree data structure only needed to contain a reference to the root of the tree. The root itself contained references to its children, and those children contained references to their children, and so on. Similarly, with the linked list data structure, when implementing a structure we only need to keep a reference to the head of the list because each element in the list maintains a reference to the next item in the list.
Linked lists have the same linear running time for searches as arrays. That is, to find if the element Sam is in the linked list in Figure 8, we have to start at the head and check each element one by one. There are no shortcuts as with binary trees or hashtables. Similarly, deleting from a linked list takes linear time because the linked list must first be searched for the item to be deleted. Once the item is found, removing it from the linked list involves reassigning the deleted item's left neighbor's neighbor reference to the deleted item's neighbor. Figure 9 illustrates the pointer reassignment that must occur when deleting an item from a linked list.
Figure 9. Deleting an element from a linked list
The asymptotic time required to insert a new element into a linked list depends on whether or not the linked list is a sorted list. If the list's elements need not be sorted, insertion can occur in constant time because we can add the element to the front of the list. This involves creating a new element, having its neighbor reference point to the current linked list head, and, finally, reassigning the linked list's head to the newly inserted element.
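A sketch of that constant-time insertion, using a stripped-down element type rather than the article's actual classes:

class ListElement
{
    public string Value;
    public ListElement Neighbor;    // the element to this one's right
}

static class LinkedListOps
{
    // Adds value to the front of an (unsorted) linked list and returns the new head.
    public static ListElement AddToFront(ListElement head, string value)
    {
        ListElement element = new ListElement();
        element.Value = value;
        element.Neighbor = head;    // point at the old head
        return element;             // the new element is now the head of the list
    }
}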
If the linked list elements need to be maintained in sorted order, then when adding a new element the first step is to locate where in the list it belongs. This is accomplished by exhaustively iterating from the beginning of the list until the spot where the new element belongs is reached. Let e be the element immediately before the location where the new element will be added. To insert the new element, e's neighbor reference must now point to the newly inserted element, and the new element's neighbor reference needs to be assigned to e's old neighbor. Figure 10 illustrates this concept graphically.
Figure 10. Inserting elements into a sorted linked list
Notice that linked lists do not provide direct access, like an array. That is, if you want to access the ith element of a linked list, you have to start at the front of the list and walk through i links. With an array, though, you can jump straight to the ith element. Given this, along with the fact that linked lists do not offer better search running times than arrays, you might wonder why anyone would want to use a linked list.
The primary benefit of linked lists is that adding or removing items does not involve the messy and time-consuming shifting and resizing that arrays require. Furthermore, linked lists are fairly simple to implement. The main challenge comes with the threading or rethreading of the neighbor links with insertions or deletions, but the complexity of adding or removing an element from a linked list pales in comparison to the complexity of balancing an AVL or red-black tree.
Skip Lists: A Linked List with Self-Balancing BST-Like Properties
Back in 1989 William Pugh, a computer science professor at the University of Maryland, was looking at sorted linked lists one day thinking about their running time. Clearly a sorted linked list takes linear time to search. Pugh considered what would happen if, rather than every element having a single reference to its neighbor, some elements were made taller and given extra references that skip further ahead in the list, as in Figure 11. The improvement in search time is due in part to the fact that the elements are sorted, as well as the varying heights. To search for, say, Dave, we'd start at the head element, which is a dummy element whose height is the same height as the maximum element height in the list. The head element does not contain any data, it merely serves as a place to start searching.
We start at the highest link because it lets us skip over lower elements. We begin by following the head element's top link to Bob. At this point we can ask ourselves, does Bob come before or after Dave? If Bob comes before Dave, then we know Dave, if he's in the list, must exist somewhere to the right of Bob. If Bob comes after Dave, then Dave, if he's in the list, must exist somewhere between our current position and Bob. In this case, Dave comes after Bob alphabetically, so we can repeat our search again from the Bob element. Notice that by moving onto Bob, we are skipping over Alice. At Bob, we repeat the search at the same level. Following the top-most pointer we reach Dave, the element we were looking for. Had we instead been searching for Cal, reaching Dave would tell us—since Dave comes after Cal—that Cal must exist somewhere between Bob and Dave; we would then move down to the next lower reference level and continue our comparison.
The efficiency of such a linked list arises because we are able to move two elements over every time instead of just one. This makes the running time on the order of n/2, which, while better than a regular sorted linked list, is still an asymptotically linear running time. Realizing this, Pugh wondered what would happen if rather than limiting the height of an element to 2, it was instead allowed to go up to log2 n for n elements. That is, if there were 8 elements in the linked list, there would be elements with height up to 3; if there were 16 elements, there would be elements with height up to 4. As Figure 12 shows, by intelligently choosing the heights of each of the elements, the search time improves dramatically: the second element, Bob, has a reference to a node 2^1 elements ahead—Dave. Dave, the 2^2 element, has a reference 2^2 elements ahead—Frank. Had there been more elements, Frank—the 2^3 element—would have a reference to the element 2^3 elements ahead.
The disadvantage of the approach illustrated in Figure 12 is that adding new elements or removing existing ones can wreak havoc on the precise structure. That is, if Dave is deleted, now Ed becomes the 2^2 element, and Gil the 2^3 element, and so on. This means all of the elements to the right of the deleted element will need to have their height and references readjusted. The same problem crops up with inserts. This redistribution of heights and references would not only complicate the code for this data structure, but would also reduce the insertion and deletion running times to linear.
Pugh noticed that this pattern—roughly 50% of the elements at height 1, 25% at height 2, and so on—could just as well be produced by choosing each element's height randomly. What Pugh discovered was that such a randomized linked list was not only very easy to create in code, but that it also exhibited log2 n running time for insertions, deletions, and lookups. Pugh named his randomized lists skip lists.
A skip list, like a binary tree, is made up of a collection of elements. Each element in a skip list has some data associated with it, a height, and a collection of element references. For example, in Figure 12 the Bob element has the data Bob, a height of 2, and two element references: one to Dave and one to Cal. Before creating a skip list class, we first need a class that represents an element in the skip list; the SkipList class is then built on top of it. The SkipList class, as we'll see, contains a single reference to the head element. It also provides methods for searching the list, enumerating through the list's elements, adding elements to the list, and removing elements from the list.
Note For a graphical view of skip lists in action, be sure to check out the skip list applet at. You can add and remove items from a skip list and visually see how the structure and height of the skip list is altered with each operation.
Creating the SkipList Class
The SkipList class provides an abstraction of a skip list. It contains public methods like:
- Add(value): adds a new item to the skip list.
- Remove(value): removes an existing item from the skip list.
- Contains(value): returns true if the item exists in the skip list, false otherwise.
And public properties such as:
- Height: the height of the tallest element in the skip list.
- Count: the total number of elements in the skip list.
The skeletal structure of the class is shown below. Over the next several sections we'll examine the skip list's operations and fill in the code for its methods.
public class SkipList<T> : IEnumerable<T>
   where T : IComparable<T>
{
   private SkipListNode<T> _head;
   private int _count;
   private Random _rndNum;

   public int Height { get { ... } }
   public int Count { get { ... } }

   public SkipList() { ... }
   public SkipList(int randomSeed) { ... }

   public bool Contains(T value) { ... }
   public void Add(T value) { ... }
   public void Remove(T value) { ... }

   ...
}
We'll fill in the code for the methods in a bit, but for now pay close attention to the class's private member variables, public properties, and constructors. There are three relevant private member variables:
_head, which is the list's head element. Remember that a skip list has a dummy head element (refer back to Figures 11 and 12 for a graphical depiction of the head element).
_count, an integer value keeping track of how many elements are in the skip list.
_rndNum, an instance of the Random class, which is used to randomly determine the heights of newly added elements.
Because the height of the head element is always equal to the height of the tallest skip list element, the Height property can simply return the head element's Height property. The Count property simply returns the current value of the private member variable _count. (_count, as we'll see, is incremented in the Add() method and decremented in the Remove() method.)
Notice that there are two constructors—one that uses a default seed and one that accepts a specific seed value. Computers cannot really pick random numbers, but instead use a function to generate the random numbers. The random number generating function works by starting with some value, called the seed. Based on the seed, a sequence of random numbers are computed. Slight changes in the seed value lead to seemingly random changes in the series of numbers returned.
If you use the
Random class's default constructor, the system clock is used to generate a seed. You can optionally specify a specific seed value, however. The benefit of specifying a seed is that if you use the same seed value, you'll get the same sequence of random numbers. Being able to get the same results is beneficial when testing the correctness and efficiency of a randomized algorithm like the skip list.
Searching a Skip List
The algorithm for searching a skip list for a particular value is fairly straightforward. Non-formally, the search process can be described as follows: we start with the head element's top-most reference. Let e be the element referenced by the head's top-most reference. We check to see if the e's value is less than, greater than, or equal to the value for which we are searching. If it equals the value, then we have found the item we're looking for. If it's greater than the value we're looking for then if the value exists in the list, it must be to the left of e, meaning it must have a lesser height than e. Therefore, we move down to the second level head node reference and repeat this process.
If, on the other hand, the value of e is less than the value we're looking for then the value, if it exists in the list, must be on the right hand side of e. Therefore, we repeat these steps for the top-most reference of e. This process continues until we either find the value we are looking for or run out of references to follow, in which case the value does not exist in the list.
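Here is a simplified sketch of that search in C#, using a pared-down node type (the article's SkipListNode class has more to it, so treat this as illustrative only):

using System.Collections.Generic;

class Node
{
    public int Value;
    public List<Node> Next;                   // Next[i] is the neighbor at reference level i

    public Node(int value, int height)
    {
        Value = value;
        Next = new List<Node>(new Node[height]);   // height null references
    }

    public int Height { get { return Next.Count; } }
}

static class SkipListSearch
{
    // Returns true if value exists in the skip list whose dummy head element is head.
    public static bool Contains(Node head, int value)
    {
        Node current = head;

        // start at the head's top-most reference, moving right and down
        for (int level = head.Height - 1; level >= 0; level--)
        {
            while (current.Next[level] != null && current.Next[level].Value < value)
                current = current.Next[level];                 // move right at this level

            if (current.Next[level] != null && current.Next[level].Value == value)
                return true;                                   // found it

            // otherwise drop down one reference level and keep going
        }

        return false;                                          // ran out of references
    }
}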
Take a moment to trace the algorithm over the skip list shown in Figure 13. The red arrows show the path of checks when searching the skip lists. Skip list (a) shows the results when searching for Ed; skip list (b) shows the results when searching for Cal; skip list (c) shows the results when searching for Gus, which does not exist in the skip list. Notice that throughout the algorithm we are moving in a right, downward direction. The algorithm never moves to a node to the left of the current node, and never moves to a higher reference level.
Figure 13. Searching over a skip list.
The code for the Contains(value) method follows this algorithm directly. Inserting a new element into a skip list is, at its core, similar to inserting a new element into a simple linked list, except that several reference levels may need to be rethreaded. Figure 14 shows a diagram of a skip list and the threading process that needs to be done to add the element Gus. For this example, imagine that the randomly determined height for the Gus element was 3. To successfully thread in the Gus element, we'd need to update Frank's level 3 and 2 references, as well as Gil's level 1 reference. Gus's level 1 reference would point to Hank. If there were additional nodes to the right of Hank, Gus's level 2 reference would point to the first element to the right of Hank with height 2 or greater, while Gus's level 3 reference would point to the first element right of Hank with height 3 or greater.
Figure 14. Inserting elements into a skip list
In order to properly rethread the skip list after inserting the new element, we need to keep track of the last element encountered for each height. In Figure 14, Frank was the last element encountered for references at levels 4, 3, and 2, while Gil was the last element encountered for reference level 1. In the insert algorithm below, this record of last elements for each level is maintained by the
updates array.

SkipListNode<T> n = new SkipListNode<T>(value, ChooseRandomHeight(_head.Height + 1));
_count++;   // increment the count of elements in the skip list

// if the node's level is greater than the head's level, increase the head's level
if (n.Height > _head.Height)
{
   _head.IncrementHeight();
   _head[_head.Height - 1] = n;
}

// splice the new node into the list
...

In Add(), a check is done to make sure that the data being entered is not a duplicate. I chose to implement my skip list such that duplicates are not allowed; however, skip lists can handle duplicate values just fine. If you want to allow for duplicates, simply remove this check.
The ChooseRandomHeight() method uses a simple technique to compute heights so that the distribution of values matches Pugh's initial vision. This distribution can be achieved by flipping a coin and setting the height to one greater than however many heads in a row were achieved. That is, if upon the first flip you get a tails, then the height of the new element will be one. If you get one heads and then a tails, the height will be 2. Two heads followed by a tails indicates a height of three, and so on. The catch with unbounded coin flipping is that there is always some chance of getting a node over a height of, say, 5 even in a short list, so we'd likely have many wasted levels.
Pugh suggests a couple of solutions to this problem. One is to simply ignore it. Having superfluous levels doesn't require any change in the code of the data structure, nor does it affect the asymptotic running time. Another solution proposed by Pugh calls for using "fixed dice" when choosing the random level, which is the approach I chose to use for the SkipList class. With the "fixed dice" approach, you restrict the height of the new element to be at most one greater than the tallest element currently in the skip list. ChooseRandomHeight() implements this "fixed dice" approach: a maxLevel input parameter is passed in, and the while loop exits prematurely if the level reaches this maximum.
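The original listing is not reproduced here, but a "fixed dice" height chooser along the lines described can be sketched as follows (member names mirror the ones used elsewhere in this article; the actual implementation may differ):

protected virtual int ChooseRandomHeight(int maxLevel)
{
    int level = 1;

    // keep "flipping heads" (a 50% chance each time) until we get a tails
    // or hit the cap imposed by maxLevel
    while (_rndNum.NextDouble() < 0.5 && level < maxLevel)
        level++;

    return level;
}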
In the Add() method, ChooseRandomHeight() is called with the head element's height plus one as this maximum. Pugh also examines choosing heights with probabilities other than one half; doing so can reduce the average number of references per element, but increase the likelihood of the search taking substantially longer than expected. For more details, be sure to read Pugh's paper, which is mentioned in the References section at the end of this article.
Deleting an Element from a Skip List
Like adding an element to a skip list, removing an element involves a two-step process: first the element to be deleted is located (building up the same updates array of last-visited elements per level), and then the references around it are rethreaded and the list height is trimmed if necessary:

for (int i = 0; i < _head.Height; i++)
{
   if (updates[i][i] != current)
      break;
   else
      updates[i][i] = current[i];
}

// finally, see if we need to trim the height of the list
if (_head[_head.Height - 1] == null)
   // we removed the single, tallest item... reduce the list height
   _head.DecrementHeight();

Analyzing the Skip List's Running Time
In "Skip Lists: A Probabilistic Alternative to Balanced Trees," Pugh provides a quick proof showing that the skip list's search, insertion, and deletion running times are asymptotically bounded by log2 n in the average case. However, a skip list can exhibit linear time in the worst case, but the likelihood of the worst case occurring is vanishingly small: for it to happen, essentially all of the elements would need to have height 1 chosen for their randomly selected height. Such a skip list would be, essentially, a normal linked list, not unlike the one shown in Figure 8. As we discussed earlier, the running time for operations on a normal linked list is linear.
While such worst-case scenarios are possible, realize that they are highly improbable. To put things in perspective, the likelihood of having a skip list with 100 height 1 elements is the same likelihood of flipping a coin 100 times and having it come up tails all 100 times. The chances of this happening are precisely 1 in 1,267,650,600,228,229,401,496,703,205,376. Of course with more elements, the probability goes down even further. For more information be sure to read about Pugh's probabilistic analysis of skip lists in his paper.
Examining Some Empirical Results
Included in the article's download is the
SkipList class along with a testing Windows Forms application. With this testing application, you can manually add, remove, and inspect the list, and can see the nodes of the list displayed. Also, this testing application includes a "stress tester," where you can indicate how many operations to perform and an optional random seed value. The stress tester then creates a skip list, adds at least half as many elements as operations requested, and then, with the remaining operations, does a mix of inserts, deletes, and queries. At the end you can see review a log of the operations performed and their result, along with the skip list height, the number of comparisons needed for the operation, and the number of elements in the list.
The graph in Figure 16 shows the average number of comparisons per operation for increasing skip list sizes. Note that as the skip list doubles in size, the average number of comparisons needed per operation only increases by a small amount (one or two more comparisons). To fully understand the utility of logarithmic growth, consider how the time for searching an array would fare on this graph. For a 256 element array, on average 128 comparisons would be needed to find an element. For a 512 element array, on average 256 comparisons would be needed. Compare that to the skip list, which for skip lists with 256 and 512 elements require only 9 and 10 comparisons on average.
Figure 16. Viewing the logarithmic growth of comparisons required for an increasing number of skip list elements.
Conclusion
In Part 3 of this article series we looked at binary trees and binary search trees. BSTs provide an efficient log2 n running time in the average case. However, the running time is sensitive to the topology of the tree, and a tree with a suboptimal ratio of breadth to height can reduce the running time of a BST's operations to linear time.
To remedy this worst-case running time of BSTs, which could happen quite easily since the topology of a BST is directly dependent on the order with which items are added, computer scientists have been inventing a myriad of self-balancing BSTs, starting with the AVL tree created in the 1960s. While data structures such as the AVL tree, the red-black tree, and numerous other specialized BSTs offer log2 n running time in both the average and worst case, they require especially complex code that can be difficult to correctly create.
An alternative data structure that offers the same asymptotic running time as a self-balanced BST is William Pugh's skip list. The skip list is a specialized, sorted linked list whose elements are given randomly chosen heights; this randomization yields an expected log2 n running time for searches, insertions, and deletions while keeping the implementation far simpler than that of a self-balancing tree.
|
https://msdn.microsoft.com/en-us/library/ms379573(VS.80).aspx
|
CC-MAIN-2018-43
|
refinedweb
| 6,014
| 68.3
|
Re: Strings...immutable?
From:
Joshua Cranmer <Pidgeot18@epenguin.zzn.com>
Newsgroups:
comp.lang.java.programmer
Date:
Sun, 18 Mar 2007 18:02:28 GMT
Message-ID:
<U8fLh.7226$el3.6471@trndny01>
jupiter wrote:
Heap and stack are two different memory storage spaces. One is
for Object and one is for references to Object.
It helped me to "see" in my mind that a String is put on the heap
as an Object, while references remain only on the stack pointing to
the Object address.
So, s starts out being a reference and remains a reference. When
it points to "hello" it points to the "hello" Object on the heap.
When s points to "hellogoodbye" it's pointing to a new heap Object.
So they are different objects, and nothing has been mutated.
I think that's right. Is that right? I think somebody will
correct me if not.
The memory management in the JVM is much more complicated than that (I don't know much about it, so anyone correct me if I get something wrong).
There is the basic stack frame (per thread?), which stores all of the
local variables, Object references, etc. Then there is the heap space,
which stores the actual objects. There is also a separate space for the
class references and internal String references -- String's interns are
NOT stored in the heap.
So, the "hello" object has a pointer on the stack frame, a reference to
the String variable in the heap, a reference to the class AND the
interned String in the other memory space.
As two asides:
1. a java.lang.OutOfMemoryError: heap space cannot be due to having too
many String interns, it can only be due to using too much data.
2. This should return true, using the Sun JVM:
public class Foo {
final static String bar1 = "Hello";
final static String bar2 = "Hello";
public static boolean equal() {
return bar1 == bar2;
}
}
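To illustrate the intern point: string literals are interned when the class is loaded, while strings constructed at runtime are not unless intern() is called explicitly. A small, hypothetical test:

public class InternDemo {
    public static void main(String[] args) {
        String literal = "Hello";
        String constructed = new String("Hello");      // a distinct object on the heap

        System.out.println(literal == constructed);            // false: different objects
        System.out.println(literal.equals(constructed));       // true: same characters
        System.out.println(literal == constructed.intern());   // true: intern() returns the pooled "Hello"
    }
}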
|
http://preciseinfo.org/Convert/Articles_Java/JVM_Code/Java-JVM-Code-070318200228.html
|
CC-MAIN-2021-49
|
refinedweb
| 365
| 73.78
|
- Getting Started With React
Friday, June 24, 2016 by martijn broeders
What You'll Be Creating
What Can React Do?
- Build lightning fast, responsive isomorphic web apps, agnostic of frameworks. React makes no assumptions about the technology stack it resides in.
- Virtual DOM manipulation provides you with a simple programming model which can be rendered in the browser, on the server or the desktop with React Native.
- Data flow bindings with React are designed as a one-way reactive data flow. This reduces boilerplate requirements and is easier to work with than traditional methods.
Hello World.
Installing React.
Using the Facebook CDN
For the fastest way to get going, just include the React and React Dom libraries from the fb.me CDN as follows:
<!-- The core React library --> <script src=""></script> <!-- The ReactDOM Library --> <script src=""></script>
Installation From NPM
The React manual recommends using React with a CommonJS module system like browserify or webpack.
The React manual also recommends using the react and react-dom npm packages. To install these on your system, run the following at the bash terminal prompt inside your project directory, or create a new directory and cd to it first.
$ npm install --save react react-dom
$ browserify -t babelify main.js -o bundle.js
You will now be able to see the React installation inside the node_modules directory.
Installation From Git Source
Dependencies
You need to have Node v4.0.0+ and npm v2.0.0+. You can check your node version with node --version and your npm version with npm --version.
Updating Node via NVM
I recommend using the nvm - node version manager to update and select your node version. It’s easy to acquire nvm by simply running:
curl -o- | bash
This script clones the nvm repository to ~/.nvm and adds the source line to your profile (~/.bash_profile, ~/.zshrc or ~/.profile).
If you wish to manually install nvm you can do so via git with:
git clone ~/.nvm && cd ~/.nvm && git checkout `git describe --abbrev=0 --tags`
To activate nvm with this method, you need to source it from shell with:
. ~/.nvm/nvm.sh
Note: Add this line to your ~/.bashrc, ~/.profile, or ~/.zshrc file respectively to have it automatically sourced upon login.
Using NVM
With nvm now installed, we can get any version of node we require, and can check the list of installed versions with nvm list.
Build React From Git Source
Clone the repository with git into a directory named react on your system with:
git clone
Once you have the repo cloned, you can now use
grunt to build React, and then open the examples directory to see some basic examples working!
Using the Starter Kit
Download the React starter kit from the React website and unzip it; the steps below assume you are working from the unzipped starter kit directory.
Using a Separate JavaScript File
Create a new file at
src/helloworld.js and place the following code inside it:
ReactDOM.render( <h1>Hello, world!</h1>, document.getElementById('example') );
Now all you need to do is reference it in your HTML, so open up the
helloworld.html and load the script you just created using a script tag with a text/babel type attribute, as so:
<script type="text/babel" src="src/helloworld.js"></script>
Refresh the page and you will see the
helloworld.js being rendered by babel.
Note: Some browsers (for example Chrome) will fail to load the file unless it’s served via HTTP, so ensure you’re using a local server. I recommend the browsersync project.
Offline Transformation
You can also use the command-line interface (CLI) to transform your JSX via using the babel command-line tools. This is easily acquired via the npm command:
$ sudo npm install --global babel
The
--global or -g flag for short will install the babel package globally so that it is available everywhere. This is a very good practice when using Node for multiple projects and command-line tools.
Now that babel is installed, let’s do the translation of the
helloworld.js we just created in the step before. At the command prompt, from the root directory where you unzipped the starter kit, run:
$ babel src --watch --out-dir build
Now the file
build/helloworld.js contains plain JavaScript, so you can load it with a regular script tag instead of the text/babel type attribute. Or you can go a step further and utilize webpack and browsersync to fully automate your development workflow. To do that in the easiest way possible, we can automate the setup of a new project with a Yeoman generator.
Installing With Yeoman Scaffolding
Yeoman can scaffold a complete React project for us via the react-fullstack generator.
Usage
To use yeoman, first install it, and if you do not have yeoman’s required counterparts
gulp,
bower and
grunt-cli, install them as so:
$ sudo npm install -g yo bower grunt-cli gulp
Now install the React scaffolding with:
$ sudo npm install -g generator-react-fullstack
Now create a directory for your project and
cd to it:
$ mkdir react-project $ cd react-project
Finally use the
yo command with the react-fullstack generator to scaffold the project.
Converting Your Existing Site to JSX
Facebook provides an online tool if you need to just convert a snippet of your HTML into JSX.
For larger requirements there is a tool on
npm for that named
htmltojsx. Download it with:
npm install htmltojsx
Using it via the command line is as simple as:
$ htmltojsx -c MyComponent existing_code.htm
Because
htmltojsx is a node module, you can also use it directly in code, for example:
var HTMLtoJSX = require('htmltojsx'); var converter = new HTMLtoJSX({ createClass: true, outputClassName: 'HelloWorld' }); var output = converter.convert('<div>Hello world!</div>');
List Example
For this example we will build a list of users. Inside the components directory, create a new UserList directory, and in that directory create a
package.json file with the following:
{ "name": "UserList", "version": "0.0.0", "private": true, "main": "./UserList.js" }
Also, still inside the
UserList directory, create the UserList.js file.
Summary
We have used the Yeoman scaffolding generator
react-fullstack to start a React web app based on the starter kit. For a further explanation of the file and directory layout, check out the readme in the react starter kit git repo.
From here we edited the
index.jade file so it was nulled out, and began creating our own display view, making a new component named
UserList.
Inside
components/UserList/UserList.js we defined the component that renders the list of users it receives via its data attribute.
To display the list we include it inside the
ContentPage.js file with
import UserList from '.../UserList'; and define some test data with:
var listData = [ {first:'Peter',last:'Tosh'}, {first:'Robert',last:'Marley'}, {first:'Bunny',last:'Wailer'}, ];
Inside
ContentPage.js we call the UserList component with the JSX
<UserList data={listData} />.
Now the
UserList component can access the data attribute via
this.props.data.
Any time we pass a value with an attribute of a component, it can be accessed via
this.props. You can also define the type of data that must be provided by using the
propTypes static variable within its class.
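Putting those pieces together, a minimal UserList.js could look roughly like the following (this is an illustrative sketch using the React 15-era PropTypes, not the starter kit's actual code):
import React, { Component, PropTypes } from 'react';

class UserList extends Component {
  render() {
    // this.props.data is the array passed in as <UserList data={listData} />
    return (
      <ul>
        {this.props.data.map((user, i) =>
          <li key={i}>{user.first} {user.last}</li>
        )}
      </ul>
    );
  }
}

UserList.propTypes = {
  data: PropTypes.array.isRequired  // callers must supply an array
};

export default UserList;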
Extended Components vs. React.createClass Syntax
In this part of the series we used the ES6 class (extends) syntax for our components, and we have covered the following:
- getting React
- how to use Babel with React
- using JSX
- creating a component via extend method
- using your component and passing it data
In the coming parts, we will discuss how to use JSX further, how to work with a database as a persistent data source, and also how React works with other popular web technologies such as PHP, Rails, Python and .NET.
|
http://www.4elements.com/blog/comments/getting_started_with_react
|
CC-MAIN-2017-51
|
refinedweb
| 1,137
| 56.25
|
What are Excel Services UDFs?
Excel Services User Defined Functions (UDFs) are methods in .NET assemblies (DLLs) that Excel Services can call from workbook formulas.
First step – adding a reference
Second step – Create the UDF project
[UdfClass]
public class MyUdfClass
{
    [UdfMethod]
    public string MyUdfMethod(string input)
    {
        // Illustrative body only; the original post's method is not reproduced here.
        return input;
    }
}
Third Step – So how do I actually use it?:
Creating the workbook
All that’s left is to create the workbook – in this case, we will create a workbook with a parameter that will allow us to query various host names:
Make sure you add a:
So what did we learn.
>Creating UDFs for Excel Services is incredibly simple. All you need is something that can create .NET 2.0
>assemblies and about 5 lines of code. Using UDFs, you can augment the existing Excel Services formulas
> and add your own.
This is sweet, but why isn't it included on the client version of Excel? Automation add-in are a pain and slow, and calling .NET code from an XLL is not the easiest thing.
A description of what's supported (type wise) in Excel UDFs.
There are some 3rd party options for creating managed UDFs for the Excel client. The free ExcelDna library is an option () as is the commercial ManagedXLL project.
For ExcelDna I will consider building compatibility with the Excel Services UDF story - it looks quite easy from what I see so far. If I get it right, this will allow you to use the same assemblies in the Excel client and in Excel Services.
Excel Services UDFs can use any dirty trick in the .NET book (as long as not blocked by the admin). In this post, we will show how a UDF can greatly increase performance of multiple calls made to web-services.
Excel Services does not support External Workbook References. I show how you can use a UDF to get similar functionality.
I have made a few posts about UDFs over the past couple of months. One of the things I neglected to explain...
Shahar Prish, one of the developers on the Excel Services team, has recently posted a few entries on...
This is an example ofwhat is possible when web services is baked into the platform. Excel Services...
Sharepoint 2007 UDF User Defined Functions
Instead of re-hashing information I've found elsewhere I figured a pre-reqs post would be good.
One...
Instead of re-hashing information I've found elsewhere I figured a pre-reqs post would be good. One of
So I just finished coding the solution for my chapter on Excel Services. The more I dig deep in this
This is an example ofwhat is possible when web services is baked into the platform. Excel Services in
Today’s author: Christian Stich, a program manager on the Excel Services team who likes to combine his
Today’s author: Christian Stich, a program manager on the Excel Services team who likes to combine his
I am trying to call a MOSS web service from Excel UDF with the calling users security context with Trusted Subsystem and i am getting the following error. Can anyone point the correct way without using Delegation mode?
Unable to cast object of type 'Microsoft.Office.Excel.Server.CalculationServer.WorkOnBehalfIdentity' to type 'System.Security.Principal.WindowsIdentity'.
thanks
Rathna
Great options to add udf in your various excel projects for providing more funtinality into the user excel spreadsheet. Thanks for this nice piece of information to share with us. Take a look here how the things are happening in real time or live
How do I create a UDF for Excel Services in SharePoint 2013 that does the same as this:
Private Sub Worksheet_BeforeDoubleClick(ByVal Target As Excel.Range, Cancel As Boolean)
Select Case activeCell.Value
Case "hello"
SheetViews("Hoja2").Select()
End Select
Cancel = True
End Sub
Hi,
I tried the above steps on Sharepoint 2010. However, It is not working for me.
Any help will be appreciated !
|
https://blogs.msdn.microsoft.com/cumgranosalis/2006/04/04/how-udfs-work-in-excel-services-a-primer/
|
CC-MAIN-2017-30
|
refinedweb
| 627
| 65.42
|
Version 16.8 of React introduced a new concept called hooks, and this change has created a lot of excitement in the development community. One of the interesting additions in this version is useReducer; this hook can help us to stop using Redux (if we consider that necessary) and use a simpler solution.
Why do we want to move from Redux to another solution?
If you like Redux, you will probably be happy enough with the Redux hooks and can stop reading this article right now 😳. But in my case, I always feel that Vuex or other simpler solutions are a better fit in 90% of the projects where I work, and I tried to bring this simplicity to our projects.
What do we want?
As always when you are creating something, there are some requirements to meet.
- It has to be simpler than using Redux, otherwise we will just use Redux.
- If we can find some parts provided by 3rd parties, we will try to use them.
- If we use 3rd party libraries, those libraries must use the new React APIs.
- We have to be able to map this new approach onto the classical schema of Redux, Vuex, and other similar techniques.
- We want to avoid magic strings and other typical bad practices.
- We have to be able to use namespaces in the global state approach.
Creating the Store
First of all, we have to create the ‘store’ folder under the ‘src’ folder.
The index file will export all the namespaces or modules of my store, to be able to import any of them into my components and hooks.
Each namespace is going to be a new file with the following structure:
- name: The name of our namespace.
- Type: An Enum-like object with unique names for the cases of our reducers.
- state: The state content initialized.
- reducers: An object with all the reducers that we need.
Let’s see it in an example:
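The original embedded example is not reproduced here; a minimal sketch following that structure (the counter namespace and its action names are illustrative) could look like:
// store/counter.js
export const name = 'counter';

export const Type = {
  INCREMENT: `${name}/increment`,  // unique, namespaced case names
  RESET: `${name}/reset`,
};

export const state = { value: 0 };  // initial state for this namespace

export const reducers = {
  [Type.INCREMENT]: (state, payload) => ({ ...state, value: state.value + payload }),
  [Type.RESET]: () => ({ value: 0 }),
};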
Now we are going to use the ‘react-hookstore’ package as a helper to define our stores; this is how they describe it:
A very simple and small (1k gzipped!) state management lib for React that uses the bleeding edge React’s
useStatehook. Which basically means no magic behind the curtains, only pure react APIs being used to share state across components.
Using the store
Now it is time to use the store wherever we need it; the easiest way is to import the stores that we need and invoke the ‘useStore’ hook.
As you can see, in this example we are not using ‘actions’ but calling the reducers directly; for more complex scenarios we are going to use actions.
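With react-hookstore, the component side could look roughly like this (assuming the counter namespace sketched above; the exact createStore/useStore signatures should be checked against the react-hookstore documentation):
import React from 'react';
import { createStore, useStore } from 'react-hookstore';
import { name, state, reducers, Type } from '../store/counter';

// Register the store once, turning the reducers object into a single reducer function.
createStore(name, state, (s, action) => reducers[action.type](s, action.payload));

function Counter() {
  const [counter, dispatch] = useStore(name);
  return (
    <button onClick={() => dispatch({ type: Type.INCREMENT, payload: 1 })}>
      Clicked {counter.value} times
    </button>
  );
}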
Using actions
One of the best things about this way of using global state is that actions can be hooks 🤯. Using the last example, we are going to move the logic to a hook, to divide our code and reduce coupling.
First of all, we move all the ‘data’ logic to the hook. To decide which kind of logic should be moved to a hook, I think of it this way:
If the logic could have been a Service, you can move it to a hook; if the logic is a candidate to be a Helper or Util, keep using that approach.
And now, let’s use this hook in the component.
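Again as a sketch (useCounter is a hypothetical hook name), the ‘action’ hook and its use in a component could look like:
// hooks/useCounter.js
import { useStore } from 'react-hookstore';
import { name, Type } from '../store/counter';

export function useCounter() {
  const [counter, dispatch] = useStore(name);
  const increment = (by = 1) => dispatch({ type: Type.INCREMENT, payload: by });
  return { counter, increment };
}

// In the component:
import React from 'react';
import { useCounter } from '../hooks/useCounter';

function Counter() {
  const { counter, increment } = useCounter();
  return <button onClick={() => increment()}>Clicked {counter.value} times</button>;
}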
We did it 🥳, if you want to practice with the code of this article you can use this link from codesandbox.
|
https://medium.com/@CKGrafico/moving-global-state-from-redux-to-react-hooks-9734b656c6c6
|
CC-MAIN-2020-40
|
refinedweb
| 569
| 68.7
|
10 Django apps you're not using but should be
Flávio Juvenal • 4 June 2015
There are some open-source Django apps that make our lives as Django developers easier, but sometimes we don't even know they exist! Good third-party apps can give you new features at little expense, make your tests easier or even improve the performance of your deployment process. Please take a look at the following list of 10 Django apps you're not using but should be:
1. collectfast
Django
collectstatic command can be really slow if you're using S3. collectfast solves this issue by comparing the md5 sum of S3 files and ignoring
modified_time. The results of the hash lookups are cached locally using the project's default Django cache. This makes deployment much faster.
2. django-bower
Stop bloating your Django project with frontend dependencies like jQuery or Bootstrap. This problem has been solved by Bower, which is like pip for the web. But instead of installing Bower and adding
bower_components/ to your project (which will bloat your project anyway), it's better to use django-bower and add
/components/ (which has
bower_components/ inside) to
.gitignore. If you want to deploy a project with django-bower to Heroku, please refer to this two-part post of this blog.
3. model_mommy
Easily create test fixtures with model_mommy. It's specially made for Django and it can recursively create models to fill ForeignKeys and M2M relationships, thus reducing the amount of code required to setup test cases.
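For example, a test using model_mommy might look like this (Customer is a hypothetical model from your own app):
from django.test import TestCase
from model_mommy import mommy

from myapp.models import Customer  # hypothetical model


class CustomerTests(TestCase):
    def test_customer_is_created(self):
        # mommy.make fills required fields, ForeignKeys and M2Ms automatically
        customer = mommy.make(Customer)
        self.assertIsNotNone(customer.pk)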
4. django-extra-views and 5. django-formset-js
Django formsets is a great tool for building complex forms with hierarchical relationships. But with default Django tools it's very verbose to use them, since they don't have the corresponding class based views as regular forms. Also, additional JavaScript is necessary for dynamically adding them in the frontend. Fortunately, this problem is solved by two Django libs: django-extra-views has class based views for easily adding formsets and django-formset-js has a nice JS library for rendering formsets in templates.
6. django-widget-tweaks
Rendering Django forms according to what your designer wants is very awkward with default template tags. django-crispy-forms solves this with layouts, which are Python files that specifies CSS classes, ids and other attributes for the fields, but at Vinta we don't agree with that approach since it puts frontend specific information on *.py files. The best thing is to keep all presentation formatting in *.html (templates) and *.css files. django-widget-tweaks makes this possible by letting you specify CSS classes and additional attributes while rendering form fields in templates with a simple syntax:
{% render_field form.title class+="css_class_1 css_class_2" %}.
7. django-passwords
Validating passwords to force the user to use a strong password is a solved problem. Don't reimplement this in your code, just use django-passwords and you can reject passwords below a certain length, present in a dictionary or in a common sequence list.
8. django-role-permissions
Shameless plug here 😀. This app was made by us, Vinta Software Studio, for making it easier to implement role based permissions in a Django project. For example, you may want that only users of a certain role (like manager) should be able to see some views. With django-role-permissions this is easily done with a view mixin or a decorator. It's built on top of
django.contrib.auth
Group and
Permission models, so it doesn't add any other models to your project.
9. django-manager-utils
Sometimes you need to get a model or
None. django-manager-utils provides you a custom
Manager for that. Other additional useful methods are available too.
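A sketch of the idea, assuming the model's default manager is swapped for the manager this package provides (double-check the names against the library's documentation):
from django.db import models
from manager_utils import ManagerUtilsManager


class Book(models.Model):  # hypothetical model
    title = models.CharField(max_length=100)

    objects = ManagerUtilsManager()


# Returns the matching Book, or None instead of raising DoesNotExist:
book = Book.objects.get_or_none(title='Dune')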
10. django.js
Ever needed to get a URL from a url name in JavaScript, like
from django.core.urlresolvers import reverse does in Python? This can be done with django.js. It also makes it easy to access basic current user info, CSRF token and other stuff.
Do you feel this list is missing an essential but unknown Django app? Feel free to write about it on the Comments section below. Thanks!
|
https://www.vinta.com.br/blog/2015/10-django-apps-youre-not-using-but-should-be/
|
CC-MAIN-2021-04
|
refinedweb
| 695
| 64
|
Project in jsp
Project in jsp Hi,
I'm doing MCA n have to do a project 'Attendance Consolidation' in JSP.I know basic java, but new to jsp. Is there any JSP source code available for reference...? pls help me
Struts Books
;
Free
Struts Books
The Apache... Servlet and
Java Server Pages (JSP) technologies.
... for you Java programmers who have some JSP familiarity, but little or no prior
Download free java
Download free java Hi,
How to get Java for free? It is possible to download and install java on windows 64bit system?
Thanks
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
JSP Project
JSP Project Register.html
<html>
<body >
<form...;
Process.jsp
<%@ page language="java" %>
<%@ page import="java.util.*"%>
<%!
%>
<jsp:useBean id="formHandler" class="test.FormBean" scope
Java Free download
Java Free download Hi,
What is the url for downloading Java for free? Is java Free? Can I get the url to download Java without paying anything? Actually I am student and learning Java. I have to download and install Java on my
Java Programming Books
Java Programming Books
... open Java standards model for Java? Servlet, JavaServer Pages? (JSP?), Enterprise....
To help you navigate the Java APIs and fast-track your project development
JSP
project guidance - JSP-Servlet
form, can anyone guide me through the project ? Hi maverick
Here is the some free project available on our site u can visit and find the solution...project guidance i have to make a project on resume management
library management system jsp project
library management system jsp project i need a project of library management system using jsp/java bean
JSP PDF books
;
The free servlet and JSP Books
Slides and exercises from Marty Hall's world...JSP PDF books
Collection is jsp books in the pdf format
JSF Books
.
Books
of Core java Server Faces... to Java developers working in J2SE with a JSP/Servlet engine like Tomcat, as well...JSF Books
online examination system project in jsp
online examination system project in jsp How many and which data tables are required for online examination system project in jsp in java.
please give me the detailed structure of each table
upload and download a file - Java Beginners
upload and download a file how to upload a file into project folder in eclipse and how to download the same using jsp
Hi Friend,
Try the following code:
1)page.jsp:
Display file upload form to the user
j2me ebook download for free - Java Beginners
j2me ebook download for free could you please send me a link get the j2me ebook for free of cost Hi Friend,
Please visit the following link:
Thanks
Custom Iterator Hi ,
I am working in JSP. In my project i have to generate my entire database records to pdf,excel,csv format , so which concept i have to use... available on internet.
If you have to write Java program for this then you
My Favorite Java Books
Java NotesMy Favorite Java Books
My standard book questions
When I think... language and want to
learn Java, these are good books.
Head First Java....
Favorite Java books from JavaLobby poll
I've extracted many of the books
project
project how to code into jsp of forgot password
Download file - JSP-Servlet
Servlet download file from server I am looking for a Servlet download file example
File Download in jsp
File Download in jsp file upload code is working can u plz provide me file download
online examination system project in jsp
online examination system project in jsp I am doing project in online examination system in java server pages.
plz send me the simple source code in jsp for online examination system
java project
java project Connecting to MySQL database and retrieving and displaying data in JSP page by using netbeans
jsp
the code.</p>
<p>first form(project Manager.jsp)
<%@ page language = "java" import = "java.sql.<em>" import = "java.util.</em>" import = "java.io.*" errorPage = "" %>
<jsp:useBean id = "formHandler.
download - JSP-Servlet
download here is the code in servlet for download a file.
while...;
/**
*
* @author saravana
*/
public class download extends HttpServlet...();
System.out.println("inside download servlet");
BufferedInputStream
searching books
searching books how to write a code for searching books in a library through jsp
Application server to download
Application server to download Which Application server can be downloaded for free to use personally at home to practice JSP,EJB etc
JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources
. You should be able to program in Java.
This tutorial teaches JSP...;
Introduction
To JSP
Java Server Pages... is based on Java, an
object-oriented language. JSP offers a robust platform
upload and download files - JSP-Servlet
upload and download files HI!!
how can I upload (more than 1 file) and download the files using jsp.
Is any lib folders to be pasted? kindly... and download files in JSP visit to :
project query
project query I am doing project in java using eclipse..My project is a web related one.In this how to set sms alert using Jsp code. pls help me
Send Email From JSP & Servlet
J2EE Tutorial - Send Email From JSP &
Servlet... classpath to c:\jsdk2.0\src (java servlet development kit).
(We are using... for executing servlets and JSP .
It is a joint effort
php download free
php download free From where i download php free
Project - JSP-Servlet
Project Can you send me the whole project on jsp or servlet so that i can refer it to create my own :
the topic is Advertisement Management System
Atleast tell me how many modules does it include
java project - Java Beginners
java project project for internet banking Hi friend,
I am sending you a link. This link will help you.
Please visit for more information.
Need E-Books - JSP-Servlet
Java XML Books
;
Free
Java XML Books...
Java XML Books
Java
and XML Books
One night
J2ME Books
;
Free
J2ME Books
J2ME programming camp...;
Books: Java Platform, Micro Edition
Java...;
Java
books hit the wire
Wireless Java has
MINI PROJECT
MINI PROJECT CAN ANYONE POST A MINI PROJECT IN JAVA?
Hi...
You can have the following projects as per ur requirement. Free and easy to download.. All projects are JAVA based
Sale and purchase in java swing
Java upload and download images using buttons in jsp?
how to upload and download images using buttons in jsp? how to upload and download images using buttons in jsp
JSP
JSP FILE UPLOAD-DOWNLOAD code USING JSP
Ajax Books
Java+XML books under his belt and the Freemans are responsible for the excellent...
Ajax Books
AJAX - Asynchronous JavaScript and XML - some books and resource links
These books
JAVA JAZZ UP - Free online Java magazine
JAVA JAZZ UP - Free online Java magazine
Our this issue contains:
Java Jazz Up Issue 1 Index
Java Around the Globe
Using Java means a better user" %>
< video
upload and download video how to upload and download video in mysql databse using jsp?
plz give me demo of this with table...
1)page.jsp
<%@ page language="java" %>
<Html>
<HEAD><TITLE>
JAVA Web project deployment
JAVA Web project deployment Hi friends,
I have created a website using servlet and JSP with SQL server on the backend. Now I have to deploy the project so that I can open the project and someone can work on it without installing
Top 10 PC Games for Free Download
Top 10 PC Games for Free Download
.games {
clear: both;
width: 100... PC games for free download is one of the most popularly searched items... for free download.
Battlefield 1942
java code to upload and download a file Is their any java code to upload a file and download a file from databse,
My requirement is how can i... and Download visit to :
http
|
http://www.roseindia.net/tutorialhelp/comment/88429
|
CC-MAIN-2014-15
|
refinedweb
| 1,344
| 72.05
|
So, no new MacBooks today, huh? Guess that's yet another (soon to be forgotten) missed opportunity for the rumor sites to keep quiet...
Anyway, I'd like to thank Marcus Roberts for knocking Alastair Reynolds' Century Rain off my Amazon WishList. I meant to order it last time around, but I was already overstepping my "personal entertainment" budget with, er... Lemmings (it is oddly like corporate management in a way, especially when they pour down cliffs...).
For those of you who have taken an interest in Jazz and developed it beyond my mere dabbling, I assume Marcus is not the Jazz pianist by the same name, since there was an UK address on the invoice...
Thank you very much indeed - I love Alastair Reynolds' stuff, and will start reading it this very night (the rest of the Shadow Saga can wait a bit).
Blast From The Past
On another topic, I'm trying out content conversion scripts to move this Wiki to Yaki (which, er... sort of works right now), and while looking at the logs I noticed quite a few hits on a year-old post.
Turns out it's just been re-linked to, which means I really ought to re-think the namespace for blog posts - the year needs to be more obvious.
|
http://the.taoofmac.com/space/blog/2006/05/09
|
crawl-002
|
refinedweb
| 217
| 75.54
|
If you wanted to know just about everything about C++ templates then the book C++ Templates—The Complete Guide by David Vandevoorde and Nicolai Josuttis, (ISBN 0-201-73484-2) is a readable reference book you can use. Normally, discussing a book would appear in a book review. However, since the authors have done such a good job of describing C++ templates, I though the topic and the book deserved more complete coverage. You may recall Nicolai as the author of the excellent text, The C++ Standard Library, ISBN 0-201-37926-0. David is the author of C++ Solutions: Companion to the C++ Programming Language , Third Edition, ISBN 0-201-30965-3. Both authors are also longtime members of the ANSI/ISO C++ X3J16 Standardization Committee.
Ordering Convention
Most of us write the following
const or
volatile declaration thus:
volatile int vInt; const char *pcChar;
The authors suggest that a better way is:
int volatile vInt; // 1 char const *pcChar; // 2
Here,
const and
volatile appear before what they qualify. Since I read declarations from right to left, in //1, I get
vInt is a volatile int and in //2, I get
pcChar is a pointer to a
const char. The real power comes when we use this ordering in
typedef statements. For example,
typedef int *PINT; typedef PINT const CPINT; typedef const PINT PCINT;
Here we can see that the first
typedef (pointer to int) can be used in the second
typedef (
const pointer to
int). The third
typedef completely changes the meaning of the type (pointer to
const int).
typename Keyword
Historically, we have used the keyword,
class, in template parameter lists:
template< class T > . . .
However, T could be replaced by things other than a class name, so the use of the new keyword
typename is preferred.
Reducing Ambiguities
Often, instantiating a template can result in an ambiguity error that is difficult to understand. Here is a simple example:
template< typename T > void plot2D(T& x, T& y);
plot2D(3,4); // ok
plot2D(3.14,1); // error - types of arguments differ
The ambiguity occurs because all parameters are supposed to be of same type, but in the second case, it is unclear whether that type should be an
int or a
double.
In this simple example (and many others like it), there are 3 ways of disambiguating the statement:
plot2D<int>(3.14,1); // 3 plot2D(static_cast<int>(3.14), 1); // 4
In //3, the template type is forced to be
int. In //4, casting forces all the argument types to be the same. The third way to eliminate this problem uses more sophisticated templates.
Overloading function template
It is possible to have both template and non-template versions of the same function. Often, non-template versions are called specializations. In such situations, function overloading and its rules are used to resolve any ambiguities. In addition to normal function overloading rules, a few extra rules are needed for templates.
In instantiating a template function, no type conversions are done.
All things being equal, specialized template functions are preferred over template ones.
If instantiating a template function results in a better match, the better match is used. By better match, we mean that things like conversions are not need to make a match.
If the empty angle brackets notation is used, non-template functions are ignored in matching argument.
Here are some examples:
int cmp(int const& a, int const& b); template<typename T> T cmp(T const& a, T const &b); cmp(1,2); // 5 cmp(4.3, -1.2); // 6 cmp('x','s'); // 7 cmp<>(-3,2); // 8 cmp<int>(4.3,4); // 9
In //5, the non-template function is the best match (Rule 2). In //6, template argument deduction instantiates the
cmp<double> function (Rule 3). In //7, the instantiated function is
cmp<char> (Rule 3). In //8, the notation forces use of the template function and results in a
cmp<int> function being instantiated and used instead of the non-template version of the
cmp function (Rule 4).
Partial Specialization
When a template class or function has more than one template parameter, one or more may be specified creating a partially specialized template. However, if one or more partial specializations match the same template, an ambiguity occurs. For example,
#include <typeinfo>
#include <stdio.h>

typedef char const CC;

// general template function
template<typename T1, typename T2>
void f(T1 t1, T2 t2)
{
    CC* s1 = typeid(T1).name();
    CC* s2 = typeid(T2).name();
    printf("f(%s,%s)\n", s1, s2);
}

// partial specialization 1
// both types the same
template<typename T>
void f(T t1, T t2)
{
    CC* s1 = typeid(T).name();
    CC* s2 = typeid(T).name();
    printf("f(%s,%s)\n", s1, s2);
}

// partial specialization 2
// 2nd parameter is non-type
template<typename T>
void f(T t1, int t2)
{
    CC* s1 = typeid(T).name();
    CC* s2 = typeid(int).name();
    printf("f(%s,%s)\n", s1, s2);
}

// partial specialization 3
// parameters are all pointers
template<typename T1, typename T2>
void f(T1* t1, T2* t2)
{
    CC* s1 = typeid(T1*).name();
    CC* s2 = typeid(T2*).name();
    printf("f(%s,%s)\n", s1, s2);
}

int main(int argc, char* argv[])
{
    char const* s1 = "one";
    f(1, 21.3);
    f(1.2, -3);
    f('a', 'z');
    f(s1, 2);
    f(&s1, "two");
    return 0;
}
The output from this program is:
f(int,double) f(double,int) f(char,char) f(CC *,int) f(CC **,char *)
Note that calling f(3,5) is ambiguous because both
f(T,T) and
f(T,int) match this call.
Non-type Template Parameters
Both classes and functions can use non-type template parameters. When used, non-type parameters become part of the class's type and the function's signature. Thus,
template<typename T, int n> class C { T a[n]; // . . . }; C<char, 10> c10A; C<char, 5> c5A; C<int, 10> i10A;
Each of
c10A,
c5A, and
i10A are all distinct types.
An example of a template function using non-type parameter follows:
template<typename T, int n> T g(T t) { return t+n; }
int main() {
    printf("g<double,5>(1.3)=%g\n", g<double,5>(1.3));
    printf("g<char,4>('a')=%c\n", g<char,4>('a'));
}
The output of this short program is:
g<double,5>(1.3)=6.3 g<char,4>('a')=e
Non-type template parameters have restrictions: they must be integral values, enumerations, or instance pointers with external linkage. They can’t be string literals nor global pointers since both have internal linkage.
Keyword
typename
You may have wondered why we needed the new keyword
typename. Besides being a better choice for template parameters, it is needed to disambiguate certain declarations. E.g.,
template<typename T, int n> class C { typename T::X *p; // . . . };
Without the keyword
typename, the declaration for p becomes an expression: the value of
T::X is multiplied by the value of
p.
Generally, prefix all declarations using template parameters with
typename.
this pointer
Normally derived and base classes share the value of
this. However, lookup rules for template classes are different. Template base class members are not looked up when resolving names used in derived template class member functions.
template<typename T> class B { void foo(); };
template<typename T> class D : public B<T> { void bar() { foo(); } };
In
bar(),
foo() would not be found. To get
B’s foo, you need to say
this->foo(). Again, the rule given in the book states that any base class member used in a derived class should be qualified by
this-> or
B<T>::.
Conclusion
I have not begun to cover many of the interesting aspects of templates in this short article. And I have barely covered the other fun parts of this book. Get it.
Reg Charney
This article was originally published on the ACCU USA website in December 2002 at
Thanks to Reg for allowing us to reprint it.
|
https://accu.org/index.php/journals/2014
|
CC-MAIN-2019-22
|
refinedweb
| 1,314
| 55.34
|
Inserting audio and video
In Microsoft Expression Blend, you can add media files to your project such as audio and video.
For instructions about how to add a media files to your project, see Insert an image file into the active document and Insert an audio or video file into the active document.
Audio
Expression Blend supports audio file types such AIF, AIFC, AIFF, ASF, AU, MID, MIDI, MP2, MP3, MPA, MPE, RMI, SND, WAV, WMA, and WMD. These are all file formats that Windows Media Player 10 supports.
Note
Microsoft Silverlight supports only the MP3 and WMA file types.
After you add an audio file to your project, you can add it to the artboard by double-clicking the audio file name in the Projects panel, or by setting the Source property of an existing MediaElement control to the name of the audio file.
Note
You can't reverse an audio clip in Expression Blend by reversing the storyboard that contains the audio timeline.
Video
Expression Blend supports video file types such as ASF, AVI, DVR-MS, IFO, M1V, MPEG, MPG, VOB, WM, and WMV. These are all file formats that Windows Media Player 10 supports.
Note
Silverlight supports only the WMV file type.
You will not be able to insert other video file types into a document, although you will be able to add them to your project by using a MediaElement control. You can add a MediaElement control from the Assets panel
to your document and then modify its Source property to point to a media file type that Expression Blend does not recognize to make sure that the video plays in your application at run time.
Note
You can't reverse a video clip in Expression Blend by reversing the storyboard that contains the video timeline.
Note
To work with media in Expression Blend, you must have Windows Media Player 10 installed on your computer. You can download Windows Media Player 10 from the Windows Media website.
Audio and video in WPF projects
After you insert an audio file or video clip into your document, you can control its playback using the media timeline that was created for it in the Objects and Timeline panel. You can do the following things with a media timeline:
Modify the properties of a media element selected in the Objects and Timeline panel. You can change properties such as volume, balance, and speed in the Media category of the Properties panel.
Manually move the timeline by selecting the Selection tool
in the Tools panel, and then dragging the gray time bar. You can also right-click the shaded time bar to select the looping options.
For more information, see the topics listed in Set the loop duration.
Copy and paste the media element in the Objects and Timeline panel. The following is a very simple example in C# that shows the minimal lines of code necessary to start a storyboard:
using System.Windows.Media;
using System.Windows.Media.Animation;

// In a method...
Storyboard audioResourceWav;
audioResourceWav = (Storyboard)this.Resources["AudioResource_wav"];
audioResourceWav.Begin(this);
Audio and video in Silverlight projects
After you insert an audio file or video clip into your document, you can control its run-time behavior (such as playback, download progress, and buffering progress) using the properties and events of the Silverlight MediaElement object that was created for it in the Objects and Timeline panel.
For more information, see MediaElement States (Silverlight) on MSDN.
Alternatively, you can use Microsoft Expression Encoder to create a full-featured media player to display your media by using a Silverlight template.
For more information, see Customize an Expression Encoder template for Silverlight.
|
https://docs.microsoft.com/en-us/previous-versions/visualstudio/design-tools/expression-studio-3/ee371150(v=expression.30)
|
CC-MAIN-2018-22
|
refinedweb
| 608
| 59.23
|
hcreate, hdestroy, hsearch - manage hash search table
[XSI]
#include <search.h>#include <search.h>
int hcreate(size_t nel);
void hdestroy(void);
ENTRY *hsearch(ENTRY item, ACTION action);
The hsearch() function returns a pointer into a hash table indicating the location at which an entry can be found. The item argument is a structure of type ENTRY (defined in the <search.h> header) containing two pointers: item.key points to the comparison key (a char *), and item.data (a void *) points to any other data to be associated with that key. The comparison function used by hsearch() is strcmp().
These functions need not be reentrant. A function that is not required to be reentrant is not required to be thread-safe.
- [ENOMEM]
- Insufficient storage space is available.
The following example reads in strings followed by two numbers and stores them in a hash table, discarding duplicates. It then reads in strings and finds the matching entry in the hash table and prints it out.
#include <stdio.h>
#include <search.h>
#include <string.h>

struct info {            /* This is the info stored in the table */
    int age, room;       /* other than the key. */
};

#define NUM_EMPL 5000    /* # of elements in search table. */

int main(void)
{
    char string_space[NUM_EMPL*20];     /* Space to store strings. */
    struct info info_space[NUM_EMPL];   /* Space to store employee info. */
    char *str_ptr = string_space;       /* Next space in string_space. */
    struct info *info_ptr = info_space; /* Next space in info_space. */
    ENTRY item, *found_item;
    char name_to_find[30];
    int i = 0;

    (void) hcreate(NUM_EMPL);           /* Create table; no error checking here. */

    while (scanf("%s%d%d", str_ptr, &info_ptr->age,
                 &info_ptr->room) != EOF && i++ < NUM_EMPL) {
        /* Put information in structure, and structure in item. */
        item.key = str_ptr;
        item.data = info_ptr;
        str_ptr += strlen(str_ptr) + 1;
        info_ptr++;
        (void) hsearch(item, ENTER);    /* Put item into table. */
    }

    /* Access table. */
    item.key = name_to_find;
    while (scanf("%s", name_to_find) != EOF) {
        if ((found_item = hsearch(item, FIND)) != NULL) {
            (void) printf("found %s, age = %d, room = %d\n",
                found_item->key,
                ((struct info *)found_item->data)->age,
                ((struct info *)found_item->data)->room);
        } else {
            (void) printf("no such employee %s\n", name_to_find);
        }
    }
    return 0;
}
The hcreate() and hsearch() functions may use malloc() to allocate space.
None..
|
http://www.opengroup.org/onlinepubs/009695399/functions/hsearch.html
|
crawl-001
|
refinedweb
| 236
| 69.58
|
Tuesday July 11th, second Tuesday of the month. IT professionals working for a Microsoft shop know the drill: patch Tuesday.
MS06-035 Vulnerability in Server Service Could Allow Remote Code Execution. One vulnerability fixed by this patch is the “Mailslot Heap Overflow Vulnerability – CVE-2006-1314”. According to the Microsoft Security Bulletin, a mitigating factor for this vulnerability is “Microsoft Windows XP Service Pack 2 and Microsoft Windows Server 2003 Service Pack 1 do not have services listening on Mailslots in default configurations“. Good, but what about non-default configurations? When do you have mailslots on your machine?
Maislots are an Inter-Process Communication (IPC) protocol. It can be used by processes (running programs) to communicate with each other.
It’s easy to create programs using mailslots.
Your server program listens to a mailslot by creating a file starting with \\.\mailslot followed by the name of the mailslot (e.g. \\.\mailslot\listener) and starts reading from that file.
Your client program talks to a mailslot by creating a file starting with \\server\mailslot followed by the name of the mailslot (e.g. \\MyServer\mailslot\listener) and writing a message to it. The Server Service will transport your message from your client program to your server program.
More details can be found on MSDN and sample code is available on The Code Project.
Hence any program designed to use mailslots can open a mailslot on your Windows PC, making your Windows XP SP2 machine vulnerable. You can list the mailslots opened on a machine by enumerating the files in the \\.\mailslot directory.
I wrote a simple C# 2.0 console application to do this:
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;

namespace ListMailSlots
{
    class ListMailSlots
    {
        static void Main(string[] args)
        {
            foreach (string file in Directory.GetFiles(@"\\.\mailslot", "*.*", SearchOption.AllDirectories))
            {
                Console.WriteLine(file);
            }
        }
    }
}
Mail me or post a comment if you want the compiled program.
Running this program on a fresh Windows XP SP2 install shows nothing: as stated by Microsoft, a default install has no mailslots.
But on a Windows Server 2000 SP4, the result is different:
messngr
Alerter
53cb31a0\UnimodemNotifyTSP
The mailslot \\.\mailslot\messngr is used by the Messenger service (the service that displays a popup when you issue a NET SEND command).
Alerter is used by the Alerter service to display administrative alerts.
These services are disabled on Windows XP SP2 and Windows 2003 SP1. In fact, when you enable and start these services on a default install, the mailslots will be created and my program will list them.
53cb31a0\UnimodemNotifyTSP is used by the Telephony service.
There is another way to list mailslots using Process Explorer by Sysinternals: start PE and search (File Handle or DLL…) for \Device\Mailslot:
This will also show you the process that opened the mailslot. svchost.exe is a generic process to host Windows services, you’ll have to open the properties of the process and select the Services tab to view which Services are hosted by the process.
I’ve also discovered (with my program) that McAfee uses a mailslot.
This gives you a method to check if a Windows machine has mailslots and hence if it’s vulnerable.
Few details have been published about this vulnerability, the best I found is by TippingPoint. I wonder when H D Moore will publish an exploit module for his Metasploit framework.
Cybertrust has issued an alert for this vulnerability, warning for a possible new worm like Slammer. Wait and see…
|
https://blog.didierstevens.com/2006/07/13/do-you-have-mailslots-on-your-windows-pc/
|
CC-MAIN-2017-47
|
refinedweb
| 586
| 57.37
|
according to the inline comments in include/linux/scatterlist.h:

"* If bit 0 is set, then the page_link contains a pointer to the next sg
 * table list. Otherwise the next entry is at sg + 1."

but if that's the case, then the implementation of sg_next() seems a bit weird:

/**
 * sg_next - return the next scatterlist entry in a list
 * @sg: The current sg entry
 *
 * Description:
 *   Usually the next entry will be @sg@ + 1, but if this sg element is part
 *   of a chained scatterlist, it could jump to the start of a new
 *   scatterlist array.
 **/
static inline struct scatterlist *sg_next(struct scatterlist *sg)
{
#ifdef CONFIG_DEBUG_SG
	BUG_ON(sg->sg_magic != SG_MAGIC);
#endif
	if (sg_is_last(sg))
		return NULL;

	sg++;
	if (unlikely(sg_is_chain(sg)))
		sg = sg_chain_ptr(sg);

	return sg;
}

note how the comment says that the next entry will "usually" be sg+1, "but" not if it's actually a pointer. however, as i read the code above, sg is *always* incremented before that testing. is that correct? am i just misreading something? or could the comment have been a bit clearer?
|
http://lkml.org/lkml/2007/10/27/62
|
CC-MAIN-2014-41
|
refinedweb
| 174
| 63.9
|
[
]
Stephan van Hugten commented on AXIS2-4662:
-------------------------------------------
It's good that you take the bigger scope in mind. Let me comment on the above mentioned blogpost
and example:
It's important for a framework to have this feature as a native implementation, because you
can see with the other frameworks that these are patched or hooked into the existing code-base.
While the possibility might be good, you must watch out that this does not endanger the interchangeability.
Look at what happened to JSF 1.x. People took their ideas and plugged them into JSF with different
visions about how AJAX, view handling or state saving should work. This lead to a whole slew
of integration issues.
Your first attempt is already a good start, but I would advise convention over configuration,
i.e. to include default configuration XMLs with a default transport configuration, much like
CXF does. As a user you only would have to define an engine and sometimes services. The transports
you include would automatically be added to the default engine or you could define to which
engine you add them.
Annotated services could be picked up by a BeanProcessor or as part of the Spring Context
Scan. If you define a repository directory, it would pick up the AARs in there.
I sure want to help you with that.
> Improve Spring Integration for Axis2
> ------------------------------------
>
> Key: AXIS2-4662
> URL:
> Project: Axis2
> Issue Type: Improvement
> Components: kernel
> Affects Versions: 1.5.1
> Reporter: Stephan van Hugten
> Attachments: POC_Axis2.zip
>
>
> I wanted to create an application that has tight integration between Axis2 webservices
and Spring. There is already a solution presented at the Axis2 website,,
but I found that solution very cumbersome in my opinion and doesn't support the JSR 181 annotations.
> With my proposed approach it is possible to fully integrate the Axis2 run-time with a
spring container, whether it is stand-alone or in a web server such as Tomcat. This solution
also supports both the JSR 181 annotated classes and the regular AAR-files.
> To fully integrate Axis2 with Spring I have overridden the SimpleAxis2Server class used
by the standard stand-alone run-time. A full listing of this class is included in my example
application.
> The important stuff is in line 21 up to 36. First it determines the absolute path of
the repository and config location parameters. Then it passes those to the AxisRunner constructor
(lines 10 to 13) and starts the server. After it successfully starts the Axis2 server it returns
the bean to the Spring Container.
> After the creation of the bean it will invoke setDeployedWebservices (lines 46 to 51)
which will cycle through the passed webservice classes and deploy them at the created run-time.
That's it! No additional configuration or packaging is needed. If the Spring container starts
up, so does the Axis2 run-time and the webservices get deployed.
> The needed configuration in order to integrate Axis2 is quite simple. Below is a complete
listing of my applicationContext.xml (Spring 2.5.6):
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="namespace stuff">
>
> <bean name="axisServer" class="com.example.poc.server.AxisRunner" factory-
> <constructor-arg
> <constructor-arg
> <property name="deployedWebservices">
> <props>
> <prop key="WeatherSpringService">
> com.example.poc.webservice.WeatherSpringService
> </prop>
> </props>
> </property>
> </bean>
> </beans>
> With a little bit more effort I think it's also possible to integrate this solution with
the Spring component scan, making it possible to annotate the webservice classes and the run-time
with @component. I have tested my war-project with Tomcat 6 and Sun Webserver 7.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@axis.apache.org
For additional commands, e-mail: java-dev-help@axis.apache.org
|
http://mail-archives.apache.org/mod_mbox/axis-java-dev/201003.mbox/%3C376821262.577241269951267534.JavaMail.jira@brutus.apache.org%3E
|
CC-MAIN-2018-13
|
refinedweb
| 639
| 56.35
|
The QElapsedTimer class provides a fast way to calculate elapsed times. More...
#include <QElapsedTimer>
Note: All functions in this class are reentrant.
This class was introduced in Qt 4.7.
This is the system's monotonic clock, expressed in milliseconds since an arbitrary point in the past. This clock type is used on Unix systems which support POSIX monotonic clocks (_POSIX_MONOTONIC_CLOCK).
This clock does not overflow.
The tick counter clock type is based on the system's or the processor's tick counter, multiplied by the duration of a tick. This clock type is used on Windows and Symbian.
isValid() returns false if this object was invalidated by a call to invalidate() and has not been restarted since.
See also invalidate(), start(), and restart().
restart() restarts the timer and returns the time elapsed since the previous start. This function is equivalent to obtaining the elapsed time with elapsed() and then starting the timer again with start(), but it does so in one single operation, avoiding the need to obtain the clock value twice.
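A minimal usage sketch (slowOperation() stands in for whatever work is being timed):
#include <QElapsedTimer>
#include <QDebug>

void measure()
{
    QElapsedTimer timer;
    timer.start();                       // record the reference time

    slowOperation();                     // hypothetical work being measured

    qDebug() << "elapsed:" << timer.elapsed() << "ms";

    qint64 sinceLast = timer.restart();  // returns elapsed ms and restarts in one call
    Q_UNUSED(sinceLast);
}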
Returns true if this object and other contain different times.
Returns true if this object and other contain the same time.
|
https://doc-snapshots.qt.io/4.8/qelapsedtimer.html
|
CC-MAIN-2019-26
|
refinedweb
| 198
| 74.29
|
I have a Python3 script that loads the file parameters/parameters.xml. The script is built into an app with PyInstaller. Started up the following ways, the script and app work fine:
- Windows command line
- Windows double click in explorer
- Mac OS X command line
When starting the app from OS X Finder it is not able to find XML file.
Code snipped that calls the file:
try:
self.paramTree = ET.parse("../parameters/parameters.xml")
except:
self.paramTree = ET.parse("parameters/parameters.xml")
self.paramRoot = self.paramTree.getroot()
You access a file through a relative path. It looks like the current working directory is not set in the same way in your last case (OS X Finder): this would cause the file to not be found.
You can therefore set the working directory based on the location of your program:
import os

# __file__ is a relative path to the program file (relative to the current
# working directory of the Python interpreter).
# Therefore, dirname() can yield an empty string,
# hence the need for os.path.abspath before os.chdir() is used:
prog_dir = os.path.abspath(os.path.dirname(__file__))
os.chdir(prog_dir)  # Sets the current directory
You might set the current working directory to something a bit different from this, depending on what the script expects (maybe the parent directory of the script:
os.path.join(prog_dir, os.pardir)).
This should even remove the need for doing
try: since the script uses paths relative to the current working directory, the current directory should instead be set first.
|
https://codedump.io/share/wQzT1p6fjKqD/1/python-mac-os-x-not-loading-in-external-xml-file-when-it-is-an-app
|
CC-MAIN-2017-17
|
refinedweb
| 256
| 62.27
|
import "github.com/hashicorp/vault/vendor/github.com/hashicorp/errwrap"
Package errwrap implements methods to formalize error wrapping in Go.
All of the top-level functions that take an `error` are built to be able to take any error, not just wrapped errors. This allows you to use errwrap without having to type-check and type-cast everywhere.
Contains checks if the given error contains an error with the message msg. If err is not a wrapped error, this will always return false unless the error itself happens to match this msg.
ContainsType checks if the given error contains an error with the same concrete type as v. If err is not a wrapped error, this will check the err itself.
Get is the same as GetAll but returns the deepest matching error.
GetAll gets all the errors that might be wrapped in err with the given message. The order of the errors is such that the outermost matching error (the most recent wrap) is index zero, and so on.
GetAllType gets all the errors that are the same type as v.
The order of the return value is the same as described in GetAll.
GetType is the same as GetAllType but returns the deepest matching error.
Walk walks all the wrapped errors in err and calls the callback. If err isn't a wrapped error, this will be called once for err. If err is a wrapped error, the callback will be called for both the wrapper that implements error as well as the wrapped error itself.
Wrap defines that outer wraps inner, returning an error type that can be cleanly used with the other methods in this package, such as Contains, GetAll, etc.
This function won't modify the error message at all (the outer message will be used).
Wrapf wraps an error with a formatting message. This is similar to using `fmt.Errorf` to wrap an error. If you're using `fmt.Errorf` to wrap errors, you should replace it with this.
format is the format of the error message. The string '{{err}}' will be replaced with the original error message.
WalkFunc is the callback called for Walk.
Wrapper is an interface that can be implemented by custom types to have all the Contains, Get, etc. functions in errwrap work.
When Walk reaches a Wrapper, it will call the callback for every wrapped error in addition to the wrapper itself. Since all the top-level functions in errwrap use Walk, this means that all those functions work with your custom type.
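A small sketch of how these functions fit together (the file-open failure is just an illustrative scenario):
package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/errwrap"
)

func openConfig(path string) error {
	if _, err := os.Open(path); err != nil {
		// "{{err}}" is replaced with the original error message.
		return errwrap.Wrapf("could not read config: {{err}}", err)
	}
	return nil
}

func main() {
	err := openConfig("/no/such/file")

	// Check whether the wrapped chain contains a *os.PathError.
	if errwrap.ContainsType(err, new(os.PathError)) {
		fmt.Println("a path error occurred somewhere:", err)
	}
}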
Package errwrap imports 3 packages (graph). Updated 2018-10-28. Refresh now. Tools for package owners.
|
https://godoc.org/github.com/hashicorp/vault/vendor/github.com/hashicorp/errwrap
|
CC-MAIN-2019-30
|
refinedweb
| 440
| 75
|
The technology industry is witnessing advancements at an increasing rate. Programming and development have stepped up to the degree that the demand for skilled engineers is increasing rapidly. Individuals looking to hire JSX qualified experts can find them easily on freelancer.com. This article seeks to help aspiring programmers and developers understand JSX.
Understanding JSX
JavaScript XML - or JSX - is an extension of syntax to JavaScript. While one can use React in the absence of JSX, it adds elegance to React.
Like XML, JSX tags have a tag name, attributes, and children. When an attribute value is in quotes, it is treated as a string. If the value is in braces, it is treated as an enclosed JavaScript expression. When browsing the internet in search of React material, one is likely to see the terms ES5, JSX, and ES6, which can be puzzling.
ES5 or ECMAScript is a regular JavaScript. The completion of this 5th upgrade to JavaScript occurred in 2009. All leading browsers have supported it since its release. ES6 is the latest JavaScript version to provide attractive functional and syntactic additions. It was launched in 2015, and nearly all popular browsers support it, but it may be some time before older browser versions incorporate it. Internet Explorer, for example, has close to a 12% market share of the browser, but does not support ES6.
Understanding ES6
Every module in ES6 is determined by its individual file. The variables or performance determined in a module are not noticeable from the outside, unless a developer exports them explicitly. Therefore, programmers can write a code into their module and export the values for access by other sections of their application. ES6 modules are declarative; so, to export specific variables from a module, one needs to use the keyword ‘export’. If one wants to utilize the exported variables in another module, one should use the keyword ‘import’.
How to get the best out of ES6?
To get ES6 working in multiple browsers, try the following steps:
Try transpiling (transforming source code written in one language to another at the same abstraction level) to allow a range of browsers to understand the JavaScript. In this case, one can convert ES6 to ES5 JavaScript.
Try including a polyfill or shim that provides functionality included in ES6 that may or may not be included in a browser.
How to use the ES6 Module Transpiler
An ES6 transpiler is a tool that selects an ES6 module and converts it to an ES5 compatible code in AMD or CommonJS style. One can install it through npm by using the command below:
npm install -g es6-module-transpiler
The command below instructs the transpiler to convert the modules into CommonJS version and put them in the outer directory:
compile-modules convert -I scripts -o out app.js utility.js --format commonjs
When the conversion is complete, the converted modules will be similar to out/app.js
It is possible to convert much of the code in this article into ES5. All React components herein contain a render method which determines the HTML output of the React component. JSX (JavaScript eXtension) enables developers to write JavaScript which resembles HTML. In past paradigms, it has been bad practice to mix markup and JavaScript in the same place. However, merging the view with the behavior makes the component easy to understand. Consider a React component that renders an h1 HTML tag. JSX enables one to declare this component in a way that is similar to HTML:
class HelloWorld extends React.Component {
  render() {
    return (
      <h1 className='large'>Hello World</h1>
    );
  }
}
The render() function appearing in HelloWorld seems to be returning HTML. However, it is JSX that translates to common JavaScript at execution. Below is how the component looks after translation:
class HelloWorld extends React.Component {
  render() {
    return (
      React.createElement(
        'h1',
        {className: 'large'},
        'Hello World'
      )
    );
  }
}
JSX resembles HTML, but it is only a shorthand for writing React.createElement() calls. When components render, they produce a virtual representation (React elements) of the HTML elements. React then determines the alterations to carry out on the DOM depending on that React element representation. In the HelloWorld component, React writes HTML to the DOM similar to the command below:
<h1 class='large'>Hello World</h1>
The class extends syntax used in the initial React element is an ES6 pattern. It enables developers to create objects in an object-oriented style. The class pattern in ES6 translates to roughly the following:
var HelloWorld = function() {};
HelloWorld.prototype = Object.create(React.Component.prototype);
HelloWorld.prototype.render = function() {};
Because React is JavaScript, words reserved in JavaScript cannot be used as identifiers. They include ‘for’ and ‘class’.
That is why React provides the className attribute instead. It appears on HelloWorld to specify the 'large' class on the h1 tag. To write plain JavaScript rather than depend on the JSX compiler, one can call React.createElement() directly without worrying about the abstraction layer, but JSX makes reading complex components easier. Take a look at the JSX below:
<div>
  <h1>Welcome back Ari</h1>
  <img src="profile.jpg" alt="Profile photo" />
</div>
The JavaScript delivered to the browser is similar to this:
React.createElement("div", null,
  React.createElement("h1", null, "Welcome back Ari"),
  React.createElement("img", {src: "profile.jpg", alt: "Profile photo"})
)
The JSX pattern is the best option when it comes to defining fixed HTML elements.
What are some of the things about JSX beginners should know?
JSX is a view library
React is not an MVC (Model View Controller) framework. It is a library that renders views. Individuals who are used to MVCs have to adjust to the fact that React forms the V part of the programming equation. This means they will have to define their Model and Controller using a separate framework to achieve appealing React code.
Ensure components are small
A good developer must understand that small modules/classes are easy to test, maintain, and understand. This also applies to React components. Many beginners underrate how small their components should be. While the correct size depends on various factors, it is advisable to make the components quite small.
Do not use state when writing components
Ensure to write stateless components. This is because:
State makes testing of components hard
Testing components as functions of input props is less complex. Before testing a stateful component’s behavior, one first needs to get the component into the relevant state. Additionally, one will need to identify all the different combinations of state which the component is likely to mutate, and of props which a component has little control over, and then decide which ones to test and how to carry out the procedure.
State makes reasoning about components difficult
Reading codes from stateful components requires developers to track every activity. Some of the questions that developers should bear in mind include: Is the state initialized yet? Would anything happen if I changed the state here? Does this state have a race condition? Understanding stateful components, therefore, is an arduous task.
State makes sharing of information on other parts of the app difficult
When a component owns state, passing it down the component hierarchy is easy. However, passing it in any other direction is difficult. It is vital to have components own only a specific part of the state. Developers, however, should exercise caution before adding state to a component.
Finally
Understanding JSX helps programmers and developers to execute beautiful React codes. It is important to carry out extensive research to understand the different React concepts. This will help you understand the kind of problems it can solve.
We appreciate your comments. Please leave us your feedback below, and remember to share this article on your social media platforms.
I would suggest adding flags to the compiler. C# 2.0's warnings would be the default level. Add 3 more levels: no warnings, critical warnings (a subset of 2.0's warnings), the default (C# 2.0 standard), then add one (or more) verbose levels as required.
I'm not familiar with the C# toolchain here, so I can't gauge the impact on people's toolsets - I would assume that if you had command-line access to the compiler, you could pass in a -w[0-3] as a compiler argument. IDEs would have to be modified to add a control to set a warning level.
You can also use this strategy and simply re-prioritize the warnings the compiler generates (based on how critical they are) so that the *amount* of warnings emitted approximates C# 2.0's verbosity.
#4. Another case where initialize but never accessed is read is when using the record/replay semantics of mocking frameworks. In order to record a getter, you need to do something like "var foo = Bar.Quux;" I think that this is not serious enough to generate a compiler warning and that static analysis is where we want this sort of automated check.
I'd say 4) is the only correct solution, but with a caveat. Treating unused fields as a 'warning' just seems incorrect to me. It's a bit of information about how to improve your code, but it's only a warning if the field is unintentional, and chances are it's not.
More likely you just haven't gotten around to using it YET. (This happens to me all the time.) Possibly the code has drifted to the point where it's no longer used, but C# isn't old enough for that to be the most common case yet.
The most common case is that the code is in flux, and the fields are there for future use. The warnings become a thing that you have to code around, requiring extra metadata (or worse - unused code!) in order to suppress a mal-behavior of the compiler. The C++ compiler has a bunch of these now. (Warning, I autogenerated a GUID for your interface - Yeah I know. I MEANT YOU TO !!!)
So what you REALLY need to do is option 6. Take everything that you treat as a 'warning' now that is really LINT and change it to a level 5 notification that you have to ENABLE to see, and/or that 'treat warnings as errors' doesn't treat as an error.
Incidentally, more and more I use C# to generate type information that I then run through tlbexp to get .tlb files that I then include in C++. I find this a far superior way to define interfaces that I want to implement in C++, and MIGHT want to use in managed code in the future. Much cleaner than MIDL or Attributed C++. Though I'm unhappy that MS seems to have ceased improving MIDL; it desperately needs something like MarshalAs().
tj
We already have 5 in FxCop. It's called the SuppressMessageAttribute - we use this to indicate that we shouldn't warn on a particular construct.
I personally believe (and not because of my vested interest in FxCop ;)) that C# should remain pure, and avoid having intimate knowledge about particular Framework classes and leave that to Code Analysis tools.
Unfortunately, there are particular source constructs that we simply cannot check. For example, unused constants are particularly hard to detect at the IL level, whereas, at the AST level the compiler can detect this with 100% certainty (excluding reflection).
I think #2 is good enough. I certainly don't want to have less warnings -- running FxCop is always an extra step, and it's better to have those warnings right away. Creating a new attribute for what can be done with a pragma and a comment seems like overkill, though.
I'm voting for #6, if that's not asking too much ;-)
Really, #5 would be a good option, and it would be up to anybody who doesn't like it to just not use it. If #5 is not possible, you'd probably be right to go with #2 for now.
Actually, any system that makes supressing certain warnings easier by indicating intentions rather than specifying warning numbers would make the code both less ugly (since they usually destroy indentation, those #pragmas are quite evil) and arguably more portable between compilers. Make sure those attributes are inheritable, because I'd like to be able to combine them with my own attributes. (For instance, if I write code that initializes some field, I'm quite likely to provide an attribute too. If I can inherit this from your attribute, the use won't have to specify two of them.)
Compiling with "treat warnings as errors" is everybodys own decision, it should not be used as a reason to cripple the warning system of C#.
Talking about attributes, are there any plans to get rid of the limitations for attributes in c#/the CLR?
- no order
- no generics
- no lambdas
- no compound attributes
I'm aware you're not responsible for the CLR's restrictions, but C# could in fact overcome all of these (except the order, but that could be simulated by compound attributes) by deriving attribute classes per instance. I'll include a sample transformation:
class C {
[Invariant<string> (s => Regex.IsMatch (s, "[a-zA-Z]+"))]
public string Name { get; set; }
}
would become:
[InvariantAttribute_someGuid]
[CompilerGenerated]
private class InvariantAttribute_someGuid: InvariantAttribute<string> {
public InvariantAttribute_someGuid () : base (s => Regex.IsMatch (s, "[a-zA-Z]+")) {}
}
With that simple transformation, anything would become possible with attributes!
(Actually, in C# it's not possible to derive a generic type from Attribute, not even an abstract generic type. However, without trying I'd guess that abstract generic attributes are fine with the CLR, since the concrete attribute type has to close the base type anyway. Is that correct?)
Definitely go with #5. I'd much rather have the compiler make me stop and think about each instance of this problem and decide whether it truly is a problem or not. An attribute like those used to tell static code analysis to shut up about individual items (For example; Yes, I know "RegEx" is an abbreviation.) would be perfect.
Unused/unread vars are, IMHO, quite common at early stages of the development. Warnings are too "scary" in these cases, as they could be place holders or something else, and still they do not affect generation of a correct program.
For pure CSC.EXE, I'm for number #4. Compiler must do compiling and anything that does not affect correct program generation should be left to other tools. It *might* result also faster compiling. IMHO, checking for unread fits here. Maybe a single warning "WARNING: Static analysis suggested. Unread/unused... things found here" could help.
My #6 for CSC+IDE combination:
I'd like a combination of Compiler/IDE that marks these cases as TODO. A compiler/IDE injected TODO attribute in these cases would keep me on the track what I need to check.
After all done, I'd could turn the TODO flags off or remove the vars.
Based on your description of added errors (unused public field on an internal class) I'm unable to reproduce a situation without a warning (at warning level 4, which I assume is what you meant).
e.g.
internal class MyClass
{
public int field;
//...
MyClass myClass = new MyClass();
...always generates CS0649 on "field".
Apart from that, I vote for option 2. If the field is truly not being used, anyone complaining about a new warning about a clear design anomaly either doesn't want to be told they did something wrong or doesn't want to have to bother fixing it... Both are no reason to avoid adding the warning. I don't consider expecting a public field in an internal class to be accessed via Reflection a valid design reason to avoid this warning.
+1 for roberthahn's proposal. A new warning level (call it 5) where these types of warnings are reported, while leaving the default at 4, allows people to enable these types of warnings while preserving the previous behavior for existing code bases.
I prefer no warnings; such things are the tools' responsibility - for example, ReSharper does it so well - so it might be a behaviour for Orcas
but I join Stefan Wenig about attributes, a lot of good tricks are disabled because of these limitations
like
[AttributeUsage(AttributeTargets.Property)]
[LazyLoadedPropert<T>(delegate(T target, PropertyInfo property){
//code to fetch the lazy loaded property
})]
1: Hate it... give me the information
2: Rude, but #3 suggestion
3: Nope, too vague and hard to describe... probably too hard to get right all the time in the compiler.
4: REALLY hate it, FxCop is too "optional"... we've got enough code out there.
5: Okay, #2 suggestion
6: Add another keyword to C# that indicates that the field is reflectable instead of using an attribute. If done correctly it could be used to allow "safe" reflection of private fields in lower-trust environments. As for what the keyword should be: friend or visible sound nice
public class Foo
visible int zot;
Peter Richie: I apologize, I did not exactly characterize the problem in my original description. The set of "missing" warnings is hard to characterize succinctly.
You can reproduce the problem by having an internal class with a public field which is written but not read. That gives no warning, but it could.
An internal class with a public field which is not written DOES give a warning in C# 2.0.
I vote for #4.
Situations that might or might not be mildly bad, but where the compiler does not have enough information to tell, are exactly what FxCop and tools like lint are for.
It would be nice if FX-Cop was bundled into the next release of Visual Studio, with a simple interface to "Build at FX-Cop level". Of course FX-Cop should have a nice way to be told in the code, "I meant to do that, move along"
FxCop is far harder to use than the standard compiler in the IDE. I've definitely used these compiler messages usefully before, so #4 (remove them) seems unfortunate. Given that people use treat warnings as errors, and probably use it in automated build scripts which you should probably avoid breaking, emitting false positives is not nice. #5 (use attributes) seems crufty, but if you consider the possible uses of write-only or uninitialized variables (usually reflection), then you're probably already dealing with an audience unfazed by an extra attribute.
I'd say a combination of #4 and #5 is most attractive: please don't show these warnings as "real" warnings which might break warnings-are-errors people and distract others, but don't remove them; just place them in a lower warning class which by default is not shown, and can't (easily) be configured to be treated as an error. Then potentially add the attributes nevertheless - an attribute signifying that the odd design is intentional. These attributes have value anyway, and you'd only need them to hide errors in a really explicit mode which devs need to proactively activate.
The first thing I thought of before even seeing your list of ideas was an attribute based approach.
This is a pertinent issue to my organization. We use a custom OR-mapping framework that uses reflection as described in your first scenario. We get many instances of the warning for unused/uninitialized fields and unfortunately sometimes our developers end up with the bad habit of ignoring compiler warnings - ANY warnings. It would be excellent if something like Option #5 could be implemented.
I would vote for #5 iff (if and only if) this attribute can solve some other compiler problems, like inlining or type inference optimization - not just for a warning. An attribute like [External] to flag that the field is used in external (or unsafe) code could be used. The problem is System.Reflection(.Emit) that can really mess things up, but I don't think that everyone else should pay the price just because I like to play around with Reflection.Emit. Using attributes/keywords to make optimizations possible (or rather switch off optimizations) is already incorporated with the volatile keyword, so using attributes/keywords to solve Reflection.Emit or unsafe code problems wouldn't really change the feeling of the language - but it would have the huge downside that it would break a lot of C# 2.0 code... .
On a similar note: I get warnings about unused fields in private nested structs if I only reach these fields indirectly in unsafe code using pointer-manipulations and pointer-casts to get to them (which is possible and "safe" if I have a fixed layout on the struct). E.g.:
Struct1* s1Ptr = &s;
Struct2* s2Ptr = (Struct2*)s1Ptr;
In this case I would get a warning about the fields in Struct2 not being used, even though they might be used. What I am trying to say is that #3 is an OK solution, since you are giving warnings in too many cases already anyway, and I think the developers realize that you can't think of all the strange cases. If you don't like to add even more possibly incorrect warnings I would vote for roberthahn's solution (i.e. one more warning level).
I vote #2. There was never a guarantee that the GC wouldn't collect unused fields after their last use, was there? (It does, after all, reserve the right -- and exercises that right! -- to collect an object even while one of its own methods is running.) If it's that important that the reference stay alive beyond its last use, the class can do a GC.KeepAlive() on it in the destructor.
I vote #5. We use reflection a lot, and it violates standard information hiding, so it is vitally important developers reading the code can identify these fields. An attribute would solve both problems, and conveniently issue a warning when the attribute is forgotten.
Basically what's needed in my opinion is greater flexibility and control in the hands of the developer, while at the same time keeping the tedium to a minimum.
But another way is to look at the settings for warning levels as being slightly too crude. Assuming all warnings have a severity level they also have a separate type of impact on your code. CS0649 etc isn't necessarily a warning - but it can only be determined on a case by case basis.
#5 Works very well in that it would play nicely with documentation - signalling that this property will be modified in ways beyond statical analysis etc. It's also the only proposed solution with a case by case flexibility.
I definitely like #5. It combines the requirement that someone properly plan / explicitly define those properties... something that should be minor if they are going to be using something as hardcore (and often not warranted in use) as refelection, takes no runtime clock cycles, and still properly punishes the lazy [omitted] on my dev team that like to copy / paste entire classes and only use part of them.
#5!
IIRC, in C++ there also was something like UnusedParameter which was there to suppress the warning.
How to Create a .NET Compact Framework Application by Using Visual Studio .NET 2003 (Windows CE 5.0)
Mike Hall
Microsoft
January 2005
Applies to:
Microsoft® Visual Studio® .NET 2003
Microsoft .NET Compact Framework
Microsoft Windows® CE
Summary: This article provides step-by-step instructions for working with Visual Studio .NET 2003 to build and debug a .NET Compact Framework application in C#. You will build a C# forms-based application that will be deployed to a Windows CE–based emulator. This application will handle mouse down, move, and up events and will capture user scribble data. The scribble data can then be viewed on a desktop .NET Framework application also written in C#. This article is divided into three parts, each containing a number of exercises that you can complete. It will take approximately 60 minutes to complete all of the steps. (18 printed pages)
Download Windows CE 5.0 Embedded Development Labs.msi from the Microsoft Download Center.
Contents
Introduction
Part 1: Creating the Initial Application
Part 2: Handling Mouse Down, Move, and Up Events
Part 3: Debugging the C# Scribble Application
Summary
Introduction
In this article, you will write an application to capture mouse movement on a Microsoft® Windows® CE–based device, which is similar to capturing a signature on a portable device.
To complete the exercises in this article, ensure that your workstation has the following software installed:
- Microsoft Windows XP Professional
- Microsoft Visual Studio® .NET 2003
For this article, you will use the Microsoft Windows CE .NET emulator, which emulates an x86-based device. The steps you will follow in this article are identical to those needed to build an application for Windows CE–based devices.
This exercise has a download including the CodeClip application, which you can use to help you work through this exercise.
For more information about the subjects in this article, visit the Microsoft Windows Embedded Developer Center or the Microsoft Visual Studio Developer Center.
Part 1: Creating the Initial Application
In this part of the exercise, you will perform the following procedures:
- Create the base application
- Set options for the project
- Change the text shown on the title bar of the form
- Rename the menu command for exiting the application
- Build and test the application
You will see just how quickly an application can be built to handle user input and communicate over a corporate network or the Internet.
To create the base application
- Open Visual Studio .NET 2003 by using the desktop shortcut.
- On the File menu, click New Project.
- Under Project Types, select Visual C# Projects.
- Under Templates, select Smart Device Application.
- In the Name box, type Scribble
The following illustration shows the preceding selections in the New Project dialog box.
- Click OK.
You will now set the options for your project. There are a number of project types that you can create.
You can also target Pocket PC or Windows CE–based devices. You may be wondering why there are two wizard options (and if you have the Smartphone SDK, there are three options). After all, the applications are going to be generated in Microsoft intermediate language (MSIL) format, which is processor and operating system independent. The reason is that Pocket PC devices and Windows CE–based devices typically have different screen layouts. Pocket PC devices have their menus at the bottom of the screen, whereas Windows CE–based devices typically have their menus at the top of the screen. The wizard will create a framework application that includes the correct form size and menu layout.
To set options for the project
- In the Smart Device Application Wizard, select Windows CE as the platform, and then select Windows Application as the project type, as shown in the following illustration.
- Click OK.
Figure 1 shows how the user interface of Visual Studio .NET looks after the core application has been generated. Notice that the left side of the window shows various controls that you can add to your form. The center of the window shows your application's form and is also the area that you can use to edit or add source code. The right side of the window shows Solution Explorer (the workspace) and an area that you can use to configure parts of the application.
Figure 1. Core application in Visual Studio .NET
At this point, you can change the default text shown on the title bar of the form.
To change the text shown on the title bar
- Select the Text property for Form1, as shown in the following illustration.
- Change the text from Form1 to Scribble.
Notice that the form title has changed. Now, you can add a menu to the application, which will make it easy for you to exit the application.
- From the Visual Studio Toolbox, click and drag MainMenu onto the form.
You will notice that the menu is added to the form, as shown in the following illustration.
Adding menu commands is extremely simple.
- On the form's menu, click Type Here.
- Type File, and then press ENTER.
- Type Send Scribble, and then press ENTER.
- Type Exit, and then press ENTER.
- Select the Exit menu command.
As shown in the following illustration, the properties change to reflect the menu command that you have selected.
By default, the File | Exit menu command has a name of menuItem2. If you were to add code to this menu command, the click handler would be shown as menuItem2_Click. Renaming this menu command will better reflect its purpose.
To rename the menu command for exiting the application
- Use the Properties pane to change menuItem2 to FileExit.
- Double-click the File | Exit menu command on your form.
This step opens the code editing pane, as shown in the following illustration.
- In the FileExit_Click handler, type this.Close( ); as shown in the following illustration.
Notice how Microsoft IntelliSense® prompts you to show you what you can do with the "this" item.
You're now ready to build and test your application. (Of course, at this stage, the application isn't finished yet. You still need to add some functionality to the application to add support for scribbling in the client area.)
To build and test the application
- Click Build | Build Scribble.
The application should build without warnings or errors. You're now ready to deploy the application.
- Click Debug | Start without Debugging. (You will debug later in the article.)
- When you are prompted with a list of devices that you can deploy to, select Windows CE .NET Emulator (Default) for the purposes of this exercise, as shown in the following illustration.
- Click Deploy.
This step starts the Windows CE .NET emulator and deploys the Microsoft .NET Compact Framework. After the .NET Compact Framework is deployed, the application starts as shown in the following illustration.
Now that the application is running, you can start adding code to handle the mouse down, mouse move, and mouse up events.
- In the scribble application, click File | Exit.
Part 2: Handling Mouse Down, Move, and Up Events
Your scribble application will capture mouse movement from the user, but only when the mouse button is down. Therefore, you need a Boolean to show whether the mouse is down (capturing), or up (not capturing). You will check the Boolean in the mouse move handler. If the Boolean is set, you will add the new X,Y point to an array of points (which you also need to add to the application).
You're going to keep track of all the points drawn in the client area of the application. You will store this list of points in an ArrayList.
In this part of the exercise, you will perform the following procedures:
- Create a class to store points
- Add code to the class
- Create an array
- Switch to design view
- Select the MouseDown event
- Add code to the MouseDown event handler
- Add code to the MouseUp event handler
- Build the application
To create a class to store points
- On the Project menu, click Add Class.
- In the Name box, type csPoint.cs as shown in the following illustration.
- Click Open.
This step opens the source for the new class. Notice that the class is part of the same application namespace, "Scribble".
- Delete the class definition for csPoint.
This step leaves the following code.
You now need to add some code to the class. Instead of typing the code by hand, you can paste the code from a tool called CodeClip. CodeClip is a helper application that makes it simple to copy preselected code fragments to the Clipboard. CodeClip appears as a transparent blue banner on the top of the window.
To add code to the class
- Start the CodeClip tool.
- Open the Compact Framework Lab.
- Double-click the csPoint item.
This step copies the csPoint code to the clipboard.
- Paste the code into your csPoint class file.
Your class file will now look like the following. This is a very simple class that stores the X and Y positions for a mouse down event or a mouse move event.
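The exact CodeClip listing is not reproduced in this article; based on that description, a minimal sketch of the class could look like the following (the member names are illustrative):
public class csPoint
{
    public int X;   // X position of a mouse down or mouse move event
    public int Y;   // Y position of the same event

    public csPoint(int x, int y)
    {
        X = x;
        Y = y;
    }
}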
The application will need to store some information about whether the mouse is down, and the previous mouse location. You will also need an array to hold the points, which will be global variables in your application.
To create an array
- Click the Form1.cs tab, as shown in the following illustration, to get back to the main application code.
- Locate the FileExit function.
- Use CodeClip to copy the Globals code to the clipboard.
- Paste the Globals code into your application just before the FileExit handler.
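The Globals text from CodeClip is likewise not shown here; a sketch consistent with the description above would be the following (only bMouseDown is named later in the article, the other identifiers are illustrative):
// Requires: using System.Collections;
private bool bMouseDown = false;             // true while the stylus/mouse button is held down
private int lastX = 0;                       // previous X position
private int lastY = 0;                       // previous Y position
private ArrayList points = new ArrayList();  // all captured csPoint values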
You have a number of options for writing applications for Windows CE: Microsoft Win32®, Microsoft Foundation Classes (MFC), or managed code. To add mouse down, up, and move event handlers to a Win32 application, you would need to know which messages are sent to your application from Windows; these are WM_LBUTTONDOWN, WM_MOUSEMOVE, and WM_LBUTTONUP. Microsoft eMbedded Visual C++® and MFC make adding handlers somewhat easier through the wizard, which lists each of the Windows messages that the current class can handle.
The following procedure shows how easy it is to add mouse event handlers to the .NET Compact Framework application.
On the project tabs, you will notice Form1.cs shown in bold. This is your current open document. You are looking at the code view for your form, and you need to switch to design view to add the event handlers.
To switch to design view
- In the Project pane, click the Form1.cs [Design] tab, as shown in the following illustration.
In the Properties pane in the lower right of Visual Studio, you can change a number of attributes for this form, such as the name, color, and font size. Figure 2 shows the Properties pane.
Next to the Properties button, you will see the Events button. This button is used to select events for the currently selected form or control.
Figure 2. Properties pane
To select the MouseDown event
- In the Properties pane, click the lightning bolt.
- Scroll through the Properties pane until you locate the mouse events, as shown in the following illustration.
- Double-click the MouseDown event.
This step adds a MouseDown handler to the application, and also opens the code editing pane.
You now need to add some code to the MouseDown handler. Again, the most efficient method is to use CodeClip.
To add code to the MouseDown handler
- Start the CodeClip tool.
- Select the MouseDown item, as shown in the following illustration.
- Click the Copy button.
- Switch back to Visual Studio .NET.
- Click the Edit | Paste menu command to paste the text.
This code sets the bMouseDown Boolean to true, stores the current X and Y positions, and adds a point to the array.
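The pasted handler is roughly equivalent to this sketch (it assumes the illustrative globals shown earlier; the exact CodeClip text may differ):
private void Form1_MouseDown(object sender, MouseEventArgs e)
{
    bMouseDown = true;                  // start capturing
    lastX = e.X;                        // remember the current position
    lastY = e.Y;
    points.Add(new csPoint(e.X, e.Y));  // record the starting point
}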
- In the Project pane, click the Form1.cs [Design] tab.
- Locate and double-click the MouseMove event.
- Click the CodeClip banner to display the CodeClip dialog box.
- Select the MouseMove item, and then click the Copy button.
- Switch back to Visual Studio .NET.
- Click the Edit | Paste menu command to paste the text.
The following code for the MouseMove event is slightly more complicated than the code for the MouseDown event. You check to see whether the mouse is down (note that you still get a MouseMove notification even if the mouse buttons are not down). If the mouse button is down, you create a graphics object (similar to a DeviceContext in Win32 programming). You then draw a line from the previous point to the new point. In this case, you are drawing the line in red; this color is easy to change because in Win32, all colors are referenced by their RGB (red, green, and blue) value.
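The original listing is not reproduced here; a sketch consistent with that description would be:
private void Form1_MouseMove(object sender, MouseEventArgs e)
{
    if (bMouseDown)   // only draw while the button is down
    {
        Graphics g = this.CreateGraphics();
        // Draw from the previous point to the new point, in red.
        g.DrawLine(new Pen(Color.Red), lastX, lastY, e.X, e.Y);
        g.Dispose();
        lastX = e.X;
        lastY = e.Y;
        points.Add(new csPoint(e.X, e.Y));
    }
}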
To add code to the MouseUp handler
- In the Project pane, click the Form1.cs [Design] tab.
- Locate and double-click the MouseUp event.
- Click the CodeClip banner to display the CodeClip dialog box.
- Select the MouseUp item, and then click the Copy button.
- Switch back to Visual Studio .NET.
- Click the Edit | Paste menu command to paste the text.
In the MouseUp handler, you simply set the bMouseDown Boolean to false to show that the mouse is now up, and add a point with values of -1, -1 to show that you've ended a line segment, as shown in the following code.
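The original listing is not reproduced in this text; it is roughly equivalent to:
private void Form1_MouseUp(object sender, MouseEventArgs e)
{
    bMouseDown = false;               // stop capturing
    points.Add(new csPoint(-1, -1));  // -1,-1 marks the end of a line segment
}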
Now it's time to build the application.
To build the application
- Click Build | Build Scribble.
The application should build without warnings or errors. You're now ready to deploy the application.
- Click Debug | Start without Debugging. (You will debug later in the article.)
- When you are prompted with a list of devices you can deploy to, select Windows CE .NET Emulator (Default) for the purposes of this exercise, as shown in the following illustration.
- Click Deploy to deploy and start the application.
- Move the mouse and show that no ink is displayed in the application. The application will display ink only when the mouse button is down.
- Click and hold the left mouse button, move the mouse, and then release the mouse button.
The line that you made in the application should look similar to the lines in the following illustration.
Congratulations! You have just written a scribble application in C#.
- In the scribble application, click File | Exit.
Part 3: Debugging the C# Scribble Application
So far, you've built and run the application. It can be useful to debug any new application. In the case of writing an application for Windows CE (whether through eMbedded Visual C++ or Visual Studio .NET 2003), the operating system and application are running on remote devices — devices connected to the development computer. You therefore need to download the application to the device before debugging. As far as the user experience is concerned, debugging a Windows CE application is very similar to debugging a desktop application.
In this part of the exercise, you will set a breakpoint on the mouse down handler and will trace the flow of the application by stepping through it.
To set a breakpoint on the mouse down handler and step through the application
- Click the Form1.cs tab, as shown in the following illustration, to get back to the main application code.
- Locate the Form1_MouseDown function.
- Set a breakpoint on the bMouseDown = true; line by clicking the line and pressing F9.
You will see a red breakpoint symbol displayed in the margin next to the bMouseDown = true; line, as shown in the following illustration.
You're now ready to run the application.
- Click Debug | Start or press F5.
This step builds the application (if needed) and downloads the application to the emulator.
- After the application has started in the emulator, click in the client area of the application.
The application will break on the bMouseDown = true; line, as shown in the following illustration. You can now step through the lines of code.
There are a number of items that you can look at while the application is running. The lower-left pane of the Visual Studio application may show search results or another item. You can change this pane to show the current local variables.
- Expand the this item in the list of local variables, as shown in the following illustration.
You will notice that bMouseDown is currently set to false, and that the X and Y values are currently set to zero.
- Step through the application by pressing F11.
You will notice that the X, Y, and bMouseDown values change as you step through the code.
When you reach the line of code that adds a new ArrayList item, you also need to "new up" an instance of the csPoint class. As you step through the code, you will step into the csPoint class.
- Press F5 to run the application.
- Click File | Exit.
Summary
Following is a summary of the exercises provided in this article:
- Created a C# smart device application for Windows CE
- Added support for mouse move, mouse down, and mouse up events on the form.
- Added drawing support, including the creation of a custom pen
- Debugged the C# Scribble application
Directory Listing
bring mac change over
Fixing naming
Updates for homebrew - doco to follow
Fix typo in homebrew11
Changelog
optimistic?
Fix
Prep for 5.2
Adding Adam
Added Ander
Added slightly more doco on lazy
Added a footnote to the start of the linearPDEs chapter of the userguide that describes the linear PDE using nabla notation.
Fixed some minor typos
fixed filename.
ncLong is deprecated
Remove MathJax replacement (this should be handled from options file now).
Can now specify mathjax path in options file.
Deal with overfull boxes Scaled some of the compound figures to 95%
Correct case issue in cite to visit. Split very long lines in .bib file
Latex fixes
Fixing overfull hboxes
Fix some BiBTex issues
New version with gradient test corrected
New version with gradient test corrected
Temporary last version of run_specialOnSpeckly.py until I correct the new version
Some bugs fixed regarded to r6670
Fixed some index swap bugs belonging to gradient calculus on Speckley.
A few testsadded on run_specialOnSpeckley covering gradient error on Speckley
Raising exception for worng q type on linearPDE call
Type error corrected
More tests checking diameter calculous on util.py
Fix some debugging left
actually fix the filename
typographic bug
Bug fixed on tensor_transposed_mult method. It failed when second element rank was 2.
Stop using hardcoded filenames in example scripts
Use py3 version of sphinx Fixes #417
Removing some from X import *
some docstring fixes s
symlink
prepare for defaulting badger to py3
Added test for IncompressibleIsotropicFlowCartesianOnFinley with higher NE
Increased max_correction_step=50->100 parameter on update method under IncompressibleIsotropicFlowCartesian class so that tolerance level is reached on run_models.py on finley.
Added tests for Diameter type functions. Corrected spelling error in call.
Make sid include netcdf4 support by default
Added netcdfv4 deps to install guide
makeTagMap and skeleton test The test is there and is hooked into Finley, Dudley and Ripley. The test does not do enough.
Make everyone sad by touching all the files Copyright dates update
Fix for typeError in ripley::setDecompositionPolicy.
minimizers: bug in tolerance setting fixed.
remove dependency on esysUtils
Sphinxing continues
It continues
Misc fixes Including converting some :cvar : to :note: (because sphinx wants to cross reference them)
Don't generate doc for escriptcore. It was confusing sphinx and anything in there is either exposed elsewhere or is not necessary for public use
Switching imports over
More import fixes
cleanup index page So we don't list all the sub packages on the main index
Fix some problems which my test install didn't find
more minor tweaks
More cleanup This work consists of fixing formatting errors in doc strings and converting from ... imports into import ... as to reduce the namespace pollution which is confusing sphinx
Fix some sphinx errors
Switch doc generating to python3-sphinx. This will probably create some breakages. Upside is that doco is being generated properly again (if you use py3).
Update badger's idea of which distro we are on
Add a package docstring to escript
more spelling
Spelling mistake in man page which lintian found
see if our builds succeed with intel 2018.1...
check for sympy in the specified python rather than the one scons is using
fixed typo.
should fix the logic for #416...
g++ 7 fixes for Wimplicit-fallthrough. Yes these are only comment changes but they are parsed, see
Added missing break - caught by g++ 7.2
initial options for capybara
Fixing idxtype long builds of finley.
Building up options files for capybara.
Modify netcdf dependencies checks to look for the files actually being included. (netcdfcpp.h or ncVar.h). It turns out that behind the scenes that they both depend on netcdf.h, but it's a good idea in theory.
Report exceptions from saveCSV if the cause is locally known.
gather function fixed.
Unicode causing a problem
Work on NetCDF4 inversion. This does not work completely yet.
added missing include.
Adding function to test what type of netcdf file we are looking at
Fix final warning in #402 Initialise an uninitialised fd_set
Final tests for #269 Also, add explanation of how hasInf and hasNaN are not mutually exclusive. To doc-strings
Sigh
Merging changes from release branch
Removing to make way for more complete version from another branch.
More work on the _Inf functions
C++ functions Inf versions of the Nan replacement functions No python interface or unit tests yet
Missing file
Some notes on trilinos
Printing tagged Data now reports if tag is unused. That is, a tag is part of the Data but not in the list of used tags for that function space. (This will slow down this op a bit).
Only summarise data if it is expanded
Change list for 5.2. Update dev list. Tell people to look at install guide for discussion of Trilinos requirements but haven't actually added that bit yet.
Describe mpi_no_host option
Fix lazy related test failures.
Fix broken symlinks in debian build Add a dependency to doc package which has the targets in it.
regenerated gradient tests for both real and complex data and moved them to a separate file.
typos fixed.
Regenerated interpolation tests for both real and complex data. Moved them into a separate file as with the integrals before.
fixed typo.
Implement #412 - complex support for paso's coupler. Removed guards from domains accordingly as complex interpolations without trilinos are supported now.
Implemented complex dofToNodes in ripley which should fix the current segfaults. NodesToDof still missing. Also, builds without Trilinos will fail some complex tests as these interpolations require trilinos.
Regenerated tests for integration for real & complex Data. Moved them into a separate file which is included from the spatial functions tests so will be picked up without changes by the domains.
Complex interpolation in speckley.
complex integrals for speckley. Fixes #398
Finished complex interpolation cases in dudley.
complex integration and interpolation in dudley.
Don't call openmp function if you aren't in openmp
Add mpi_no_host option Add option to prevent --host option being added to launcher. Not sure how far this needs to be spread..
new test for tensor_transpose_mult
Finish phase() Added phase() function to escript.util. Added unit test for phase() Fixes #385
Fix issue #409 minGlobalDataPoint will now throw if there are no values to process.
Add compare tool. Trying to get trilinos to build? Want to work out what is the difference between your configuration file and one that "works"? Compile up tools.trconfcmp.cc and give it the too files. It will tell you which variables are exclusive to one or the other as well as listing variables with different types and values.
fixed minor typo.
Add a temporary work-around for #401. Internally, tensor_transposed_mult(a,b) directly calls tensor_mult(a, tranpose(b)). This is not efficient, but it should give the correct answers until we can do a more efficient fix. I've needed to modify some of the unit tests to account for the changed behaviour.
Fix some unused vars
Options for ferret
commenting some imports
minor tweaks
Fixed various. Mixed complex and reals are now tested properly in binary ops. Overloaded ops are now tested.
Fix a missing complex->data conversion
Dont assume a tensor product has a real result
Add test for overloaded + Also add param to generator that will prevent testing: numpy op Data Because numpy applies its own interpretation
Fixing the last commit. (I did test it)
Add two tests
Adding eigenvalues_and_eigenvector test. This one was a bit tricky and so it is tested with fewer steps. Added option to not use update steps.
Better version of earlier fix. This version does not allow lazy non-expanded expressions of any depth other than 0. Fixed missing POW operator collapsing. Fixed height and child counts when collapsing (needed to make fields mutable)
Remove unused test category
Fix nasty concurrency bug. Multiple threads were trying to collapse() subtrees. Prevent POS from being called by c++ unit tests for lazy (python handles this differently).
Fix incorrect collapse behaviour When nodes were collapsed, they weren't updating m_op and m_opgroup properly.
Fix whereZero bugs Add new opgroup G_UNARY_PR (always real) and put the whereZero whereNonZero ops in it. Reorder some declarations to stop the compiler complaining.
Add missing deepCopy.
More option files
Add Xenial options
Noting dependency on legacy-netcdf for debian styles
Adding yakkety options
Remove AUTOLAZY fiddles
Correct error messages about anti-hermitian
Fix error in m_iscompl determination
Stray print removal
"Reordering statements involving reference params" considered dangerous.
I am currently going through a book. It has C programs that call Assembly routines. I would like to type in some of the programs to see them work.
The C programs are in .c files and the Assembly routines are in .asm files.
How do I create a project in Visual Studio that will compile and assemble and link these files into a program that I can run?
I have access to Visual Studio versions: 6, 2005, 2010
Sample C file:
#include <stdio.h>
int first=1, second=2, third=0; // these are the integers with
// which we want to work.
void main(void)
{
printf("\nBefore adding third =%d",third);
Add_Ext(); // this call the assembly
// program that will add
// externals.
printf("\nAfter adding third = %d",third);
} // end main
Sample asm file:
.MODEL MEDIUM ; This tells the procedure to use the
; MEDIUM memory model.
EXTRN first:WORD, second:WORD, third:WORD
.CODE ; This is the beginning of the code segment.
_Add_Ext PROC FAR ; The procedure is type FAR.
mov AX, first ; This moves the first number into the
; accumulator.
add AX, second ; This adds the second number to the
; accumulator.
mov third, AX ; This stores the result back into
; third.
ret ; Return to the C caller.
_Add_Ext ENDP ; The procedure ends here.
END ; This is the end of the code segment.
On 23/04/12 10:20, Samuel Thibault wrote:
> [Hurd has] an incomplete procfs already. It doesn't have /proc/mounts,
> because it's not a trivial thing to implement: since mounts are
> distributed, there is no central place where filesystems are to be
> recorded.
For what it's worth, the same is true on Linux, because each process can potentially be in its own mount namespace. Linux solves this (while remaining compatible with things that look in /proc/mounts) by having /proc/mounts always be a symlink to /proc/self/mounts, where /proc/self is polymorphic: if you read it from process 123, it appears to be a symlink to /proc/123. Files matching /proc/[0-9]+/mounts list the mounts that are applicable/visible to the process whose pid is the directory name. S
Duck Typing in Python
Reading time: 30 minutes | Coding time: 10 minutes
Python is a dynamic language which is strongly typed. Dynamic binding is the capability to use an instance without regard for its type. It is handled entirely through a simple attribute lookup process. Whenever an attribute is accessed as object.attribute:
- attribute is located by searching within the instance itself. This yields a positive result when the attribute being searched for is an instance variable or method.
- If the instance does not have the required attribute, the instance's class definition is searched. All the class variables, class methods and static methods fall into this category.
- If both these lookups do not return any result, the interpreter then proceeds to the base classes of the object. The first match found is returned.
The critical aspect of this binding process is its independence of the type of object. Thus, if you try a lookup such as object.name, it will work on any object that happens to have a name attribute, independent of the class of the object. This behavior is colloquially referred to as duck typing, after the adage “if it looks like, quacks like, and walks like a duck, then it is a duck.”
The idea is that you don't need a type to invoke an existing method on an object - if a method is defined on it, you can invoke it.
Where should I use it in my code?
More often than not we write programs, either deliberately or unwittingly, that rely on duck typing.
- While defining classes, the magic methods (or dunder methods) implement some sort of protocol supported by the language. For example, the __iter__ and __next__ methods are used to implement the iterator protocol. The instances of the class can now be used in for loops.
- If we declare the __len__ and __getitem__ methods inside a class, its instances are called sequences. All the objects of this class are now subscriptable by index (similar to list) and can be iterated over, as the sketch below shows.
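A minimal sketch of such a sequence (the class and values here are only illustrative):
class Letters:
    '''Implements __len__ and __getitem__, so instances behave like sequences.'''
    def __init__(self, text):
        self._chars = list(text)

    def __len__(self):
        return len(self._chars)

    def __getitem__(self, index):
        return self._chars[index]

word = Letters('DUCK')
print(len(word))              # 4
print(word[1])                # U
print([ch for ch in word])    # iteration falls back to __getitem__: ['D', 'U', 'C', 'K']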
The Problem of Tabulation
Suppose we are asked to display a dictionary of country names and their capitals in a tabular format. Let us try to write a generic utility function which displays a given input mapping in a tabular form:
Attempt 01
def tabulate_mapping(mapping, headers):
    '''Tabulates the input mapping as columns with given headers.

    >>> mapping = {'India': 'New Delhi',
    ...            'USA': 'Washington',
    ...            'England': 'London'}
    >>> headers = ('Country', 'Capital')
    >>> tabulate_mapping(mapping, headers)
            Country |        Capital |
    ----------------------------------
            England |         London |
              India |      New Delhi |
                USA |     Washington |
    '''
    heading = ''
    for header in headers:                        ##### NOTE 1
        heading += '{:>15}'.format(header) + ' |'
    print(heading)
    print('-' * len(heading))
    for k, v in mapping.items():                  ##### NOTE 2
        print('{:>15} |{:>15} |'.format(k, v))
The code should look pretty straightforward to any programmer who has written even a few lines of Python. However, there are two salient features about the code which do not strike us at first sight. After some experimenting, you will figure out the following:
- Any iterable can be passed as headers, i.e. list, tuple, or user-defined sequences. Even unordered collections such as dict and set would also work.
- Any object which implements an items method can be printed using the tabulate_mapping function.
Duck Typing At Work!
Let us create a class which extends the Python built-in list and implements an items method.
class MyList(list):
    '''A user-defined collection data type which extends the built-in list.

    >>> alphabet = MyList(*'ABC')
    >>> print(alphabet)
    ['A', 'B', 'C']
    >>> alphabet[1]
    'B'
    >>> for i, item in alphabet.items():
    ...     print(i, item)
    1 A
    2 B
    3 C
    '''
    def __init__(self, *args):
        super().__init__(args)

    def items(self):
        '''Alternate utility to `enumerate` on the instance'''
        for i, item in enumerate(self, start=1):
            yield i, item
Now, by the grace of duck typing, we can use the tabulate_mapping function with instances of the MyList class. However, what if we do not want malicious objects to use our code this way? We all know about SQL injection; it is about time we become careful about the inputs we accept.
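For instance, a quick sketch of such a call (the header names here are just illustrative):
alphabet = MyList(*'ABC')
tabulate_mapping(alphabet, ['Index', 'Letter'])
# Prints a two-column table because MyList provides an items() method
# and the list of headers is iterable - no dict is involved at all.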
The Curious Case of Input Validation
In the function tabulate_mapping, observe that we have not written any code for validating the inputs. With duck typing in the fray, it is important to understand the need for a guideline on validating inputs.
Programmers coming from a statically typed background would be inclined to perform static type checks on the object. As we have seen, strict type checks can be severely restrictive to the point where developers miss out on the cool features Python has to offer. In other words, too much isinstance and hasattr works, but it is neither fun nor Pythonic!
We will perform different kinds of input validation for the tabulate_mapping function based on the requirements:
- If we want tabulate_mapping to be a generic utility which can work on any mapping object with items and iterable headers:
def tabulate_mapping_v1(mapping, headers):
    # Validate headers
    try:
        headers_iter = iter(headers)
    except TypeError as e:
        raise ValueError('headers needs to be an iterable.')
    # Validate mapping
    try:
        mapping_items = mapping.items()
    except AttributeError as e:
        raise ValueError('mapping should have an items method.')
    heading = ''
    for header in headers_iter:
        heading += '{:>15}'.format(header) + ' |'
    print(heading)
    print('-' * len(heading))
    for k, v in mapping_items:
        print('{:>15} |{:>15} |'.format(k, v))
- If we want to restrict the mapping object to an instance of a subclass of the built-in dict and the headers are specified to be a list object:
def tabulate_mapping_v2(mapping, headers):
    # Validate headers
    if not isinstance(headers, list):
        raise ValueError('headers needs to be a list object.')
    # Validate mapping
    if not isinstance(mapping, dict):
        raise ValueError('mapping needs to be a dict object.')
    heading = ''
    for header in headers:
        heading += '{:>15}'.format(header) + ' |'
    print(heading)
    print('-' * len(heading))
    for k, v in mapping.items():
        print('{:>15} |{:>15} |'.format(k, v))
Conclusion
Whenever writing Python, we need to clearly define the allowed interfaces that can be used in place of an object, rather than basing the decision on the object's class.
There is no distinction between the right way and the wrong way when it comes to dealing with objects in Python. The most Pythonic code at a given situation depends entirely on the requirements of the user and the practicality of the use cases. The consideration of Python's Data Model and duck typing can help one write clearer, readable and Pythonic code.
As people might have noticed I've been somewhat interested in how to type
g = \f x y. (f x, f y)
Now you might think HM, as implemented in various languages, does a nice job typing that. Ocaml gives the most readable result as
# let g f x y = (f x, f y);;
val g : ('a -> 'b) -> 'a -> 'a -> 'b * 'b = <fun>
Or you could go higher rank and type it
g :: (forall a b. a -> b) -> a -> b -> (c, d)
g = \f x y -> (f x, f y)
But, for various reasons, I want to type it
def g: (a -> f a) -> b -> c -> (f b, f c) =
[f, x, y -> (f x, f y)]
Since I am lazy and assume everything has been done in literature somewhere, does anyone have a reference with a typing system like that?
PS. 'Solved' in Haskell. Courtesy of Sjoerd Visscher, Haskell allows for quantification over type constructors; though I want to avoid explicit quantification.
g :: (forall a. a -> f a) -> b -> c -> (f b, f c)
g f x y = (f x, f y)
PPS. gasche noticed that Haskell is unable to recognize that f should sometimes be unified with 'id = /\a. a'
The type system you're looking for is completely standard, and named F-omega (Fω). In OCaml it lives only in the module system, in Haskell there is a cut-down version of it (that's not good enough for this specific example) called "higher-kinded type constructors". Scala probably has some version of it (possibly usable, possibly not), and Coq and Agda subsume it.
Btw, the type annotation you give is not yet correct, because you have to quantify over 'f' as well.
(Type inference for System F (which is subsumed by Fomega) is already undecidable, so there is no hope that you would be able to type-check all such examples without extra type annotations. In fact, Fomega probably lacks principal types, so you could not do a good job of type inference even with a semi-decidable algorithm, but there possibly exist equally-expressive extensions of it that do have principality, we don't know.)
I don't want to quantify. (Quantification over types doesn't make much sense from the perspective of a type inference algorithm. Apart from the logic perspective, it just seems to inform the type checker where to keep a type scheme reinstantiable. Conversely, you can read the type (a -> f a) -> X also as (forall f a. a -> f a) -> X, something the compiler should be able to derive.))
I simply, and only, want to unify over type constructors. I am not sure this is system F-omega, it feels doable to me.
Is there a good reason why unification over type constructors would be intractable? And should I care?
This is how I would do it in Haskell:
{-# LANGUAGE RankNTypes #-}
g :: (forall a. a -> f a) -> b -> c -> (f b, f c)
g f x y = (f x, f y)
Is this what you want?
Without the quantifiers. Didn't know Haskell allowed for quantification over type constructors.
If I'm not mistaken, it's actually rather painful to recover the simple HM type ((a -> b) -> a -> a -> (b, b)) from this version (you have to apply a term transformation instead of just type instantiation), as you have to define a phony newtype and then unbox it.
(a -> b) -> a -> a -> (b, b)
In this regard, I consider it is not a proper solution to the problem of giving a general type to \f x y -> (f x, f y).
\f x y -> (f x, f y)
Edit: in fact this type is not general enough to subsume the ML type, even in Fomega. The right type has one more higher-kinded type variable. See Matt M's remark below.
Huh? What's your point? Elaborate please.
HM just seems to have decided that f a is some existential and b = c?
A solution is "general" when it subsumes other, more specific solutions. The definition you accepted as a "solution" to your problem is not able to type-check g (+1) 2 3, which trivially worked with the ML type.
Uhm. I assumed that f may be the identity on types. Is that the problem?
Yes. Type constructors are "constructors", they do not compute. List and Array and (int ->) are constructors, \a -> a is not.
Who told you that? (I said the identity on types: /\ a. a; is that the problem?)
Edit: I don't know of a good reference that explains the limitation of Haskell's type constructor restriction. One could always go back to the source, namely Mark Jones' 1993 paper "A system of constructor classes: overloading and implicit higher-order polymorphism".
Can't you get around it with a type, or newtype, definition? Something like 'type Id a = a', or 'newtype Id a = Id a', or 'type IntId = Int'? I.e., if it wants a type constructor then just provide it manually?
Looks more like a unification problem than anything else. A corner-case they didn't think of.
An identity newtype wrapper is how you would get around this.
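To make that concrete, here is a minimal sketch of the wrapper trick (the Identity newtype defined here is the same idea as Data.Functor.Identity from base; the example names are made up):

{-# LANGUAGE RankNTypes #-}

-- The identity wrapper: a constructor that does nothing but tag a value.
newtype Identity a = Identity { runIdentity :: a }

g :: (forall a. a -> f a) -> b -> c -> (f b, f c)
g f x y = (f x, f y)

-- 'id' itself cannot instantiate f (there is no type constructor equal
-- to \a -> a), but the wrapper can, and unwrapping recovers plain values:
example :: (Int, Char)
example = let (x, y) = g Identity (2 :: Int) 'x'
          in (runIdentity x, runIdentity y)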
Looks more like a unification problem than anything else. A corner-case they didn't think of.
Heuristically speaking, if GHC gets something wrong it is by thinking more or too much, not too little. (In contrast for instance with Scalac, and Scala is the language I use most).
This is not a corner case at all. While this case is trivial, it's an instance of higher-order unification, which is known to be undecidable and certainly outside of Hindley-Milner. Actually, this point is even stated in gasche's quote from the paper — it's the second paragraph.
Adding a special case for simple instances is not the kind of thing they'd do, because the result can easily get unpredictable (again, see Scalac). That's what I'd call "thinking more".
The only usable type inference system attacking this problem is Agda, which uses (IIUC) higher-order pattern unification by Dale Miller. And type inference in the system still ends up being often unpredictable — which is acceptable in practice because of how interactive their typechecker is.
I'm no GHC developer, but I still find that "assuming X didn't think of this" can be ineffective (especially when X is somewhat "smart", like GHC developers, or at least not known to be stupid). Instead, wondering "why X does not handle this?" allows finding the actual answer ("handling this in general is a hard problem"). IOW: whether you care about politeness or not, arrogance is pragmatically ineffective.
I already asked whether I should care about undecidability. It is often not a problem in practice. HM might be decidable but is EXPTIME too as far as I remember; it is trivial to blow up but that simply never happens.
But I'll look at Agda, thank you for that. If the kind of unification I assumed was never added, I wonder whether it was at least theoretically investigated, since, as gasche noted, the example would naturally subsume the HM inference.
Who is arrogant? In my view that is the person who postulated that preposterous assumption in the first place, sloppily reads too much into questions he or she cannot answer, and is bothered that his beloved algorithm might not be as good as he thinks.
Before all the rest, let me say that (higher-order) unification will answer "f a = a" with either "\x. x" or "\x. a" — hence you already lose principal types.
There are also many more answers, but many (I think all) are beta-eta-equivalent to these two.
I'm not the expert on the higher-order unification, but I can add something to the discussion. My (heuristic) summary is that the algorithms here are complex, and that in any case they don't "just work" but might answer "I don't know" in cases where the solution looks obvious, or might require more control. More importantly for me, the issue seems complex enough for me to not actually learn it. But my understanding is superficial.
There is indeed a somewhat usable algorithm for the general problem of unification by Huet — but IIUC, it might output constraints it does not know how to solve. Here's a reference request on the topic, with different statements on the complexity. One answer points to 80 pages within Conal Elliott's PhD thesis, the other answer (by Elliott himself) seems to suggest that (another) Huet's paper is not really hard.
Huet's algorithm is implemented in Teyjus v1, an implementation of λProlog, which uses higher-order unification for logic programming. In version 2, they changed to pattern unification (as in Agda), which is less expressive in practically useful cases (see the link for a problem where the solution looks obvious but is not found — though the example is more complex), but has principal unifiers. There is also further reading on this, for more information than you'll ever want.
I tried out Teyjus 1, but I wasn't able to make sense of all results of higher-order unification. Given the amount of documented bugs of V2 (see other issues at the tracker), maybe also V1 was buggy, and my problems might not depend on the algorithm itself.
Another higher-order logic language is Elf, as implemented by Twelf. Twelf is actually used in practice (for programming language metatheory), but it avoids using either Huet's algorithm or Miller's restricted variant, because the latter is too restrictive for them, so they give their own extension. The reference is "Logic Programming in the LF Logical Framework", Sec. 3.4 onwards. I cannot say I understand everything they say, to the contrary: I have implemented first-order unification once, but I absolutely did not manage to understand what they explain (though the presentation IIRC does not help).
Their examples are more illuminating — see 4.4, and note the extra knob of open vs closed.
On the discussion, I think the linked paper has the "official" answer, already in the quote — they decided to avoid higher-order unification.
OTOH, you had asked correctly "I assumed that f may be the identity on types. Is that the problem?" and the answer is "yes, that's the problem". I think this point was discussed, but apparently you questioned the difference between a type constructor and a type function, and nobody has fully explained the reason, so I'll give it a shot.
IIUC, that's because unifying SomeConstructor T with f t makes the two terms syntactically equal, while unifying f a with a and getting f = \a.a (not /\a.a, that has the wrong kind) makes the two terms beta-equal — beta-equality is at least decidable here (thanks to a sound kind system), but synthesizing f is harder. Also, if you say that (\a.a) a = a, you're using beta-reduction on types, so yes, this implies type functions and computations on types. List (as in "List Int") is instead not a function because List Int is only equal to List Int.
Algebraically speaking, the fundamental question is: are there non-trivial equations between types? If so (as in, beta-reduction) you now have a rewrite system for type, and thus computation. Otherwise, you have just type constructors.
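For what it's worth, GHC does let you write a genuine type-level function, and the contrast shows up immediately (a small sketch; the name Ident is made up):

{-# LANGUAGE TypeFamilies #-}

-- Maybe Int is equal only to Maybe Int: constructors are free, they do not compute.
-- A closed type family, on the other hand, rewrites: Ident Int reduces to Int,
-- so the two are equal only up to computation, which is exactly the "non-trivial
-- equations between types" mentioned above. Type families are also not allowed to
-- appear partially applied, for the same unification reasons.
type family Ident a where
  Ident a = a

sameThing :: Ident Int -> Int
sameThing n = n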
And to be sure: I've been wanting sometimes higher-order inference, but most of the examples were more complicated than simple identities. Googling "type lambdas in Scala" finds harder examples quickly.
Haskell does it, so that is a good indication that, if I got it correct, nobody minds undecidability a lot anymore.
I'll look up the difference between \a. a and /\a.a to see whether it denotes an abstraction or unification symbol. From some papers I gathered it may be abstraction, maybe the symbol changes meaning between type formalisms, and I assumed the dichotomy between expression and type reductions. I'll change it when needed.
That paper dives into the topic of type classes over type constructors. I don't see the relation.
Except that is the paper that introduced higher-kinded polymorphism in Haskell.
Before writing my first answer to this thread, I paused for a moment and asked myself: "answering marco is usually a waste of time; is this one worth it?" I decided it was a genuinely interesting question that seems to be formulated in good faith. It probably was, but nevertheless I should have known better.
I usually think the same thing about you too, Gabriel.
Even though marco doesn't seem to appreciate your answers in this thread, I certainly do, and probably others too.
I questioned the relevance. That's legal in my country.
Being arrogant, selfish, unappreciative, patronizing, and rude is also legal in your country. That doesn't mean it's welcome or advisable. And, whether you intend it or not, many of your phrases give that impression.
Of course, I don't have much room to complain about such things. But I'm oft impressed with how well gasche and some others tolerate blunt spoken persons like myself or you.
BTW, 'Dude' itself comes across to me as a belittling title. Every time you use it, I find myself instantly irritated with you. Just an FYI. Is it interpreted differently in your community?
Hell no. It's just a funny expression I picked up from Comedy Central. South Park, to be more precise.
I usually use it in the same manner; like when Cartman says something unexpected. Usually that means I am laughing my head off.
And I don't mind being rude when it sharpens the conversation to a point such that we don't end up with philosophy. If I am interested, that is.
So, uhm. Whatever. It's just the Internet and I don't care much. Best wishes to you all, for the rest.
Seconded. It's painful to read through the discussion and gather meaningful points from all the superfluous, more often than not inappropriate, words.
Whatever. I have cordially asked for explanations everywhere, failed to get a lot of meaningful answers, allowed for a great many mistakes, and it was gasche who responded with a personal sneer. To quote:
"answering marco is usually a waste of time; is this one worth it?"
Gang mentality doesn't work on a former lecturer.
It's not really a gang mentality when people separately come to the same conclusion.
Sure
Agreed. There is a lot of thought and effort in your answers Gabriel, it is interesting and useful to many of us here.
The problem is that (+) isn't defined on all a. In other words, the problem is that quantifier that you want to get rid of, but I don't see how the type would make any sense without it.
That is correct, I was thinking of the more general type
g :: (forall a . f a -> g a) -> f b -> f c -> (g b, g c)
g f x y = (f x, f y)
which does subsume the ML type in Fomega, but not in Haskell. The (good!) difference is that you can insert the appropriate term-level coercions to obtain the ML type from this function, which was not possible at all with Sjoerd's type.
(I was worried because I did this exact same exercise a few weeks ago, and didn't have too much trouble playing the newtype dance at the time, while I tried to do it again with Sjoerd's code and failed miserably. That explains it.)
For the record, here is the term-level coercion you have to add to recover the ML typing from this type.
{-# LANGUAGE RankNTypes #-}
g :: (forall a . f a -> g a) -> f b -> f c -> (g b, g c)
g f x y = (f x, f y)
newtype Const a b = K { unK :: a }
gml :: (a -> b) -> a -> a -> (b, b)
gml f a1 a2 = let (K b1, K b2) = g (K . f . unK) (K a1) (K a2) in (b1, b2)
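For completeness, a quick (hypothetical) check that this really recovers the ML behaviour:

-- Assuming the definitions above:
demo :: (Int, Int)
demo = gml (+1) 2 3   -- evaluates to (3, 4)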
Yeah, that's a more interesting type. I wondered what you were talking about :).
Does it help when I write it in uppercase? Those who do mathematical philosophy wouldn't have much of a problem with the above type. I like C so I write stuff in lowercase, uppercase is annoying to programmers.
Anyway. It doesn't have to make sense to you. It needs to make sense to programmers/mathematicians and the compiler. Can you give a case where a type like that could be interpreted ambiguously? Or no inference can be given?
What you want are type level functions, which effectively enable arbitrary computation at the type level. For me this is too much... I want to write algorithms at the value level not the type level. I might want to express things at the type level, but I don't want to get into the 'how' to implement them. This is why I have chosen a logic-language for my type system, rather than a functional one. So my language will allow types like f(X) but they mean match predicate 'f', and unify X with the arguments (just like in Prolog).
Meaningless statement unless you show me. I just want some primitive form of unification on type constructors. I don't see how that would immediately imply type level computations.
(A type level computation is also a point of perspective. If "type list = /\ a -> nil | cons a (list a)" then "list int" denotes a type level computation. I agree I also don't want a Turing complete type language, I think, but this is an empty statement.)
Unification on type constructors, across a database of definitions... and you have re-invented Prolog :-) Which sounds pretty similar to what I am doing.
No. That's what I once said I thought they did in Haskell. And it looks to me that they keep patching holes around it in ghc so I think I was right there.
No closed world assumptions. Bad idea.
The open/closed world issue only affects the definition of negation in the logic language. If you don't allow negation there is no difference between closed and open world.
Really you need to specify a bit more about how you intend your system to work.
I was more counting on all the academics doing all the hard work. Which is why I asked for references.
Ah, insufficient clarification, I thought the example was clear. I want to be able to type
I.e., give more mathematical types to programs and see whether one can get away doing so without quantifiers, since I don't like those much.
I am not entirely sure that is possible without rendering the type system both ambiguous and intractable. Something heuristically just looks tractable to me.
I don't like Haskell's rank types system. In this particular case you can see that Sjoerd's type could have been derived heuristically; but I don't know of other cases, and Haskell's refusal to unify an arbitrary constructor with 'id' I find a weakness. Moreover I am guessing you need something like an algebra over type constructors, something where if 'f a=c' it is derived that 'f=id, a=c'; moreover, elimination rules where composed constructors are trivialized to single constructors, i.e., "(f . g) (x)" becomes "h (x)". Didn't see that so far.
Since these are the types one would write down naively, and there are manners of transforming them into rankn types, possibly heuristically, it looks doable to me; the other question is whether one can avoid rankn completely, by naive unification, because that's obviously a problem.
There must have been research into typing functional programs with more naive mathematical types hence I asked. But it doesn't seem it has been done so far; or people here are unaware of it.
This type:
g: (a -> f a) -> b -> c -> (f b, f c)
is uninhabited, as it means this:
g: forall (f:*->*) a b c. (a -> f a) -> b -> c -> (f b, f c)
You need the quantifier:
g: (forall a. a -> f a) -> b -> c -> (f b, f c)
so that you can instantiate the function parameter at types b and c.
Also, your edit to the post with discussion about choosing f to be identity on types isn't right. See my discussion with Gabriel -- he was thinking of a more general type. If you take f to be identity on types, then the function parameter must have type (forall a. a -> a), meaning it must be the identity function.
What part of "you can heuristically derive Sjoerd's type, but I am not sure that is unambiguous" didn't you get? And I questioned whether you can get away with not thinking in terms of rankn types but treating types as an algebra. No idea whether that is possible, but at least it might be backed by an informal heuristical translation to rankn.
I didn't list your discussion on 'deriving a more general type' on purpose.
What part of [thing marco posted] didn't you get?
Ah well. At least you've shown that an ambiguous type exists if you assume rankn typing; which I try to avoid. But as I stated somewhere else, why wouldn't the compiler decide that
g: (a -> f a) -> b -> c -> (f b, f c)
Must mean
g: (forall a .(a -> f a)) -> b -> c -> (f b, f c)
In this particular example that seems to boil down to a trivial analysis on the scope of 'a'. Why does the programmer need to inform the compiler on how to do the inferencing? Maybe there are good reasons. Maybe they are not so good. I wouldn't know, and nobody has shown good reasons so far.
Then the unification with 'id' is something I assumed but isn't done in Haskell. I don't care about Haskell; is this a weakness of system F?
May I remind you that this post starts with: "pesky rankn types" and I therefore purposefully didn't give rankn types? Because I want to get rid of them.
The question from the start is whether the compiler can decide, or guess, the correct meaning of a more mathematical type, and what systems exist for that.
Ah well. This topic is closed. We're not getting any further on it with our limited knowledge.
something where if 'f a=c' it is derived that 'f=id, a=c'
If you phrase this as a (higher-order) unification problem, f could also be a constant type function. How do you know that solution is not wanted? (I think there are programming idioms using constant type functions in Haskell, though I can't name any terribly useful one).
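A small sketch of the two competing solutions, with made-up names, may make the point clearer:

-- Two incomparable answers to the type-level equation  f a ~ a:
newtype Id     a = Id   { runId   :: a }   -- f := \x -> x
newtype Only c a = Only { runOnly :: c }   -- f := \_ -> c, a constant type function

-- Both  Id a  and  Only a a  are isomorphic to  a,  so a higher-order unifier
-- has no principal solution to prefer here.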
I am just thinking aloud along these lines. I thought the type theorists would already have solved it; hence the question for references. Turns out they didn't. I haven't given it much thought myself.
They're both first-order unification — see Wikipedia. So no surprise there. But it can't unify on type functions.
It can try to unify over any datatype you give it which implies term algebras, even 'datatypes' describing higher-kinded type grammars.
It gives you a form of backtracking unification but that would in general not be worth it; unless you feel like exploring it that way.
Sorry but I'm not entirely sure what you mean. If you're denying "that it can't unify on type functions", I think the point of confusion is that I'm convinced (with others) there's a clear difference between constructors from arbitrary functions, while you don't buy it. If so, I tried answering in my last comment on higher-order unification.
What's there to be confused about. At the heart of type checking is unification, prolog gives you unification, therefore it might sometimes be a handy tool to experiment with. It can unify over datatype representing the AST and it is Turing complete so whatever it can, or cannot do, in academic terms, it can implement any algorithm.
Interestingly, this is precisely the property of type constructors (and presumably deconstructors) that I was mis-naming as 'generative'.
It's interesting because I think it's this property that lets you incorporate IO cleanly into a pure language. You only need the monad formalism in Haskell because it is a lazy language. In a strict language the sequencing part of a monad is unnecessary, as order is strict in any case, but you can still use the type constructor/deconstructor to hide the IO or other side effects because they do not compute. This way the arrow in function application can be pure.
Edit: on second thoughts maybe this is irrelevant. This works: g (\a -> Just a) 2 3
It isn't coincidence. I was thinking of signatures too.
So the property I was thinking of is that when you have something like:
f :: IO (Char, Char)
f = getchar >>= (\a -> getchar >>= (\b -> return (a, b)))
what tells the compiler that 'a' is not the same value as 'b'? It's not in the 'bind' operator, as it has 'pure' arrows; it must be in the type constructor. More specifically: (>>=) :: m a -> (a -> m b) -> m b; the arrows are pure function application in Haskell, so there is nothing special about the type of (>>=) that permits IO (i.e. tells the compiler it cannot memoise the result of getchar). The conclusion is it must be the type constructor/destructor. Again I can't see anything special about IO in the type signature compared to other type-constructors, so that would imply the compiler assumes that all type constructors/deconstructors are impure, and you always pay the unwrapping cost (it cannot be optimised out, unless you assume the contents of a type have not changed). This is related to types being able to be 'bottom' in Haskell. Also related is the lazy nature of lists: given [a], we don't know when taking the head whether the list is empty, so this confirms the 'impure' deconstructor idea in Haskell. Of course in a strict language this does not apply and the type constructors could be pure? So:
ML: functions impure, type constructors pure.
Haskell: functions pure, type constructors impure.
Clean: ?? (pure/pure?) with uniqueness types.
Does anyone know the correct terminology / explanation for this?
I thought we've been over this? The (complete) type of the IO monad tells you you can't take anything out.
Well. Except for UnsafePerformIO which is (also) unsafe to use since it can always be optimized away without you knowing it. (I guess they'll track that in the future. Makes sense: First the imperative programming, then the impure, but safe, constructs.)
The only thing special is that you can't take anything out of the IO monad. That's all you need. Everything else you want to state about it, including the monad laws to some extent, is therefore horsedung.
What is special about the type of bind or return? The arrows are the same as for any other function (in Haskell).
What about the impurity of lazy lists? What about the state monad?
The monad is nothing special, its just a constructor class, requiring functions bind and return be defined with a certain type.
The 'special' thing in Haskell is the impurity of type constructors/destructors. The 'special' thing in ML is the impurity of the function arrow.
Everything in Haskell is pure. It has nothing to do with arrows.
And again, everything interesting about the IO monad, isn't that it's a monad, but that you cannot take anything out. A monad, or rather: the IO monad, just happens to satisfy that condition. You might call it a free algebra but that isn't even to the point.
Maybe you should ask someone else to show you?
Something has to be impure. In Haskell it's the type constructors. Consider a List: how do you know if it is empty or not? And note that you can perform IO in a pure Haskell function using a lazy list as the input or output argument.
I think you are wrong on this one, and you will have to do better than hand waving to convince me otherwise.
Edit: Okay I see your point about not being able to take anything out... but I am not sure its relevant.
Would a language with pure-strict functions work with IO in impure/strict type constructors? I think so... and this is kind of the proof that it is not the monad - which provides sequencing in an impure language that is the necessary bit for IO, but the type-constructor.
Stubborn. Aren't we? Okay, I'll take up the glove and demonstrate anything you're interested in.
Show me that a pure/strict language cannot use impure/strict type-constructors to safely incorporate IO in the type system? For example:
getchar :: IO Char
let a = unIO $ getchar
b = unIO $ getchar
in (a, b)
In a strict language with pure arrows, IO is not a monad, it's just an impure type-constructor.
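For reference, the shape of that hypothetical unIO can be imitated in today's Haskell by abusing unsafePerformIO; the sketch below only illustrates the idea under discussion and is not a recommendation:

import System.IO.Unsafe (unsafePerformIO)

-- A stand-in for the hypothetical 'unIO' deconstructor.
unIO :: IO a -> a
unIO = unsafePerformIO

-- Exactly the example above; GHC is free to share the two identical
-- right-hand sides, which is why this is "unsafe" in a pure language.
pair :: (Char, Char)
pair = let a = unIO getChar
           b = unIO getChar
       in (a, b)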
No idea what you're talking about. I already fail to see why constructors should be impure? (Ah. The complete algebra including unIO is impure.)
No I really don't see it. From a different perspective getchar is a program which hasn't been run yet. I gather unIO runs it. The strictness doesn't matter, and neither does the perceived impurity of the data constructor.
What's your point?
The compiler is free to memoise the results of any pure function if the parameters are the same, and it can treat a zero argument function as a value. Getchar the program is a value, and the deconstructor unIO runs it returning a different value each time it is deconstructed.
Well. I agree with that observation. But that doesn't imply I get what your question was?
I am starting to see where you're going. Uhm. Why do you feel the strictness matter? A lazy language like Haskell could do the same, and optimize (one occurrence of) unsafePerformIO out; which was exactly the point I made before?
I don't see where the strictness matters in this question. So maybe we should drop that, I will, and just think about the purity of it. Which means, I don't see what you're telling me since I agree with what you say about that particular program. And as I said: It doesn't matter that IO is a monad, as long as it hides stuff.
So: laziness doesn't seem to matter neither does the fact that IO might be a monad. So what is the question again?
In a pure language you can't have impure type constructors and keep the language pure? Yeah, I agree. Though, that as long as you've got no observers you can, is what the monad tells you. (Though you can also not reason about it, or hardly, without observers, which is what I expect Wadler to have gotten wrong in some paper. I expect him to have gotten it right w.r.t. some mental model he has about monadic IO, not w.r.t. the math, if you look really, really, carefully.)
With a lazy language there is no guarantee IO will get run in the expected order, or even at all. If the result of a function is never used any side-effects will never be generated. Note: I am assuming the case where we 'ignore' side effects. Haskell therefore needs to impose a sequencing on the IO, which is strict, rather than the normal lazy semantics. The monad does that with bind (>>=).
Actually although there is nothing wrong with that program, the following reveals the point you were trying to make earlier and I was ignoring:
f = let a = unIO $ getchar
b = unIO $ getchar
in (a, b)
Without the monad, and without any other kind of marker, nothing forces "f" to be "IO" or to make the compiler aware it contains side-effects. So when you said you can put stuff into a monad but not take it out, it was relevant, but what I was saying about the 'impure/special' deconstructor for IO was also relevant - I just didn't want the two topics to get mixed up at that point.
What concerns me about the monadic approach is that the pure functional program generates an impure program as its output that is then run 'magically' at the end of main. This feels like an extra layer that makes things harder. Writing A to generate B seems harder than just writing B, and I am not sure it offers any advantages at all.
What worries me about the ML approach is the lack of control of side-effects. The other problem is the lack of nullary functions - which if pure would just be values, but can have side effects in an impure language. Even if we have two different arrows, one pure, one impure, the nullary function problem remains.
So I think a 'runnable' side-effect mark would work, which could also be tied into access permissions if the side-effect mark is a set of marks, which get inherited by functions using other functions. Something like:
f :: Char [instream]
g :: a -> () [outstream]
h :: Int -> Block [fileread]
i :: Int -> Block -> () [filewrite]
And a function that uses multiple:
main :: Int [instream, outstream, fileread, filewrite]
There is nothing preventing functions like:
f :: Int [fileread] -> Int [filewrite]
Which i think offer interesting encapsulation of operations like a file block copy.
Yah. The complete algebra for monadic IO (i.e., the monadic operators plus the io functions plus unsafePerformIO) has only one (impure) deconstructor you're not supposed to know about: unsafePerformIO. I fully agree with that observation and made it myself though that bought me some flak on LtU before. And because it's an impure construct in a pure language it may be compiled out.[2]
As far as what the IO monad buys you in terms of programmer convenience: a solution to directly bind OS calls while keeping the program, academically, pure. I told you about stream processing functions, it was a pain to do. The problem was coordination, as in:
do
fp <- openFile "sample.txt" ReadMode
If you want to do such a thing with event lists you'd need to generate a request to open a file on the outstream event list and then catch the file handle on the instream event list. And somehow hope no other events were generated. Clean might solve it differently being an older, somewhat more academic, language than Haskell. Maybe they thread a world state around through uniqueness typing but I forgot. Haven't looked at the language for ages. (Though I am starting to like the idea of uniqueness typing more, and more, since the FP crowd seems to have succumbed to a form of monaditis.)
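Roughly, that request/response stream style looks like the following sketch (the Request and Response constructors here are invented placeholders; the dialogue types in early Haskell were richer):

data Request    = OpenFile FilePath | ReadLine FileHandle
data Response   = FileOpened FileHandle | Line String | Failed String
type FileHandle = Int  -- placeholder

-- The whole program is a pure function from the (lazy) list of responses to
-- the list of requests; keeping the two lists in step is the coordination
-- problem described above.
program :: [Response] -> [Request]
program resps =
  OpenFile "sample.txt" :
    case resps of
      FileOpened h : _ -> [ReadLine h]
      _                -> []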
Your idea is to create an impure strict language with mark annotations for impure operations? I wouldn't know. Personally, sorry, I expect nobody will use it. Either you have a (lazy) pure language, and you hide/encapsulate/track the impurity, or stuff is just (strict) impure; the design space just doesn't seem bigger than that. Well, I am not stopping you, but are you sure you haven't been indoctrinated with the popular Haskell lingo too much? In a strict language I really don't care that things might be impure; it's a great nice to have.[1] Unless you have a very good reason to track it, it's just an academic exercise I personally can't care much about.
Of course, I am not stopping you. Go ahead. The more languages the merrier. (Well, if you want a PhD then to do it of course; that's another ball game. But then you shouldn't be talking to me.)
Last point on the nullary function I posted a joke about. I, again, fully agree with you and there is a serious undertone to it. It makes me doubt the relevance of their type soundness proofs. It might be that in the soundness proof nobody thought that nullary functions exist and may be impure; then again, it may also be totally unrelated.
[1]. Actually, I would want the opposite. Controlled impurity in a lazy pure language; that stopped me from using Haskell and I ended with ML. But you seem to have proven that's impossible. Though my idea of conflating objects with modules might be a way out; no idea, yet.
[2] That's another reason I think E. Meijer's ACM paper is a teaser. He's somewhat older than me, knows both Clean and Haskell, and knows very well that you can't have uncontrolled impurity in a pure language. His "we should use unsafePerformIO sometimes" reads like an in-crowd joke. As do a number of other examples.
Check out the Eff language. I think Idris has an algebraic effect system, too.
For my language, I'm using an abstraction that I call processes. A process is an object that receives a message, updates its state, and then responds by sending a message out. It ends up being similar to algebraic effects.
These techniques improve over simple effect typing by leaving the effect handling open to rebinding. So I can have a first class process that sends "print" messages and run it in a context where I get to specify the handler for that message and then after it's run for a while package it up as a value again. This also provides a nice interface for continuation capturing.
I find it useful to view programming as building pure values in some mathematical universe of discourse and letting real world effects come from the interpretation of those values by machines. This certainly makes it easy to integrate with a theorem prover.
f :: Int [fileread] -> Int [filewrite]
That type doesn't make much sense to me.
Yes, that should have been more like:
f :: Int -> (Int -> () [writefile]) [readfile]
It might be better to put the effects before the type to avoid brackets:
f :: Int -> [readfile] Int -> [writefile] ()
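One way to approximate those bracketed effect sets in today's Haskell is with constraint-style markers; a rough sketch with invented class and function names:

type Block = String  -- placeholder

class Monad m => FileRead  m where readBlock  :: Int -> m Block
class Monad m => FileWrite m where writeBlock :: Int -> Block -> m ()

-- "f :: Int [fileread] -> Int [filewrite]" becomes, roughly, a function whose
-- effects are listed as constraints and inherited by its callers:
copyBlock :: (FileRead m, FileWrite m) => Int -> Int -> m ()
copyBlock src dst = readBlock src >>= writeBlock dst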
You're talking about using (IO a) as the type of impure computations that result in a? That's a fine motivation, but if you make the impure language expressive enough to define (>>=) and return obeying the monad laws, then you end up having a monad anyway. (This isn't to say you have to make it that expressive, but it looks like you'd want to.)
Idris has a ! syntax which I'm pretty sure works just like the unIO you're thinking about.
Common subexpression evaluation of (unIO $ getchar) would indeed break the program, so that shouldn't be a permitted optimization.
Yah that's true from a perspective. It's the hidden "unsafePerformIO" wrapped around the Haskell "main", or whatever it is, which is "impure" to some extent.
Or it is pure. Philosophy.
Something has to be impure. In Haskell its type constructors.
I may have read over this thread too quickly, but you seem to have some confusion over how IO works. You can think of an IO a as data that describes instructions of what to do to produce an a at runtime. The important part of the monadic structure is the continuation passing style that it encodes. Consider the following IOProg datatype:
data IOProg = GetChar (Char -> IOProg) | PrintChar Char (() -> IOProg) | End
With this example type, you'd represent a program that reads a character and then prints it as this:
GetChar ( \c -> PrintChar c ( \_ -> End ))
This isn't even a monad, but is how IO used to work in the early days of Haskell (or so I'm told) - continuation passing. A nicer way to package this up is by noticing the monadic structure of continuations producing IOProgs:
type IO a = (a -> IOProg) -> IOProg -- continuation type
return :: a -> IO a
return x = \f -> f x
bind :: IO a -> (a -> IO b) -> IO b
bind f g k = f (\x -> g x k)
GetChar :: IO Char
PrintChar :: Char -> IO ()
closeIO :: IO () -> IOProg -- Called by the consumer of main
closeIO p = p End
The moral is that Haskell stays pure by only ever building values that describe programs that will have some effect at run-time. No function application at the type or value level ever produces an effect. Effects happen at run-time by interpreting the instructions that your Haskell program builds up in IO.
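To make the "interpreted at run-time" point concrete, here is a minimal interpreter for the IOProg sketch above (it would have to live in a module where IO still means the Prelude's IO, separate from the IO type synonym defined in the snippet):

-- Assuming the IOProg datatype from the first snippet is in scope:
runIOProg :: IOProg -> IO ()
runIOProg End             = return ()
runIOProg (GetChar k)     = getChar >>= runIOProg . k
runIOProg (PrintChar c k) = putChar c >> runIOProg (k ())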
Is what I remember. A Haskell program, or Gofer, or Miranda, just consumed a lazy list of chars and possibly produced a list of chars as output.
That showed that FP was able to do IO purely.
I don't think they went continuation passing style but went lazy event list. At least, I still think of unsafePerformIO as something which lazily both consumes and produces an event list while interpreting a program. (That's academically equivalent to interpreting chars, of course.) Might be wrong.
From that perspective Haskell is completely pure. It transforms one event list into another one, purely. (Though I guess if unsafePerformIO isn't given the event list explicitly, it's an impure construct somewhere else. Ah, and the continuation passing style solves the synchronization problem with lists.)
But anyway. Since you're not supposed to know about unsafePerformIO you don't know how your 'program' is run so you might as well think of it as a pure construct.
Is a lazy list pure? If the list is reading data from some external source we cannot know when the list will end. In fact if the list is from a keyboard buffer, it might be empty now, but not empty in the future.
To me it seems the compiler cannot assume purity in any deconstructor, so it must assume they are all impure?
I would say both arguments stick. Since you're deconstructing a lazy list which depends on input you might say it's impure.
But mathematically everybody agrees that a stream of events is a constant. Therefore Haskell is pure.
I favor the Haskell is pure interpretation but I agree the other case is there too.
(BTW. My mental model of deconstructing a lazy list is academically right but probably, in the sense of very likely, wrong when you look at the implementation. Of course, they probably do it directly. But you can think of IO instances as being run against a lazy list of events academically.)
Haskell originally had both stream-based and continuation-based I/O; the stream-based version operated on lists of requests/responses, not chars (which allowed more things than reading/writing characters from stdin/out, say operating on other files). But I read the two are somehow equivalent (didn't get the details).
This is described in Sec. 7 of A history of Haskell: being lazy with class.
In:
GetChar ( \c -> GetChar ( \d -> Print (c,d)))
What tells the compiler that the GetChar is not a value? Normally the compiler would assume a function with the same parameters has the same result (referential transparency), so with a nullary function RT indicates it is a simple value, and the function can be optimised out and replaced with the memoised value. Something about the type of GetChar must tell the compiler that it is not a value and that it must be evaluated fresh for each use. In Haskell all the function arrows are pure, so the only thing it can be is the IO type deconstructor.
But its type is IO Char, not Char. Identical calls to GetChar do produce the same result in my example code above. In your example snippet, GetChar is not called twice with the same parameter. The outer call has argument (\c -> GetChar ( \d -> Print (c,d))), and the inner call only has argument (\d -> Print (c,d)). But I wouldn't get too hung up on when two GetChar values are the same, because an IO Char is just instructions for producing a Char at run-time (in my example code, it is literally just data). Just because you follow those same instructions two times doesn't mean you'll get the same character pressed both times.
Also, I'd be careful in your reading of David's remarks, even though I think I agree with them. The comment he made about the IO deconstructor being impure might lead to confusion. There is no IO destructor value in Haskell. The language specifies how IO values will be interpreted by the environment and that's all.
Yes, getchar is a value in Haskell due to the type "IO Char", and 'unsafePerformIO' can run it... I was talking about a naive 'getchar' implemented without monads, where you might just give it the type "Char" and expect it to work. In ML all arrows can possibly contain side-effects, so although the type Char is not okay (it would be a value), "something -> Char" is fine, so pass a dummy argument or a stream id in and you're okay. In Haskell the arrow is pure, but IO Char can hide the impurity in the deconstructor for IO.
Ssssshhh... That's actually a bug in ocaml types at the moment.
(Ducks and runs.)
DISCLAIMER: This comment was brought to you in good faith to inspire pure unadulterated good fun. I don't want people to get upset about it. Of course.
that would imply the compiler assumes that all type constructors/deconstructors are impure, and you always pay the unwrapping cost
One can understand IO as a pure constructor with an impure 'deconstructor' (really, an interpreter), which is conventionally available via unsafePerformIO or by defining `main`.
However, this doesn't imply a compiler must assume other 'deconstructors' are impure! It's easy enough to create a compiler that understands IO as a special (built-in) use case - perhaps along with ST, STM, and a few others for performance reasons. For `Identity` or `Maybe` or `StateT` or `ReaderT`, for example, it would not be difficult to use known, purely functional deconstructors when guiding optimizations.
Of course, if you start writing generic code - e.g. `(Runtime m, Monad m) ⇒ Int → m Int` - the compiler won't be able to assume much of anything about the specific monad in use - it might be the IO monad or any other. It might be able to specialize in context, but for many use cases (like System.Plugins) the specific context might not be statically available to the compiler. In this case, it is indeed true that the compiler must make some difficult-to-optimize assumptions. At best, you'll get a few useful generic optimizations like `return . f >=> return . g = return . (g . f)`.
Anyhow, if I read you correctly, you're trying to generalize from IO to other constructors when no such generalization is necessary (except for generic code).
That is exactly what I was thinking with regard to the impure deconstructor.
I guess this is how it is done (with special cases in the compiler), but I find it somewhat unsatisfactory that IO appears in the type system just like any other type constructor, but gets special treatment.
I think it might be neater to have a type mark/annotation that indicates impurity explicitly.
I think it might be neater to have a type mark/annotation that indicates impurity explicitly.
If you observe that IO is a monad but the monadic structure doesn't matter that much, in the sense that there are alternatives, and you observe that you're not supposed to know about unsafePerformIO then IO is exactly that. A marker.
You might also be interested in "Type Inclusion Constraints and Type Inference" by Aiken and Wimmers. IIRC their algorithm would infer \(g :: \forall a b c d. (a \to c \cap b \to d) \to a \to b \to c \times d\).
I am scanning it. Nicely written.
But I am interested in type constructors because I am thinking a bit about what a module system abstractly is, and simultaneously if I can give programmers a better experience by getting rid of higher-rank types.
So, hmm, doesn't seem to apply.
First-class-polymorphism, and adapting Mark P Jones' FCP is how I intend to avoid higher ranked types needing to be used directly.
Looks like I am not going to be able to get rid of higher-rank types since the problem cannot be defined very well and there isn't a 'database' of lambda terms to test a (heuristical) decision procedure against.
So, higher-rank it is. Anyway. I guess programmers somewhere will like higher-rank types since it gives them the feeling they might have learned something.
Looking at some of Daan Leijen's stuff now. And I think I remember Erik Meijer wrote an inferencer for a system F like system somewhere.
/*
 * config.h -- configure various defines for tcsh
 *
 * All source files should #include this FIRST.
 *
 * Edit this to match your system type.
 */
#ifndef _h_config
#define _h_config

#undef _FILE_OFFSET_BITS

/****************** System dependant compilation flags ****************/
/*
 * POSIX        This system supports IEEE Std 1003.1-1988 (POSIX).
 */
#define POSIX

/*
 * SYSVREL      Set to 1, 2, or 3, depending the version of System V
 *              you are running. Or set it to 0 if you are not SYSV based
 */
#define SYSVREL 4

/*
 * YPBUGS       Work around Sun YP bugs that cause expansion of ~username
 *              to send command output to /dev/null [they are back!]
 */
#undef YPBUGS

/*
 * NISPLUS      Make sure that fd's 0, 1, and 2 always are open so that
 *              Sun's NIS+ doesn't get them, making ~-expansion hang.
 */
#undef NISPLUS

/****************** local defines *********************/
/*
 * From: peter@zeus.dialix.oz.au (Peter Wemm)
 * If exec() fails look first for a #! [word] [word]...
 * Work around OS deficiency which cannot start #!/bin/sh scripts
 */
#define HASHBANG

#define setmode(fd,mode)    /*nothing*/

#endif /* _h_config */
Hi! I’m building a little star wars app in React.
I have a CardsList component that receives a title and an array as props. The array can be an array of characters, films, planets , etc.
The CardsList components returns a Card Deck with some CardItems in it. Each card item when clicked shows a modal with some info.
For example it shows, name and birth year for a character; title and director for a film, and so on.
What is the best way to tackle this? I've come up with this code, but it's not working as expected, and I think that overall it isn't the right way to achieve it.
import React from 'react';
import Modal from 'react-bootstrap/Modal';
import ListGroup from 'react-bootstrap/ListGroup';

const ModalItem = ({show, handleModalClose, item}) => {
  const {name, height, mass, birth_year, gender, homeworld, films, species, vehicles, starships} = item;
  const {title, episode_id, opening_crawl, director, release_date} = item;

  return (
    <Modal show={show}>
      <Modal.Header closeButton onClick={handleModalClose}>
        <Modal.Title>{name}</Modal.Title>
      </Modal.Header>
      <Modal.Body>
        <ListGroup variant="flush">
          <ListGroup.Item>
            {`Height: ${height}` || `Title: ${title}`}
          </ListGroup.Item>
          <ListGroup.Item>Mass: {mass}</ListGroup.Item>
          <ListGroup.Item>Birth Year: {birth_year}</ListGroup.Item>
          <ListGroup.Item>Gender: {gender}</ListGroup.Item>
          <ListGroup.Item>Homeworld: {homeworld}</ListGroup.Item>
        </ListGroup>
      </Modal.Body>
    </Modal>
  )
}

export default ModalItem;
This part of the manual provides reference material.
Appendix A, "Problems and Solutions"
Appendix B, "Error Messages"
Appendix C, "Information in NIS+ Tables"
Appendix D, "FNS Reference Formats and Syntax"
This.
The NIS_OPTIONS environment variable can be set to control various NIS+ debugging options. Options are specified after the NIS_OPTIONS command, separated by spaces, with the option set enclosed in double quotes. Each option has the format name=value. Values can be integers, character strings, or filenames, depending on the particular option. If a value is not specified for an integer value option, the value defaults to 1.
NIS_OPTIONS recognizes the following options:
This section describes problems that may be encountered in the course of routine NIS+ namespace administration work. Common symptoms include:
"Illegal object type" for operation message.
Other "object problem" error messages
Initialization failure
Checkpoint failures
Difficulty adding a user to a group
Logs too large/lack of disk space/difficulty truncating logs
Cannot delete groups_dir or org_dir
Symptoms
"Illegal object type" for operation message
Other "object problem" error messages
There are a number of possible causes for this error message:
You have attempted to create a table without any searchable columns.
A database operation has returned the status of DB_BADOBJECT (see the nis_db man page for information on the db error codes).
You are trying to add or modify a database object with a length of zero.
You attempted to add an object without an owner.
The operation expected a directory object, and the object you named was not a directory object.
You attempted to link a directory to a LINK object.
You attempted to link a table entry.
An object that was not a group object was passed to the nisgrpadm command.
An operation on a group object was expected, but the type of object specified was not a group object.
An operation on a table object was expected, but the object specified was not a table object.
Make sure that:
You can ping the NIS+ server to check that it is up and running as a machine.
The NIS+ server that you specified with the -H option is a valid server and that it is running the NIS+ software.
rpc.nisd is running on the server.
The nobody class has read permission for this domain.
The netmask is properly set up on this machine.
If checkpoint operations with a nisping -C command consistently fail, make sure you have sufficient swap and disk space. Check for error messages in syslog. Check for core files filling up space.
A user must first be an NIS+ principal client with a LOCAL credential in the domain's cred table before the user can be added as a member of a group in that domain. A DES credential alone is not sufficient.
Failure.
Lack of sufficient disk space will cause a variety of different error messages. (See "Insufficient Disk Space" for additional information.)
First, check to make sure that the file in question exists and is readable and that you have permission to write to it.
You can use ls /var/nis/trans.log to display the transaction log.
You can use nisls -l and niscat to check for existence, permissions, and readability.
You can use syslog to check for relevant messages.
The most likely cause of inability to truncate an existing log file for which you have the proper permissions is lack of disk space. (The checkpoint process first creates a duplicate temporary file of the log before truncating the log and then removing the temporary file. If there is not enough disk space for the temporary file, the checkpoint process cannot proceed.) Check your available disk space and free up additional space if necessary.
Domain.
Always delete org_dir and groups_dir before deleting their parent directory. If you use nisrmdir to delete the domain before deleting the domain's groups_dir and org_dir, you will not be able to delete either of those two subdirectories.
When removing or disassociating a directory from a replica server you must first remove the directory's org_dir and groups_dir subdirectories before removing the directory itself. After each subdirectory is removed, you must run nisping on the parent directory of the directory you intend to remove. (See "Removing a Directory".)
If you fail to perform the nisping operation, the directory will not be completely removed or disassociated.
If this occurs, you need to perform the following steps to correct the problem:
Remove /var/nis/rep/org_dir on the replica.
Make sure that org_dir.domain does not appear in /var/nis/rep/serving_list on the replica.
Perform a nisping on domain.
From the master server, run nisrmdir -f replica_directory.
If the replica server you are trying to dissociate is down or out of communication, the nisrmdir -s command will return a Cannot remove replica name: attempt to remove a non-empty table error message.
In such cases, you can run nisrmdir -f -s replicaname on the master to force the dissociation.
This section covers problems related to the namespace database and tables. Common symptoms include error messages with operative clauses such as:
Abort_transaction: Internal database error
Abort_transaction: Internal Error, log entry corrupt
CALLBACK_SVC: bad argument
as well as when rpc.nisd fails.
See also "NIS+ Ownership and Permission Problems".
Symptoms:
Various Database and transaction log corruption error messages containing the terms:
Possible Causes:
You have multiple independent rpc.nisd daemons running. In normal operation, rpc.nisd can spawn other child rpc.nisd daemons. This causes no problem. However, if two parent rpc.nisd daemons are running at the same time on the same machine, they will overwrite each other's data and corrupt logs and databases. (Normally, this could only occur if someone started running rpc.nisd by hand.)
Diagnosis:
Run ps -ef | grep rpc.nisd. Make sure that you have no more than one parent rpc.nisd process.
Solution:
If you have more than one "parent" rpc.nisd entries, you must kill all but one of them. Use kill -9 process-id, then run the ps command again to make sure it has died.
If you started rpc.nisd with the -B option, you must also kill the rpc.nisd_resolv daemon.
If an NIS+ database is corrupt, you will also have to restore it from your most recent backup that contains an uncorrupted version of the database. You can then use the logs to update changes made to your namespace since the backup was recorded. However, if your logs are also corrupted, you will have to recreate by hand any namespace modifications made since the backup was taken.
If an NIS+ table is too large, rpc.nisd may fail.
Diagnosis:
Use nisls to check your NIS+ table sizes. Tables larger than 7k may cause rpc.nisd to fail.
Solution:
Reduce the size of large NIS+ tables. Keep in mind that as a naming service NIS+ is designed to store references to objects, not the objects themselves.
This.
This section describes problem in which NIS+ was unable to find some object or principal. Common symptoms include:
Error messages with operative clauses such as:
"Can't find suitable transport for name"
The most likely cause of some NIS+ object not being found is that you mistyped or misspelled its name. Check the syntax and make sure that you are using the correct name.
A likely cause of an "object not found" problem is specifying an incorrect path. Make sure that the path you specified is correct. Also make sure that the NIS_PATH environment variable is set correctly.
Remember.
The NIS+ object may not have been found because it does not exist, either because it has been erased or not yet created. Use nisls -l in the appropriate domain to check that the object exists.
When you create or modify an NIS+ object, there is a time lag between the completion of your action and the arrival of the new updated information at a given replica. In ordinary operation, namespace information may be queried from a master or any of its replicas. A client automatically distributes queries among the various servers (master and replicas) to balance system load. This means that at any given moment you do not know which machine is supplying you with namespace information. If a command relating to a newly created or modified object is sent to a replica that has not yet received the updated information from the master, you will get an "object not found" type of error or the old out-of-date information. Similarly, a general command such as nisls may not list a newly created object if the system sends the nisls query to a replica that has not yet been updated.
You can use nisping to resync a lagging or out of sync replica server.
Alternatively, you can use the -M option with most NIS+ commands to specify that the command must obtain namespace information from the domain's master server. In this way you can be sure that you are obtaining and using the most up-to-date information. (However, you should use the -M option only when necessary because a main point of having and using replicas to serve the namespace is to distribute the load and thus increase network efficiency.)
One or more of the files in /var/nis/data directory has become corrupted or erased. Restore these files from your most recent backup.
In Solaris Release 2.4 and earlier, the /var/nis directory contained two files named hostname.dict and hostname.log. It also contained a subdirectory named /var/nis/hostname. Starting with Solaris Release 2.5, the two files were renamed trans.log and data.dict, and the subdirectory is named /var/nis/data.
Do not rename the /var/nis or /var/nis/data directories or any of the files in these directories that were created by nisinit or any of the other NIS+ setup procedures.
In Solaris Release 2.5, the content of the files was changed so that they can be used only by Solaris Release 2.5 or later versions of rpc.nisd. Therefore, you should not rename either the directories or the files.
Symptoms:
Sometimes an object is there, sometimes it is not. Some NIS+ or UNIX commands report that an NIS+ object does not exist or cannot be found, while other NIS+ or UNIX commands do find that same object.
Diagnoses:
Use nisls to display the object's name. Look carefully at the object's name to see if the name actually begins with a blank space. (If you accidentally enter two spaces after the flag when creating NIS+ objects from the command line with NIS+ commands, some NIS+ commands will interpret the second space as the beginning of the object's name.)
Solution:
If an NIS+ object name begins with a blank space, you must either rename it without the space or remove it and then recreate it from scratch.
Symptoms:
You cannot use the nisln command (or any other command) to create links between entries in tables. NIS+ commands do not follow links at the entry level.
This".
This.
This section describes common slow performance and system hang problems.
Error messages with operative clauses such as:
Other common symptoms:
You issue a command and nothing seems to happen for far too long.
Your system, or shell, no longer responds to keyboard or mouse commands.
NIS+ operations seem to run slower than they should or slower than they did before.
Someone has issued an nisping or nisping -C command. Or the rpc.nisd daemon is performing a checkpoint operation.
Do not reboot! Do not issue any more nisping commands.
When performing a nisping or checkpoint, the server will be sluggish or may not immediately respond to other commands. Depending on the size of your namespace, these commands may take a noticeable amount of time to complete. Delays caused by checkpoint or ping commands are multiplied if you, or someone else, enter several such commands at one time. Do not reboot. This kind of problem will solve itself. Just wait until the server finishes performing the nisping or checkpoint command.
During a full master-replica resync, the involved replica server will be taken out of service until the resync is complete. Do not reboot--just wait.
NIS_PATH
Make sure that your NIS_PATH variable is set to something clean and simple. For example, the default: org_dir.$:$. A complex NIS_PATH, particularly one that itself contains a variable, will slow your system and may cause some operations to fail. (See "NIS_PATH Environment Variable" for more information.)
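For example, a minimal sketch of resetting NIS_PATH to the default value (quote the value so the shell does not expand the $ characters):
    NIS_PATH='org_dir.$:$'; export NIS_PATH      (Bourne-type shells)
    setenv NIS_PATH 'org_dir.$:$'                (C shell)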
Do not use nistbladm to set nondefault table paths; table paths slow performance.
Too many replicas for a domain degrade system performance during replication. There should be no more than 10 replicas in a given domain or subdomain. If you have more than five replicas in a domain, try removing some of them to see if that improves performance.
A recursive group is a group that contains the name of some other group. While including other groups in a group reduces your work as system administrator, doing so slows down the system. You should not use recursive groups.
When rpc.nisd starts up it goes through each log. If the logs are long, this process could take a long time. If your logs are long, you may want to checkpoint them using nisping -C before starting rpc.nisd.
Symptoms:
If you used the -M option to specify that your request be sent to the master server, and the rpc.nisd daemon has died on that machine, you will get a "server not responding" type error message and no updates will be permitted. (If you did not use the -M option, your request will be automatically routed to a functioning replica server.)
Possible Cause:
Using uppercase letters in the name of a home directory or host can sometimes cause rpc.nisd to die.
Diagnosis:
First make sure that the server itself is up and running. If it is, run ps -ef | grep rpc.nisd to see if the daemon is still running.
Solution:
If the daemon has died, restart it. If rpc.nisd frequently dies, contact your service provider.
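A minimal sketch of the check-and-restart sequence (the path is typical for Solaris; restart rpc.nisd with whatever options it was originally started with, for example -Y if it was running in NIS-compatibility mode):
    ps -ef | grep rpc.nisd
    /usr/sbin/rpc.nisd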
Symptoms:
It takes too long for a machine to locate namespace objects in other domains.
Possible Cause:
You do not have nis_cachemgr running.
Diagnosis:
Run ps -ef | grep nis_cachemgr to see if it is still running.
Solution:
Start nis_cachemgr on that machine.
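A minimal sketch (the path is typical for Solaris; the optional -i flag forces the cache to be rebuilt from the cold-start file):
    /usr/sbin/nis_cachemgr -i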
Symptoms:
A server performs slowly and sluggishly after using the NIS+ scripts to install NIS+ on it.
Possible Cause:
You forgot to run nisping -C -a after running the nispopulate script.
Solution:
Run nisping -C -a to checkpoint the system as soon as you are able to do so.
Symptoms:
You run niscat and get an error message indicating that the server is busy.
Possible Cause:
The server is busy with a heavy load, such as when doing a resync.
The server is out of swap space.
Diagnosis:
Run swap -s to check your server's swap space.
Solution:
You must have adequate swap and disk space to run NIS+. If necessary, increase your space.
Symptoms:
Setting the host name for an NIS+ server to be fully qualified is not recommended. If you do so, and NIS+ queries then just hang with no error messages, check the following possibilities:
Possible Cause:
Fully qualified host names must meet the following criteria:
The domain part of the host name must be the same as the name returned by the domainname command.
After setting the host name to be fully qualified, you must also update all the necessary /etc and /etc/inet files with the new host name information.
The host name must end in a period.
Solution:
Kill the NIS+ processes that are hanging and then kill rpc.nisd on that host or server. Rename the host to match the two requirements listed above. Then reinitialize the server with nisinit. (If queries still hang after you are sure that the host is correctly named, check other problem possibilities in this section.)
This section describes problems having to do with lack of system resources such as memory, disk space, and so forth.
Error messages with operative clauses such as:
"Cannot [do something] with log" type messages
Lack of sufficient memory or swap space on the system you are working with will cause a wide variety of NIS+ problems and error messages. As a short-term, temporary solution, try to free additional memory by killing unneeded windows and processes. If necessary, exit your windowing system and work from the terminal command line. If you still get messages indicating inadequate memory, you will have to install additional swap space or memory, or switch to a different system that has enough swap space or memory.
Under some circumstances, applications and processes may develop memory leaks and grow too large. You can check the current size of an application or process by running:
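A minimal sketch, assuming the Solaris long-format process listing (substitute the process you want to inspect):
    ps -el | grep rpc.nisd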
The sz (size) column shows the current memory size of each process. If necessary, compare the sizes with comparable processes and applications on a machine that is not having memory problems to see if any have grown too large.
On a heavily loaded machine it is possible that you could reach the maximum number of simultaneous processes that the machine is configured to handle. This causes messages with clauses like "unable to fork". The recommended method of handling this problem is to kill any unnecessary processes. If the problem persists, you can reconfigure the machine to handle more processes as described in your system administration documentation.
This section describes NIS+ problems that a typical user might encounter.
User cannot log in.
User cannot rlogin to other domain
There are many possible reasons for a user being unable to log in:
User forgot password. To set up a new password for a user who has forgotten the previous one, run passwd for that user on another machine (naturally, you have to be the NIS+ administrator to do this).
Mistyping password. Make sure the user knows the correct password and understands that passwords are case-sensitive and that the letter "o" is not interchangeable with the numeral "0," nor is the letter "l" the same as the numeral "1."
"Login incorrect" type message. For causes other than simply mistyping the password, see "Login Incorrect Message".
The user's password privileges have expired (see "Password Privilege Expiration").
An inactivity maximum has been set for this user, and the user has passed it (see "Specifying Maximum Number of Inactive Days").
The user's nsswitch.conf file is incorrect. The passwd entry in that file must be one of the following five permitted configurations:
passwd: files
passwd: files nis
passwd: files nisplus
passwd: compat
passwd: compat
passwd_compat: nisplus
Any other configuration will prevent a user from logging in.
(See "nsswitch.conf File Requirements" for further details.)
Symptoms:
Users who recently changed their password are unable to log in at all, or are able to log in on some machines but not on others.
Possible Causes:
It may take some time for the new password to propagate through the network. Have users try to log in with the old password.
The password was changed on a machine that was not running NIS+ (see "User Cannot Log In Using New Password").
Symptoms:
User tries to rlogin to a machine in some other domain and is refused with a "Permission denied" type error message.
Possible Cause:
To rlogin to a machine in another domain, a user must have LOCAL credentials in that domain.
Diagnosis:
Run nismatch username.domainname. cred.org_dir in the other domain to see if the user has a LOCAL credential in that domain.
Solution:
Go to the remote domain and use nisaddcred to create a LOCAL credential for the user in that domain.
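A minimal sketch (the UID 30601 and the principal name sarah.doc.com. are hypothetical); run the command while the remote domain is your default domain, or append the domain name as a final argument:
    nisaddcred -p 30601 -P sarah.doc.com. local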
The".
This.
This).:
Error messages in console or syslog with operative phrases like the following are most often caused by syntax errors in DNS data and boot files:
Non-authoritative answer:
error receiving zone transfer
Check the relevant files for spelling and syntax errors.
A common syntax error is misuse of the trailing dot in domain names (either using the dot when you should not, or not using it when you should). See "Trailing Dots in Domain Names".
Symptom:
When you run fnlist to see what is in the initial context, you see nothing.
Possible Cause:
This is caused by an NIS+ configuration problem. The organization associated with the user and machine running the fn* commands does not have an associated ctx_dir directory.
Diagnosis:
Use the nisls command to see whether there is a ctx_dir directory.
Solution:
If there is no ctx_dir directory, run fncreate -t org/nis+_domain_name/ to create the ctx_dir directory.
Symptom:
You run fnlist with an organization name, expecting to see suborganizations, but instead see nothing.
Possible Cause:
This is caused by an NIS+ configuration problem. Suborganizations must be NIS+ domains. By definition, an NIS+ domain must have a subdirectory named org_dir.
Diagnosis:
Use the nisls command to see what subdirectories exist. Run nisls on each subdirectory to verify which subdirectories have an org_dir. The subdirectories with an org_dir are suborganizations.
Solution:
Not applicable.
Symptom:
You use the -s option with fnbind and fncreate, but for certain names you get "name in use."
Possible Cause:
fnbind -s and fncreate -s overwrite the existing binding if it already exists; but if the old binding is one that must be kept to avoid orphaned contexts, the operation fails with a "name in use" error because the binding could not be removed.
Diagnosis:
Run the fnlist command on the name to verify that it is a context.
Solution:
Run the fndestroy command to remove the context before running fnbind or fncreate on the same name.
Symptom:
When you do an fndestroy or fnunbind on certain names that you know do not exist, you receive no indication that the operation failed.
Possible Cause:
The operation did not fail. The semantics of fndestroy and fnunbind are that if the terminal name is not bound, the operation returns success.
Diagnosis:
Run the fnlookup command on the name. You should receive the message, "name not found."
Solution:
Not applicable.
This section alphabetically lists some common error messages. For each message there is an explanation and, where appropriate, a solution or a cross-reference to some other portion of this manual.
Appendix A, Problems and Solutions, describes various type of problems and their solutions. Where appropriate, error messages in this appendix are cross-referenced to the corresponding section in Appendix A, Problems and Solutions.
Some of the error messages documented in this chapter are documented more fully in the appropriate man pages.
abort_transaction: Failed to action NIS+ objectname
The abort_transaction routine failed to back out of an incomplete transaction due to a server crash or some other unrecoverable error. See "NIS+ Database Problems" for further information.
abort_transaction: Internal database error
abort_transaction: Internal error, log entry corrupt NIS+ objectname
These two messages indicate some form of corruption in a namespace database or log. See "NIS+ Database Problems" for additional information.
add_cleanup: Cant allocate more rags.
This message indicates that your system is running low on available memory. See "Insufficient Memory" for information on insufficient memory problems.
add_pingitem: Couldn't add directoryname to pinglist (no memory)
See "Insufficient Memory" for information on low memory problems.
add_update: Attempt add transaction from read only child.
add_update Warning: attempt add transaction from read only child
An attempt by a read-only child rpc.nisd process to add an entry to a log. An occasional appearance of this message in a log is not serious. If this message appears frequently, contact the Sun Solutions Center.
Attempting to free a free rag!
This message indicates a software problem with rpc.nisd. The rpc.nisd should have aborted. Run ps -ef | grep rpc.nisd to see if rpc.nisd is still running. If it is, kill it and restart it with the same options as previously used. If it is not running, restart it with the same options as previously used. Check /var/nis to see if a core file has been dumped. If there is a core file, delete it.
If you started rpc.nisd with the -YB option, you must also kill the rpc.nisd_reply daemon.
Attempt to remove a non-empty table
An attempt has been made by nistbladm to remove an NIS+ table that still contains entries, or by nisrmdir to remove a directory that contains files or subdirectories.
If you are trying to delete a table, use niscat to check the contents of the table and nistbladm to delete any existing contents (see the sketch at the end of this entry).
If you are trying to delete a directory, use nisls -l -R to check for existing files or subdirectories and delete them first.
If you are trying to dissociate a replica from a domain with nisrmdir -s, and the replica is down or otherwise out of communication with the master, you will get this error message. In such cases, you can run nisrmdir -f -s replicaname on the master to force the dissociation.
This message is generated by the NIS+ error code constant: NIS_NOTEMPTY. See the nis_tables man page for additional information.
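A minimal sketch of emptying and then deleting a table (the table name hobbies.org_dir and the entry name arnold are hypothetical):
    niscat hobbies.org_dir                          (list the remaining entries)
    nistbladm -r '[name=arnold],hobbies.org_dir'    (remove one entry)
    nistbladm -d hobbies.org_dir                    (delete the now-empty table)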
attribute no permission
FNS error message. The caller did not have permission to perform the attempted attribute operation.
attribute value required
FNS error message. The operation attempted to create an attribute without a value, and the specific naming system does not allow this.
authdes_marshal: DES encryption failure
DES encryption for some authentication data failed. Possible causes:
Corruption of a library function or argument.
A problem with a DES encryption chip, if you are using one.
Call the Sun Solutions Center for assistance.
authdes_refresh: keyserv is unable to encrypt session key
The keyserv process was unable to encrypt the session key with the public key that it was given. See "Keyserv Failure" for additional information.
authdes_refresh: unable to encrypt conversation key
The keyserv process could not encrypt the session key with the public key that was given. This usually requires some action on your part. Possible causes are:
The keyserv process is dead or not responding. Use ps -ef to check whether the keyserv process is running on the keyserv host. If it is not, then start it, and then run keylogin.
The client has not performed a keylogin. Do a keylogin for the client and see if that corrects the problem.
The client host does not have credentials. Run nismatch on the client's home domain cred table to see if the client host has the proper credentials. If it does not, create them.
A DES encryption failure. See the authdes_marshal: DES encryption failure error message).
See "NIS+ Security Problems" for additional information regarding security key problems.
authdes_refresh: unable to synchronize clock
This indicates a synchronization failure between client and server clocks. This will usually correct itself. However, if this message is followed by any time stamp related error, you should manually resynchronize the clocks. If the problem reoccurs, check that remote rpcbind is functioning correctly.
authdes_refresh: unable to synch up w/server
The client-server clock synchronization has failed. This could be caused by the rpcbind process on the server not responding. Use ps -ef on the server to see if rpcbind is running. If it is not, restart it. If this error message is followed by any time stamp-related message, then you need to use rdate servername to manually resync the client clock to the server clock.
authdes_seccreate: keyserv is unable to generate session key
This indicates that keyserv was unable to generate a random DES key for this session. This requires some action on your part:
Check to make sure that keyserv is running properly. If it is not, restart it along with all other long-running processes that use Secure RPC or make NIS+ calls such as automountd, rpc.nisd and sendmail. Then do a keylogin.
If keyserv is up and running properly, restart the process that logged this error.
authdes_seccreate: no public key found for servername
The client side cannot get a DES credential for the server named servername. This requires some action on your part:
Check to make sure that servername has DES credentials. If it does not, create them.
Check the switch configuration file to see which name service is specified and then make sure that service is responding. If it is not responding, restart it.
authdes_seccreate: out of memory
See "NIS+ System Resource Problems" for information on insufficient memory problems.
authdes_seccreate: unable to gen conversation key
The keyserv process was unable to generate a random DES key. The most likely cause is that the keyserv process is down or otherwise not responding. Use ps -ef to check whether the keyserv process is running on the keyserv host. If it is not, then start it and run keylogin.
If restarting keyserv fails to correct the problem, it might be that other processes that use Secure RPC or make NIS+ calls are not running (for example, automountd, rpc.nisd, or sendmail). Check to see whether these processes are running; if they are not, restart them.
See "NIS+ Security Problems" for additional information regarding security key problems.
authdes_validate: DES decryption failure
See the authdes_marshal: DES encryption failure entry for this authentication data failure.
authdes_validate: verifier mismatch
The time stamp that the client sent to the server does not match the one received from the server. (This is not recoverable within a Secure RPC session.) Possible causes:
Corruption of the session key or time stamp data in the client or server cache
The server deleted from its cache a session key for a still-active session.
Network data corruption.
Try re-executing the command.
authentication failure
FNS error message. The operation could not be completed because the principal making the request cannot be authenticated with the name service involved. If the service is NIS+, check that you are identified as the correct principal (run the command nisdefaults) and that your machine has specified the correct source for publickeys. Check that the /etc/nsswitch.conf file has the entry, publickey: nisplus.
bad reference
FNS error message. FNS could not interpret the contents of the reference. This can result if the contents of the reference have been corrupted or when the reference identifies itself as an FNS reference, but FNS doesn't know how to decode it.
CacheBind: xdr_directory_obj failed.
The most likely causes for this message are:
Bad or incorrect parameters being passed to the xdr_directory_obj routine.
Cache expired
The entry returned came from an object cache that has expired. This means that the time-to-live value has gone to zero and the entry might have changed. If the flag -NO_CACHE was passed to the lookup function, then the lookup function will retry the operation to get an unexpired copy of the object.
This message is generated by the NIS+ error code constant: NIS_CACHEEXPIRED. See the nis_tables and nis_names man pages for additional information.
Callback: - select failed message nnnn
An internal system call failed. In most cases this problem will correct itself. If it does not correct itself, make sure that rpc.nisd has not been aborted. If it has, restart it. If the problem reoccurs frequently, contact the Sun Solutions Center.
CALLBACK_SVC: bad argument
An internal system call failed. In most cases this problem will correct itself. If it does not correct itself, make sure that rpc.nisd has not been aborted. If it has, restart it. If the problem reoccurs frequently, contact the Sun Solutions Center.
Cannot grow transaction log error string
The system cannot add to the log file. The reason is indicated by the string. The most common cause of this message is lack of disk space. See "Insufficient Disk Space".
Cannot obtain Initial Context
FNS error message. Indicates an installation problem. See "Cannot Obtain Initial Context".
Cannot write one character to transaction log, errormessage
An attempt has been made by the rpc.nisd daemon to add an update from the current transaction into the transaction log, and the attempt has failed for the reason given in the message that has been returned by the function. Additional information can be obtained from the write routine's man page.
Can't compile regular expression variable
Returned by the nisgrep command when the expression in keypat was malformed.
Can't get any map parameter information.
See "NIS Problems and Solutions"
Can't find name service for passwd
Either there is no nsswitch.conf file or there is no passwd entry in the file, or the passwd entry does not make sense or is not one of the allowed formats.
Can't find name's secret key
Possible causes:
You might have incorrectly typed the password.
There might not be an entry for name in the cred table.
NIS+ could not decrypt the key (possibly because the entry is corrupted).
*** servername.domainname can't find machinename; Server failed.
DNS error message. See "Server Failed and Zone Expired Problems" and "Other DNS Syntax Errors".
Can't find server name for address 127.0.0.1; server failed.
DNS error message. This message usually indicates that your primary master server is using an outdated named.ca file with invalid information. If your network is connected to the Internet, you need to get a current named.ca file from the authority that administers your top level domain (.com, for instance). For .com, .edu, .gov, .mil, .org, and others, that authority is InterNIC. If your network is not connected to the Internet, you need to check your named.ca file for errors.
checkpoint_log: Called from read only child ignored.
This is a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
checkpoint_log: Unable to checkpoint, log unstable.
An attempt was made to checkpoint a log that was not in a stable state. (That is, the log was in a resync, update, or checkpoint state.) Wait until the log is stable, and then rerun the nisping command.
check_updaters: Starting resync.
This is a system status message. No action need be taken.
Child process requested to checkpoint!
This message indicates a minor software problem that the system is capable of correcting. If these messages appear often, you can change the threshold level in your /etc/syslog.conf file. See the syslog.conf man page for details.
Column not found: columnname
The specified column does not exist in the specified table.
communication failure
FNS error message. FNS could not communicate with the name service to complete the operation.
configuration error
An error resulted because of configuration problems. Examples:
(1) The bindings table is removed out-of-band (outside of FNS).
(2) A host is in the NIS+ hosts directory object but does not have a corresponding FNS host context.
context not empty
FNS error message. An attempt has been made to remove a context that still contains bindings.
continue operation using status values
FNS error message. The operation should be continued using the remaining name and the resolved reference returned in the status.
Could not find string's secret key
A possible cause is a key in an /etc/publickey file that is different from the NIS+ password recorded in the cred table.
See "NIS+ Security Problems" for information on diagnosing and solving these types of problem.
Could not generate netname
The Secure RPC software could not generate the Secure RPC netname for your UID when performing a keylogin. This could be due to the following causes:
You do not have LOCAL credentials in the NIS+ cred table of the machine's home domain.
You have a local entry in /etc/passwd with a UID that is different from the UID you have in the NIS+ passwd table.
string: could not get secret key for string
A possible cause is a key in an /etc/publickey file that is different from the NIS+ password recorded in the cred table.
See "NIS+ Security Problems" for information on diagnosing and solving these type of problem.
Couldn't fork a process!
The server could not fork a child process to satisfy a callback request. This is probably caused by your system reaching its maximum number of processes. You can kill some unneeded processes, or increase the number of processes your system can handle. See "Insufficient Processes" for additional information.
Couldn't parse access rights for column string
This message is usually returned by the nistbladm -u command when something other than a + (plus sign), a - (minus sign), or an = (equal sign) is entered as the operator. Other possible causes are failure to separate different column rights with a comma, or the entry of something other than r,d,c, or m for the type of permission. Check the syntax for this type of entry error. If everything is entered correctly and you still get this error, the table might have been corrupted.
Database for table does not exist
An attempt to look up a table has failed. See "NIS+ Object Not Found Problems" for possible causes.
This message is generated by the NIS+ error code constant: NIS_NOSUCHTABLE. See the nis_tables and nis_names man pages for additional information.
_db_add: child process attempting to add/modify
_db_addib: non-parent process attempting an add
These messages indicate that a read-only or nonparent process attempted to add or modify an object in the database. In most cases, these messages do not require any action on your part. If these messages are repeated frequently, call the Sun Solutions Center.
db_checkpoint: Unable to checkpoint string
This message indicates that for some reason NIS+ was unable to complete checkpointing of a directory. The most likely cause is that the disk is full. (See "Insufficient Disk Space" for additional information.)
_db_remib: non-parent process attempting an remove
_db_remove: non-parent process attempting a remove
These messages indicate that a read-only or non-parent process attempted to remove a table entry. In most cases, these messages do not require any action on your part. If these messages are repeated frequently, call the Sun Solutions Center.
Do you want to see more information on this command?
This indicates that there is a syntax or spelling error on your script command line.
Entry/Table type mismatch
This occurs when an attempt is made to add or modify an entry in a table, and the entry passed is of a different type from the table. For example, if the number of columns is not the same. Check that your update correctly matches the table type.
This message is generated by the NIS+ error code constant: NIS_TYPEMISMATCH. See the nis_tables man page for additional information.
error
FNS error message. An error that cannot be classified as one of the other errors listed above occurred while processing the request. Check the status of the naming services involved in the operation and see whether any of them are experiencing extraordinary problems.
**ERROR: chkey failed again. Please contact your network administrator to verify your network password.
This message indicates that you typed the wrong network password.
If this is the first time you are initializing this machine, contact your network administrator to verify the network password.
If this machine has been initialized before as an NIS+ client of the same domain, try typing the root login password at the Secure RPC password prompt.
If this machine is currently an NIS+ client and you are trying to change it to a client of a different domain, remove the /etc/.rootkey file, and rerun the nisclient script, using the network password given to you by your network administrator (or the network password generated by the nispopulate script).
Error: Could not create a valid NIS+ coldstart file
This message is from nisinit, the NIS+ initialization routine. It is followed by another message preceded by a string that begins: "lookup:..". This second message will explain why a valid NIS+ cold-start file could not be created.
**ERROR: could not restore file filename
This message indicates that NIS+ was unable to rename filename.no_nisplus to filename.
Check your system console for system error messages.
If there is a system error message, fix the problem described in the error message and rerun nisclient -i.
If there aren't any system error messages, try renaming this file manually, and then rerun nisclient -i.
**ERROR: Couldn't get the server NIS+_server's address.
The script was unable to retrieve the server's IP address for the specified domain. Manually add the IP address for NIS+_server into the /etc/hosts or /etc/inet/ipnodes file, then rerun nisclient -i.
**ERROR: directory directory-path does not exist.
This message indicates that you typed an incorrect directory path. Type the correct directory path.
**ERROR: domainname does not exist.
This message indicates that you are trying to replicate a domain that does not exist.
If domainname is spelled incorrectly, rerun the script with the correct domain name.
If the domainname domain does not exist, create it. Then you can replicate it.
**ERROR: parent-domain does not exist.
This message indicates that the parent domain of the domain you typed on the command line does not exist. This message should only appear when you are setting up a nonroot master server.
If the domain name is spelled incorrectly, rerun the script with the correct domain name.
If the domain's parent domain does not exist, you have to create the parent domain first, and then you can create this domain.
**ERROR: Don't know about the domain "domainname". Please check your domainname.
This message indicates that you typed an unrecognized domain name. Rerun the script with the correct domain name.
**ERROR: failed dumping tablename table.
The script was unable to populate the cred table because the script did not succeed in dumping the named table.
If niscat tablename.org_dir fails, make sure that all the servers are operating, then rerun the script to populate the tablename table.
If niscat tablename.org_dir is working, the error might have been caused by the NIS+ server being temporarily busy. Rerun the script to populate the tablename table.
**ERROR: host hostname is not a valid NIS+ principal in domain domainname. This host name must be defined in the credential table in domain domainname. Use nisclient -c to create the host credential
A machine has to be a valid NIS+ client with proper credentials before it can become an NIS+ server. To convert a machine to an NIS+ root replica server, the machine first must be an NIS+ client in the root domain. Follow the instructions on how to add a new client to a domain, then rerun nisserver -R.
Before you can convert a machine to an NIS+ nonroot master or a replica server, the machine must be an NIS+ client in the parent domain of the domain that it plans to serve. Follow the instructions on how to add a new client to a domain, then rerun nisserver -M or nisserver -R.
This problem should not occur when you are setting up a root master server.
Error in accessing NIS+ cold start file is NIS+ installed?
This message is returned if NIS+ is not installed on a machine or if for some reason the file /var/nis/NIS_COLD_START could not be found or accessed. Check to see if there is a /var/nis/NIS_COLD_START file. If the file exists, make sure your path is set correctly and that NIS_COLD_START has the proper permissions. Then rename or remove the old cold-start file and rerun the nisclient script to install NIS+ on the machine.
This message is generated by the cache manager that sends the NIS+ error code constant: NIS_COLDSTART_ERR. See the write and open man pages for additional information on why a file might not be accessible.
Error in RPC subsystem
This fatal error indicates the RPC subsystem failed in some way. Generally, there will be a syslog message on either the client or server side indicating why the RPC request failed.
This message is generated by the NIS+ error code constant: NIS_RPCERROR. See the nis_tables and nis_names man pages for additional information.
**ERROR: it failed to add the credential for root.
The NIS+ command nisaddcred failed to create the root credential when trying to set up a root master server. Check your system console for system error messages:
If there is a system error message, fix the problem described in the error message and then rerun nisserver.
If there aren't any system error messages, check to see whether the rpc.nisd process is running. If it is not running, restart it and then rerun nisserver.
**ERROR: it failed to create the tables.
The NIS+ command nissetup failed to create the directories and tables. Check your system console for system error messages:
If there is a system error message, fix the problem described in the error message and rerun nisserver.
If there aren't any system error messages, check to see whether the rpc.nisd process is running. If it is not running, restart it and rerun nisserver.
**ERROR: it failed to initialize the root server.
The NIS+ command nisinit -r failed to initialize the root master server. Check your system console for system error messages. If there is a system error message, fix the problem described in the error message and rerun nisserver.
**ERROR: it failed to make the domainname directory
The NIS+ command nismkdir failed to make the new directory domainname when running nisserver to create a nonroot master. The parent domain does not have create permission to create this new domain.
If you are not the owner of the domain or a group member of the parent domain, rerun the script as the owner or as a group member of the parent domain.
If rpc.nisd is not running on the new master server of the domain that you are trying to create, restart rpc.nisd.
**ERROR: it failed to promote new master for the domainname directory
The NIS+ command nismkdir failed to promote the new master for the directory domainname when creating a nonroot master with the nisserver script.
If you do not have modify permission in the parent domain of this domain, rerun the script as the owner or as a group member of the parent domain.
If rpc.nisd is not running on the servers of the domain that you are trying to promote, restart rpc.nisd on these servers and rerun nisserver.
**ERROR: it failed to replicate the directory-name directory
The NIS+ command nismkdir failed to create the new replica for the directory directory-name.
If rpc.nisd is not running on the master server of the domain that you are trying to replicate, restart rpc.nisd on the master server, rerun nisserver.
If rpc.nisd is not running on the new replica server, restart it on the new replica and rerun nisserver.
**ERROR: invalid group name. It must be a group in the root-domain domain.
This message indicates that you used an invalid group name while trying to configure a root master server. Rerun nisserver -r with a valid group name for root-domain.
**ERROR: invalid name "client-name" It is neither an host nor an user name.
This message indicates that you typed an invalid client-name.
If client-name was spelled incorrectly, rerun nisclient -c with the correct client-name.
If client-name was spelled correctly, but it does not exist in the proper table, put client-name into the proper table and rerun nisclient -c. For example, a user client belongs in the passwd table, and a host client belongs in the hosts table.
**ERROR: hostname is a master server for this domain. You cannot demote a master server to replica. If you really want to demote this master, you should promote a replica server to master using nisserver with the M option.
You cannot directly convert a master server to a replica server of the same domain. You can, however, change a replica to be the new master server of a domain by running nisserver -M with the replica host name as the new master. This automatically makes the old master a replica.
**ERROR: missing hostnames or usernames.
This message indicates that you did not type the client names on the command line. Rerun nisclient -c with the client names.
**ERROR: NIS+ group name must end with a "."
This message indicates that you did not specify a fully qualified group name ending with a period. Rerun the script with a fully qualified group name.
**ERROR: NIS+ server is not running on remote-host. You must do the following before becoming a NIS+ server: 1. become a NIS+ client of the parent domain or any domain above the domain which you plan to serve. (nisclient) 2. start the NIS+ server. (rpc.nisd)
This message indicates that rpc.nisd is not running on the remote machine that you are trying to convert to an NIS+ server. Use the nisclient script to become an NIS+ client of the parent domain or any domain above the domain you plan to serve; start rpc.nisd on remote-host.
**ERROR: nisinit failed.
nisinit was unable to create the NIS_COLD_START file.
Check the following:
That the NIS+ server you specified with the -H option is running--use ping
That you typed the correct domain name
That rpc.nisd is running on the server
That the nobody class has read permission for this domain
**ERROR: NIS map transfer failed. tablename table will not be loaded.
NIS+ was unable to transfer the NIS map for this table to the NIS+ database.
If the NIS server host is running, try running the script again. The error might have been due to a temporary failure.
If all tables have this problem, try running the script again using a different NIS server.
**ERROR: no permission to create directory domainname
The parent domain does not have create permission to create this new domain. If you are not the owner of the domain or a group member of the parent domain, rerun the script as the owner or as a group member of the parent domain.
**ERROR: no permission to replicate directory domainname.
This message indicates that you do not have permission to replicate the domain. Rerun the script as the owner or as a group member of the domain.
error receiving zone transfer
DNS error message. This usually indicates a syntax error in one of the primary server's DNS files. See "Other DNS Syntax Errors".
**ERROR: table tablename.org_dir.domainname does not exist. tablename table will not be loaded.
The script did not find the NIS+ table tablename.
If tablename is spelled incorrectly, rerun the script with the correct table name.
If the tablename table does not exist, use nissetup to create the table if tablename is one of the standard NIS+ tables. Or use nistbladm to create the private table tablename. Then rerun the script to populate this table.
If the tablename table exists, the error might have been caused by the NIS+ server being temporarily busy. Rerun the script to populate this tablename table.
**ERROR: this name "clientname" is in both the passwd and hosts tables. You cannot have an username same as the host name.
client-name appears in both the passwd and hosts tables. One name is not allowed to be in both of these tables. Manually remove the entry from either the passwd or hosts table. Then, rerun nisclient -c.
**ERROR: You cannot use the -u option as a root user.
This message indicates that the superuser tried to run nisclient -u. The -u option is for initializing ordinary users only. Superusers do not need to be initialized as NIS+ clients.
**ERROR: You have specified the Z option after having selected the X option. Please select only one of these options [list]. Do you want to see more information on this command?
The script you are running allows you to choose only one of the listed options.
Type y to view additional information.
Type n to stop the script and exit.
After exiting the script, rerun it with just one of the options.
**ERROR: you must specify a fully qualified groupname.
This message indicates that you did not specify a fully qualified group name ending with a period. Rerun the script with a fully qualified group name.
**ERROR: you must specify both the NIS domainname (-y) and the NIS server host name (-h).
This message indicates that you did not type either the NIS domain name and/or the NIS server host name. Type the NIS domain name and the NIS server host name at the prompt or on the command line.
**ERROR: you must specify one of these options: -c, -i, -u, -r.
This message indicates that one of these options, -c, -i, -u, -r was missing from the command line. Rerun the script with the correct option.
**ERROR: you must specify one of these options: -r, -M or -R"
This message indicates that you did not type any of the -r, -M, or -R options. Rerun the script with the correct option.
**ERROR: you must specify one of these options: -C, -F, or -Y
This message indicates that you did not type one of the -C, -F, or -Y options. Rerun the script with the correct option.
**ERROR: You must be root to use -i option.
This message indicates that an ordinary user tried to run nisclient -i. Only the superuser has permission to run nisclient -i.
Error while talking to callback proc
An RPC error occurred on the server while it was calling back to the client. The transaction was aborted at that time and any unsent data was discarded. Check the syslog on the server for more information.
This message is generated by the NIS+ error code constant: NIS_CBERROR. See the nis_tables man page for additional information.
First/Next chain broken
This message indicates that the connection between the client and server broke while a callback routine was posting results. This could happen if the server died in the middle of the process.
This message is generated by the NIS+ error code constant: NIS_CHAINBROKEN.
getzone: print_update failed
DNS error message. This usually indicates a syntax error in one of the primary server's DNS files. See "Other DNS Syntax Errors".
Generic system error
Some form of generic system error occurred while attempting the request. Check the syslog record on your system for error messages from the server.
This message usually indicates that the server has crashed or the database has become corrupted. This message might also be generated if you incorrectly specify the name of a server or replica as if it belonged to the domain it was servicing rather than the domain above. See "Domain Name Confusion" for additional information.
This message is generated by the NIS+ error code constant: NIS_SYSTEMERROR. See the nis_tables and nis_names man pages for additional information.
illegal name
FNS error message. The name supplied is not a legal name.
Illegal object type for operation
See "Illegal Object Problems" for a description of these type of problems.
This message is generated by the NIS+ error code constant: DB_BADOBJECT.
incompatible code sets
FNS error message. The operation involved character strings from incompatible code sets, or the supplied code set is not supported by the implementation.
in.named [nnnn]: lame server on hostname
DNS error message. Lame delegation is when an NS record in the hosts file of a parent domain server identifies another server as authoritative for a subdomain zone, but that server is not authoritative for that zone. The NS records in the parent's hosts file must be a superset that includes all the authoritative servers in all delegated sub zones.
insufficient permission to update credentials.
This message is generated by the nisaddcred command when you have insufficient permission to execute an operation. This could be insufficient permission at the table, column, or entry level. Use niscat -o cred.org_dir to determine what permissions you have for that cred table. If you need additional permission, you or the system administrator can change the permission requirements of the object as described in Chapter 10, Administering NIS+ Access Rights, or add you to a group that does have the required permissions as described in Chapter 12, Administering NIS+ Groups.
See "NIS+ Ownership and Permission Problems" for additional information about permission problems.
insufficient resources
FNS error message. The name service used by FNS does not have sufficient resources to complete the request. Check memory and disk availability on the name servers involved.
invalid attribute identifier
FNS error message. The attribute identifier is in a format not acceptable to the naming system, or its contents are not valid for the format specified for the identifier.
invalid attribute value
FNS error message. The value supplied is not in the correct form for the given attribute.
invalid enumeration handle
FNS error message. The enumeration handle supplied is invalid. The handle could have been from another enumeration, an update operation might have occurred during the enumeration, or there might have been some other reason.
Invalid Object for operation
Name context. The name passed to the function is not a legal NIS+ name.
Table context. The object pointed to is not a valid NIS+ entry object for the given table. This could occur if it had a mismatched number of columns, or a different data type (for example, binary or text) than the associated column in the table.
This message is generated by the NIS+ error code constant: NIS_INVALIDOBJ. See the nis_tables and nis_names man pages for additional information.
invalid syntax attributes
FNS error message. The syntax attributes supplied are invalid or insufficient to fully specify the syntax.
invalid usecs Routine_name: invalid usecs
This message is generated when the value in the tv_usecs field of a variable of type struct time stamp is larger than the number of microseconds in a second. This is usually due to some type of software error.
tablename is not a table
The object with the name tablename is not a table object. For example, the nisgrep and nismatch commands will return this error if the object you specify on the command line is not a table.
link error
FNS error message. An error occurred while resolving an XFN link with the supplied name.
link loop limit reached
FNS error message. A nonterminating loop was detected due to XFN links encountered during composite name resolution, or the implementation-defined limit was exceeded on the number of XFN links allowed for a single operation.
Link Points to illegal name
The passed name resolved to a LINK type object and the contents of the object pointed to an invalid name.
You cannot link table entries. A link at the entry level can produce this error message.
This message is generated by the NIS+ error code constant: NIS_LINKNAMEERROR. See the nis_tables and nis_names man pages for additional information.
Load limit of number reached!
An attempt has been made to create a child process when the maximum number of child processes have already been created on this server. This message is seen on the server's system log, but only if the threshold for logging messages has been set to include LOG_WARNING level messages.
login and keylogin passwords differ.
This message is displayed when you are changing your password with nispasswd and the system has changed your password, but has been unable to update your credential entry in the cred table with the new password and also unable to restore your original password in the passwd table. This message is followed by the instructions:
These instructions are then followed by a status message explaining why it was not possible to revert back to the old password. If you see these messages, be sure to follow the instructions as given.
Login incorrect
The most common cause of a "login incorrect" message is mistyping the password. Try it again. Make sure you know the correct password. Remember that passwords are case-sensitive (uppercase letters are considered different than lowercase letters) and that the letter "o" is not interchangeable with the numeral "0," nor is the letter "l" the same as the numeral "1".
For other possible causes of this message, see "Login Incorrect Message".
malformed link
FNS error message. A malformed link reference was found during a fn_ctx_lookup_link() operation. The name supplied resolved to a reference that was not a link.
Malformed Name or illegal name
The name passed to the function is not a legal or valid NIS+ name.
One possible cause for this message is that someone changed an existing domain name. Existing domain names should not be changed. See "Changed Domain Name".
This message is generated by the NIS+ error code constant: NIS_BADNAME. See the nis_tables man page for additional information.
_map_addr: RPC timed out.
A process or application could not contact NIS+ within its default time limit to get necessary data or resolve host names from NIS+. In most cases, this problem will solve itself after a short wait. See "NIS+ Performance and System Hang Problems" for additional information about slow performance problems.
Master server busy full dump rescheduled
This message indicates that a replica server has been unable to update itself with a full dump from the master server because the master is busy. See "Replica Update Failure" for additional information.
String Missing or malformed attribute
The name of an attribute did not match with a named column in the table, or the attribute did not have an associated value.
This could indicate an error in the syntax of a command. The string should give an indication of what is wrong. Common causes are spelling errors, failure to correctly place the equals sign (=), an incorrect column or table name, and so forth.
This message is generated by the NIS+ error code constant: NIS_BADATTRIBUTE. See the nis_tables man page for additional information.
Modification failed
Returned by the nisgrpadm command when someone else modified the group during the execution of your command. Check to see who else is working with this group. Reissue the command.
This message is generated by the NIS+ error code constant: NIS_IBMODERROR.
Modify operation failed
The attempted modification failed for some reason.
This message is generated by the NIS+ error code constant: NIS_MODFAIL. See the nis_tables and nis_names man pages for additional information.
servername named [nnnn]: directory directoryname: No such file or directory.
DNS error message. This usually indicates a syntax or spelling error in a DNS boot or data file.
servername named [nnnn]: /etc/named.boot: line n unknown field `name'
DNS error message. This often indicates a spelling error in the DNS named.boot file. For example, "primary" or "secondary" might be misspelled.
servername named [nnnn]: servername has CNAME and other data (illegal)
DNS error message. This often indicates a syntax error in, or misuse of, a CNAME record for machine servername.
servername named [nnnn]: domainname Line n: Database format error (n.n.n.n.n)
DNS error message. The resource record for the machine in domain name1, whose IP address is n.n.n.n might be missing the type (usually IN) or have some other syntax error.
servername named [nnnn]: Line n Unknown type: n.n.n.n.
DNS error message. The DNS hosts file resource record for the machine whose IP address is n.n.n.n does not include the type (usually IN).
servername named [nnnn]: secondary zone zonename expired.
DNS error message. See "Server Failed and Zone Expired Problems".
servername named [nnnn]: zoneref: Masters for secondary zone zonename unreachable
DNS error message. See "Server Failed and Zone Expired Problems".
name in use
FNS error message. The name supplied is already bound in the context.
name not found
FNS error message. The name supplied was not found.
Name not served by this server
A request was made to a server that does not serve the specified name. Normally this will not occur; however, if you are not using the built-in location mechanism for servers, you might see this if your mechanism is broken.
Other possible causes are:
Cold-start file corruption. Delete the /var/nis/NIS_COLD_START file and then reboot.
Cache problem such as the local cache being out of date. Kill the nis_cachemgr process, remove /var/nis/NIS_SHARED_DIRCACHE, and then reboot. (If the problem is not in the root directory, you might be able to kill the domain cache manager and try the command again.)
Someone removed the directory from a replica.
This message is generated by the NIS+ error code constant: NIS_NOT_ME. See the nis_tables and nis_names man pages for additional information.
Named object is not searchable
The table name resolved to an NIS+ object that was not searchable.
This message is generated by the NIS+ error code constant: NIS_NOTSEARCHABLE. See the nis_tables man page for additional information.
Name/entry isn't unique
An operation has been requested based on a specific search criterion that returns more than one entry. For example, you use nistbladm -r to delete a user from the passwd table, and there are two entries in that table for that user name.
You can apply your command to multiple entries by using the -R option rather than -r, as in the sketch below, which removes all entries for arnold.
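A minimal sketch (assuming the duplicate entries are in the passwd table of the current domain):
    nistbladm -R '[name=arnold],passwd.org_dir'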
NIS make terminated
A problem caused your NIS make operation to terminate before successful conclusion. Check your NIS make file for problems and syntax errors.
NIS: server not responding for domain domainname. Still trying
See "NIS Problems and Solutions".
NIS+ error
The NIS+ server has returned an error, but the passwd command determines exactly what the error is.
NisDirCacheEntry:write: xdr_directory_obj failed
The most likely cause for this message is that an attempt to allocate system memory failed. See "Insufficient Memory" for a discussion of memory problems. If your system does not seem to be short of memory, contact the Sun Solutions Center.
NIS+ operation failed
This generic error message should rarely be seen. Usually it indicates a minor software problem that the system can correct on its own. If it appears frequently, or appears to be indicating a problem that the system is not successfully dealing with, contact the Sun Solutions Center.
This message is generated by the NIS+ error code constant: NIS_FAIL.
string: NIS+ server busy try again later.
See "NIS+ Performance and System Hang Problems" for possible causes.
NIS+ server busy try again later.
Self explanatory. Try the command later.
See also "NIS+ Performance and System Hang Problems" for possible causes.
NIS+ server for string not responding still trying
See "NIS+ Performance and System Hang Problems" for possible causes.
NIS+ server not responding
See "NIS+ Performance and System Hang Problems" for possible causes.
NIS+ server needs to be checkpointed. Use nisping -C domainname
Checkpoint immediately! Do not wait!
This message is generated at the LOG_CRIT level on the server's system log. It indicates that the log is becoming too large. Use nisping -C domainname to truncate the log by checkpointing.
See also " Logs Grow too Large" for additional information on log size.
NIS+ servers unreachable
This soft error indicates that a server for the desired directory of the named table object could not be reached. This can occur when there is a network failure or the server has crashed. A new attempt might succeed. See the description of the -HARD_LOOKUP flag in the nis_tables and nis_names man pages.
This message is generated by the NIS+ error code constant: NIS_NAMEUNREACHABLE.
NIS+ service is unavailable or not installed
Self-explanatory. This message is generated by the NIS+ error code constant: NIS_UNAVAIL.
NIS+: write ColdStart File: xdr_directory_obj failed
The most likely causes for this message are:
Bad or incorrect parameters.
nis_checkpoint_svc: readonly child instructed to checkpoint ignored.
This is simply a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
nis_dumplog_svc: readonly child called to dump log, ignore
This is simply a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
nis_dump_svc: load limit reached.
The maximum number of child processes permitted on your system has been reached.
nis_dump_svc: one replica is already resyncing.
Only one replica can resync from a master at a time. Try the command later.
See "Replica Update Failure" for information on these three error messages.
nis_dump_svc: Unable to fork a process.
The fork system call has failed. See the fork man page for possible causes.
nis_mkdir_svc: readonly child called to mkdir, ignored
This is simply a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
nis_ping_svc: readonly child was pung ignored.
This is simply a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
nis_rmdir_svc: readonly child called to rmdir, ignored
This is simply a status message indicating that a read-only process attempted to perform an operation restricted to the parent process, and the attempt was aborted. No action need be taken.
nisaddcred: no password entry for uid userid nisaddcred: unable to create credential.
These two messages are generated during execution of the nispopulate script. The NIS+ command nisaddcred failed to add a LOCAL credential for the user ID userid on a remote domain. (This only happens when you are trying to populate the passwd table in a remote domain.)
To correct the problem, add a table path in the local passwd table:
The remote-domain must be the same domain that you specified with the -d option when you ran nispopulate. Rerun the script to populate the passwd table.
No file space on server
Self-explanatory.
This message is generated by the NIS+ error code constant: NIS_NOFILESPACE.
No match
This is most likely an error message from the shell, caused by failure to escape the brackets when specifying an indexed name. For example, failing to set off a bracketed indexed name with quote marks would generate this message because the shell would fail to interpret the brackets as shown as follows:
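(The command and table name below are illustrative only; they do not come from the original text.)
nistbladm -r [name=host1],hosts.org_dir
Here the C shell tries to expand the square brackets as a filename pattern and, finding no matching file, reports No match before the command ever runs.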
The correct syntax is:
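(Using the same illustrative names as above:)
nistbladm -r '[name=host1],hosts.org_dir'
The quotes keep the shell from interpreting the brackets, so the indexed name reaches the command unchanged.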
No memory
Your system does not have enough memory to perform the specified operation. See "NIS+ System Resource Problems" for additional information on memory problems.
Non NIS+ namespace encountered
The name could not be completely resolved. This usually indicates that the name passed to the function resolves to a namespace that is outside the NIS+ name tree. In other words, the name is contained in an unknown directory. When this occurs, this error is returned with an NIS+ object of type DIRECTORY.
This message is generated by the NIS+ error code constant: NIS_FOREIGNNS. See the nis_tables or nis_names man pages for additional information.
No password entry for uid userid No password entry found for uid userid
Both of these two messages indicate that no entry for this user was found in the passwd table when trying to create or add a credential for that user. (Before you can create or add a credential, the user must be listed in the passwd table.)
The most likely cause is misspelling the user's userid on the command line. Check your command line for correct syntax and spelling.
Check that you are either in the correct domain, or specifying the correct domain on the command line.
If the command line is correct, check the passwd table to make sure the user is listed under the userid you are entering. This can be done with nismatch:
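For example, for a user whose login ID is juser (an illustrative name, not from the original text):
nismatch name=juser passwd.org_dir
If an entry is printed, the user exists in the table; if nismatch reports no match, the user is missing.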
If the user is not listed in the passwd table, use nistbladm or nisaddent to add the user to the passwd table before creating the credential.
no permission
FNS error message. The operation failed because of access control problems. See ""No Permission" Messages (FNS)". See also "No Permission".
No shadow password information
This means that password aging cannot be enforced because the information used to control aging is missing.
no such attribute
FNS error message. The object did not have an attribute with the given identifier.
no supported address
FNS error message. No shared library could be found under the /usr/lib/fn directory for any of the address types found in the reference bound to the FNS name. Shared libraries for an address type are named according to this convention: fn_ctx_address_type.so. Typically there is a link from fn_ctx_address_type.so to fn_ctx_address_type.so.1.
For example, a reference with address type onc_fn_nisplus would have a shared library in the path name: /usr/lib/fn/fn_ctx_onc_fn_nisplus.so.
not a context
FNS error message. The reference does not correspond to a valid context.
Not found String Not found
Names context. The named object does not exist in the namespace.
Table context. No entries in the table matched the search criteria. If the search criteria was null (return all entries), then this result means that the table is empty and can safely be removed.
If the -FOLLOW_PATH flag was set, this error indicates that none of the tables in the path contain entries that match the search criteria.
This message is generated by the NIS+ error code constant: NIS_NOTFOUND. See the nis_tables and nis_names man pages for additional information.
See also "NIS+ Object Not Found Problems" for general information on this type of problem.
Not Found no such name
This hard error indicates that the named directory of the table object does not exist. This could occur when the server that should be the parent of the server that serves the table, does not know about the directory in which the table resides.
This message is generated by the NIS+ error code constant: NIS_NOSUCHNAME. See the nis_tables and nis_names man pages for additional information.
See also "NIS+ Object Not Found Problems" for general information on this type of problem.
Not master server for this domain
This message might mean that an attempt was made to directly update the database on a replica server.
This message might also mean that a change request was made to a server that serves the name, but it is not the master server. This can occur when a directory object changes and it specifies a new master server. Clients that have cached copies of that directory object in their /var/nis/NIS_SHARED_DIRCACHE file should run ps to obtain the process ID of the nis_cachemgr, kill the nis_cachemgr process, remove the /var/nis/NIS_SHARED_DIRCACHE file, and then restart nis_cachemgr.
This message is generated by the NIS+ error code constant: NIS_NOTMASTER. See the nis_tables and nis_names man pages for additional information.
Not owner
The operation you attempted can only be performed by the object's owner, and you are not the owner.
This message is generated by the NIS+ error code constant: NIS_NOTOWNER.
operation not supported
FNS error message. The operation is not supported by the context. For example, trying to destroy an organization is not supported.
Object with same name exists
An attempt was made to add a name that already exists. To add the name, first remove the existing name and then add the new name or modify the existing named object.
This message is generated by the NIS+ error code constant: NIS_NAMEEXISTS. See the nis_tables and nis_names man pages for additional information.
parse error: string (key variable)
This message is displayed by the nisaddent command when it attempts to use database files from a /etc directory and there is an error in one of the file's entries. The first variable should describe the problem, and the variable after key should identify the particular entry at fault. If the problem is with the /etc/passwd file, you can use /usr/sbin/pwck to check it.
partial result returned
FNS error message. The operation returned a partial result.
Partial Success
This result is similar to NIS_NOTFOUND, except that it means the request succeeded but resolved to zero entries.
When this occurs, the server returns a copy of the table object instead of an entry so that the client can then process the path or implement some other local policy.
This message is generated by the NIS+ error code constant: NIS_PARTIAL. See the nis_tables man page for additional information.
Passed object is not the same object on server
An attempt to remove an object from the namespace was aborted because the object that would have been removed was not the same object that was passed in the request.
This message is generated by the NIS+ error code constant: NIS_NOTSAMEOBJ. See the nis_tables and nis_names man pages for additional information.
Password does not decrypt secret key for name
Possible causes:
You might have incorrectly typed the password.
There might not be an entry for name in the cred table.
NIS+ could not decrypt the key (possibly because the entry might be corrupt).
The Secure RPC password does not match the login password.
The nsswitch.conf file might be directing the query to a local password in an /etc/passwd file that is different from the NIS+ password recorded in the cred table. (Note that the actual encrypted passwords are stored locally in the /etc/shadow file.)
See "NIS+ Security Problems" for information on diagnosing and solving these types of problems.
Password has not aged enough
This message indicates that your password has not been in use long enough and that you cannot change it until it has been in use for N (a number of) days. See "Changing Your Password" for further information.
Permission denied
Returned when you do not have the permissions required to perform the operation you attempted. See "NIS+ Ownership and Permission Problems" for additional information.
This message might be related to a login or password matter, or an NIS+ security problem. The most common cause of a Permission denied message is that the password of the user receiving it has been locked by an administrator or the user's account has been terminated. See Chapter 11, Administering Passwords, and the "NIS+ Security Problems" section of Appendix A, Problems and Solutions.
Permissions on the password database may be too restrictive
You do not have authorization to read (or otherwise use) the contents of the passwd field in an NIS+ table. See Chapter 10, Administering NIS+ Access Rights, for information on NIS+ access rights.
Please notify your System Administrator
When displayed as a result of an attempt to update password information with the passwd command, this message indicates that the attempt failed for one of many reasons. For example, the service might not be available, a necessary server is down, there is a "permission denied" type problem, and so forth. See "NIS+ Security Problems" for a discussion of various types of security problems.
Please check your /etc/nsswitch.conf file
The nsswitch.conf file specifies a configuration that is not supported for passwd update. See "nsswitch.conf File Requirements" for supported configurations.
Probable success
Name context. The request was successful; however, the object returned came from an object cache and not directly from the server. (If you do not want to see objects from object caches, you must specify the flag -NO_CACHE when you call the lookup function.)
Table context. Even though the request was successful, a table in the search path was not able to be searched, so the result might not be the same as the one you would have received if that table had been accessible.
This message is generated by the NIS+ error code constant: NIS_S_SUCCESS. See the nis_tables and nis_names man pages for additional information.
Probably not found
The named entry does not exist in the table; however, not all tables in the path could be searched, so the entry might exist in one of those tables.
This message is generated by the NIS+ error code constant: NIS_S_NOTFOUND. See the nis_tables man page for additional information.
Query illegal for named table
A problem was detected in the request structure passed to the client library.
This message is generated by the NIS+ error code constant: NIS_BADREQUEST. See the nis_tables man page for additional information.
Reason: can't communicate with ypbind.
See "NIS Problems and Solutions"
replica_update: Child process attempting update, aborted
This is simply a status message indicating that a read-only process attempted an update and the attempt was aborted.
replica_update: error result was string
This message indicates a problem (identified by string) in carrying out a dump to a replica. See "Replica Update Failure" for further information.
replica_update: error result was Master server busy, full dump rescheduled replica_update: master server busy rescheduling the resync. replica_update: master server is busy will try later. replica_update: nis dump result Master server busy, full dump rescheduled
These messages all indicate that the server is busy and the dump will be done later.
replica_update: nis dump result nis_perror errorstring
This message indicates a problem (identified by the error string) in carrying out a dump to a replica. See "Replica Update Failure" for further information.
replica_update: nnnn updates nnnn errors
A status message indicating a successful update.
replica_update: WARNING: last_update (directoryname) returned 0!
A NIS+ process could not find the last update time stamp in the transaction log for that directory. This will cause the system to perform a full resync of the problem directory.
Results Sent to callback proc
This is simply a status message. No action need be taken.
This message is generated by the NIS+ error code constant: NIS_CBRESULTS. See the nis_tables man page for additional information.
root_replica_update: update failed string: could not fetch object from master.
This message indicates a problem in carrying out a dump to a replica. See "Replica Update Failure" for further information.
RPC failure: "RPC failure on yp operation.
This message is returned by ypcat when a NIS client's nsswitch.conf file is set to files rather than nis, and the server is not included in the /etc/hosts or /etc/inet/ipnodes file
Security exception on local system. UNABLE TO MAKE REQUEST.
This message might be displayed if a user has the same login ID as a machine name. See "User Login Same as Machine Name" for additional information.
date: hostname: sendmail (nnnn) : gethostbyaddr failed
One common cause of this problem is entering IP addresses in NIS+, NIS, files, or DNS data sets with leading zeros. For example, you should never enter an IP address as 151.029.066.001. The correct way to enter that address is: 151.29.66.1.
Server busy, try again
The server was too busy to handle your request.
For the add, remove, and modify operations, this message is returned when either the master server for a directory is unavailable or it is in the process of checkpointing its database.
This message can also be returned when the server is updating its internal state.
In the case of nis_list, this message is also returned if the client specifies a callback and the server does not have enough resources to handle the callback.
Retry the command at a later time when the server is available.
This message is generated by the NIS+ error code constant: NIS_TRYAGAIN. See the nis_tables and nis_names man pages for additional information.
Server out of memory
In most cases this message indicates a fatal result. It means that the server ran out of heap space.
This message is generated by the NIS+ error code constant: NIS_NOMEMORY. See the nis_tables and nis_names man pages for additional information.
Sorry
This message is displayed when a user is denied permission to login or change a password, and for security reasons the system does not display the reason for that denial because such information could be used by an unauthorized person to gain illegitimate access to the system.
Sorry: less than nn days since the last change
This message indicates that your password has not been in use long enough and that you cannot change it until it has been in use for N days. See "Changing Your Password" for further information.
Success
(1) The request was successful. This message is generated by the NIS+ error code constant: NIS_SUCCESS. See the nis_tables man page for additional information.
(2) FNS error message. Operation succeeded.
_svcauth_des: bad nickname
The nickname received from the client is invalid or corrupted, possibly due to network congestion. The severity of this message depends on what level of security you are running. At a low security level, this message is informational only; at a higher level, you might have to try the command again later.
_svcauth_des: invalid timestamp received from principalname
The time stamp received from the client is corrupted, or the server is trying to decrypt it using the wrong key. Possible causes:
Congested network. Retry the command.
Server cached out the entry for this client. Check the network load.
_svcauth_des: key_decryptsessionkey failed for principalname
The keyserv process failed to decrypt the session key with the given public key. Possible causes are:
The keyserv process is dead or not responding. Use ps -ef to check if the keyserv process is running on the keyserv host. If it is not, then restart it and run keylogin.
The server principal has not keylogged in. Run keylogin for the server principal.
The server principal (host) does not have credentials. Run nismatch hostname.domainname. cred.org_dir on the client's home domain cred table. Create new credentials if necessary.
keyserv might have been restarted, in which case certain long-running applications, such as rpc.nisd, sendmail, and automountd, also need to be restarted.
DES encryption failure. Call the Sun Solutions Center.
_svcauth_des: no public key for principalname
The server cannot get the client's public key. Possible causes are:
The principal has no public key. Run nismatch on the cred table of the principal's home domain. If there is no DES credential in that table for the principal, use nisaddcred to create one, and then run keylogin for that principal.
The name service specified by a nsswitch.conf file is not responding.
_svcauth_des: replayed credential from principalname
The server has received a request and finds an entry in its cache for the same client name and conversation key with the time stamp of the incoming request before that of the one currently stored in the cache.
The severity of this message depends on what level of security you are running. At a low security level, this message is primarily for your information. At a higher level, you might have to take corrective action as described below.
Possible causes are:
The client and server clocks are out of sync. Use rdate to resync the client clock to the server clock.
The server is receiving requests in random order. This could occur if you are using multithreading applications. If your applications support TCP, then set /etc/netconfig (or your NETPATH environment variable) to tcp.
_svcauth_des: timestamp is earlier than the one previously seen from principalname
The time stamp received from the client on a subsequent call is earlier than one seen previously from that client. The severity of this message depends on what level of security you are running. At a low security level, this message is primarily for your information; at a higher level, you might have some corrective action as described below.
Possible causes are:
The client and server clocks are out of sync. Use rdate to resynch the client clock to the server clock.
The server cached out the entry for this client. The server maintains a cache of information regarding the current clients. This cache size equals 64 client handles.
_svcauth_des: timestamp expired for principalname
The time stamp received from the client is not within the default 35-second window in which it must be received. The severity of this message depends on what level of security you are running. At a low security level, this message is primarily for your information; at a higher level, you might have to take corrective action as described below.
Possible causes are:
The 35-second window is too small to account for slow servers or a slow network.
The client and server clocks are so far out of sync that the window cannot allow for the difference. Use rdate to resynchronize the client clock to the server clock.
The server has cached out the client entry. Retry the operation.
syntax not supported
FNS error message. The syntax type is not supported.
Too Many Attributes
The search criteria passed to the server had more attributes than the table had searchable columns.
This message is generated by the NIS+ error code constant: NIS_TOOMANYATTRS. See the nis_tables man page for additional information.
too many attribute values
FNS error message. The operation attempted to associate more values with an attribute than the naming system supports.
Too many failures - try later
Too many tries; try again later
These messages refer to logging in or changing your password. They indicate that you have had too many failed attempts (or taken too long) to either log in or change your password. See "The Login incorrect Message" or "Password Change Failures" for further information.
Unable to authenticate NIS+ client
This message is generated when a server attempts to execute the callback procedure of a client and gets a status of RPC_AUTHERR from the RPC clnt_call(). This is usually caused by out-of-date authentication information. Out-of-date authentication information can occur when the system is using data from a cache that has not been updated, or when there has been a recent change in the authentication information that has not yet been propagated to this server. In most cases, this problem should correct itself in a short period of time.
If this problem does not self-correct, it might indicate one of the following problems:
Corrupted /var/nis/NIS_SHARED_DIRCACHE file. Kill the cache manager, remove this file, and restart the cache manager.
Corrupted /var/nis/NIS_COLD_START file. Remove the file and then run nisinit to recreate it.
Corrupted /etc/.rootkey file. Run keylogin -r.
This message is generated by the NIS+ error code constant: NIS_CLNTAUTH.
Unable to authenticate NIS+ server
In most cases, this is a minor software error from which your system should quickly recover without difficulty. It is generated when the server gets a status of RPC_AUTHERR from the RPC clnt_call.
If this problem does not quickly clear itself, it might indicate a corrupted /var/nis/NIS_COLD_START, /var/nis/NIS_SHARED_DIRCACHE, or /etc/.rootkey file.
This message is generated by the NIS+ error code constant: NIS_SRVAUTH.
Unable to bind to master server for name 'string'
See "NIS+ Object Not Found Problems" for information on this type of problem. This particular message might be caused by adding a trailing dot to the server's domain name in the /etc/defaultdomain file.
Unable to create callback.
The server was unable to contact the callback service on your machine. This results in no data being returned.
See the nis_tables man page for additional information.
Unable to create process on server
This error is generated if the NIS+ service routine receives a request for a procedure number which it does not support.
This message is generated by the NIS+ error code constant: NIS_NOPROC.
string: Unable to decrypt secret key for string.
Possible causes:
You might have incorrectly typed the password.
There might not be an entry for name in the cred table.
NIS+ could not decrypt the key.
unavailable
FNS error message. The name service on which the operation depends is unavailable.
Unknown error
This is displayed when the NIS+ error handling routine receives an error of an unknown type.
Unknown object
The object returned is of an unknown type.
This message is generated by the NIS+ error code constant: NIS_UNKNOWNOBJ. See the nis_names man page for additional information.
update_directory: nnnn objects still running.
This is a status message displayed on the server during the update of a directory during a replica update. You do not need to take any action.
User principalname needs Secure RPC credentials to login but has none.
The user has failed to perform a keylogin. This problem usually arises when the user has different passwords in /etc/shadow and a remote NIS+ passwd table.
Warning: couldn't reencrypt secret key for principalname
The most likely cause of this problem is that your Secure RPC password is different from your login password (or you have one password on file in a local /etc/shadow file and a different one in a remote NIS+ table) and you have not yet done an explicit keylogin. See "NIS+ and Login Passwords in /etc/passwd File" and " Secure RPC Password and Login Passwords Are Different" for more information on these types of problems.
WARNING: db::checkpoint: could not dump database: No such file or directory
This message indicates that the system was unable to open a database file during a checkpoint. Possible causes:
The database file was deleted.
The server is out of file descriptors.
There is a disk problem.
You or the host do not have correct permissions.
WARNING: db_dictionary::add_table: could not initialize database from scheme
The database table could not be initialized. Possible causes:
There was a system resource problem (see "NIS+ System Resource Problems").
You incorrectly specified the new table in the command syntax.
The database is corrupted.
WARNING: db_query::db_query:bad index
In most cases this message indicates incorrect specification of an indexed name. Make sure that the indexed name is found in the specified table. Check the command for spelling and syntax errors.
**WARNING: domain domainname already exists.
This message indicates that the domain you tried to create already exists.
If you are trying to promote a new nonroot master server or are recovering from a previous nisserver problem, continue running the script.
If domainname was spelled incorrectly, rerun the script with the correct domain name.
**WARNING: failed to add new member NIS+_principal into the groupname group. You will need to add this member manually: 1. /usr/sbin/nisgrpadm -a groupname NIS+_principal
The NIS+ command nisgrpadm failed to add a new member into the NIS+ group groupname. Manually add this NIS+ principal by typing:
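For example, using the placeholder names from the message itself:
/usr/sbin/nisgrpadm -a groupname NIS+_principal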
**WARNING: failed to populate tablename table.
The nisaddent command was unable to load the NIS+ tablename table. A more detailed error message usually appears before this warning message.
**WARNING: hostname specified will not be used. It will use the local hostname instead.
This message indicates that you typed a remote host name with the -H option. The nisserver -r script does not configure remote machines as root master servers.
If the local machine is the one that you want to convert to an NIS+ root master server, no other action is needed. The nisserver -r script will ignore the host name you typed.
If you actually want to convert the remote host (instead of the local machine) to an NIS+ root master server, exit the script. Rerun the nisserver -r script on the remote host.
**WARNING: hostname is already a server for this domain. If you choose to continue with the script, it will try to replicate the groups_dir and org_dir directories for this domain.
This is a message warning you that hostname is already a replica server for the domain that you are trying to replicate.
If you are running the script to fix an earlier nisserver problem, continue running the script.
If hostname was mistakenly entered, rerun the script with the correct host name.
**WARNING: alias-hostname is an alias name for host canonical_hostname. You cannot create credential for host alias.
This message indicates that you have typed a host alias in the name list for nisclient -c. The script asks you if you want to create the credential for the canonical host name, since you should not create credentials for host alias names.
**WARNING: file directory-path/tablename does not exist! tablename table will not be loaded.
The script was unable to find the input file for tablename.
If directory-path/tablename is spelled incorrectly, rerun the script with the correct table name.
If the directory-path/tablename file does not exist, create and update this file with the proper data. Then rerun the script to populate this table.
**WARNING: NIS auto.master map conversion failed. auto.master table will not be loaded.
The NIS auto.master map conversion failed while trying to convert all the dots to underscores in the auto_master table. Rerun the script with a different NIS server.
**WARNING: NIS netgroup map conversion failed. netgroup table will not be loaded.
The NIS netgroup map conversion failed while trying to convert the NIS domain name to the NIS+ domain name in the netgroup map. Rerun the script with a different NIS server.
**WARNING: nisupdkeys failed on directory domainname. This script will not be able to continue. Please remove the domainname directory using `nisrmdir'.
The NIS+ command nisupdkeys failed to update the keys in the listed directory object. If rpc.nisd is not running on the new master server that is supposed to serve this new domain, restart rpc.nisd. Then use nisrmdir to remove the domainname directory. Finally, rerun nisserver.
WARNING: nisupdkeys failed on directory directory-name You will need to run nisupdkeys manually: 1. /usr/lib/nis/nisupdkeys directory-name
The NIS+ command nisupdkeys failed to update the keys in the listed directory object. Manually update the keys in the directory object by typing:
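That is, run the command shown in the warning, substituting the actual directory name:
/usr/lib/nis/nisupdkeys directory-name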
**WARNING: once this script is executed, you will not be able to restore the existing NIS+ server environment. However, you can restore your NIS+ client environment using "nisclient -r" with the proper domainname and server information. Use "nisclient -r" to restore your NIS+ client environment.
These messages appear if you have already run the script at least once before to set up an NIS+ server. They indicate that NIS+-related files will be removed and recreated as needed if you decide to continue running this script.
If it is all right for these NIS+ files to be removed, continue running the script.
If you want to save these NIS+ files, exit the script by typing "n" at the Do you want to continue? prompt. Then save the NIS+ files in a different directory and rerun the script.
**WARNING: this script removes directories and files related to NIS+ under /var/nis directory with the exception of the NIS_COLD_START and NIS_SHARED_DIRCACHE files which will be renamed to <file>.no_nisplus. If you want to save these files, you should abort from this script now to save these files first.
See "WARNING: once this script is executed,..." above.
**WARNING: you must specify the NIS domainname.
This message indicates that you did not type the NIS domain name at the prompt. Type the NIS server domain name at the prompt.
**WARNING: you must specify the NIS server hostname. Please try again.
This message indicates that you did not type the NIS server host name at the prompt. Type the NIS server host name at the prompt.
Window verifier mismatch
This is a debugging message generated by the _svcauth_des code. A verifier could be invalid because a key was flushed out of the cache. When this occurs, _svcauth_des returns the AUTH_BADCRED status.
You (string) do not have Secure RPC credentials in NIS+ domain 'string'
This message could be caused by trying to run nispasswd on a server that does not have the credentials required by the command. (Keep in mind that servers running at security level 0 do not create or maintain credentials.)
See "NIS+ Ownership and Permission Problems" for additional information on credential, ownership, and permission problems.
You may not change this password
This message indicates that your administrator has forbidden you to change your password.
You may not use nisplus repository
You used -r nisplus in the command line of your command, but the appropriate entry in the NIS+ passwd table was not found. Check the passwd table in question to make sure it has the entry you want. Try adding nisplus to the nsswitch.conf file.
These messages refer to password aging. They indicate that your password has been in use too long and needs to be changed now. See "The password expired Message" for further information.
These messages refer to password aging. They indicate that your password is about to become invalid and should be changed now. See "The will expire Message" for further information.
Your specified repository is not defined in the nsswitch file!
This warning indicates that you have specified a password information repository with the -r option, but that password repository is not included in the passwd entry of the nsswitch.conf file. The command you have just used will perform its job and make whatever change you intend to the password information repository you specified with the -r flag. However, the change will be made to information that the nsswitch.conf file does not point to, so no one will ever gain the benefit of it until the switch file is altered to point to that repository.
For example, suppose the passwd entry of the switch file reads files nis, and you used a passwd command with -r nisplus to establish a password age limit. That limit would not affect anyone because they are still using a switch file set to files nis.
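As a concrete illustration (the user name and age limit are hypothetical, and the exact options depend on your Solaris release), a command such as:
passwd -r nisplus -x 60 juser
would set a 60-day maximum password age in the NIS+ passwd table, but with the switch file still reading files nis no login would ever consult that limit.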
verify_table_exists: cannot create table for string nis_perror message.
To perform an operation on a table, NIS+ first verifies that the table exists. If the table does not exist, NIS+ attempts to create it. If it cannot create the table, it returns this error message. The string portion of the message identifies the table that could not be located or created; the nis_perror message portion provides information as to the cause of the problem (you can look up that portion of the message as if it were an independent message in this appendix). Possible causes for this type of problem:
The server was just added as a replica of the directory and it might not have the directory object. Run nisping -C to checkpoint.
You are out of disk space. See "Insufficient Disk Space".
Database corruption.
Some other type of software error. Contact the Sun Solutions Center.
ypcat: can't bind to NIS server for domain domainname. Reason: can't communicate with ypbind.
See "NIS Problems and Solutions"
yppoll: can't get any map parameter.
See "NIS Problems and Solutions"
Do not link table entries. Tables can be linked to other tables, but do not link an entry in one table to an entry in another table.
http://docs.oracle.com/cd/E19455-01/806-1387/6jam692fi/index.html
So there I was, wanting to release an application of mine, but I didn't want to release all the assemblies that my application used. Why? Well, that would be exporting some functionality that this particular app didn't use, and even though it was dotfuscated, it just felt wrong.
It turns out that .NET, and even .NET 2.0, despite people asking for it, does not support static linking. For those not sure what that is: static linking is when you don't compile to a DLL, you compile to a .lib, and then the whole thing gets linked together into one .exe file.
There are many arguments as to the pros and cons of static linking, and I am not going to get into that here - suffice to say that I wanted to static link, which is good enough for me.
So, off to Google I go, and I find that Microsoft has a tool called ILMerge, which takes a plethora of parameters. What ILMerge does is de-compile .NET assemblies and then build them all into one primary assembly. Yes, it does .NET 2.0 as well.
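If you just want to try it by hand, a typical command line (the file names here are only an example) looks something like:
ILMerge.exe /out:Merged\MyApp.exe MyApp.exe Helper1.dll Helper2.dll
which produces a single MyApp.exe in the Merged directory containing the code from all three input assemblies.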
You can also use ILMerge within your own applications as it exposes an interface, so you can just reference it (obviously, you can only do this in VS 2005, as prior to that you cannot use .exe files as a reference - you can, however, get around this in 2003 if you so wish - search CodeProject, it has been done here!).
What HTMerge (Hollingside Technologies Merge) does is make it so simple that I just have a post build step as:
htmerge "$(TargetDir)
The quotation marks around $(TargetDir) are to ensure that directory paths with spaces in them are parsed correctly. This does assume that HTMerge is on the path. Given that parameter as a post-build step, all assemblies that are part of your application are merged into one .exe file, which is stored in a subdirectory called Merged beneath the debug/release directory.
As it stands, this code only works properly when building a .exe type project, but the concept stands.
Hope someone finds this useful!
using System;
using System.Text;
using System.Collections;

namespace HTMerge
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                String strDir = "";
                if (args.Length != 1)
                {
                    Console.WriteLine("Usage: HTMerge directoryName");
                    return;
                }
                else
                {
                    strDir = args[0];
                }
                String[] exeFiles = System.IO.Directory.GetFiles(strDir, "*.exe");
                String[] dllFiles = System.IO.Directory.GetFiles(strDir, "*.dll");
                ArrayList ar = new ArrayList();
                Boolean bAdded = false;
                // there might be more than 1 exe file,
                // we go for the first one that isn't the vshost exe
                foreach (String strExe in exeFiles)
                {
                    if (!strExe.Contains("vshost"))
                    {
                        ar.Add(strExe);
                        bAdded = true;
                        break;
                    }
                }
                if (!bAdded)
                {
                    Console.WriteLine("Error: No exe could be found");
                    // I know multiple returns are bad...
                    return;
                }
                bAdded = false;
                foreach (String strDLL in dllFiles)
                {
                    ar.Add(strDLL);
                    bAdded = true;
                }
                // no point merging if nothing to merge with!
                if (!bAdded)
                {
                    Console.WriteLine("Error: No DLLs could be found");
                    // I know multiple returns are bad...
                    return;
                }
                // You will need to add a reference to ILMerge.exe from Microsoft
                ILMerging.ILMerge myMerge = new ILMerging.ILMerge();
                String[] files = (String[])ar.ToArray(typeof(string));
                String strTargetDir = strDir + "\\Merged";
                try
                {
                    System.IO.Directory.CreateDirectory(strTargetDir);
                }
                catch { }
                // Here we get the first file name (which was the .exe file)
                // and use that as the output
                String strOutputFile = System.IO.Path.GetFileName(files[0]);
                myMerge.OutputFile = strTargetDir + "\\" + strOutputFile;
                myMerge.SetInputAssemblies(files);
                myMerge.Merge();
            }
            catch (Exception ex)
            {
                Console.WriteLine(String.Format("Error :{0}", ex.Message));
            }
        }
    }
}
http://www.codeproject.com/KB/cs/htmerge.aspx
TORONTO (ICIS)--TransCanada has decided to proceed with its 1.1m bbl/day pipeline project to ship oil from western Canada to eastern Canada.
TransCanada said that a successful open season confirmed strong market support from oil producers and refiners for the project, with about 900,000 bbl/day of “firm, long-term contracts to transport crude oil from western
The pipeline would help lessen western
TransCanada CEO Russ Girling said that the project – known as Energy East Pipeline – would give refiners in eastern
However, Keystone XL and other pipeline projects are still needed, Girling added.
"Energy East is one solution for transporting crude oil, but the industry also requires additional pipelines such as Keystone XL to transport growing supplies of Canadian and
Energy East Pipeline will have a capacity of about 1.1m bbl/day. It is expected to be in service by late 2017 for deliveries in
The Canadian dollar (C$) 12bn ($11.7bn, €8.8m) project involves converting existing 3,000km (1,865 miles) of natural gas pipeline capacity to crude oil service, and constructing about 1,400km of new pipeline.
The pipeline will terminate at
Shipping crude oil via pipeline, rather than rail, gained in public support in
($1 = €0.75, $1 = C$1.03)
TransCanada to proceed with 1.1m bbl/day Canada west-to-east pipeline
https://www.icis.com/resources/news/2013/08/01/9693337/transcanada-to-proceed-with-1-1m-bbl-day-canada-west-to-east-pipeline/
07 July 2009 16:32 [Source: ICIS news]
LONDON (ICIS news)--The Commodity Futures Trading Commission (CFTC) is considering the introduction of limits on the holdings of energy futures traders to stem the market effects of price speculation, the US regulator announced on Tuesday.
The CFTC will hold a series of hearings throughout July and August to determine the need for government-imposed restrictions in oil, gas and other energy markets.
“My firm belief is that we must aggressively use all existing authorities to ensure market integrity,” said CFTC chairman Gary Gensler.
“The Commodity Exchange Act states that the CFTC shall impose limits on trading and positions as necessary to eliminate the undue burdens on interstate commerce that may result from excessive speculation.
“Our first hearing will focus on whether federal speculative limits should be set by the CFTC to all commodities of finite supply, in particular energy commodities,” he added.
The tougher stance follows an announcement from the
In 2008, many market sources attributed the massive spike in crude oil prices that led to the record $147/bbl in July to the involvement of speculators.
Recently, futures prices hit this year’s peak of $73/bbl. This particular spike, on 30 June, was reportedly caused by a rogue trader with international brokerage firm PVM Oil Futures.
Crude oil prices had been steadily increasing from $44/bbl at the start of 2009 despite the findings of numerous market reports pointing to falling global demand for the product.
August Brent futures were trading at $63.13/bbl on Tuesday.
http://www.icis.com/Articles/2009/07/07/9230903/cftc-mulls-limits-on-holdings-of-energy-market-speculators.html
David Harvey: would like to experiment further with speeding up object construction
gmpy lessons???
Do "sage -i gmpy-1.0.1". Then you can see the following (*ignore* the wall time):
import gmpy
a = gmpy.mpz(9393r); b = gmpy.mpz(1293r)
%time for i in range(10^6): c = a * b /// CPU time: 0.38 s, Wall time: 0.53 s
a = 9393; b = 1293
%time for i in range(10^6): c = a * b /// CPU time: 0.78 s, Wall time: 1.16 s
a = 9393r; b = 1293r
%time for i in range(10^6): c = a * b /// CPU time: 0.37 s, Wall time: 0.66 s
====================================
import gmpy
a = ZZ.random_element(2^256); b = ZZ.random_element(2^256)
a = gmpy.mpz(int(a)); b = gmpy.mpz(int(b))
%time for i in range(10^6): c = a * b /// CPU time: 0.59 s, Wall time: 0.86 s
a = ZZ(long(a)); b = ZZ(long(b))
%time for i in range(10^6): c = a * b /// CPU time: 1.12 s, Wall time: 1.41 s
a = long(a); b = long(b)
%time for i in range(10^6): c = a * b /// CPU time: 1.62 s, Wall time: 2.15 s
hi people,
Just want to float an idea for discussion and possibly a coding sprint at SD3.
Some background: Today I did some work on speeding up getting coefficients out of NTL objects, specifically polynomials in Z[x]. It's much improved now; when you request a coefficient of an NTL ZZX object, it copies the bytes directly into a new Integer object, instead of what it used to do (which went via a C string in decimal, and a python string, and a python long, etc etc).
But still what is taking a lot of time is constructing the Integer object. In fact, it's quite embarrassing: it takes us about half as long to construct 100,000 Integer objects as NTL takes to *multiply* two polynomials with 100,000 small coefficients:
sage: time for i in range(100000): pass
CPU times: user 0.05 s, sys: 0.00 s, total: 0.05 s
sage: time for i in range(100000): x = None
CPU times: user 0.09 s, sys: 0.00 s, total: 0.09 s
sage: time for i in range(100000): x = int()
CPU times: user 0.16 s, sys: 0.00 s, total: 0.17 s
sage: time for i in range(100000): x = Integer()
CPU times: user 0.36 s, sys: 0.00 s, total: 0.36 s
sage: f = PolynomialRing(ZZ, "x")([ZZ.random_element() for _ in range(100000)])
sage: time g = f*f
CPU times: user 0.76 s, sys: 0.02 s, total: 0.79 s
This is despite all the work we put into this at SD2.
It would be good to be able to optimise object construction in general. Unfortunately I think the general case is a very difficult problem. Anyone who worked on this at SD2 will agree, I'm sure
On the other hand, I would wager that construction of Integer objects is by far the most important. So perhaps we should give up some beauty and unity of code to just get Integers working damn fast. So here's what I propose: at SD3, let's try writing an experimental pure C function for constructing Integers that gets inserted into whatever tp_xyz slot is appropriate. I don't care if it has to deal directly with mangled pyrex names or whatever. From memory, all it needs to do is: (1) reference counting on the integer ring (ha ha we could even skip this if we could guarantee that no-one else ever resets the parent, and that there is always at least one reference to the integer ring lying around somewhere) (2) malloc some space for the actual python object (3) fill in some fields, like the TypeObject* (4) mpz_init.
David
> (1) reference counting on the integer ring (ha ha we could even skip
> this if we could guarantee that no-one else ever resets the parent,
> and that there is always at least one reference to the integer ring
> lying around somewhere)

There is only one integer ring -- it should be immutable, created at module load time, and there should only ever be exactly one copy of it. I think we should definitely be allowed to forget about reference counting for it.

> (2) malloc some space for the actual python object
> (3) fill in some fields, like the TypeObject*
> (4) mpz_init

You should put (0) or (5) object pool as an important step -- this "object pool" idea is one of the tricks that Python uses for its ints. For example, in your benchmark:

time for i in range(10^5): x = int()

Python is looking up and returning exactly the same int (the 0) every time:

sage: int() is int()
True
sage: a = 999038r; b=999038r
sage: a is b
True

In contrast, when you do Integer(), Sage is creating a new integer object every time, and probably (?) also freeing one:

sage: Integer() is Integer()
False
sage: a = 999038; b=999038
sage: a is b
False

Or return objects to the pool -- this can also speed up destruction, since you just don't do it. This project is very very well worth pursuing.

-- William
Some Ideas from Robert Bradshaw
Some thoughts: I think a pool is a very important idea to consider. I can think of two instances where thousands of integer objects would be created: first, in some large object such as a matrix or polynomial (in which case there should be a specialized type) and second, in some huge loop (in which case a pool would help immensely). Also, it'd be interesting to look at the distribution, but I wouldn't be surprised if the majority of integers (ephemerally) created were relatively small--say < 100s. Zero and one especially are used all over. Similar to the pool idea, it might be worth allocating the first 100 integers, and whenever you want to create a "small" integer it would simply return one of these. (I think small one-limb mpz_t's can be detected very easily with mpz_size and a bit mask.) Of course, using python ints might be in order for many of these cases too.

A related idea came up in the discussion we had here on linear algebra. Right now if one wants to optimize linear algebra over a new ring one must re-implement matrix multiplication, addition, etc. The generic algorithms request entries (as Python objects), perform the arithmetic, then store the resulting Python object. This can be hugely inefficient. Rather, what if the matrix had void* methods _get_unsafe_raw(i,j) and _set_unsafe_raw(i,j), and the corresponding ring had _add_raw(), _mul_raw(), etc.? Also, the ring could have _get_raw() and _create_from_raw(). For the integer ring, these would return mpz_t* and, for instance, _mul_raw() could even be a macro to mpz_mul. The generic base case would just pass around python objects. The "reference counting" for these raw results would have to be done manually; I would suggest giving them the same semantics as gmp.

This way one could implement generic polynomial/matrix/etc. algorithms that would be able to operate efficiently on any ring with the above methods. For some cases (such as the integers) one would want actual specialized matrices, etc., but it would greatly reduce the work needed to get significant speedup for objects containing many elements of a given generic ring. Also, it would make implementations for specific ring elements easier to swap in and out (without having to change all the types that access the element internals).
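To make the small-integer cache idea concrete, here is a rough Python-level sketch (purely illustrative -- make_integer is a hypothetical front end, not real Sage API, and the real version would live in Pyrex/C and also recycle freed objects into a pool):

SMALL_LIMIT = 100
_small_cache = [Integer(i) for i in range(SMALL_LIMIT)]   # built once at module load time

def make_integer(n):
    # Integers are immutable, so handing out the same shared object is safe
    if 0 <= n < SMALL_LIMIT:
        return _small_cache[n]
    # otherwise pay the full construction cost
    return Integer(n)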
http://www.sagemath.org:9001/days3/sprints/objconst
WriteableBitmap gives you a bitmap object that you can modify dynamically - but exactly how to do this isn't always obvious.
Bitmaps are generally implemented as immutable objects in WPF. What this means is that once you create a bitmap you can't make any changes to it. You can manipulate bitmaps by creating new versions, which then immediately become immutable and sometimes this is a good way to work. For more information on using immutable bitmaps see: BitmapSource: WPF Bitmaps
Immutable bitmaps can be very efficient unless you want to indulge in a lot of dynamic changes in which case the overhead of creating and destroying them rapidly becomes too expensive. In this situation you need something a little more flexible - the WriteableBitmap.
The WriteableBitmap, as its name suggests, isn't immutable and you can get at its individual pixels and manipulate them as much as you want. This is the ideal way to work when you need dynamic bitmaps. So let’s take a look at WriteableBitmap, how it works and how to use it to do dynamic things.
Notice that the WriteableBitmap class in Silverlight is very different to the same class in WPF.
To use WriteableBitmap we need to add:
using System.Windows.Media.Imaging;
which contains all of the additional bitmap facilities we need.
You can create a WriteableBitmap in two ways. The most commonly used is to simply specify the size and format of the bitmap:
WriteableBitmap wbmap = new WriteableBitmap(100, 100, 300, 300, PixelFormats.Bgra32, null);
This specifies a WriteableBitmap 100 by 100 pixels with a resolution of 300dpi by 300 dpi using a Bgra32 pixel format. Each pixel, a 32-bit int, uses a four-byte BGRA – that is the pixel is made up of a byte giving the Blue, Green, Red and Alpha values. The final parameter is used to specify a palette if the format needs one.
You can specify a wide range of formats for the bitmap you create and in each case each pixel takes a given number of bits to represent and you need to find out how the bits allocated to each pixel determine its colour.
The second method of creating a WriteableBitmap is to base it on an existing BitmapSource or derived class. For example, you should be able to create a WriteableBitmap from a standard bitmap using the constructor:
WriteableBitmap(bitmapsource);
but if you try this with a BitmapImage loaded from a URI you will get a null object error. The reason is that the bitmap might not yet be downloaded. For a local file the bitmap load blocks execution until it is loaded:
Uri uri = new Uri(@"pack://application: ,,,/Resources/mypic.jpg"); BitmapImage bmi = new BitmapImage(uri); WriteableBitmap bmi2 = new WriteableBitmap(bmi); image1.Source = bmi2;
In this case the WriteableBitmap is created with no problems. If the URI was an HTTP URL, however, the load would not block the execution and the result would be an error.
This makes it more difficult to use this constructor - see Loading Bitmaps: DoEvents and the closure pattern
Once you have the WriteableBitmap you can start to work with its pixels. There are two methods provided which give you a limited equivalent of the BitBlt operation that you find in the low-level API. In this case you can either take a rectangle of pixels and copy them to an array or you can take an array and copy the data to a rectangle of pixels. In each case the array of data is treated as if it was a stream of bytes and simply stored or retrieved from the specified rectangle of pixels in the bitmap.
The WriteableBitmap understands the format of the bitmap it is storing and so it will automatically work out how many bytes are used per pixel and how to find the pixel data at a given co-ordinate. However, it can't know how you have organised the data in the array or how you might need the data organised when the pixels are read to the array.
The simplest scheme is that a row of pixel data is stored in the array adjacent to the previous row. That is, if the original image has p pixels and each pixel takes b bytes to represent then the first row in the array takes p*b bytes to store and the next row starts at p*b+1 and so on.
You can see that the pixel in column x row y (counting from zero) is stored at byte x*b+p*b*y. (This is an example of a storage mapping function.) The only complication with this simple scheme is that a single row of pixels might not end on a whole byte. For example, if you have a black and white image format you only need 1 bit per pixel and the first row of a 10 x 10 bitmap only needs 10 bits to store. However, to make things simple each row has to start on a new byte and so the amount of storage needed to store the first row is two bytes.
Notice that in this case the amount of storage allocated to a row is more than strictly needed according to the number of bits or bytes per pixel, i.e. it isn't just p*b.
There are other reasons why the storage needed for a row isn't always the obvious minimum - for example Windows insists that a row always begins on a 4-byte boundary.
For this reason we define and use the "stride".
The stride S is defined to be the amount of storage needed to store a row of the image including any padding needed to ensure that the next row starts correctly aligned.
In this case the WriteableBitmap takes care of the stride in its internal representation of the pixels and you don't need to worry about what value it is using. However you do have to worry about the stride used in the array of data that you are responsible for. In most cases you can simple set the stride to be the number of bytes needed to store a row of the bitmap - rounded up to make sure each row starts on a new byte if necessary.
You only have to worry about other issues, such as starting on a 4-byte boundary, if it is imposed by other parts of the system such as getting the data from a source that uses a specific stride.
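To make the stride arithmetic concrete, here is a minimal sketch (assuming the usual WPF namespaces and the image1 control used earlier; the sizes and the solid-red fill are illustrative) that builds a pixel array with the simple row-by-row layout described above and copies it into the bitmap with WritePixels:

WriteableBitmap wbmap = new WriteableBitmap(100, 100, 96, 96, PixelFormats.Bgra32, null);
int bytesPerPixel = (wbmap.Format.BitsPerPixel + 7) / 8;   // 4 for Bgra32
int stride = wbmap.PixelWidth * bytesPerPixel;             // bytes per row; no extra padding needed here
byte[] pixels = new byte[stride * wbmap.PixelHeight];
for (int i = 0; i < pixels.Length; i += bytesPerPixel)
{
    pixels[i] = 0;        // Blue
    pixels[i + 1] = 0;    // Green
    pixels[i + 2] = 255;  // Red
    pixels[i + 3] = 255;  // Alpha - fully opaque
}
wbmap.WritePixels(new Int32Rect(0, 0, wbmap.PixelWidth, wbmap.PixelHeight), pixels, stride, 0);
image1.Source = wbmap;

The pixel at column x, row y starts at index x * bytesPerPixel + y * stride in the array, which is exactly the storage mapping function given earlier.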
https://i-programmer.info/programming/wpf-workings/527-writeablebitmap.html
Simple on/off devices are, obviously, the simplest of IoT devices you can work with. In this extract from a new book on using GPIO Zero on the Pi in Python we look at how to get started.
Buy from Amazon.
There are only two simple on/off devices – the LED and the Buzzer. In this chapter we look at each in turn and learn how to create our own new custom on/off devices.
The inheritance hierarchy for the simple output devices is:
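In outline (a text rendering based on the description in the following paragraphs, where each class derives from the one above it and LED and Buzzer are siblings):

Device
  GPIODevice
    OutputDevice
      DigitalOutputDevice
        LED
        Buzzer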
If you don’t know about inheritance then see the previous chapter.
Knowledge of the inheritance hierarchy is mostly useful in creating your own custom classes to extend GPIO Zero. Device is the most general and then we have GPIODevice which corresponds to a single GPIO line used as input or output. OutputDevice is a general output line, DigitalOutputDevice only has two states and finally LED and Buzzer correspond to real devices.
The LED class is the archetypal on/off device. You have already seen how to create an LED object associated with a particular GPIO pin:
led = LED(4)
creates an LED object associated with GPIO4.
Other things you can specify when creating an LED object are:
LED(pin, active_high=True, initial_value=False, pin_factory=None):
pin
the GPIO pin you want to use
active_high
if True, the on method sets the line high; if False, the off method sets the line high.
initial_value
if False, the device is initially off; if True, the device is initially on
pin_factory
the underlying pin factory to use – let GPIO Zero set this.
So for example:
led=LED(4, active_high=True, initial_value=True)
creates an LED associated with GPIO4 that is high when switched on and hence initially on.
There are only four methods associated with LED. The most commonly used are on and off which switch the LED on and off.
If you want to make the LED flash then you can use:
blink(on_time, off_time,n, background)
which will blink the LED n times, on for on_time and off for off_time specified in seconds. The background parameter defaults to true and this allows the blinking to be performed on another thread in the background. That is, if background is false the blink method blocks and does not return until the n blinks have been completed, which brings your program to a halt. If you want to do other things while the LED is flashing then set background to True or accept the default. The LED will then flash n times after blink has returned.
For example, our original Blinky LED program given in Chapter 3:
from gpiozero import LED
from time import sleep
led = LED(4)
while True:
led.on()
sleep(1)
led.off()
sleep(1)
can be written as:
from gpiozero import LED
from signal import pause
led = LED(4)
led.blink(on_time=1,off_time=1,n=100)
print("Program Complete")
pause()
If you try this out you will discover that the LED keeps flashing for almost 200 seconds after the program has printed Program Complete. Notice that you need the pause at the end of the program because if your program comes to a halt so do any threads that the program has created. In short, without the pause() you wouldn’t see the LED flash at all. The point is that usually when you give a Python instruction you only move on to the next instruction when the current instruction has completed.
This is generally called “blocking” because the current instruction stops the next instruction executing before it is complete. The call to blink is non-blocking because it returns before it has finished everything you told it to do and the next instruction is executed while it is unfinished. Instructions that are non-blocking are very useful when working with hardware because it allows your program to get on with something else while the hardware is doing something.
Compare the behavior of the background non-blocking program with a blocking version:
from gpiozero import LED
led = LED(4)
led.blink(on_time=1,off_time=1,n=100,background=False)
print("Program Complete")
In this case you don’t need the pause() because the program waits for the LED to have completed 100 flashes. You will only see the Program Complete message after 200 seconds.
It is interesting that in either case the blink method makes use of a new thread to run the LED in the background; the only difference is that when background is False the main thread waits for the blink thread to complete.
The toggle method simply changes the LED from on to off or off to on depending on its current state. You can use it to write the Blinky program in yet another way:
from gpiozero import LED
from time import sleep
led = LED(4)
while True:
led.toggle()
sleep(1)
led.toggle()
sleep(1)
There are also some useful properties. The is_lit property is True if the LED is currently active, and the value property sets and gets the state of the LED as a 1 or a 0.
Finally we have the pin property which returns the pin object that the LED is connected to. The Pin object provides lower-level access to the GPIO line.
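To see these properties in action, here is a minimal sketch (the choice of GPIO4 is arbitrary):

from gpiozero import LED

led = LED(4)
led.on()
print(led.is_lit)   # True while the LED is active
print(led.value)    # 1 when on, 0 when off
led.value = 0       # same effect as led.off()
print(led.pin)      # the underlying Pin object for GPIO4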
Marcus Goldfish wrote:
> Hoping someone can help with this...
>
> I have a logical python namespace using a directory tree and __init__.py
> files. For example, PYTHONPATH points to ~pyroot, and I have the following:
>
> ~pyroot/
> ~pyroot/utils/
> ~pyroot/utils/commands/mygrep.py
>
> Which makes it nice to code:
>
> # some python script
> import utils.commands.mygrep as grep
>
> However, I have a problem when running python scripts from the command
> line. I would like to do this:
>
> > python utils.commands.mygrep.py
>
> but it doesn't work. Is there a trick, or something that I am missing,
> that will let me run scripts like that?

python utils/commands/mygrep.py will work if mygrep.py doesn't import other
modules from utils; not sure if it will work with imports.

Kent

> Thanks!
> Marcus
>
> ps-- WinXP, python 2.4
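For what it's worth, on Python 2.5 and later the -m switch runs a package submodule by its dotted name through the normal import machinery, so imports of sibling modules inside the package keep working. A sketch of the two invocations, assuming PYTHONPATH still points at ~pyroot (the -m form is not available for package submodules on the Python 2.4 setup described above):

python -m utils.commands.mygrep     # imported as part of the package; intra-package imports work
python utils/commands/mygrep.py     # runs the file directly; imports from utils may fail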
The Working Programmer - Going NoSQL with MongoDB
By Ted Neward | May 2010
Over the past decade or so, since the announcement of the Microsoft .NET Framework in 2000 and its first release in 2002, .NET developers have struggled to keep up with all the new things Microsoft has thrown at them. And as if that wasn’t enough, “the community”—meaning both developers who use .NET on a daily basis and those who don’t—has gone off and created a few more things to fill in holes that Microsoft doesn’t cover—or just to create chaos and confusion (you pick).
One of those “new” things to emerge from the community from outside of the Microsoft aegis is the NoSQL movement, a group of developers who openly challenge the idea that all data is/will/must be stored in a relational database system of some form. Tables, rows, columns, primary keys, foreign key constraints, and arguments over nulls and whether a primary key should be a natural or unnatural one … is nothing sacred?
In this article and its successors, I’ll examine one of the principal tools advocated by those in the NoSQL movement: MongoDB, whose name comes from “humongous,” according to the MongoDB Web site (and no, I’m not making that up). Most everything MongoDB-ish will be covered: installing, exploring and working with it from the .NET Framework, including the LINQ support offered; using it from other environments (desktop apps and Web apps and services); and how to set it up so the production Windows admins don’t burn you in effigy.
Problem (or, Why Do I Care, Again?)
Before getting too deep into the details of MongoDB, it’s fair to ask why any .NET Framework developers should sacrifice the next half-hour or so of their lives reading this article and following along on their laptops. After all, SQL Server comes in a free and redistributable Express edition that provides a lighter-weight data storage option than the traditional enterprise- or datacenter-bound relational database, and there are certainly plenty of tools and libraries available to provide easier access to it, including Microsoft’s own LINQ and Entity Framework.
The problem is that the strength of the relational model—the relational model itself—is also its greatest weakness. Most developers, whether .NET, Java or something else entirely, can—after only a few years’ experience—describe in painful detail how everything doesn’t fit nicely into a tables/rows/columns “square” model. Trying to model hierarchical data can drive even the most experienced developer completely bonkers, so much so that Joe Celko wrote a book—“SQL for Smarties, Third Edition,” (Morgan-Kaufmann, 2005)—entirely about the concept of modeling hierarchical data in a relational model. And if you add to this the basic “given” that relational databases assume an inflexible structure to the data—the database schema—trying to support ad hoc “additionals” to the data becomes awkward. (Quick, show of hands: How many of you out there work with databases that have a Notes column, or even better, Note1, Note2, Note3 …?)
Nobody within the NoSQL movement is going to suggest that the relational model doesn’t have its strengths or that the relational database is going to go away, but a basic fact of developer life in the past two decades is that developers have frequently stored data in relational databases that isn’t inherently (or sometimes even remotely) relational in nature.
The document-oriented database stores “documents” (tightly knit collections of data that are generally not connected to other data elements in the system) instead of “relations.” For example, blog entries in a blog system are entirely unconnected to one another, and even when one does reference another, most often the connection is through a hyperlink that is intended to be dereferenced by the user’s browser, not internally. Comments on that blog entry are entirely scoped to that blog entry, and rarely do users ever want to see the aggregation of all comments, regardless of the entry they comment on.
Moreover, document-oriented databases tend to excel in high-performance or high-concurrency environments; MongoDB is particularly geared toward high performance, whereas a close cousin of it, CouchDB, aims more at high-concurrency scenarios. Both forgo any sort of multi-object transaction support, meaning that although they support concurrent modification of a single object in a database, any attempt to modify more than one at a time leaves a small window of time where those modifications can be seen “in passing.” Documents are updated atomically, but there’s no concept of a transaction that spans multiple-document updates. This doesn’t mean that MongoDB doesn’t have any durability—it just means that the MongoDB instance isn’t going to survive a power failure as well as a SQL Server instance does. Systems requiring full atomicity, consistency, isolation and durability (ACID) semantics are better off with traditional relational database systems, so mission-critical data most likely won’t be seeing the inside of a MongoDB instance any time soon, except perhaps as replicated or cached data living on a Web server.
In general, MongoDB will work well for applications and components that need to store data that can be accessed quickly and is used often. Web site analytics, user preferences and settings—and any sort of system in which the data isn’t fully structured or needs to be structurally flexible—are natural candidates for MongoDB. This doesn’t mean that MongoDB isn’t fully prepared to be a primary data store for operational data; it just means that MongoDB works well in areas that the traditional RDBMS doesn’t, as well as a number of areas that could be served by either.
Getting Started
As mentioned earlier, MongoDB is an open-source software package easily downloaded from the MongoDB Web site, mongodb.com. Opening the Web site in a browser should be sufficient to find the links to the Windows downloadable binary bundle; look at the right-hand side of the page for the Downloads link. Or, if you prefer direct links, use mongodb.com/lp/download/mongodb-enterprise. As of this writing, the stable version is the 1.2.4 release. It’s nothing more than a .zip file bundle, so installing it is, comparatively speaking, ridiculously easy: just unzip the contents anywhere desired.
Seriously. That’s it.
The .zip file explodes into three directories: bin, include and lib. The only directory of interest is bin, which contains eight executables. No other binary (or runtime) dependencies are necessary, and in fact, only two of those executables are of interest at the moment. These are mongod.exe, the MongoDB database process itself, and mongo.exe, the command-line shell client, which is typically used in the same manner as the old isql.exe SQL Server command-line shell client—to make sure things are installed correctly and working; browse the data directly; and perform administrative tasks.
Verifying that everything installed correctly is as easy as firing up mongod from a command-line client. By default, MongoDB wants to store data in the default file system path, c:\data\db, but this is configurable with a text file passed by name on the command line via --config. Assuming a subdirectory named db exists wherever mongod will be launched, verifying that everything is kosher is as easy as what you see in Figure 1.
Figure 1 Firing up Mongod.exe to Verify Successful Installation
If the directory doesn’t exist, MongoDB will not create it. Note that on my Windows 7 box, when MongoDB is launched, the usual “This application wants to open a port” dialog box pops up. Make sure the port (27017 by default) is accessible, or connecting to it will be … awkward, at best. (More on this in a subsequent article, when I discuss putting MongoDB into a production environment.)
Once the server is running, connecting to it with the shell is just as trivial—the mongo.exe application launches a command-line environment that allows direct interaction with the server, as shown in Figure 2.
Figure 2 Mongo.exe Launches a Command-Line Environment that Allows Direct Interaction with the Server
By default, the shell connects to the “test” database. Because the goal here is just to verify that everything is working, test is fine. Of course, from here it’s fairly easy to create some sample data to work with MongoDB, such as a quick object that describes a person. It’s a quick glimpse into how MongoDB views data to boot, as we see in Figure 3.
Figure 3 Creating Sample Data
Essentially, MongoDB uses JavaScript Object Notation (JSON) as its data notation, which explains both its flexibility and the manner in which clients will interact with it. Internally, MongoDB stores things in BSON, a binary superset of JSON, for easier storage and indexing. JSON remains MongoDB’s preferred input/output format, however, and is usually the documented format used across the MongoDB Web site and wiki. If you’re not familiar with JSON, it’s a good idea to brush up on it before getting heavily into MongoDB. Meanwhile, just for grins, peer into the directory in which mongod is storing data and you’ll see that a couple of “test”-named files have shown up.
Enough playing—time to write some code. Quitting the shell is as easy as typing “exit,” and shutting the server down requires only a Ctrl+C in the window or closing it; the server captures the close signal and shuts everything down properly before exiting the process.
MongoDB’s server (and the shell, though it’s not as much of an issue) is written as a native C++ application—remember those?—so accessing it requires some kind of .NET Framework driver that knows how to connect over the open socket to feed it commands and data. The MongoDB distribution doesn’t have a .NET Framework driver bundled with it, but fortunately the community has provided one, where “the community” in this case is a developer by the name of Sam Corder, who has built a .NET Framework driver and LINQ support for accessing MongoDB. His work is available in both source and binary form, from github.com/samus/mongodb-csharp. Download either the binaries on that page (look in the upper-right corner) or the sources and build it. Either way, the result is two assemblies: MongoDB.Driver.dll and MongoDB.Linq.dll. A quick Add Reference to the References node of the project, and the .NET Framework is ready to rock.
Writing Code
Fundamentally, opening a connection to a running MongoDB server is not much different from opening a connection to any other database, as shown in Figure 4.
Discovering the object created earlier isn’t hard, just … different … from what .NET Framework developers have used before (see Figure 5).
using System;
using MongoDB.Driver;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Mongo db = new Mongo();
            db.Connect(); // Connect to localhost on the default port.
            Database test = db.getDB("test");
            IMongoCollection things = test.GetCollection("things");

            Document queryDoc = new Document();
            queryDoc.Append("lastname", "Neward");

            Document resultDoc = things.FindOne(queryDoc);
            Console.WriteLine(resultDoc);

            db.Disconnect();
        }
    }
}
If this looks a bit overwhelming, relax—it’s written out “the long way” because MongoDB stores things differently than traditional databases.
For starters, remember that the data inserted earlier had three fields on it—firstname, lastname and age, and any of these are elements by which the data can be retrieved. But more importantly, the line that stored them, tossed off rather cavalierly, was “test.things.save()”—which implies that the data is being stored in something called “things.” In MongoDB terminology, “things” is a collection, and implicitly all data is stored in a collection. Collections in turn hold documents, which hold key/value pairs where the values can be additional collections. In this case, “things” is a collection stored inside of a database, which as mentioned earlier is the test database.
As a result, fetching the data means connecting first to the MongoDB server, then to the test database, then finding the collection “things.” This is what the first four lines in Figure 5 do—create a Mongo object that represents the connection, connects to the server, connects to the test database and then obtains the “things” collection.
Once the collection is returned, the code can issue a query to find a single document via the FindOne call. But as with all databases, the client doesn’t want to fetch every document in the collection and then find the one it’s interested in—somehow, the query needs to be constrained. In MongoDB, this is done by creating a Document that contains the fields and the data to search for in those fields, a concept known as query by example, or QBE for short. Because the goal is to find the document containing a lastname field whose value is set to “Neward,” a Document containing one lastname field and its value is created and passed in as the parameter to FindOne. If the query is successful, it returns another Document containing all the data in question (plus one more field); otherwise it returns null.
By the way, the short version of this description can be as terse as:
When run, not only do the original values sent in show up, but a new one appears as well, an _id field that contains an ObjectId object. This is the unique identifier for the object, and it was silently inserted by the database when the new data was stored. Any attempt to modify this object must preserve that field or the database will assume it’s a new object being sent in. Typically, this is done by modifying the Document that was returned by the query:
However, it’s always possible to create a new Document instance and manually fill out the _id field to match the ObjectId, if that makes more sense:
Of course, if the _id is already known, that can be used as the query criteria, as well.
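As a side note for Python readers, the same query-by-example round trip looks very similar with the separately installed pymongo driver. This is only a sketch of equivalent calls, not the C# driver the article uses:

from pymongo import MongoClient

client = MongoClient()        # localhost on the default port
things = client.test.things  # the "things" collection in the "test" database

# Query by example: find the document whose lastname field is "Neward"
doc = things.find_one({"lastname": "Neward"})
if doc is not None:
    doc["age"] = doc.get("age", 0) + 1
    # Keep the _id so the server treats this as the same document
    things.replace_one({"_id": doc["_id"]}, doc)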
Notice that the Document is effectively untyped—almost anything can be stored in a field by any name, including some core .NET Framework value types, such as DateTime. Technically, as mentioned, MongoDB stores BSON data, which includes some extensions to traditional JSON types (string, integer, Boolean, double and null—though nulls are only allowed on objects, not in collections) such as the aforementioned ObjectId, binary data, regular expressions and embedded JavaScript code. For the moment, we’ll leave the latter two alone—the fact that BSON can store binary data means that anything that can be reduced to a byte array can be stored, which effectively means that MongoDB can store anything, though it might not be able to query into that binary blob.
Not Dead (or Done) Yet!
There’s much more to discuss about MongoDB, including LINQ support; doing more complex server-side queries that exceed the simple QBE-style query capabilities shown so far; and getting MongoDB to live happily in a production server farm. But for now, this article and careful examination of IntelliSense should be enough to get the working programmer started.
Ted Neward has authored and coauthored a dozen books, including the forthcoming “Professional F# 2.0” (Wrox). He consults and mentors regularly—reach him at ted@tedneward.com or read his blog at blogs.tedneward.com.
Thanks to the following technical experts for reviewing this article: Kyle Banker and Sam Corder
I'm using matplotlib to plot log-normalized images but I would like the original raw image data to be represented in the colorbar rather than the [0-1] interval. I get the feeling there's a more matplotlib'y way of doing this by using some sort of normalization object and not transforming the data beforehand... in any case, there could be negative values in the raw image.
import matplotlib.pyplot as plt
import numpy as np

def log_transform(im):
    '''returns log(image) scaled to the interval [0,1]'''
    try:
        (min, max) = (im[im > 0].min(), im.max())
        if (max > min) and (max > 0):
            return (np.log(im.clip(min, max)) - np.log(min)) / (np.log(max) - np.log(min))
    except:
        pass
    return im

a = np.ones((100, 100))
for i in range(100):
    a[i] = i

f = plt.figure()
ax = f.add_subplot(111)
res = ax.imshow(log_transform(a))
# the colorbar drawn shows [0-1], but I want to see [0-99]
cb = f.colorbar(res)
I've tried using cb.set_array, but that didn't appear to do anything, and cb.set_clim, but that rescales the colors completely.
Yes, there is! Use LogNorm. Here is a code excerpt from a utility that I wrote to display confusion matrices on a log scale.
from pylab import figure, cm
from matplotlib.colors import LogNorm

# C = some matrix
f = figure(figsize=(6.2, 5.6))
ax = f.add_axes([0.17, 0.02, 0.72, 0.79])
axcolor = f.add_axes([0.90, 0.02, 0.03, 0.79])

im = ax.matshow(C, cmap=cm.gray_r, norm=LogNorm(vmin=0.01, vmax=1))

t = [0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
f.colorbar(im, cax=axcolor, ticks=t, format='$%.2f$')

f.show()
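Applied to the array from the question, the idea is to hand imshow the raw data and let the norm do the log scaling. A sketch, assuming vmin=1 so the all-zero first row is simply clipped (for data with genuinely negative values, matplotlib.colors.SymLogNorm is the usual alternative):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

a = np.ones((100, 100))
for i in range(100):
    a[i] = i

f = plt.figure()
ax = f.add_subplot(111)
# pass the raw data and let the norm do the log scaling
res = ax.imshow(a, norm=LogNorm(vmin=1, vmax=a.max()))
cb = f.colorbar(res)  # colorbar is now labelled in raw data units (1..99)
plt.show()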
example:

andrew=# create extension unnest_ordinality;
andrew=# select * from unnest_ordinality('{a,b,c,d,e,f,g}'::text[]);
 element_number | element
----------------+---------
              1 | a
              2 | b
              3 | c
              4 | d
              5 | e
              6 | f
              7 | g

The package is available on The PostgreSQL Experts Inc Github Repository and also on PGXN.
Surely something I would love to use. Thanks Andrew!
I typically achieve that now by:
select row_number() over(),* from unnest('{a,b,c,d,e,f,g}'::text[]);
We tried that. This is LOTS faster.
Andrew,
Had trouble compiling on my 9.2 windows mingw64. I had to take out the line
#include "access/htup_details.h"
Once I took that out seemed to work fine (compiled and quick test even on my VC 9.2 EDB install).
With the line I was getting errors like:
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup_details.h:130:3: error: conflicting types for 'DatumTupleFields'
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup.h:131:3: note: previous declaration of 'DatumTupleFields' was here
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup_details.h:132:8: error: redefinition of 'struct HeapTupleHeaderData'
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup.h:133:16: note: originally defined here
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup_details.h:459:8: error: redefinition of 'struct MinimalTupleData'
c:/ming64/projects/pgx64/pg92/include/POSTGR~1/server/access/htup.h:461:16: note: originally defined here
Not sure if others ran into same issue or just a mingw64 one.
Thanks,
Regina
I've pushed a fix - I had the version string comparison wrong.
But something looks rather wrong with your installation, too. access/htup_details.h shouldn't even exist on a 9.2 installation.
I thought it always did, but they shuffled functions around. The shuffling broke our PostGIS raster functionality in 9.3, so we had to make changes just to add an IFDEF for 9.3.
I have opted to use separate branches in things like FDWs, that match the PostgreSQL branches. That's probably not possible for things like PostGIS. But you really shouldn't be building against unclean Postgres sources, which is what this looks like.
Never mind, I see you are right, looking back at this ticket.
I wonder if maybe I accidentally installed 9.3 in my 9.2 cluster once so it has a mix. I'll rebuild that.
Very useful! Maybe something which belongs in core?
Just curious, what was the actual use-case? I know I've needed this before, but can't remember in what situation.
I forget - you'd need to ask Josh :-)
There's a patch for full WITH ORDINALITY support floating around which will almost certainly be in 9.4.
I know. But lots of people can't wait that long. 9.4 is likely to be more than a year away and many people don't upgrade immediately. This was wanted in a hurry.
I've got a patch in for the next CommitFest which implements this in core. You can find it, including associated docs, here:
Let me know what you think :)
david AT fetter DOT org
@Andrew:
Yes, I was only explaining that it wouldn't make sense to add this into core because something better is in the works and we couldn't backpatch this function anyway. :-)
But before 9.4 is out, I think this is a handy extension, so thank you for your work.
Teensy nit here: your version puts the ordinality column at the beginning, where the standard (and hence my patch) puts it at the end.
I wasn't trying to implement the standard :-)
Easy Java Simulations - The Manual
Version 3.4 - September 2005
Francisco Esquembre

Contents

1 A first contact
  1.1 About Easy Java Simulations
  1.2 Installing and running Ejs
  1.3 Working with a simulation
  1.4 Inspecting the simulation
  1.5 Modifying the simulation
  1.6 A global vision
2 Creating models
  2.1 Definition of a model
  2.2 Interface for the model in Easy Java Simulations
  2.3 Declaration of variables
  2.4 Initializing the model
  2.5 Evolution equations
  2.6 Constraint equations
  2.7 Custom methods and additional libraries
3 Building a view
  3.1 Graphical interfaces
  3.2 Association between variables and properties
  3.3 How a simulation runs
  3.4 Interface of Easy Java Simulations for the view
  3.5 Editing properties
  3.6 Learning more about view elements
4 Using the simulation
  4.1 Using your simulations
  4.2 Files generated for a simulation
  4.3 Ejs configuration options
  4.4 Running the simulation as an applet
  4.5 Running the simulation as an application
  4.6 Distribution of simulations

CHAPTER 1. A first contact

© 2005 by Francisco Esquembre, September 2005

We describe in this chapter how to create an interactive simulation in Java using Easy Java Simulations (Ejs for short). In order to get a general perspective of the complete process, we will inspect, run and finally modify, slightly but meaningfully, an already existing simulation. This will help us identify the parts that make a simulation, as well as become acquainted with our authoring tool. (Readers of the OSP Guide may skip this chapter; the information contained in it has already been covered in the chapter about Ejs of the guide.)

1.1 About Easy Java Simulations

Every technical work requires the right tool. Easy Java Simulations is an authoring tool that has been specifically designed for the creation of interactive simulations in Java. Though it is important not to confuse the final product with the tool used to create it, and, in principle, the simulations we will create with Ejs can also be built with the help of any modern computer programming language, this tool originates from the specific expertise accumulated along several years of experience in the creation of computer simulations, and will therefore be very useful to simplify our task, both from the technical and from the conceptual point of view.
From the technical point of view, because Ejs greatly simplifies the creation of the view of a simulation, that is, the graphical part of it, a process that usually requires an advanced level of technical know-how on programming computer graphics. From the conceptual point of view, because Ejs provides a simplified structure for the creation of the model of the simulation, that is, the scientific description of the phenomenon under study.

Obviously, part of the task still depends on us. Ours is the responsibility for the design of the view and for providing the variables and algorithms that describe the model of the simulation. We will soon learn how to use Ejs to build a view. For the model, we will learn to declare the variables that describe its state and to write the Java code needed to specify the algorithms of the model. Stated in different words, we will learn to program the computer to solve our model.

If you have never programmed a computer, let me tell you that it is a fascinating experience. And, actually, it is not difficult in itself. All that is needed is to follow a set of basic rules determined by the syntax of the programming language, Java in our case, and, obviously, to have a clear idea of what we want to program. It can be compared to writing in a given human language. No need to say that one can attempt to write simple essays or complete literary creations. With this software tool, we can create models of different complexity, from the very simple to the far-from-trivial.

Once more, Easy Java Simulations has some built-in features that will make our task easier, also when writing our algorithms. For instance, Ejs will allow us to solve numerically complex systems of ordinary differential equations in a very comfortable way, as well as automatically take care of a number of internal issues of a technical nature (such as multitasking, to name one) which, if done manually, would require the help of an expert programmer. But let us proceed little by little.

1.2 Installing and running Ejs

Easy Java Simulations can be run under any operating system that supports a Java Virtual Machine, and it works exactly the same in all cases. Only what this section describes can be different depending on the operating system you use in your computer. Though we will illustrate the process assuming you are using the Microsoft Windows operating system, the explanations should be clear enough for users of different software platforms, with obvious changes. Nevertheless, you will find detailed installation and start-up instructions for the most popular operating systems in the Web pages for Ejs.

1.2.1 Installation

Let's start our work! First of all, we must install the software we need in our computer. The steps needed for this are the following:

1. Copy the files for Ejs to your hard disk.
2. Install the Java 2 Standard Edition Development Kit (JDK) in your computer.
3. Inform Ejs where to find the JDK.

All the software required to run Ejs is completely free and can be found in the Web server of Ejs. The installation of Ejs consists in uncompressing a single ZIP file that you will most likely have downloaded from the Web server. It is recommended that neither this directory, nor any of its parents, contains white spaces in its name.
We will assume that you uncompressed this file in the root directory of your hard disk, thus creating a new directory called, say, C:\Ejs. But any other directory will serve as well. 2 The installation of the JDK follows a standard process established by Sun Microsystems (the company that created Java) which, under Windows, consists in running a simple installation program. The version recommended at the time of this writing is 1.5.0 04, although any release later than 1.4.2 should work just fine. The only important decision you need to take during the installation is to choose the directory where Java files will be copied to. We recommend that you just accept the directory suggested by the installation program. In the case of version 1.5.0 04, this defaults (in English-based computers) to C:\Program files\Java\jdk1.5.0 04. Easy Java Simulations requires the Java Development Kit to run. Please, do not confuse this with the Java Runtime Environment (JRE), sometimes called the Java plug-in. The JRE is a simpler set of Java utilities that allows your browser to run Java applets (a Java applet is an application that runs inside an HTML page) and certain applications, but does not include, for instance, the Java compiler. Thus, although you might be able to run Ejs’ interface, you won’t be able to generate simulations. Hence, please make sure that you download the (larger) JDK. It is possible that your computer has already a copy of the JDK installed in it. For instance, if you use an operating system that ships with a Java Virtual Machine, such as Mac OS X or some distributions of Linux. In this case, we still recommend that you check the version of Java you got. If it is too old, you should consider updating to a more recent version. 2 In Unix-like systems, the directory may be uncompressed as read-only. In this case, please enable write permissions for the whole Ejs directory. 4 CHAPTER 1. A FIRST CONTACT The last step you need to complete is to let Ejs know in which directory you installed the JDK. 3 The way to do this depends on which method you will use to launch Ejs. The recommended method to run Easy Java Simulations is to use Ejs’ console. In this case, once you run the console (usually by double-clicking on the EjsConsole.jar file), you will just need to write the installation directory you used for the JDK in the console’s “Java (JDK)” text field. We describe this in detail in Subsection 1.2.3 below. Alternatively, for operating system purists, Ejs’ installation includes three script files (one for each major operating system: Windows, Mac OS X, and Linux) that will help you run Ejs from the command line. If you choose this method, you will need to edit the script file for your operating system and modify the variable JAVAROOT defined in the first few lines of this script to point to the installation directory of the JDK. Thus, for instance, if you are using Windows and have used the suggested installation directory, then you may not even need to do anything at all. If, on the contrary, you installed the JDK in the directory, say, C:\jdk1.5.0 04, then you must edit the file called Ejs.bat that you will find in the directory where you installed Ejs, and modify the line in this file that reads: set JAVAROOT=C:\Program files\Java\jdk1.5.0_04 so that it reads as follows: set JAVAROOT=C:\jdk1.5.0_04 Virtually all the reported installation problems have their origin in Ejs not finding the JDK because this environment variable is not properly set. 
There are some options that you may want to configure before running Ejs. The most interesting one is, perhaps, the language for the interface. Ejs offers an interface in several different languages from which you need to choose before running it. You can choose that of the operating system installation itself, English, Spanish, and Traditional Chinese from the console. If you can’t obtain Ejs’ interface in the language you want, try editing the script file for your operating system and modifying the self-explanatory commands in there. All in all, if you followed the installation instructions provided and cannot get Ejs to run as described in Subsection 1.2.3, please e-mail us at fem@um.es. Send a simple description of the problem, including any error message you may have gotten in the process. We’ll try to help. 3 This step is not necessary in Mac OS X. 1.2. INSTALLING AND RUNNING EJS 1.2.2 5 Directory structure We need to say some words about the organizational structure of the files in the Ejs directory. Once you have created this directory in your hard disk, please inspect it with the file browser of your operating system. You will find in it a set of three files, all with the same name, Ejs, if only they have different suffixes (extensions is the technical word). These files, Ejs.bat, Ejs.macosx, and Ejs.linux are those used to run Ejs under the different major operating systems. You will also find the JAR file for Ejs’ console, EjsConsole.jar. These four files are covered in the next subsection. Finally, there are also two files called LaunchBuilder.bat and LaunchBuilder.sh in this directory. They are used to run a utility program called LaunchBuilder, which we will cover in Subsection 4.5.1. Besides those files, you will also find two directories called data and Simulations. The first directory contains the program files for Ejs, and you should not touch it. The second directory will be your working directory and you can modify its contents at your will, with one important exception: do not touch the directory called library that you will find in it. This directory contains the library of files required for the simulations that we will create. Because of its importance and also so that it does not interfere with the files and directories that you may create in the directory Simulations, we have given this library a directory name that starts with the character “ ”. It is therefore forbidden (if this sounds too strict, let’s say it is very dangerous) to use this character as the first letter of the name of any simulation file that you may create in the future. You will also find other subdirectories of Simulations that begin with the character “ ”. We have included, for instance, a directory called examples with many sample simulations (and auxiliary files for them) that may be useful for you to inspect and run. Depending on the distribution you got, there might be additional sample directories. The usual operation with Easy Java Simulations takes place in the directory Simulations. We will save our simulation files in it and Ejs will also generate there the files needed for the simulations to run. As you work with Ejs, this directory will contain more and more files (it may even become crowded!). You can do whatever you think appropriate with your files, but recall not to delete, move, nor rename the library directory (or, even better, do not touch any directory or file whose name starts with the character “ ”). 6 CHAPTER 1. 
A FIRST CONTACT 1.2.3 Running Easy Java Simulations Please return now to the Ejs directory. As we already mentioned, there are two ways of running Easy Java Simulations. Using the console (recommended) To run Ejs from the console, we need to run the console file EjsConsole.jar first. This is a self-executable JAR file. Thus, if your system is properly configured (usually Windows and Mac OS X systems are, once you have installed the JDK on them), you just need to double-click this file. If you cannot make it run this way, open a system prompt, change the current directory to Ejs, and type the following: 4 java -jar EjsConsole.jar You should get a window like that of Figure 1.1. Figure 1.1. Easy Java Simulations’ start-up console under Windows. Notice that the console includes a text field labeled “Java (JDK)”, near the top. This field can be left empty in Mac OS X computers, but in Windows and Linux, 4 You may need to fully qualify the java command if it is not in your system PATH for binaries. 1.2. INSTALLING AND RUNNING EJS 7 you must write there the location of your JDK. The figure shows the default value for Windows systems. If the “Java (JDK)” text field doesn’t point to the directory where you installed the JDK, either type the correct installation directory in this field, or use the button to its right to bring-in a file browser, and use it to select the JDK installation directory. Then, click on the “Launch Easy Java Simulations” button and Ejs’ window should appear. Again, there are some options that you may want to configure before running Ejs. The console offers you an easy way to select them. Also, the console includes a button that runs LaunchBuilder. This utility program is described in Subsection 4.5.1. Notice that the console includes a text area in which Ejs will print whatever messages it produces. Using the script file Among the files called Ejs, select the one that corresponds to your operating system and run it. • Under Windows, use the file called Ejs.bat and double-click on it to run it. • The start-up file for Ejs under Mac OS X has the name Ejs.macosx. To run it, you’ll need to open a terminal (also know as shell) window, change the current directory to Ejs and type “./Ejs.macosx”. Before this, however, make sure that this file has execution permission. For this, write in the terminal window the command “chmod +x Ejs.macosx”. • The file called Ejs.linux corresponds to Linux operating systems. The steps to run this file are the same as for Mac OS X. If everything went well, in a few seconds you will find two windows in your screen. The first of these is the operating system window from which Ejs is run and contains just a few (rather strange) sentences. See Figure 1.2. This window will not be of interest for us, except for the fact that it will display any message that Ejs produces. You can minimize it (but do not close it!) or just place it where it won’t disturb and ignore it (except for messages). The second window is Ejs user interface itself.. 8 CHAPTER 1. A FIRST CONTACT Figure 1.2. Terminal window that launches Easy Java Simulations under Windows. The interface of Easy Java Simulations Figure 1.3 shows the interface of Easy Java Simulations, to which we have added some notes. From now on, we will just concentrate on it, and minimize (or just ignore) either the console or the terminal window we used to launch Ejs. You will notice that Ejs interface is rather basic. This was a design decision. 
It often happens (at least it happens to us) that the first look at a software program with an enormous amount of icons in its taskbar, causes fear rather than ease. Recall that Easy Java Simulations has the adjective Easy in its name and we want this to be appreciated (specially when used by students) from the very first moment. However, despite its austere aspect, Ejs has all it needs to have. The program will be displaying its capabilities as we need them. We’ll start exploring the interface by looking at the set of icons to the right, what is called in the figure the taskbar. You may be surprised by the place chosen to locate this taskbar. Usually, this bar (that provides tasks such as loading and saving our files, for instance) appears at the top of most program windows. Well, it is an option. We first decided to place it on the right-hand side because it seemed to us that this was the place where it would leave more free space for the real work. Actually, once we got used to it, we think it is a very comfortable place to have it. The taskbar shows the following icons: “New”. Clicking this icon clears the current simulation (if there is one) and returns Ejs to its original initial state. “Open”. This icon lets you load an existing simulation file. 1.2. INSTALLING AND RUNNING EJS 9 Figure 1.3. Easy Java Simulations user interface (with annotations). “Save”. This is used to save the current simulation on disk in the same file from which it was loaded. “Save as”. This icon also lets you save the current simulation on disk, but in a file different from the one currently in use. “Run”. When this icon is clicked, Ejs generates and runs the simulation currently loaded. It then changes its color to red (returning to green when you exit the simulation). “Font”. This icon lets you change the type and size of the font used in the text areas of Ejs. “Options”. This allows you to modify some options that change the appearance and behavior of Ejs. See Subsection 4.3. “Information”. This icon shows information about Easy Java Simulations. 10 CHAPTER 1. A FIRST CONTACT We will show how to use these icons as we need them for our work, though their meaning and use should be rather natural. Coming back to Figure 1.3, please notice that there is a blank area at the lower part of the window with a header that reads “You will receive output messages here”. This means exactly what it promises. This is a message area that Ejs will use to display information about the results of the actions we ask it to take. Let us turn now our attention to the most important part of the interface, the workpanel, the central area of the interface, and the three radio buttons on top on it, labeled “Introduction”, “Model” and “View”. When you select one of these buttons, the central area of Ejs displays the panel associated to the edition of the corresponding part of the simulation. Obviously, these parts are the introduction, the model and the view. 1.3 Working with a simulation Now that we are familiar with the interface, we will use it to work with an existing simulation. Click with the mouse on the “Open” icon and a dialog window, similar to the one shown in Figure 1.4, will appear. This window will show you the files contained in your Simulations directory. Figure 1.4. Contents of the Simulations directory. Open the directory examples/Manual/FirstContact. You will find there the simulation file Spring.xml. Select it and click the “Open” button in this dialog window; Ejs will then load this simulation. 
Notice that the message area of Ejs will display a confirmation message (“File successfully read. . . ”) and that the title of Ejs window will change to include the name of the file just loaded. 1.3. WORKING WITH A SIMULATION 1.3.1 11 The introduction Figure 1.5 shows the aspect of the workpanel of Ejs with the short introduction that we prepared for this simulation. Figure 1.5. Introduction pages for the simulation of a spring. You can read the second introduction page by clicking on the corresponding tab. As user of an existing simulation, this panel allows you to read the narrative that the author included as prelude or as instructions of use for the simulation. Later in this chapter we will learn how to modify these pages. 1.3.2 The view Besides the changes to the interface of Ejs itself, you will notice that two new windows have appeared on the screen. These windows, which are displayed in Figure 1.6, correspond to the interface of the simulation we loaded and make what we call its view. We can investigate how this view has been built. If we select the panel for the view in Ejs (clicking the corresponding radio button), we will see in the central area two frames which display several icons, see Figure 1.7. The frame on the right-hand side shows the set of graphical elements of Ejs that can be used to build a view, grouped by functionality. The frame on the left displays the actual construction chosen for this particular simulation. In a first approach, we can consider this panel as an advanced drawing tool specialized in the visualization of scientific phenomena and its data. Obviously, if 12 CHAPTER 1. A FIRST CONTACT Figure 1.6. Visualization of the phenomenon (left) and panel for plots (right) for the simulation of a spring. we want to create nice drawings, we’ll need to learn all the drawing tools at hand, and an important part of the task of learning how to use Easy Java Simulations consists in learning which elements exist and what they can do for us. 1.3.3 The model To proceed in this first inspection of the simulation, please click now the radio button for the model. The central panel will now show a set of five subpanels, each containing a part of the model (see Figure 1.8). You can explore these subpanels on your own, you will probably guess what each of them does. We’ll be doing this soon, but we are now interested in running the simulation. 1.3.4 Running the simulation A warning: do not try to run the simulation clicking on the buttons of the windows that appeared when we loaded the simulation! These windows are actually static and only serve us to get an idea of how the final simulation will look like. (More precisely, they are there to help the author build the view, during the creation of the simulation). You can distinguish these static windows from others that will soon appear because the static ones have a note between parentheses in their title that reads “Ejs window”. To run the simulation correctly, you need to click on the Run icon in Ejs taskbar. Then, after a few seconds (the fewer, the faster your computer is) Ejs will display some reassuring messages in its message area and two new windows, very similar to the previous ones, will appear, if only this time their title won’t tell they are “Ejs windows”. Now, you can interact with the simulation. Click the “Play” button and the 1.3. WORKING WITH A SIMULATION 13 Figure 1.7. Tree of elements for the simulation (left) and set of graphical elements of Ejs (right). 
spring will start oscillating, since it starts from a non-equilibrium position. The dialog window on the right will plot the graph of the displacement of the ball at the end of the spring with respect to the equilibrium (the graph in black) and of its velocity in the X axis (the graph in red), as Figure 1.9 shows. You can now work with the simulation to show some of the characteristics of simple harmonic motion. In particular, you can illustrate the dependence (or independence) of the period with respect to the parameters of the system and to its initial conditions. You can also modify the values of the mass and the elasticity constant of the spring using the input fields provided by the interface. You can place the spring in different initial positions just by clicking the ball at its end and dragging it to the position you want. You can measure the period by clicking the mouse on the plotting panel so that a yellow box appears displaying the coordinates of the point you clicked upon. You now have a small laboratory to work with your students on the motion of a spring. 1.3.5 Publishing the simulation on a Web server If this simulation is of interest for you, you will probably like to be able to publish it on the Internet so that other people (your students, for instance) can run it in a remote way. Now, this is one of the greatest benefits from the fact that Easy Java Simulations is based on Java: you already have all you need! Indeed, if you inspect the directory 14 CHAPTER 1. A FIRST CONTACT Figure 1.8. Subpanels of the model. The panel for the definition of variables is currently shown. Figure 1.9. The simulation as it runs. Simulations, you will see in it a set of files, of different type, whose names all start with the word “Spring” (well, one of them starts with the word “spring”). Open the one called Spring.html with your favorite Web browser and you will get something similar to what Figure 1.10 displays. Explore the links offered by this Web page and you will see that Ejs has used the introduction pages of the simulation to create the corresponding HTML pages, and that it has added to them a new page that includes the simulation itself in form of a Java applet, see Figure 1.11. It is important to remember that the simulation will only appear correctly if your browser is capable of displaying applets of Java version 1.4.2 or later. (If the simulation doesn’t appear in your browser as shown in Figure 1.11, you’ll need to install the necessary Java plug-in or JRE. You’ll find the instructions for this on the Web pages for Ejs, at.) 1.4. INSPECTING THE SIMULATION 15 Figure 1.10. Main Ejs-generated Web page for the simulation of the spring. You will notice that only the window that visualizes the spring appears inside the HTML page. The window that plots the graphs appears separately. This is because the first window was selected by the author, during the design of the interface, as the main window. See Subsection 3.4.3 for more details. If you want to publish this simulation on a Web server of your own, you only need to copy in it all the files that appeared in your Simulations directory (remember that all of them start by the word “Spring”, either upper or lowercase), and, this is important, also the directory that contains the library of Ejs: library. Once copied, you can delete your files (but not the library!) from this directory, if you want to keep it clean. Anyway, you can always re-create them repeating what you did along this section. 
If you want to learn more about the contents of these HTML pages, please read Section 4.4. 1.4 Inspecting the simulation Now that you know how to use a simulation previously created with Ejs, you may have two questions in mind: • Could I know what the computer does to simulate the motion of the spring? 16 CHAPTER 1. A FIRST CONTACT Figure 1.11. The simulation of the spring in applet form. • Even more interesting, could I modify this simulation to include other types of motions, or to plot different graphs? These are the two questions that, because their answers are positive, make of Easy Java Simulations a very special tool. You can surely find on the Internet many applets that simulate scientific processes. However, very few of them will allow you to see their “secrets”, that is, how they have been done. And those that allow you this, will surely do it by providing the full Java code of the applet, something which is only useful for expert Java programmers (which can also understand the phenomenon). Ejs on the contrary, because it was designed to be used by people that don’t need to be Java programmers, allows you to understand and customize the simulation in a much simpler and efficient way. We’ll show this in the present and next section by inspecting and modifying our example in a substantial way. 1.4.1 Inspecting the model Let’s go back to the model by clicking the corresponding radio button. We already saw, in Figure 1.8, that this panel contains five subpanels. 1.4. INSPECTING THE SIMULATION 17 Declaration of variables The first of these, labeled “Variables” and visible in Figure 1.8, displays the table of variables needed for our model. When programming, we use the term variables either for the parameters, the state variables or the inputs and outputs of the model. Every numerical value (or of any other type, such as boolean or text) is called a variable, even if its value remains constant all along the simulation. For the model of our example, these variables are the following: m, the mass of the ball at the end of the spring, k, the elastic constant of the spring, l, the length of the spring in equilibrium, x, the horizontal coordinate of the end of the spring, y, the vertical coordinate (which must remain constant), vx, the velocity of the horizontal motion, t, the variable that represents the time of the simulation, dt, the increment of time for each step of the simulation. As we can see, some variables correspond to parameters of the system (m, k and l), others describe the state of it (x, y and vx), and others are variables needed for the simulation itself (t and dt). Initialization of the model Variables need to be initialized. The table of Figure 1.8 provides an initial value for each of our variables, thus specifying the precise initial state of the system. In occasions, however, we will need a more complex initialization (for instance, if some preliminary computation is required). For these cases, we can use the second subpanel of the model. This is not necessary for our model, though, and we will leave this panel empty (we don’t display it either). Evolution of the model Next panel, labeled “Evolution”, is of special importance. It is used to indicate the simulation what to do when the evolution runs. The panel can contain two types of pages: a first type in which the author must directly write the Java code that implements the desired algorithm, and a second one, the type that this example is 18 CHAPTER 1. 
A FIRST CONTACT using, specially indicated for models that can be described using systems of ordinary differential equations. In our model, the differential equation that rules the motion is obtained from applying Newton’s Second Law, F = m a, and the basic assumption that the response of the spring to a displacement dx obeys Hooke’s Law, F = −k dx. We therefore obtain the second order differential equation: ẍ = − k (x − l), m (1.1) where x is the horizontal coordinate of the end of the spring. (We are choosing a coordinate system with its origin placed at the fixed end of the spring and the X axis along it). To enter this second order equation in the editor for differential equations of Ejs we need to rewrite it as a system of first order differential equations, which we achieve by using the variable vx = ẋ. All this results in the formulation displayed in Figure 1.12. 5 Figure 1.12. Evolution panel with the differential equations for our spring. Actually, this form of writing the previous second order differential equation may be easier to understand for students not familiar with the concept of differential equations. Note in the figure that we have selected t as the independent variable and that we are asking the editor to provide a solution for the equation for an increment of time given by dt. With this information, when the simulation is created, the editor will generate all the code needed for the numerical computation of the solution of the equation, 5 Note that the products require the character “*”. 1.4. INSPECTING THE SIMULATION 19 using for this the numerical method indicated in the field right below the equations. For this relatively simple problem, we have chosen the so-called Euler–Richardson method, a method of order two that provides a reasonable ratio speed/precision (for this problem). The field called “Tolerance”, that appears here empty, is only required when the method of numerical solution is of the type called adaptive, see Subsection 2.5.3. The field “Events” tells us that there are no events defined for this differential equation. We’ll discuss events in Subsection 2.5.4. Solving differential equations numerically is a sophisticated task, although it can be automated in an effective way. This is precisely what Easy Java Simulations offers: you write the equations in this editor and Ejs automatically generates the corresponding code. Before proceeding, we need to turn our attention to the left frame of this subpanel. In it we instruct Ejs to play 20 steps of the simulation per second (or, in different words, to display 20 frames per second). This, together with the value of 0.05 that we used for the variable dt will result in a simulation that runs approximately in real time (although this is not absolutely necessary). Notice also that there is a checkbox called “Autoplay”, that is now unchecked. This box must be checked if we want the simulation to start automatically playing the evolution when it is run. Constraints among variables The subpanel labeled “Constraints” is used to write the Java code needed to establish fixed relationships among variables of the system. We call these relationships constraints. In our model, the panel (not shown here) is empty since there are no constraints among variables. Custom methods The last subpanel of the model is labeled “Custom”. 
This panel can be used by the author of the simulation to define his/her own Java methods 6 Differently to the rest of panels of the model, that play a well-defined role in the structure of the simulation, methods created in this panel must be explicitly used by the author in any of the other parts of the simulation. Again, in our example, this panel is empty and is not displayed. 6 “Method” is the name that Java gives to what other programming languages call functions or subroutines. The object-oriented nature of Java turns these methods into something even more powerful than that. However, given the simplified use of Java that we chose for Ejs, this definition will be sufficient for us. 20 CHAPTER 1. A FIRST CONTACT The model as a whole We can now describe the integral behavior of all subpanels of the model. To start the simulation, Ejs declares the variables and initializes them, using for this both the initial value as specified in the table of variables, and whatever code the user may have written in the initialization panel. At this moment (ignoring for once the left-to-right order), Ejs also executes whatever code the user may have written in the constraint pages. The reason for this is that the constraints contain equations that can modify the initial value of certain variables, as will be explained in Subsection 1.5.1 and, in more detail, in Chapter 2. Next, when the simulation plays, Ejs executes the code provided by the evolution panel and, immediately after, also the possible constraints. Once this is done, the system will be ready for a new step of the evolution, which it will repeat at the prescribed speed (number of images per second). Notice that, as we mentioned already, methods in the panel “Custom” are not automatically included in this process. 1.4.2 Inspecting the view Specifying the view of the simulation takes two steps. In the first place, we need to build the tree of elements displayed in the left frame of Figure 1.7. This tree is graphically descriptive of the components of the view because each element appears next to an icon representative of its function. Usually, the author also gives each element an illustrative name (at least, we do so), which also facilitates the comprehension. The second step, less evident but also easy to accomplish, consists in editing the so-called properties of each element. Properties are internal values of the element that determine how the element looks and behaves on the screen. We may be interested in editing the properties of an element either to change any of its static graphical peculiarities (such as the color or font, for instance), or to instruct it to use the variables of the model to modify its dynamic aspect (position or size) on the screen. This second possibility is what turns the simulation into a really dynamic, interactive visualization of the phenomenon under study. Let us see how we used it in this particular example. Select the panel for the view and, in the tree of elements of our simulation, click on the element called Ball to select it. The graphical aspect of this element is precisely that of a filled ellipse, and corresponds to the ball that you can see at the end of the spring. Once selected (you will see a colored frame around its name), click again on it, but this time with the right button of the mouse. The menu of Figure 1.13 will appear. 1.4. INSPECTING THE SIMULATION 21 Figure 1.13. Popup menu for the element Ball of the view. 
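Before we open the properties of the element Ball, it is worth recapping, in code form, the execution order described above in “The model as a whole”. The following pseudo-Java sketch is only an illustration: the method names used here (initializeModel, doEvolutionStep, applyConstraints) are invented for the occasion and are not the names of the code that Ejs actually generates.

initializeModel();     // table of variables plus the initialization panel
applyConstraints();    // constraints may adjust some of the initial values
while (isPlaying) {    // while the evolution runs...
  doEvolutionStep();   // ...execute the evolution panel (Java code or the ODE solver),
  applyConstraints();  // ...then re-evaluate the constraints,
}                      // ...and repeat at the prescribed number of frames per second

With this picture in mind, let us return to the popup menu of Figure 1.13.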
Select the option “Properties” (the one highlighted in the figure) and the window displayed in Figure 1.14 will appear. (Double-clicking directly on the Ball element is a short cut for this option.) Figure 1.14. Table of properties for the element Ball of the view. This window displays the table of properties of the element. In our case, the static values indicate that the ball will be displayed filled with the color cyan (a sort of light blue), in the form of an ellipse of size (0.2,0.2) (which finally results in a circle being drawn), and that the element is enabled, that is, that it will respond to user interaction if (s)he clicks on it with the mouse. You can also notice, and this is the most interesting thing, that other properties of the element are associated to variables of the model, or use expressions that use them. For instance, the position of the element is associated to the point (x,y), where x and y are the corresponding variables of the model. This association is a two-way link. In one direction it means that, every time the value of these variables changes during the execution of the model, the element will be informed and the corresponding properties updated to the new values, causing the ball to move at the simulation’s pace. In the other direction, if the user interacts with the ball to modify its position, then, in an automatic way, the value of the variables of the 22 CHAPTER 1. A FIRST CONTACT model associated to these properties will also be modified. In this simple manner, associating variables of the model to properties of the view elements, we can easily build complete dynamic, interactive simulations. Finally, the way the element Ball must react when it is interacted by the user has been established associating to the (so called) action properties of the element expressions that use variables of the model, or methods of it. We can see in Figure 1.14 that the action produced “On Press”ing the element is associated to the predefined method pause(). (This method just pauses the simulation). The action produced “On Drag”ing the element evaluates the sentence y = 0.0;, which keeps the spring in a horizontal position. Finally, the action produced “On Release” of the element evaluates the code: 7 vx = 0.0; _resetView(); We can use for this code any valid Java expression that involves the variables of the model and even any Java method that we may have defined in the “Custom” subpanel of the model. According to the code written in these three action properties, the sequence of the interaction with the element Ball is the following: 1. When the ball is first clicked, the simulation is paused (if it was playing). This is necessary because it is rather uncomfortable to try to reposition a ball while it is moving because of the effect of the evolution of the system. 2. When moving the ball, we force the y coordinate to keep the value 0. This way, even if the user inadvertently tries to move the ball in the vertical direction, the spring remains horizontal. 3. When the ball is finally released, the system will freeze the ball (vx = 0), clean the graphs ( resetView()) and leave the system ready to start a new simulation run with the new initial conditions. You can also inspect the properties of the other elements to become familiar with some of the different types of elements that Ejs offers and their properties. Of particular interest are the elements called Displacement and Velocity. 7 Code in action properties can span more than one line. 
When this happens, the field adopts a different background color. If the code is not clearly visible, click on the first button to its right, , and an editor window will display it more clearly. 1.5. MODIFYING THE SIMULATION 1.5 23 Modifying the simulation We have already used one of the big peculiarities of Easy Java Simulations, the fact that it lets us see how the simulation works. In this section we will use the other important characteristic of it, the possibility of modifying the simulation to adapt it to our preferences or to include new possibilities. Very often, when we explore a program created by another person, right after learning the possibilities that the program offers, we come to the universal question: “What if. . . ?” (or a variant of this one: “Is it possible to. . . ?”). Unless the author has taken into account all different possibilities (which is unlikely), the answer may cause us a small disappointment. This doesn’t need to be the case with Easy Java Simulations. The tool not only lets us see the interiors of the author’s creation, but also allows us to contribute to it adapting or enriching the model and the view according to our needs. We will illustrate this process by extending our simulation in the following way: 1. We will modify the model so that it includes both friction and an external force, turning our originally free spring into a forced, damped oscillator. The resulting second order differential equation is: ẍ = − k b 1 (x − l) − ẋ + fe (t) m m m (1.2) where b is the so-called coefficient of dynamic friction and fe (t) is an external force applied to the system. We’ll use in particular an oscillatory force of the form fe (t) = A sin(ω t), that allows us to explore interesting phenomena such as resonance. 2. We will modify the view so that it also displays a diagram of the phase space; that is, of velocity versus displacement. 3. We will compute and plot the potential and kinetic energies of the system and their sum. 1.5.1 Modifying the model We begin by modifying the model according to our new needs. Please select again, in Ejs’ interface, the panel for the model and, inside it, the subpanel for variables. We are going to add some new variables to our model. Even though we could simply add the new variables to the existing table, it is sometimes preferable, for clarity, to keep several separated tables of variables. This has only organizational purposes; actually, all variables can be used in exactly the same way, independently of the page in which they have been defined. 24 CHAPTER 1. A FIRST CONTACT Adding new variables Click with the right button on the upper tab of the page (where the label “Simple spring” is) and a popup menu will appear, as Figure 1.15 shows. Figure 1.15. Popup menu for the page of variables. In this menu, select the “Add a new page” option (the one highlighted in the figure) and the program will add a new page with an empty table for variables. (Ejs will first prompt you for a name for the new page, you can choose, for instance, “Advanced spring”.) In this new table, initially empty, click on the first (and only) cell of the column “Name” and type b. In the next column, “Value”, type 0.1. This will be our coefficient of dynamic friction. You can write a comment about it in the text field situated at the lower part of the table. The table will look as in Figure 1.16. In a similar way, please create new variables called amplitude and frequency, with values 0.2 and 2.0, respectively. 
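Before adding the remaining variables, it may help to see how these new parameters enter equation (1.2). The following small method, which uses the model variables m, k, l, b, amplitude and frequency, is written here only as an illustration of the physics; it is not something you need to type into Ejs (in the next steps the same expression will be split between the ODE editor and a custom method):

public double acceleration (double x, double vx, double t) {
  double springForce   = -k*(x-l);                          // Hooke's law
  double frictionForce = -b*vx;                             // dynamic friction
  double externalForce = amplitude*Math.sin(frequency*t);   // f_e(t) = A sin(w t)
  return (springForce + frictionForce + externalForce)/m;   // Newton's Second Law
}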
Finally, create three new variables called potentialEnergy, kineticEnergy and totalEnergy, for the potential and kinetic energy and their sum, respectively. Do not assign to these variables any initial value (we’ll soon see why this is not necessary). You can comment all these variables, if you want to do so. The final result can be seen in Figure 1.17. 8 8 Do not worry about the extra blank row at the end of the table, Ejs ignores it. But it you want, you can delete this empty row by right-clicking on it and selecting the option “Remove this variable” from the popup menu that appears. 1.5. MODIFYING THE SIMULATION 25 Figure 1.16. Adding the coefficient of dynamic friction b to our model. Modifying the equations of the evolution Let’s go now to the evolution page and edit the right-hand side of the second differential equation so that it reads now: -k/m*(x-l) - b*vx/m + externalForce(t)/m The result is shown in Figure 1.18. Notice that we are using in this expression the method externalForce(), that is not yet defined. We’ll need to create it as a custom method, when we get to this panel. Adding the computation of the energy But, before this, select the subpanel for the constraints. Click on the message “Click to create a new page” and give the new page the name “Energies”. In the editor that appears, write the following code: potentialEnergy = 0.5*k*(x-l)*(x-l); kineticEnergy = 0.5*m*vx*vx; totalEnergy = potentialEnergy + kineticEnergy; Please, play special attention and copy this code exactly as it appears above! (The multiplication character “*” and the semicolon “;” are particularly easy to forget.) Compilers of computer languages are inflexible concerning syntax errors. Any small mistake when typing the code can result in the compiler issuing a (sometimes very 26 CHAPTER 1. A FIRST CONTACT Figure 1.17. The complete table for the new variables. long) sequence of error messages. The appendices (which are distributed separately from this manual, visit) include some instructions to understand these messages but, for the moment, we will assume that you copied this code correctly. The result (once we add a short comment to this page) is shown in Figure 1.19. This is the first time that we use constraints. You may remember that we defined the constraints as “fixed relationships among variables”. In our model, the potential and kinetic energy respond to the expressions Ep = 21 kx2 and Ek = 12 mvx 2 . The code we wrote above is the translation of these equations to Java code. Now comes a subtle point. One of the questions most frequently asked by new users of Ejs is: “why don’t we write these equations into a page of the evolution, instead of one of constraints?” The reason is that this relationship among variables must always hold, even if the evolution is not running. It could very well happen that the simulation is paused and the user interacts with the simulation to reposition the ball. If we write the code to compute the energy in the evolution, the values for the energy will not be properly updated, since the evolution is not evaluated (because the simulation is not playing). To prevent this situation we need to write these equations in a constraint page. Constraint pages are always automatically executed after the initialization (at the beginning of the simulation), after every step of the evolution (when the simulation is playing) and each time the user interacts with the simulation. Therefore, any relationship among variables that we code in here, will always be verified. 1.5. 
MODIFYING THE SIMULATION 27 Figure 1.18. The new differential equations of the model. Coding the external force Recall finally that, to complete our model, we still need to specify the expression for the external force. 9 To this end, go to the “Custom” subpanel and click on it to create a new page called “External force”. The new page will be created with the code necessary to create a standard method. Edit this code to read as follows (see also Figure 1.20): public double externalForce (double time) { return amplitude * Math.sin(frequency*time); } This code illustrates the syntax for a method that returns a (double precision) numeric value. (The expression Math.sin corresponds to a call to the sine function of the mathematical library of Java.) Notice that this method accepts a parameter, time, that can be used as another variable inside the body of the method (time is then called a local variable), and that it uses the (global) variables amplitude and frequency. The use of the parameter time is important; it would be a mistake to define the method as follows: public double externalForce () { return amplitude * Math.sin(frequency*t); } 9 Even when we could have written the expression for this force directly in the corresponding cell of the differential equation, it will be simpler for our students or users to interpret the model if we write it separately. 28 CHAPTER 1. A FIRST CONTACT Figure 1.19. Constraint equations for the computation of the energy. which uses directly the global variable t. This is incorrect because t is a variable that changes during the process of solution of the differential equation (while amplitude and frequency do not), and this affects the precision of the numerical algorithm for the solution. Thus, as a general rule, we must state that: If a global variable can change during the resolution of a differential equation, this variable must be included as one of the parameters sent to any method called from the cells of the equation editor. This is exactly what we did. Our new model is ready. You will notice in Figure 1.20 that the “Custom” panel of the model includes some extra controls at its bottom. Although Ejs tries to provide all the programming tools that a typical user may need, experienced Java programmers may always want to go a step further and, for example, reuse any previous Java code or libraries that they may have created previously. These controls allow just this, as will be explained in detail in Section 2.7. 1.5.2 Modifying the view Lets us now enrich the visualization of our phenomenon with the graph of the phase space, velocity versus displacement. For this, select the panel for the view and, in the right-hand frame, from the upper set of elements called “Containers”, click on the icon for “PlottingPanel”, . 10 When you click on it, the icon will be highlighted 10 If you doubt about which icon is the right one, place the cursor over it and wait a second until a tip appears revealing the type of the element selected and displaying a brief description of it. 1.5. MODIFYING THE SIMULATION 29 Figure 1.20. Custom method with the expression for the external force. with a colored background and the cursor will change to a magic wand, . With this wand, go to the tree of elements and click on the element called Dialog. You are then asking Ejs to create an element of type “PlottingPanel” inside the window Dialog. This window is prepared to accept this type of “children”. 
When you do this (give the new element the name PhaseSpacePanel), a new panel with axes will appear inside the window with the plots, sharing the available space with the previous plotting panel. Get rid of the magic wand by clicking with it on any blank space of the tree of elements’ frame. Then, since both plotting panels (the old and the new one) look too small, double-click on the tree element called Dialog to bring-in its property editor and modify the one called “Size”, changing it from the current value of 400,200 to the value 400,400 (it may also appear as ‘‘400,400’’), thus doubling its height. The table of properties for this dialog window will look as in Figure 1.21. Figure 1.21. Properties for the dialog window. Edit now the properties of the new element PhaseSpacePanel so that they appear as in Figure 1.22. 30 CHAPTER 1. A FIRST CONTACT Figure 1.22. Properties for the element PhaseSpacePanel. Notice that, when you type on the text field of a property to modify it, background color of the field turns yellow. This is Ejs’ way of telling you that value won’t be accepted until you hit the “return” key. In this very moment, background will turn white again. For this reason, never leave any field with yellow background! the the the the An exception to this rule are the fields of action properties. These fields are special: any edition on them is automatically accepted, and they only change color (to a sort of light green) to warn you that they span more than one line. These editors can obviously be left colored. The window with the plots will finally look as in Figure 1.23. Adding the graph of velocity versus displacement The plotting panel we added is only the container for the graph we want to add. The graph is visualized using an element of the type called “Trace”, with icon , which you can find on the subpanel of drawing elements (or “Drawables”) called “Basic”. Click on this icon to select it and to get the magic wand again, and use it to create an element of this type, named PhaseSpace, inside PhaseSpacePanel. Once you do this, edit the properties of the new element and give to its properties “X” and “Y” the values x-l (recall l is the length of the spring at rest) and vx, respectively. 1.5. MODIFYING THE SIMULATION 31 Figure 1.23. Dialog window with two plotting panels. This is all we need to do to display the phase space graph. As this example shows, the value of a property can also be specified using an expression in which one or more variables of the model appear. In this case, however, the bidirectionality of the connection between model variables and view properties is lost, it only works in the obvious direction. Plotting the energy The process we carried out for the phase space graph is very similar to what we need to do to display a graph of the three energies we computed in the model. Create another plotting panel, called EnergyPanel, in the Dialog window and increase again the vertical size of this window (changing the “Size” property to 400,600). Edit the new panel and change the property “Title” to Evolution of the energy, and the properties “Title X” and “Title Y” to Time and Energy, respectively. Leave the other properties as they appear originally. Use the magic wand to create three elements of type “Trace” called Potential, Kinetic and Total, inside EnergyPanel. 
Edit the properties of these three traces associating the property “X” to the variable t in all three cases, and the property “Y” to the variables potentialEnergy, kineticEnergy and totalEnergy, respectively. Additionally, for all three traces, type the fixed value 300 in the field for the property called “Points” (this value indicates that the traces should only display the last 300 points of data they have received) and choose a different value for the 32 CHAPTER 1. A FIRST CONTACT property called “Line Color” of each of the traces, so that you can distinguish the graphs. To obtain a panel with the available colors, click on the icon immediately to the right of the text field for this property. We chose (for no particular reason) the color red for the potential energy, green for the kinetic energy and blue for the total energy. As an example, Figure 1.24 shows the panel of properties for the element Potential. Figure 1.24. Properties of the trace Potential. Displaying the new parameters Finally, we will add new elements of the type “NumberField” so that the user can visualize and edit the values of the variables b, amplitude and frequency. They will be located below those already existing for the mass and the elastic constant of the spring. You can find this type of element, with icon , in the “Display” tab of the “Basic” central panel of the right frame. Select this icon and, with the magic wand, click on the element of the tree called LowerPanel to create three new elements with names identical to the variables we want to display, that is, b, amplitude and frequency. This coincidence on names doesn’t cause any problem. Names of variables in the model must be unique, but they do not conflict with similar names for the elements of the view. We must finally edit the properties of these elements so that they fulfill their work. We’ll show you how to do this for the first of them. Bring the panel of properties for the new element b into view and edit the properties as in Figure 1.25. The association of the property “Variable” with the variable of the model b tells 1.5. MODIFYING THE SIMULATION 33 Figure 1.25. Properties for the number field for b. the element that the value displayed or edited in this field is that of b. The property called “Format”, to which we assigned the value b = 0.00, has a special meaning. It doesn’t indicate that b should take the value of 0, but is interpreted by the element as an instruction to display the value of b with the prefix “b = ” and with two decimal digits (hence the value 0.00). That’s all, do the same for the other two number fields, associating the same properties to amplitude and amp = 0.00, and to frequency and freq = 0.00, respectively. To facilitate the association of properties with variables of the model, you can use the icon that appears to the right of the text field for the corresponding property. If you click on this icon, a list of all the variables of the model that can be associated to this property will be offered to you. You can then comfortably select the one you want with the mouse. 1.5.3 An improved laboratory Our simulation is now ready! This is a perfect moment to save our changes. 11 Because we don’t want to overwrite the original simulation (we may be interested in preserving it), we will save this new one in a different file. Click then on the “Save as” icon of Ejs’ taskbar. A dialog window will appear as shown in Figure 1.26, that will allow you to chose the name of the file for the new simulation. 
Type the name MySpringAdvanced.xml in the field called “File Name” and click the “Save” button of this dialog window. Ejs will then save the new file and will notify you in the message area. We can now run the simulation. Click the “Run” icon and you will obtain results similar to those of Figure 1.27. We have thus created a complete laboratory to explore the behavior of simple harmonic motion plus damping and external forces. You can use this laboratory with 11 Actually, we should have done this earlier. If you are used to work with computers, you’ll know that they have the bad habit of stopping (hanging is the slang word for this) with no apparent reason and with no previous warning. We therefore recommend you to learn to save your work from time to time. 34 CHAPTER 1. A FIRST CONTACT Figure 1.26. Saving our simulation. your students to explore interesting phenomena such as resonance (try a value for the p frequency close to k/m). Besides the file MySpringAdvanced.xml that you just created, you will find this simulation in the examples/Manual/FirstContact directory, with the name AdvancedSpring.xml. Modifying the introduction You might want to modify now the introduction for the simulation. Select the panel for the “Introduction” and add a new page or edit the existing ones. Use for this the popup menu for each page and select the option “Edit/View this page”. A simple visual editor for HTML pages will then appear to help you edit the HTML code. See Figure 1.28. HTML pages are text pages that include special instructions, or tags, that allow Web browsers to give a nice format to the text as well as to include several types of multimedia elements. The editor allows you to work in a WYSIWYG (what you see is what you get) mode. However, if you know HTML already and want to work directly on the code (which, sometimes, is preferable), select the option that appears highlighted in Figure 1.28. You can write as many introduction pages as you want. As we saw earlier, each of the pages will turn into a link in the main HTML page that Ejs generates for the simulation. A second possibility is to delay until the last moment the edition of the introduction and, once the HTML pages have been generated by Ejs, copy the simulation and their pages to a different directory (for instance, that of the Web server from which they will be published) and modify them using your favorite HTML editor. If this is the option of your choice, we insist that you do this work in a directory other that Simulations. Otherwise, if you ever run your simulation 1.6. A GLOBAL VISION 35 Figure 1.27. The improved simulation in action. again from within Ejs, you will loose whatever edition you have made, because Ejs will overwrite the edited pages! 1.6 A global vision We end this chapter taking an overview of what we did in it. We first learned to install and run Easy Java Simulations. Next, we loaded one of the examples included in the examples directory and run it. We checked that, besides running the simulation as an independent Java application, Ejs also generates a complete set of Web pages, ready to be published, that contain the simulation in form of a Java applet. We then learned to distinguish and to explore the different parts that make a simulation, and, finally, we even modified these parts to improve the simulation, adding new functionality. The three parts of a simulation are the introduction, the model and the view. 
Each of these parts has its function and its own panel in Ejs’ interface with a particular look and feel. 36 CHAPTER 1. A FIRST CONTACT Figure 1.28. Improving the introduction. The introduction contains an editor for HTML code needed to create the multimedia narrative that wraps the simulation. We can create the pages we need with this editor, either working in WYSIWYG mode or writing directly the HTML code. The model contains a series of subpanels needed for the specification of the different parts of it: definition of variables, initialization, evolution of the system, constraints or relationships among variables, and custom methods. Each panel provides edition tools that facilitate the job of creation. Finally, the view contains a set of predefined elements that can be used as individual building blocks to construct a structure in form of a tree for the interface of our simulation. These elements, that are added through a procedure of click and create (our magic wand), have in turn a set of properties that indicate how each element looks and behaves, and that, when associated to variables from the model (or expressions that use them), turn the simulation into a true dynamic and interactive visualization of the phenomenon under study. Well, and this is all! Though the chapter is a bit long because we accompanied the description with many details and instructions, the process can be summarized in the few paragraphs above. Obviously, learning to manipulate with fluency the interface of Easy Java Simulations requires a bit of practice, as well as the familiarization with all the possibilities that exist. In particular, with respect to the creation of the view, you’ll need to learn the many several types of elements offered and what each of them can do for you. The rest of this manual will give more information about all these possibilities. Also, the examples provided with Ejs are a great source of information. If you are anxious to start exploring our simulations and create your own ones, and if you 1.6. A GLOBAL VISION 37 think you got a sufficiently general overview of the way Ejs works (this was precisely the goal of this chapter), you can go directly to explore some examples, returning later to the other chapters of this manual when you need to improve your knowledge about Ejs. Exercises Exercise 1.1. Modify the example of the spring to remove the restriction that the motion is only horizontal. Introduce also a second external force along the Y axis of the form A2 cos(ω2 t + φ) (φ is the phase difference between both external oscillatory forces) and explore the different resulting motions. Exercise 1.2. Include new elements of the type “NumberField” for the values of the velocities in X and Y for the simulation of the previous exercise. Modify the code for the “On Release” action of the ball to allow initial conditions that do not start from rest. You’ll find this exercise solved in the examples/Manual/FirstContact directory with the name Spring2D.xml. CHAPTER 2 Creating models c 2005 by Francisco Esquembre, September 2005 In this chapter we cover in detail the different parts that make the model of a simulation and explain the support that Easy Java Simulations offers for their implementation. This chapter can be considered as a complete reference for the use of the panel of Ejs for the description of the model. 2.1 Definition of a model Let’s start with a bit of theory. 
We create the model of a phenomenon when we define its relevant magnitudes, set their values at a given initial instant of time, and establish the rules that govern how these magnitudes change. We use the term “magnitude” here to refer either to state variables (the variables that describe the state of the phenomenon), or to parameters, or to any other input or output quantities of the model. When we work with our simulation, we’ll always refer to these magnitudes as variables. Actually, a variable can hold a value that changes along the execution of the simulation or one that doesn’t. In other words, it can represent a constant or variable magnitude, but this won’t prevent us from using this terminology. 1 1 It is possible that a magnitude considered constant at one initial implementation, changes later, either because we change the model and the role that this variable plays in it, or because we allow 39 40 CHAPTER 2. CREATING MODELS This way, the state of a model is completely determined by the value of its variables at a given instant of time. This state can change because of two reasons: • The internal dynamics of the simulation, which we will call evolution. • The influence of external agents. In our case (since we are using simulations that do not read data from the real world, for instance), because of the interaction of the user with the simulation, to changes the value of one or more of the variables of the model. Changes directly caused by any of these reasons may cause other changes in an indirect way. This is actually the case when there exist variables in the model whose values explicitly depend on the values of those who where modified. In this situation, we say that there exist constraints among the variables affected. All these changes are ruled by equations that describe the laws under which the evolution takes place, or that state the interdependence of the variables. These equations must be made explicit by using mathematical formulas or logical computer algorithms. So, finally, in order to specify the model of a simulation, we need to establish: • the variables of the model, • their initial state, • the evolution equations, and • the constraint equations. 2.1.1 Variables and the initial state of a model First of all, we need to declare the variables of our model. This is a crucial process from which a good or a bad simulation can result: we need to choose the right reference system, the magnitudes that allow us to write simpler formulas,. . . In order to illustrate what follows, we will assume that the variables of our model are called x1 , x2 , . . . , xn . Obviously, if we want our model to be more easily understood, it is customary to give meaningful names to the variables, such as position, velocity, number of individuals of a species, concentration of a chemical element, etc., which are in turn simplified using easy to identify shorter acronyms. However, since our exposition is general, we will assume this naming mechanism with subscripts. the user to interact with the simulation to modify the magnitude. Thus, the term variable turns out finally to be appropriate. 41 2.1. DEFINITION OF A MODEL In the second place, we need to set the initial state by giving the right value to each of the variables. This can be done, in the majority of cases, by simply typing the desired value. In occasions, however, the initial value of some variables can only be obtained by making some previous computations. We call this process (by either of both methods) initializing the model. 
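As a small illustration (anticipating the free-fall example used later in this chapter), the variables and initial state of a very simple model could be written in plain Java as follows. In Ejs we will not type these declarations by hand; they correspond to entries of the table of variables described in Section 2.3:

double time = 0.0;     // the independent variable
double gravity = 9.8;  // a parameter: it remains constant during the simulation
double y  = 0.8;       // a state variable: the height of the falling body
double vy = 0.0;       // a state variable: its vertical velocity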
2.1.2 Evolution equations Continuing with the terminology introduced above, the system can evolve in an autonomous way from the current state, x1 , x2 , . . . , xn , to a new state x∗1 , x∗2 , . . . , x∗n , thus simulating the passing of time (which, by the way, can or cannot be one of our variables—although it frequently is). We call the equations that rule this transition evolution equations, which can be written using one or more expressions of the form x∗i = fi (x1 , x2 , . . . , xn ) (2.1) Sometimes, these laws have a direct mathematical formulation, as in the case of the so-called discrete systems. In other situations, they derive from the discretization of continuous models which are described by differential equations. In many cases, however, the formulation (2.1) (so typically mathematical) requires of a combination of numerical techniques and of logical algorithms that conforms an elaborated computer algorithm. But, as a conclusion, we will state that simulating the evolution in time of the model consists in computing, from the current state x1 , x2 , . . . , xn , the new values x∗1 , x∗2 , . . . , x∗n , take these as the new state of the model, and iterate this process indefinitely while the simulation runs. Thus, the third step that we need consists in writing the evolution equations. 2.1.3 Constraint equations The last step consists in writing the constraint equations. As we mentioned above, changes in the variables directly caused by evolution equations can have indirect effects on other variables. This indirect changes are determined by what we will call constraint equations, which can be made explicit by one or more of expressions of the form xi = gi (x1 , x2 , . . . , xi−1 , xi+1 , . . . , xn ) (2.2) where, as you can see, a variable should only appear at one side of the equation. These expressions indicate that, if one or several of the variables on the righthand side change, then equation (2.2) must be evaluated so that the variable on 42 CHAPTER 2. CREATING MODELS the left is consequently modified. Similarly to evolution equations, this theoretical formulation can require in practice the need to write a sophisticated algorithm, partially logical, partially numerical. It is a common temptation to consider constraint equations as part of the evolution. That is, if the changes are caused primarily by the evolution,. . . why not include these relationships as part of it? That would be, however, a bad practice. The reason is that there is a second source for changes, namely the direct interaction of the user with the simulation. Indeed, if constraints interrelationships among variables must always be valid, then they should also hold when the user changes any of these variables, even if the simulation is paused (that is, if the evolution is not taking place). For this, it is convenient to identify clearly and to write separately both types of equations. This will allow the computer to know which equations it must evaluate in each case (see Subsection 2.1.4 below). A useful (though perhaps not always valid) criterion to distinguish both types of equations is to examine the expression we use to compute the value of a variable at a given instant of time and, if this value depends on the current value of the same variable, then this is most likely an evolution equation. If, on the contrary, the value of the variable can be computed exclusively from the value of other variables, then it is a constraint. 
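The spring of Chapter 1 illustrates this criterion well. Written as plain Java (a sketch only; in Ejs this code is distributed between the evolution and constraints panels), an evolution step computes the new value of a variable from its current value, whereas a constraint computes a variable exclusively from the others:

// Evolution: the new state depends on the current state
// (shown here as a simple Euler step of size dt)
x  = x + vx*dt;
vx = vx - k/m*(x-l)*dt;
t  = t + dt;

// Constraints: these quantities depend only on the current values of other variables
potentialEnergy = 0.5*k*(x-l)*(x-l);
kineticEnergy   = 0.5*m*vx*vx;
totalEnergy     = potentialEnergy + kineticEnergy;

Whatever modifies x and vx (the evolution or the user), the energies must be recomputed afterwards; this is why they belong among the constraints.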
2.1.4 Running the model Once we complete the four steps described above, the model is precisely defined. If we now run the simulation, the following events will take place: 2 1. The variables are created and their values set according to what the initialization states. 2. The constraint equations are evaluated, since the initial value of some variables may depend on the initial values of others. In this moment, the model is found to be in its initial state, and the simulation waits until an evolution step takes place or the user interacts with it. 3. In the first case, the evolution equations are evaluated and, immediately after, the constraints. A new state of the model at a new instant of time is then reached. 4. In the second case, when the user changes a variable, constraint equations are evaluated and we obtain a new state of the model at the same instant of time. 2 See, however, the more complete description, which includes the view, in Section 3.3. 2.2. INTERFACE FOR THE MODEL IN EASY JAVA SIMULATIONS 43 A coherent model should take care that there exist no contradictions among their different parts. In particular, the values given to a variable during the initialization or the evolution should not be different to what it results from any constraint that affects this variable. However, if this actually happens, Ejs will solve the contradiction by giving always priority to (that is, evaluating in the last place) the constraint. Because of the multitasking form in which simulations are run in Ejs, it is possible that the user interacts with the simulation right when it is in the middle of one evolution step. Although this doesn’t necessarily need to be bad in itself, it can occasionally produce undesired effects, such as the change of the value of a parameter in the middle of an algorithm. In these situations, it is preferable to ask the user to pause the simulation (using a control that allows him/her to do this) before interacting with it. This is the scheme, simple but effective, that Ejs uses for the creation of the model of our simulations. 2.2 Interface for the model in Easy Java Simulations Let us now describe the structure that Ejs provides for this task. The panel in Ejs dedicated to the creation of the model displays five radio buttons (see Figure 2.1), one for each of the four steps described in the previous section, plus an extra one, labeled “Custom”, whose use will be explained in Section 2.7. Figure 2.1. Panel of Easy Java Simulations for the model. Each radio button allows us to visualize the corresponding subpanel in the central working area of Ejs. The aspect of these subpanels is, in principle, rather similar, displaying a message that invites us to create a new page. If you click on this 44 CHAPTER 2. CREATING MODELS message, the system will effectively create a new page, asking first for a name for it, as Figure 2.2 shows. Although Ejs always proposes a generic name for the new pages, we recommend that you use descriptive names for them. You can type any name you want, with no restrictions. The only recommendation is that these names are different and not too long. Figure 2.2. Asking for the name of a new page. Once you type the new name and click “Ok”, a new page will appear. The aspect of this page depends on the subpanel you are working on. Figure 2.3 shows a partial view of a new page for variables. Figure 2.3. Partial view of a new page for variables. Notice that the new page has a header tab with its name on it. 
This system of tabs is very convenient to organize the pages when there are more than one in a panel. The reason to create several pages in the same panel is just to group variables or algorithms with similar purposes or meaning in separated pages, thus helping clarify the model. Ejs actually uses all these pages in the same way, processing them from left to right. Each page offers an option popup menu that appears right-clicking the mouse on its header tab. See Figure 2.4. You can use the options in this menu to create a new empty page for this panel, to copy, rename or remove the current one, or to change the order of the pages. There are finally two other options that require a small explanation. The option to enable and disable the page can be used when we want Ejs not to use a page that we have created, but without removing it. This can be useful when we are testing different algorithms for the same model and we want to compare them by enabling and disabling them in turns. On the other hand, the option to show and hide a page is used (very rarely, that’s the truth) for pedagogical reasons, when the author wants to hide pages with 2.3. DECLARATION OF VARIABLES 45 Figure 2.4. Popup menu for a page. difficult to understand algorithms that (s)he doesn’t want to expose the students to, at least in a first approach. The “Options” icon (see also Subsection 4.3) in Ejs’s taskbar offers the possibility of making completely invisible the pages defined as hidden. Later on, when the students are ready to understand the pages, the author can ask them to use this option to make them visible again. When any of these two options is used, the name on the header tab will append a letter between parentheses indicating the status of the page, (D) for disabled, (H) for hidden. Disabled pages are ignored by Ejs when it generates the simulation. Hidden pages, on the contrary, are perfectly valid. This way of working is common to all subpanels, the rest of the chapter describes each of these subpanels in more detail. 2.3 Declaration of variables Let’s start with the first step of the creation of a model: the declaration of variables. Variables are easy to define, we just need to give them a valid name, specify the type of each of them and, for the case of arrays (vectors or matrices), also specify their dimension. Optionally, we can also give an initial value to each variable. Let us describe each of these points in detail. 2.3.1 Types of variables Even when in mathematical formulas we frequently use the so-called real numbers (integers, rationals and irrationals) 3 without distinguishing among them, when we write a computer program we do make distinctions among different types of variables, 3 In some problems we may use complex numbers. However, Java doesn’t support them natively, hence any computation with complex numbers must be done accessing explicitly the corresponding real and imaginary parts of the numbers. 46 CHAPTER 2. CREATING MODELS depending on the use we plan to do of them and on the computer memory required to hold their possible values. Thus, for instance, variables that will only hold integer values require considerably less memory and the computations that involve only integer numbers perform faster because modern computers implement optimized routines for integer arithmetic. Java language implements the following basic types of variables: • boolean, for variables of boolean type, that is, those which can hold only two values: true or false. • byte, short, int, and long, for integer values. 
• float and double, for the so-called floating-point numbers (what we would call real numbers). • char and String, for characters and texts, respectively. As an object-oriented programming language, Java introduces also a new type called Object as the basis of a whole new world of advanced constructions called classes. Nevertheless, except in the occasions in which it is absolutely essential to do the computations using the minimal possible type of variable so that to optimize memory usage, we can choose to work only with the standard type for each category. This will be our case, and Ejs will only make use of variables of types boolean, int, double, String, and (to open the world of object-orientation) Object. Although Ejs was not designed as a tool for professional programmers, it leaves a door open for them to use directly object-oriented constructions by accepting also variables of type Object. These can also be used for simple, still useful, things, such as to create colors (objects of the class java.awt.Color) that change along with the state of the model. 2.3.2 Creation of variables Each new page of the panel labeled “Variables” contains a copy of the editor for variables, that takes the form of a table. See Figure 2.5. To add a variable, we write its name in the first column of the edition row and select its type using the selector of the third column. If we want to give the variable an initial value, we only need to type it in the corresponding column. On the lower part of the editor there exists also a comment field in which we can type a short description of the role the variable plays in the model. This can be useful to facilitate the comprehension of the model. 2.3. DECLARATION OF VARIABLES 47 Figure 2.5. Table for the edition of variables (with annotations). If the variable is an array, 4 we must indicate its dimension as well. Ejs can work both with simple and multidimensional variables. If the field of the “Dimension” column is left empty, then a simple variable will be created, that is, just one variable of the given type. If, on the contrary, we type in this field one or more integer numbers (it’s important that these numbers are integer!) between square brackets, then Ejs will create a set of variables of the given type, where each of the numbers among square brackets indicates a dimension. Hence, for instance, if we type [50] in this field, Ejs will create a unidimensional array (or vector) with 50 coordinates, reserving memory space for 50 individual variables of the type indicated. If we write [10][100], it will create a two-dimensional matrix of 10 rows, each of them with 100 elements. The integer number used to indicate a dimension can be either a constant or a variable of type int that has been previously defined. It is important to notice that, if a dimension of an array is indicated using an integer variable, the array will be created using (for the corresponding dimension) the value that the variable holds at the moment of the initialization of the array. Any later change in the value of the variable will not directly affect the dimension of the matrix. Each time you create a new variable, a new edition row will automatically appear to accept more variables. But we can also insert a variable between two existing ones using the popup menu for each row (different from the popup menu for the page) that appears when you right-click on the corresponding row. The options of this menu can be seen in Figure 2.6. 
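In plain Java terms, the type and “Dimension” fields of the table correspond to declarations like the following (the names are invented for this illustration; in Ejs you never write these declarations yourself):

boolean isVisible;                        // a simple variable of type boolean
int nPoints = 10;                         // a simple variable of type int
double position;                          // a simple variable of type double
double[] heights = new double[50];        // dimension [50]: a vector of 50 doubles
double[][] field = new double[10][100];   // dimension [10][100]: 10 rows of 100 elements each

If a dimension is given by an int variable such as nPoints, the array is created with the value that the variable holds at that moment, exactly as explained above.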
4 We use the term array to refer both to vectors and to matrices of any number of dimensions. 48 CHAPTER 2. CREATING MODELS Figure 2.6. Pop-up menu for each variable. 2.3.3 Initial value for a variable A variable can be initialized writing a constant value or a simple expression in the corresponding field of the “Value” column. By default, Ejs gives each variable a default (zero for numeric variables) value, which we can then edit. This is not mandatory, though, and we can even leave this field empty. In this case, we will need to initialize the variable in the initialization panel (see Section 2.4) or as a result of a constraint. If we provide an initial value for an array, all the elements of the array will be initialized to the same value. If we want to provide a different value to each of the elements of an array, usually a value that is an expression of the index of each element, we must explicitly include the index or indexes in the name of the variable and use them in the expression for the value. Thus, for instance, the definition of the following variables will appear in the editor as in the table of Figure 2.7. • isElastic: boolean, simple, initialized to true. • aText: variable of type String, simple, initialized to “Free fall” (double-quotes are mandatory!). • n: integer, simple, initialized to 10. • time, gravity, x, y, and vy: double precision real variables, simple and initialized to 0, except gravity and y, which are initialized to 9.8 and 0.8, respectively. • posX, posY and velY: unidimensional arrays (vectors) of n (that is, 10) elements of double precision, initialized to equally spaced values from -1 to +1 the first one, and with random values between 0.5 and 1 the second. • vectors: three-dimensional matrix with 10 rows, each with 10 cells and each cell with 2 elements of type double, not initialized. • aColor: An object, simple, initialized to the predefined (by Java) color red. 2.3. DECLARATION OF VARIABLES 49 Figure 2.7. Sample declaration of variables. Notice in particular the expression that uses the function random() from Java mathematical library (see Section 2.7) and the form in which the elements of the array posX are initialized with equally spaced values from -1 to 1, using the ad-hoc variable i. In the expression of the right-hand field for posX, we are using that the first element of an array in Java has the index 0, and the last one, the size of the array minus one. Also, it was necessary for us to write the expression (2.0*i) between parentheses to make sure that Java doesn’t use integer arithmetic to evaluate this expression, which it does when it has the opportunity to, and that would result in values different to what we expect. Recall these two principles and you will save yourself some troubles! With respect to the initial value of aColor, this example illustrates the use we can do of the (literally) hundreds of classes in Java. This manual doesn’t try to teach you Java in all its extension, but this is actually one of the few classes that can be really useful for us, for instance, to make our objects change color dynamically. Once we have declared the variable this way, we can give aColor the value of the color we want using a sentence such as : aColor = new java.awt.Color(255,0,0); where the parameters of this invocation to the class java.awt.Color must be three integer numbers between 0 and 255 that indicate the levels of red, green and blue, respectively, for the color we want. In the case shown, the color created is, again, red. 
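Purely as an illustration of what these table entries amount to, the initializations of the arrays and of aColor above are equivalent to the following plain Java fragment (a sketch: the exact expressions typed in the table are discussed next):

for (int i=0; i<n; i++) {
  posX[i] = -1.0 + (2.0*i)/(n-1);      // n equally spaced values from -1 to +1
  posY[i] = 0.5 + 0.5*Math.random();   // random values between 0.5 and 1
}
aColor = java.awt.Color.red;           // the color red, predefined by Java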
If you want to get semi-transparent colors (which is rarely really necessary and causes the computer to run slower), you can include a fourth parameter indicating the level of transparency. 50 CHAPTER 2. CREATING MODELS 2.3.4 Using the variables When, later on, we want to use a simple variable in a Java expression, we only need to write its name. If we want to use an element of an array, we need to indicate which element we refer to. 5 For this, if the array is unidimensional, we write the name of the array and, between square brackets, an integer number that indicates the position of the element in the array. If the array has more than one dimension, we must specify as many integers between square brackets as needed. Once more, recall that the first element of an array has the index 0, and the last one, the size of the array minus one. Forgetting this (for many, a bit strange) way of numbering, is a frequent source of severe errors during the execution of a simulation. Thus, in the case shown above, trying to use the element posX[10] will cause what is known as an exception, that is, a long complaint of the Java virtual machine. This errors, unfortunately, are not detected during the compilation, which makes them even more of a nuisance. Correct examples of use of the variables defined above are the following: isElastic = false; time = 3.5; posY[0] = 1.0; posY[n-1] = 4.2; vectors[9][9][1]=5.0; while next ones are incorrect: n = 1.5; x[0] = 0.1; posX[10] = 1.0; posY[0][0] = 1.0; vectors[9][9] = 5.0; 2.3.5 Naming conventions Along the process of creating the simulation, we will need to provide a name for several different elements of it: pages for the different panels, variables, custom methods (see Section 2.7) and elements of the view (Chapter 3). Names for pages you create have no restrictions, since they are only used for organizational purposes. You can even use (if there is a good reason for it) the same name for different pages. But the names for the other elements do need to follow some rules in order to avoid conflicts among them (and to help keep clarity, as well). Thus, we will respect the following conventions: 1. The name of each variable or custom method must be unique in the model. The name of view elements must be unique in the view (even when it can coincide with a name used in the model). 5 Java doesn’t natively support matrix algebra. 2.4. INITIALIZING THE MODEL 51 2. Names are constructed by a combination of alphanumerical characters (from “a” to “z”, from “A” to “Z”, and from “0” to “9”) with no limit of length. The first character must be alphabetical. 3. Although not strictly compulsory, the first character of the names of variables and of custom methods of the model will be lowercase. For view elements, on the contrary, the first character will be uppercase. 4. It is recommended to use descriptive names. If we need to write more than one word, there must be no blank spaces among them; words can be joined, for instance, by using uppercase for the first letter of the second and consecutive words. Thus, instead of cm, we will write centerOfMasses. 5. It is forbidden to use names that start with the character “ ”, since Ejs can use some such names for internal tasks. 
Finally, the following words cannot be used because they are reserved words either by Java or by Ejs: 6 Palabras reservadas: boolean, break, byte, case, catch, char, continue, default, do, double, else, float, for, getSimulation, getView, if, import, initialize, instanceof, int, long, Object, package, private, public, reset, return, short, static, step, String, switch, synchronized, throws, try, update, void, while. 2.4 Initializing the model As we have seen in the previous section, we can initialize the variables of the model with constant values or with simple expressions, using the field in the column “Value” of the table of variables. However, we sometimes require initializations which are a bit more complex. For instance, if we want to give elaborated values to the elements of an array, or if we want to do some previous computations. In these situations, we can use the second of the subpanels of the model, “Initialization”. We can create in this panel one or more pages of an editor where we can write the Java code that does these computations. This code will be executed once, when the simulation starts. For instance, we can give values to the elements of the array vectors in the example above using the code shown in Figure 2.8. 7 Although the algorithm we write in these pages can be as sophisticated as the problem requires, the process is rather simple in itself: just add as many pages as you wish and write in them the Java code you need. This code only needs to contain 6 7 This list may grow as new releases of Java introduce new features. This code is not actually very interesting in itself. It is included only for illustrative purposes. 52 CHAPTER 2. CREATING MODELS Figure 2.8. Sample initialization code. the Java expressions and sentences that your algorithm requires, thus avoiding other technicalities of Java (such as method declaration). Recall that, if there are more than one initialization pages, Ejs will process them always from left to right. If you want to change the order in which a given page appears, use the popup menu for it. The editor for the code (the white edition area) has its own popup menu, displayed in Figure 2.9. This menu offers the typical edition options, but also a simple code assistant that will help you introduce standard Java constructions, such as loops and conditional statements. It also offers an option that displays some of the predefined methods of Ejs (see Subsection 2.7.3). Figure 2.9. Popup menu (left) and code assistant (right). You will finally observe that the lower part of the editor includes a text field in which you can write a short comment about the code of the page. Documenting the code, either with comments inside the code,8 or using this text field, is an effort that will be very much appreciated both by your users and by yourself when, after some time, you need to re-read your own code. 8 Lines which start with a double slash, //, are considered comments. 2.5. EVOLUTION EQUATIONS 2.5 53 Evolution equations The interface for this subpanel, that we see in Figure 2.10, is a bit more elaborated. Figure 2.10. Panel for the evolution of the model. 2.5.1 Setting the parameters for the evolution To the left of the panel we see a slider that lets us modify a value called “Frames per second” (“FPS” for short) that is also shown in the field immediately below it. This parameter tells Ejs how many times per second it should execute the evolution equations. 
The value we can see in the image corresponds to the symbolic value “MAX” (for maximum), which tells Ejs that it should run the evolution as fast as it can. However, even when this may seem to you in principle the most desirable thing, in many situations it is not so. Modern computers can be so fast that this may prevent us from appreciating properly the evolution of the phenomenon under study. In these cases, you can use this slider to set a smaller FPS number. The change from the value “MAX” to the one immediately below it (24) is actually qualitatively important. A value of 10 FPS can be enough for simple processes. If accompanied by a value of 0.1 seconds for your time step (if you have such a variable), it will approximately display a simulation running in real time. Finally, the setDelay() predefined method (see Subsection 2.7.3) allows you a finer control of the speed of execution of a simulation. 54 CHAPTER 2. CREATING MODELS Below these controls, and still on the left of the panel, you will see a checkbox labeled “Autoplay”, which appears activated in the image. This tells Ejs that the evolution should start automatically when the simulation is run. You can leave the checkbox selected if this is what you want. In many situations, however, you may prefer that the evolution doesn’t start automatically; for instance, if you prefer that your users can first manipulate the interface of the simulation to select appropriate initial conditions for your model. In these cases, you must click on the “Autoplay” box to deactivate it and, obviously, provide an alternative way of starting the evolution. The easiest way for this is to include a button in your simulation’s view that, when clicked, will invoke the predefined method play() (see Subection 2.7.3 and Chapter 3). 2.5.2 Editors for the evolution Differently to the rest of subpanels in the model, the central area of the evolution subpanel is divided into two parts. The upper one invites us to create a page (for Java code), while the lower one allows us to create a page with an editor for ODEs, that is, ordinary differential equations. 9 This corresponds to the fact that the evolution of a model can be described in two, complementary ways. In principle, the implementation of the evolution equations will simply consist in the transcription of expressions (2.1) to Java code. That would certainly be the case for the so-called discrete models, in which variables change in time steps of previously defined duration. However, many of the processes that we will study are based in continuous models, in which time is considered as a continuous magnitude. In these cases, since the computer is essentially a discrete machine, evolution equations may derive, at least in part, from the numerical resolution (which implies the discretization) of systems of ordinary differential equations, a task that implies writing rather sophisticated code. Even when it is possible that you prefer to write this type of code by yourself (for instance, if your interest is precisely the teaching of this type of numerical methods), Ejs includes the possibility of typing these equations in a specialized editor that will automatically generate, when the simulation is created, the Java code required to implement some of the most popular methods for numerical solution of ordinary differential equations. 
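To make the choice more concrete, here is a minimal sketch of what a plain Java evolution page could contain for the free-fall problem discussed below, if we decided to discretize the equations ourselves with a simple Euler step. It assumes the model declares double variables y, vy, time and gravity, plus an increment dt (the name dt is ours, introduced only for this illustration):

   // One step of a hand-written Euler discretization of free fall.
   // This code is executed once per evolution step (at most FPS times per second).
   y = y + vy*dt;          // advance the position with the current velocity
   vy = vy - gravity*dt;   // advance the velocity with the constant acceleration
   time = time + dt;       // advance the independent variable

The ODE editor described next frees us from writing (and validating) this kind of discretization by hand, and gives access to more accurate algorithms than this simple Euler scheme.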
Hence, when you come to work on the evolution, you need to start choosing which of the two editors you want to use, the one for plain Java code or the one 9 These differential equations are called ordinary to distinguish them from partial differential equations. 2.5. EVOLUTION EQUATIONS 55 specialized in ODE. Obviously, since your model may require using both types of editors, the popup menu for these pages allows you to create as many additional pages of both types as you may need. 2.5.3 Editing ordinary differential equations The editor for plain Java code is identical to the one described in Section 2.4, so we don’t need to talk about it here. Figure 2.11 shows the editor for ordinary differential equations. Figure 2.11. Editor for ordinary differential equations. This editor allows us to introduce systems of explicit, first-order ordinary differential equations. That is, these in which the first derivative of a variable of the model is expressed as an explicit function of itself and the other variables. This apparent limitation still allows us to solve a great deal of differential problems with this editor, in particular because the variables that are differentiated can be either simple variables or vectors, i.e. unidimensional arrays. In both cases, for obvious reasons, the variables need to be of double type. The fact that the differential equations need to be of first order is no restriction in itself, since any system of equations of higher order can be rewritten as another system of first order, introducing some auxiliary variables. The fact that expressions need to be explicit may require, we admit it, to elaborate a bit our equations, or carefully choosing our variables. We will illustrate the process using an example that involves both simple and dimensioned variables. We will be using the variables we declared in Section 2.3. Thus, suppose, to begin with, that we want to solve the second-order differential 56 CHAPTER 2. CREATING MODELS equation: ÿ(t) = −g, (2.3) which corresponds to a body in free-fall, where variable y represents the position in height of the body and g = 9.8 m/s2 is the acceleration due to gravity on the Earth’s surface. 10 Using vy as an auxiliary variable, the equation can be rewritten as the system: ẏ(t) = vy (t) v˙y (t) = −g (2.4) Similarly, suppose that the vector variable posY represents the position in height of a set of bodies that fall freely, so we are faced with the vector system of ODEs (using velY as auxiliary variable): ˙ posY(t) = velY(t) ˙ velY(t) = −g (2.5) where g denotes a vector with all its components equal to g. Declaring an ODE Before typing these equations in Ejs’ editor, we must first select the independent variable for our system. Usually (and also in this example) this variable is the time. Even when we have written time in the equations above as t, in our set of sample variables, the name we chose for it was time. This will be therefore the value that we must type in the field of the editor devoted for the independent variable. We can either directly type this value in the field, or use the icon that appears to the right of it, , that (since the independent variable must always be a simple, double variable) will show the list of all simple variables of type double. In the second place, we must indicate the integration step that we want our editor to use to solve the differential equation. 
This value can either be a constant or a variable of type double, and indicates the increment that the independent variable must experiment, in each step of the evolution, at the end of the process of numerical resolution of the differential equation. Thus, for instance, if we type in this field the value, say, 0.01, in each step of the evolution, Ejs will solve the equation advancing from the current state of the system at time t, to a later state at time t + 0.01. Notice that, as part of the solution of the system, the independent variable time will be augmented by the given increment. For this reason, all the differential equations that use the same independent variable need to be written in the same page. 10 Obviously, this differential equation can be easily solved analytically, but we will ignore this. 2.5. EVOLUTION EQUATIONS 57 Once we have introduced these data, the editor looks as in Figure 2.12. Notice how the variable time appears automatically in the denominator of the differential expression of the first row of the table. Figure 2.12. Introduction of the independent variable and its increment. We now need to edit one row of the table for each of the differential equations of our system. The process for simple (non-dimensioned) variables consists in doubleclicking on the left cell of each row and typing in there the variable that we want to differentiate, and then double-clicking on the right cell of the same row and typing the expression for the corresponding derivative. The editor takes care of displaying the differential expression in a familiar way. Thus, for instance, for equations (2.4), the result must look like in Figure 2.13. Figure 2.13. Differential equations for simple variables. To facilitate the editing process, we can alternatively use the popup menu for this table. If we right-click on a cell, a popup menu will appear that will allow us to select the variables that we want to type, as well as other standard options for table maintenance. The mechanism to introduce the derivative of vectors is very similar, but with one small difference. Since the editor can only really differentiate (for technical reasons) simple variables, we must include the index of the array in the differential expression, both on the left and the right-hand side. We can choose for the index any name that doesn’t conflict with a global variable. The typical name is i, variable that we can consider as local. 11 If we use the popup menu for the cell to choose the variable to differentiate and the one selected is a vector, then Ejs will automatically add this index. The result of the edition of equation (2.5) (together with those 11 A local variable is one that is defined and used only inside a given block of code. 58 CHAPTER 2. CREATING MODELS already edited for (2.4)) are displayed in Figure 2.14. This concludes the definition of the differential equations for our example. Figure 2.14. Differential equations for simple variables and for vectors. Using custom methods in an ODE In this example, the expressions on the right-hand side of the equations were rather simple. However, it could very well happen that these expression are longer to write or that they require a more complex algorithm. In these cases, it is more appropriated to define a custom Java method (see Section 2.7) that makes the computations separately and invoke this method from the right-hand side cell of our equation. 
When we choose to do this, it is very important that the methods that appear in these expressions are self-contained. With this we mean that, if the expression depends on one or more of the variables that are being differentiated, or on the independent variable, then all of them must appear as parameters of the invoked method. The reason for this is that (almost all) the algorithms of numerical integration used to solve the equations compute intermediate states of the system, in order to improve their precision. Including all the variables involved as parameters allows them to do these computations correctly. For instance, if we wanted to do this for our equation (2.4), the editor should look as in Figure 2.15, where we should previously declare, in the “Custom” subpanel, the following methods: 12 double f1 (double t, double pos, double vel) { return vel; } double f2 (double t, double pos, double vel) { return -gravity; } 12 The simplicity of these equations would allow us to declare f1 and f2 with less parameters. However, it is a good practice to get used to include as parameters all the variables that could possibly appear in the expressions, in case we change later the dynamics of the problem. 2.5. EVOLUTION EQUATIONS 59 Figure 2.15. Differential equations using custom methods. Although we will see in detail the generic syntax of custom methods in Section 2.7, it is important to underline here the following peculiarities: • It is a good programming practice to use for the parameters names different to those of the global variables, thus stressing that their values (at the intermediate states computed by the solver) can be different. • The method can still use for its algorithms any other global variable, as long as this variable doesn’t appear differentiated in this system of differential equations. • These methods don’t need to be declared public (see Section 2.7), unless they will also be used by the view (which is actually rather unusual). For the case of dimensioned variables it is very usual that the index is also included as a parameter of the method invoked. This is the case when the expression in the right cell depends on the element of the matrix that is being differentiated. Choosing the numerical algorithm Immediately below the table that defines the differential equations, we can see a selector that allows us to choose the numerical algorithm that we want to use to solve our equations. The algorithms offered are currently: 13 Euler This is a method of order 1, and therefore of very low precision. This method is included only for pedagogical reasons, since it is easy to understand by students. Euler–Richardson This is a method or order 2 with a ratio speed/precision appropriated for simple problems. We typically recommend to start always with this method and, if you observe that the results lack precision, use a more powerful method (that also requires more computing time). 13 We may add new algorithms in the future. 60 CHAPTER 2. CREATING MODELS Runge–Kutta This is the clasical Runge-Kutta method of order 4. A very popular method with more than reasonable precision. Runge–Kutta–Fehlberg This is a method with adaptive step size. It consists in a sophistication of the previous one that automatically adjusts the integration step to obtain an error smaller than a predefined value called tolerance. If you select this method, you’ll need to indicate the tolerance in the field that will be enabled for this purpose. 
To the right of the tolerance field you will see the elements of the interface that help us include events for our differential equations. Events are discussed in the next subsection. Commenting the page Notice, finally, that each page of equations offers a text field where you can write a short comment for it. 2.5.4 Events of a differential equation Sometimes, when we are implementing the evolution of our simulation through the solution of a system of differential equations, we want the computer to detect that a given condition which depends on the variables of the system has taken place, allowing us then to make some corrections to adjust the simulation. For instance, suppose that the falling body that we are simulating using equations (2.4) is an elastic ball that we have thrown against the floor. The numerical solution of these equations doesn’t take into account, by itself, the fact that the computed solution will eventually take the ball to a position below ground, that is, y(t) < 0. As the method for numerical solution advances at constants step of time, it is very likely that the exact moment of the collision of the ball with the floor doesn’t coincide with any of the solution steps of the algorithm, taking the ball to a, let’s put it this way, ‘illegal state’. Instead of this, we would have preferred that the computer had detected the problem and had momentaneously stopped at the precise instant of the collision, applying then the code necessary to simulate the rebounding of the ball against the floor, and continuing the simulation from these new initial conditions. This is the archetypical example of what we call an event. More precisely, we define an event as the change of sign of a real-valued function of the state variables (the variables that we differentiate) and of the independent variable of an ODE. (Events caused only by the independent variable are traditionally called time events, while those caused by the other variables are called state 2.5. EVOLUTION EQUATIONS 61 events). To simplify the discussion that follows and, since actually all variables depend on the independent variable, we will denote by h(t) the function that changes sign in the event. Operational definition of event Finding the instant of time at which the event takes places consists therefore in finding the solution of the equation h(t) = 0. We will assume that the usual state of the system implies h(t) ≥ 0 and that the event takes place when h(t + δt) < 0 (δt is the increment we set for the independent variable of this ODE). Notice that we take as valid a state of the system that verifies h(t) = 0. 14 It is only when at the next instant of time the system becomes illegal that we consider that the event has taken place. This is, however, the theoretical approach. In practice, the numerical resolution of differential equations can only be done in an approximate way. Even more, small round-off errors during the computations can produce states where h(t) < 0, with h(t) extremely small, even when the system is actually legal. For these reasons, we must slightly modify our definition so that it becomes more operational. The definition of event that we will use is established through the following procedure: 1. Each valid state of the system at instant t has associated to it a non-negative number h(t). Because of possible round-off errors in the computation of this number, a state will still be considered valid if h(t) > −, where is a small positive tolerance. 2. 
If the evolution step takes the system to a state such that h(t + δt) ≤ −, then the method of numerical solution will consider that an event has taken place in the interval [t, t + δt]. Then, (a) The method will search for a point t0 in this interval that verifies |h(t0 )| < . In this search, the first point to check will be t itself (this is done to improve the treatment of simultaneous events). (b) When it finds this point, the method of resolution will invoke a userdefined action. The result of this action must bring the system back to a state such that either h(t + δt) > − immediately (evolving from the new conditions set by the action) or, at least, this will become true in a finite number of iterations of this same process (starting again from point 1). 14 In our example, the ball can be lying on the ground without the need to rebound. 62 CHAPTER 2. CREATING MODELS Thus, with this definition, an event is specified by providing: (a) the function h which depends on the state, (b) the desired tolerance, and (c) the action to invoke at the event instant. Although the function h doesn’t need to be continuous (this allows for greater flexibility when specifying events), it must be defined so that the procedure indicated above can work. That is, if h(t) is a legal state and h(t + δt) ≤ −, then there must exist a point t0 in [t, t + δt] such that |h(t0 )| < . Creating events for an ODE Notice that the editor for differential equations includes a button labeled “Events” and a field that indicates that, by default, there are no events defined for this equation. Clicking the button “Events” will bring in an independent window with an editor with a behavior similar to that found in other parts of the model, and which we can use to create as many pages of events for our differential equation as we want. There exist, however, some differences. Observe in Figure 2.16 that a page for the edition of events appears divided into two sections (besides the text field for comments). Figure 2.16. A page for an event of our differential equation. In the upper section of this page we need to indicate how to detect the event. For this, we need to write the code that computes the function h(t) from the values of 2.5. EVOLUTION EQUATIONS 63 the state and the independent variable of our ODE. This code must end, necessarily, returning a value of type double. To help us remember this, the editor writes by default the next code (which, by the way, causes no event at all): return 1.0; In the “Tolerance” field on the upper right corner of this section we need to indicate the value for the tolerance that we want for this event (this is the in the discussion above). The lower section is used to tell the computer what to do when it detects (and then precisely locates) an event. Recall that this action must solve, or at least simplify, the situation that triggered the event. In the upper-right corner of this second section we find a checkbox labeled “Stop at event”, currently activated, that tells the computer whether it should return from the solution of the ODE at the instant of the event or not. Notice that, if checked, this causes the real increment of the independent variable to be smaller that the one originally desired. But still, checking this option may be useful if you want to appreciate the exact moment of the event. A technical tip: the code for the action doesn’t need to return any value explicitly using a return statement, the checkbox “Stop at event” takes care of this. 
However, if the code for the action has any return statement in its body, for instance to interrupt the execution of the code at any given point (never as the last sentence of the code), the value returned must be a boolean. This boolean will indicate, if true, that the event must interrupt the computation of the solution at the moment of the event. We can use our example of the falling ball to construct a sample event. For this, edit the code of the upper section of the page so that it reads: return y; This indicates that the event will take place when the ball reaches the level of the ground, y = 0. The default value for the tolerance is adequate. As action for the event, write the code: vy = -vy; which simulates a totally elastic rebounding at the instant the event takes place. Leave the box “Stop at event” checked, so that the system visualizes the instant of the rebounding. 64 CHAPTER 2. CREATING MODELS Final considerations about events The creation of the function for the “Zero condition” and of the “Action” are totally under the responsibility of the user. He or she must consider carefully the process that they will cause according to the procedure described above. Notice that the event editor allows us to define more than one events for the same differential equation. When there are more than one events defined, the editor will look for those that take place in the same interval [t, t + δt], and, among these, which one takes place first, executing the action associated to this one. If two events take place simultaneously, the editor can invoke the corresponding actions in any order. The user should bear this possibility in mind and try to avoid infinite loops, in which one event causes a second event, which in turn reproduces the first one, and so indefinitely. This would cause the computer to hang. A second problem arises from the so-called Zeno-type problems. 15 This takes place when the system reaches a state in which a big number of events take place with increasingly smaller gaps of time between two consecutive ones. This causes the computer to spend all its time solving events instead of simulating the process, which slows down the simulation so that it even seems that it has stopped. Zeno-type problems can appear in apparently innocent problems. Try an inelastic rebounding of the ball and you will get one! Just be warned. See an example of how we solved this one in the examples/Events/BouncingBall.xml sample file. Finally, since the method for the solution of events checks the state of the system in several different instants of time when looking for the precise (well, actually approximate) moment of the event, it is difficult to predict how many times the system will execute the code for the function h and with which states of the system. However, in order to facilitate the creation of elaborated events, the system makes the following compromise with you. If, (a) the function h(t) of a given event returns a value less than or equal to the tolerance, and (b) this event is the one that finally triggers its action (when there are more than one events defined, this could not be the case, even if the first condition holds), then the action of the event will be called immediately after the evaluation of the corresponding function h. That is, the function of this same event will not be called with a different state of the system before invoking the action. 
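As a small variation of the example above, we could make the rebound inelastic. The sketch below is only one possible way to write it: the restitution factor 0.9 and the threshold 0.01 are arbitrary values introduced here for illustration, and pausing the simulation is just a crude way to escape the Zeno-type cascade of ever closer events described above (see the cited BouncingBall.xml file for a proper treatment). The zero condition stays as before, return y;, while the action becomes:

   vy = -0.9*vy;                        // inelastic rebound: part of the speed is lost
   if (Math.abs(vy) < 0.01) _pause();   // when the rebounds become negligible, stop the evolution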
15 Do you remember Zeno, the one of the paradox of Aquiles and the turtle?: “Aquiles will never catch the turtle because when Aquiles covers the distance that separates him from the turtle, it has already moved a little bit. And when Aquiles covers this new distance, again the turtle has moved further. And so, infinitely many times. . . ” 2.6. CONSTRAINT EQUATIONS 65 We can benefit from this compromise, for instance, to set the value of certain global variables, during the evaluation of the function h, which will be used in the corresponding action for the event. In the examples/Events/BallsInBox.xml example file, you can see how we used this compromise to build an event that handles a situation with collisions among many balls. 2.6 Constraint equations The panel for the creation of constraint equations behaves exactly the same way as the panel for the initialization does (although, obviously, the role of these pages is different). The only thing we need to do is to write the code that transcribe constraints equations (2.2) into sentences of Java language. 2.7 Custom methods and additional libraries The last radio button of the model is labeled “Custom”. The main purpose of the corresponding subpanel is to allow us to create our own methods (what other programming languages call functions or subroutines) that we may need in other parts of the simulation. A second use of this subpanel is to let advanced users access their own libraries of Java classes packed in JAR or ZIP archives. We will refer to them as additional libraries (additional to Ejs itself). Accordingly, the interface for this panel is divided in two parts, as Figure 2.17 shows. The bigger upper part lets us create our custom methods. The lower part is used to add additional Java libraries. Figure 2.17. Subpanel for custom methods and additional libraries. 66 CHAPTER 2. CREATING MODELS We cover first custom methods and leave for Subsection 2.7.2 the description of how to access your own Java libraries. 2.7.1 Custom methods Custom methods are frequently used to group portions of code with the intention of making other parts of the model (or the view) simpler, either because the resulting model is easier to read and hence to understand, or because it reuses code that appears in several parts of the model. A second utility of custom methods is that of preparing actions that can be invoked as response of user interaction with the interface of the simulation (by using them in action properties of view elements, see Chapter 3). The look and feel and the way of working of the panel for custom methods is very similar to those for the initialization and constraints. There exist, however, two important differences. The first one lies in the use that Ejs makes of what we write in these pages. To be more precise, Ejs makes no explicit use of it. Differently to the code that we may have written in any of the other editors, that played a defined role in the life of the simulation, methods defined here will not be used unless we explicitly invoke them from another part of the simulation. The second difference is that we have an even greater degree of freedom (if this is possible) to write Java code in this panel. The only requisite in that what we write conforms valid Java methods. You can create methods as complex as you want, that take several parameters and have return values of any type. 
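For instance, a first custom method for our free-fall example could compute the total mechanical energy of the bouncing ball. A constraints page (Section 2.6) could then keep an energy variable of the model up to date by simply writing energy = totalEnergy();. The variables mass and energy do not belong to the set declared in Section 2.3; we introduce them here only for the sake of the example:

   public double totalEnergy () {
     double kinetic   = 0.5*mass*vy*vy;    // kinetic energy of the ball
     double potential = mass*gravity*y;    // potential energy, taking the ground as reference
     return kinetic + potential;
   }

Declaring it public would also let the view display the energy directly, although this is not required if the method is only used by the model.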
Creating custom methods When we create a new page in this panel called, say, “My Method”, Ejs helps us including in the page the initial skeleton for the method. See Figure 2.18. Figure 2.18. Initial page for a custom method. Creating a Java method requires, besides writing the necessary code, providing a name for it, indicating the type of the returned value (if any), and specifying the method’s accessibility. We also need to indicate whether the method accepts parameters or not, and their type. 2.7. CUSTOM METHODS AND ADDITIONAL LIBRARIES 67 With respect to the method’s name, we can choose whichever we want, respecting the conventions that we established in Subsection 2.3.5. The term ‘accessibility’ refers to which other parts of the simulation can use the method. If the method is declared as ‘public’ (this is the purpose of the public keyword which we see in Figure 2.18) then the method is universally visible. It can be used anywhere in the model and in the view. It also results visible from outside the simulation by using JavaScript. (This will be explained in detail in Chapter 4.) If the method is not declared as public (either using the keyword private or just nothing) then the method can only be used inside the model. There is actually no powerful reason that prevents us from declaring all our custom methods public, except perhaps reducing the offer to final users which don’t know what every method does and for whom it could even be confusing to see so many methods available. The return value of the method makes reference to whether the method returns a value (thus acting as a function) or not (a subroutine). If it doesn’t return any value, then we must indicate the special type void. If it does, we must indicate the type of value returned, which can be any valid Java type, either simple or dimensioned. If this is the case, the execution of the method must always end with a return sentence that indicates the value effectively returned. The code that accompanied Figure 2.15, for instance, illustrates this. Finally, parameters are indicated by a comma-separated list of pairs type name which declare local variables that can be used in the method’s body. This list must be written between the parentheses that follow the method’s name. If a method accepts parameters, then it must be always invoked using the exact same number and type of variables. If a method requires no parameters, then we can leave the list of parameters empty. You can change any aspect of the basic declaration provided by Ejs when it creates the new page: the accessibility, the type of the return value, the list of parameters (initially empty) and even the method’s name (which will then be used only to decorate the page header tab). You can also declare more than one methods in the same page. For example, the following methods compute the exponential of the cosine of a double value, and the conjugate of a complex number (specified by a double[2] array), respectively: double expCos (double x) { return Math.exp (Math.cos(x)); } double[] conjugate (double[] complex) { return new double[]{complex[0],-complex[1]}; } 68 CHAPTER 2. CREATING MODELS 2.7.2 Using your own Java libraries Advanced users may have legacy code that they want to reuse. These code is usually already compiled and has been packed in compressed Java archives, also known as JAR files. Using Java classes in packed in JAR files is straightforward. There are only three steps required for this: 1. Place a copy of the JAR file in an accessible place. 
A good place is anywhere under the Simulations directory. 2. Use the “Additional libraries” control to add this JAR file to the libraries Ejs will use to run your simulation. 3. Use your classes normally in the code. Classes you use need to be fully qualified. Alternatively, you can use the “Import statements” control to ask Ejs to add import statements to the generated Java code. We illustrate these three steps with a simple example. Suppose you have created a Java class which provides a static method that computes the norm of a vector (given as a double array). The code of this class may looks as follows: package mylibrary.math; public class VectorMath { static public double norm (double[] vector) { double norm2 = 0; for (int i=0; i<vector.length; i++) norm2 += vector[i]*vector[i]; return Math.sqrt(norm2); } } Suppose, also, that this class file has been compiled and included in a JAR file called myLibrary.jar. To use this class from within Ejs, you would typically do the following: 1. Copy myLibrary.jar to the Simulations directory. 2. Go to the “Custom” subpanel of the model and, on the “Additional libraries” control, click on the “Add” button. In the file dialog that will appear, select the file myLibrary.jar and click “Ok”. The “Additional libraries” control will change to reflect that this library has been added. See Figure 2.19. 2.7. CUSTOM METHODS AND ADDITIONAL LIBRARIES 69 Figure 2.19. Adding a new additional libraries. 3. Now, you can just type, in any of the code editors of Ejs, the following code: _println ("Norm = " + mylibrary.math.VectorMath.norm (new double[]{1,1,1})); If you run the simulation, the console will read “Norm = 1.7320508075688772”. As you see, we fully qualified (mylibrary.math.VectorMath) the VectorMath class in our code. Obviously, if we only need to use the methods from this class once, this is just fine. However, if we need to make frequent use of methods in this class, we can avoid having to fully qualify them every time by adding a, so called, import statement. For this, click on the “Add” button of the “Import statements” control and type mylibrary.math.VectorMath in the input dialog that appears. The control will reflect the change (see Figure 2.20). Figure 2.20. Adding an import statement. Now the code above can be written simply as: _println ("Norm = "+VectorMath.norm (new double[]{1,1,1}) ); 2.7.3 Predefined methods A special type of methods are those that control the execution of the simulation itself. For instance, to pause or play the evolution, to modify the speed at which it plays, to play it step by step, or to return the simulation to its initial state, among others. Maybe, the most sophisticated of them are those that allow you to save the state of the simulation in a disk file and read it later across the Internet. Because implementing these methods requires specific knowledge of the way Ejs works internally, they are provided by the system for your perusal. These methods 70 CHAPTER 2. CREATING MODELS can be used as any other method, either by themselves or in the body of other methods. The tables that follow list these methods, grouped by category, and the effect that each of them causes. Methods that control the execution of the simulation void _play() Starts the evolution of the simulation. void _pause() Stops the evolution of the simulation. void _step() Executes one step of the evolution of the simulation. 
void _setFPS(int fps) Sets the number of frames per second for the simulation (which will modify the settings of the slider of the evolution panel) to the integer value fps. void _setDelay(int delay) This method is, so to say, the reciprocal of the previous one. It allows to control the number of milliseconds that Ejs must wait when it finishes one step of the evolution, before executing the next one. Obviously, this affects the number of frames per second of the simulation. void _reset() This initializes all model variables to the values set in the tables of variables, it also executes the pages of initialization code (followed by those of constraints) and clears the view. This brings back the simulation to its exact initial state. void _initialize() This method can be considered as a soft reset. It executes the initialization pages (plus constraints), but doesn’t use the initialization values of the tables of variables. This can be useful when we want to initialize the simulation while respecting the values of some variables that the user has modified using the interface of the simulation. boolean _isPlaying() This method returns a true boolean if the evolution is playing and a false one if it is not. This method is typically used to enable and disable some view elements depending on whether the simulation is playing or not. boolean _isPaused() This method returns a true boolean if the evolution is paused and a false one if it is not. It is the opposite of the previous one and its use is similar to that one. Methods specific for the view void _resetView() It resets all the view elements to their original state. The precise meaning of this depends on each type of element. For instance, the result of this method on an element of type “Trace” will clear all the points drawn so far, including the memory (if it has memory). void _clearView() Clears all the elements of the view. Again, the precise meaning of this depends on each type of elements. In general, this method can be considered as a softer version of the previous one. For instance, the result of this method on an element of type “Trace” will clear the points drawn so far in the current trace, but will respect the traces in memory (if it has memory). 2.7. CUSTOM METHODS AND ADDITIONAL LIBRARIES void _print(String text) Prints the provided text in the view of the simulation if this contains an element of the type “TextArea”. If this is not the case, the message will appear on the console from which the program was launched or on the Java console of the Web browser that is running the applet. void _println(String text) Prints the indicated text, followed by a (so-called) carriage return and a new line, so that the next message will appear at the beginning of a new line. The text will appear as with the method _print. void _println() Prints a blank line, followed by a carriage return and a new line. Works similarly to the method _print. void _clearMessages() Clears the text area of the view. If the view has no such element, the method has no effect at all. String _format(double value, String format) Returns a String with the provided numerical value written according to the format indicated. The possible values for the format are the same as those for the “Format” properties that several view elements exhibit. See for instance the HTML reference page for the element of type “NumberField”. void _alert(String element, String title, String message) Displays a dialog, with the given title, that shows the given message. 
The element parameter must be the name of an element of the view on top of which the dialog will appear. If null the dialog will appear on top of the element set using the setParentComponent() method or, if this is not set, at the center of the computer screen. The view can’t be accessed until the message is acknowledged. void _setParentComponent(String element) Sets the element of the view on top of which subsequent alert dialogs will appear. org.opensourcephysics.ejs.control.ControlElement _view.getElement(String element) Gets the ControlElement object for the given element of the view. Consult the JavaDoc documentation for ControlElement for more information. java.awt.Component _view.getVisual(String element) Gets the java.awt.Component wrapped by a given element of the view. See _view.getComponent() for an explanation. java.awt.Component _view.getComponent(String element) Gets the java.awt.Component wrapped by a given element of the view. Usually, the component and the visual of an element are the same. There are occasions in which they are different. For example, for a TextArea, the visual is the JTextArea and the component a ScrollPane. It is wise to check the class before using it. java.awt.Container _view.getContainer(String element) Gets the java.awt.Container wrapped by a given container element of the view. Usually, the container and the component of a container element are the same. There are occasions in which they are different. For example, for a Frame, the component is the JFrame and the container is its content pane. It is wise to check the class before using the object returned. Object _view.getObject(String element, String objectName) Some view elements define particular objects in its interior (the function of a ControlFunction is a good example). This method gets the object with the given name defined inside the given element of the view. Elements should document which objects they contain. 71 72 CHAPTER 2. CREATING MODELS org.opensourcephysics.ejs.Function _view.getFunction(String element, String functionName) Some view elements define org.opensourcephysics.ejs.Function objects in its interior (the function of a ControlFunction is a good example). This method gets the object with the given name defined inside the given element of the view. Elements should document which functions they define. Methods for input and output boolean _isApplet() Whether the simulation is running as an applet. String _getParameter(String name) When the simulation runs as an applet, this method can be used to retrieve the value of a given <param> tag of the applet. The name of the parameter is given by name. If the parameter of the applet is not set, this method returns null. If the simulation runs as an application, this method looks for a pair of arguments (of the Java start-up command for the application) of the form "-name value" (where name is the argument of this method). If found, the method return returns the value of value. This method can therefore be used to read parameters from either the <applet> HTML tag or the start-up command, using exactly the same program logic for both, applet and application form. String[] _getArguments(String name) When the simulation runs as an application, this method returns the array of arguments that was specified at the Java start-up command. If the simulation runs as an applet, this method returns null. boolean _saveText(String filename, String text) Saves the given text to the specified file. 
The method returns true if everything went well, and false if there was any problem (then you might also get an error message in the console from which you run Ejs). If the filename starts with the prefix “ejs:”, then the text is written to a temporary memory location. This means that data can be later read as if they had been stored in a file (using the readText method), but only during the same working session of the simulation, since data will be lost when the simulation ends. This possibility of saving temporarily to memory is useful when the application is running as an applet, since (for security reasons) an (unsigned) applet can not save to the hard disk. However, this possibility of saving to memory can be used without problems. If any other case, the text is written to a file on the hard disk. Security restrictions apply. This means that, again, if the simulation is running as an applet, then the applet needs to be signed or the method will trigger a security exception. You’ll need to consult a specialized book to learn how to sign Java applets (Ejs can’t -for security reasons- do this for you, sorry.) boolean _saveText(String filename, StringBuffer textBuffer) A variant of the previous method that uses a StringBuffer. (StringBuffer is a Java class that works faster when you need to handle many small texts.) 2.7. CUSTOM METHODS AND ADDITIONAL LIBRARIES String _readText(String filename) It reads text from the file filename. Returns null if it was not successful. If the name of the file starts with the prefix “url:”, then the method will interpret the rest of the name as a location on the Internet relative to the location of the simulation itself and will try to connect to this location and read the data from the corresponding Web server. If, moreover, the rest of the name starts with the prefix “http:”, then the Internet location will be considered as an absolute URL. These options are specially interesting when we call this method from an HTML page using JavaScript (see Subsection 4.4.1). Finally, if the name starts with the prefix “ejs:”, Ejs will try to read the data from the specified memory location (see _saveText()). boolean _saveImage(String filename, String element) Saves the image of a given element of the view to the specified file. The method returns true if everything went well, and false if there was any problem (then you might also get an error message in Ejs’ console). The filename parameter is used as in _saveText() above. Moreover, if the filename has an extension, this extension is used as the format in which to save the image file, and the system tries to do so. Different systems may support different graphic formats. Currently, only PNG and JPEG formats are guarantied to be supported by all Java Virtual Machines (JPEG is used by default). We have added support for GIF format, but it only works well with images with less than 256 different colors on it. boolean _saveState(String filename) Saves the state of all model variables in the specified file. If successful, it returns a true boolean; if it encounters any problem, it returns false. The parameter filename indicates the name of the file where to store the data, and is used as in _saveText() above. The file is created as a binary file. To create text files, use the method _saveText() described above. boolean _readState(String filename) It reads the value of all model variables from the file filename. 
This file must have been created using a previous call to the _saveState() method from the same simulation, that is, with exactly the same variables, declared in the same exact order. Returns a true boolean if successful and a false if it is not. The filename parameter is used as in _readText() above. boolean _saveVariables(String filename, String variablesList) Similar to _saveState(), but only saves the variables specified in the given list. This must be a String with the names of the variables you want to save, separated by either spaces, commas, or semicolons. The parameter filename indicates the name of the file where to store the data, and is used as in _saveText() above. The file is created as a binary file. To create text files, use the method _saveText() described above. boolean _saveVariables(String filename, java.util.ArrayList variablesList) Similar to _saveState(), but only saves the variables specified in the given list. The parameter filename indicates the name of the file where to store the data, and is used as in _saveText() above. The file is created as a binary file. To create text files, use the method _saveText() described above. boolean _readVariables(String filename, String variablesList) Similar to _readState(), but only reads the variables specified in the given list. The data file must have been created with a matching call to _saveVariables(). The list of variables must be a String with the names of the variables you want to read, separated by either spaces, commas, or semicolons. The parameter filename indicates the name of the file where to read the data from, and is used as in _readText() above. 73 74 CHAPTER 2. CREATING MODELS boolean _readVariables(String filename, java.util.ArrayList variablesList) Similar to _readState(), but only saves the variables specified in the given list. The data file must have been created with a matching call to _saveVariables(). The parameter filename indicates the name of the file where to read the data from, and is used as in _saveText() above. Methods to access variables using JavaScript boolean _setVariables(String command) This method is used to set the value of one or more variables of the model. The parameter command must consist on a semicolon-separated list of instructions of the type variable=value. As in, for example, _setVariables("x = 1.0; y = 0.0"). If the variable is a unidimensional array (a vector), it is still possible to use this method, separating the values of the different array elements by commas, as in _setVariables("posX = -0.5,0.0,0.5"). This method is not designed to be used from within Ejs, but from HTML pages using JavaScript. See Subsection 4.4.2. boolean _setVariables(String command, String sep, String arraySep) This is a variant of the previous methods in which it is possible to specify the characters that separate variables and array elements. This may be necessary when you want to assign a value to variables of type String which contain commas or semicolons.. String _getVariable(String variable) This method is the counterpart of _setVariables. Given the name of a variable, it returns a String with its value. The user must extract the value of the variable (using the right type) from this String. 2.7.4 Methods from Java mathematical library Even when these methods are not properly from Ejs, but belong to Java mathematical library, we find it useful to end this section providing a list of them, since they are of frequent use when writing code. 
These method must be invoked adding to their names the prefix “Math.”. Thus, for instance, to compute the sine of 0.5 (radians) we must write: Math.sin(0.5). Methods of the mathematical library of Java double abs(double x) Returns the absolute value of x. double acos(double x) Returns the inverse function of the cosine, cos−1 (x), from 0 to π. double asin(double x) Returns the inverse function of the sine, sin−1 (x), from −π/2 to π/2. double atan(double x) Returns the inverse function of the tangent, tan−1 (x), from −π/2 to π/2. double ceil(double x) Returns the smallest integer greater than or equal to x. 2.7. CUSTOM METHODS AND ADDITIONAL LIBRARIES 75 double cos(double x) Returns the cosine of x. double exp(double x) Returns the exponential of x with basis e, ex . double floor(double x) Returns the greatest integer smaller than or equal to x. double log(double x) Returns the natural logarithm (with basis e) of x. double max(double x, double y) Returns the greatest of the numbers x and y. int max(int x, int y) Returns the greatest of the integer numbers x and y. double min(double x, double y) Returns the smallest of the numbers x and y. int min(int x, int y) Returns the smallest of the integer numbers x and y. double pow(double x, double y) Returns x to the power of y. More precisely, ey log x . double random() Returns a random number between 0.0 and 1.0 (0.0 is excluded). double rint(double x) Returns the integer number (in form of double) closest to x. long round(double x) Returns the integer number (in form of long integer) closest to x. double sin(double x) Returns the sine of x. double sqrt(double x) Returns the square root of x. double tan(double x) Returns the tangent of x. double atan2(double x, double y) Returns the argument of the complex number x+yi. Exercises In the examples/Manual/Model subdirectory of your Simulations directory, you will find the model that we have developed under the name FreeFall.xml. Run it with Ejs and you will obtain a total of 11 free-falling balls. However, only one of them will rebound. Exercise 2.1. Check that, since the rebounding is totally elastic, the ball always bounces back to the initial height, even if you let the simulation run for a long period of time. To better visualize this, add to the simulation a plotting panel and plot in it a trace with the height of the ball. Exercise 2.2. Create new variables for the potential, kinetic and total energies of the ball that rebounds, use constraints to compute these energies and investigate 76 CHAPTER 2. CREATING MODELS how well is the energy preserved. Exercise 2.3. Try to create an event so that all balls rebound when they reach the ground, and check again if the energy for the whole set is still preserved. You will find the solution to these exercises in the cited subdirectory. CHAPTER 3 Building a view c 2005 by Francisco Esquembre, September 2005 There is no doubt. The creation of the graphical user interface, or view, is the part of the simulation that requires a deeper knowledge of advanced programming techniques. Usually, the graphical tools provided by a high-level programming language have general applicability and, because of this, no specialization for any particular task. Hence, an important technical effort is required to apply these tools for a use as specific as the one we want. However, the current degree of development of computer graphics makes it impossible to conceive a simulation without an advanced graphic visualization. 
Moreover, if the simulation is to be used for pedagogical purposes, we also need to provide it with a high level of interactivity. For all these reasons, Easy Java Simulations uses its own powerful library. A library that is specialized in the visualization of scientific processes and data, and whose use has been simplified as much as possible. This library is based on the standard Java library called Swing and on the Open Source Physics tools created by Wolfgang Christian at Davidson College, 1 but it also adds its own developments. All these sets of Java classes are supported by Java 2 and its plug-ins (version 1.4.2 or later), so they can visualized by most modern Web browsers. 1 Davidson College is a reputed institution of higher education located in North Carolina, in the USA. For more information, visit the Web site. 77 78 CHAPTER 3. BUILDING A VIEW We will introduce these elements in a simplified, though effective, way that is sufficient for our purposes. 3.1 Graphical interfaces We will create a graphical user interface for our simulation through the construction of a tree-like structure of selected graphical elements. The process can be compared to playing with one of these block-construction games. We call graphical element, or simply element, each of these building blocks, a piece of the interface that occupies an area of the computer screen and that performs a specific task on which this element is specialized on. There are elements of several types: panels, labels, buttons, particles, arrows,. . . and an important part of the game consists in learning which elements exist and how to use all their possibilities. The graphical aspect of each element is very much determined by its type. However, each element has a set of inner characteristics, called properties, that we can modify so that the element looks and behaves according to our needs. These properties are usually set using constant values, but they can also be changed dynamically, thus creating the animations we want. Moreover, some elements have a special type of properties, called actions, that allow us to define the response of the interface to the user interaction with these elements (for instance when clicking the mouse or typing on the keyboard). This provides the required interactivity. In this chapter we will show the common principles of use of these elements to build sophisticated user interfaces, without getting into details about any particular type of element. The names of the element types are rather descriptive of their natural use. However, the best way to learn how to use them is through examples, and consulting the reference pages that you will find in the web server of Ejs,. In order to continue with our general description, we need to mention now only a big classification of the elements in three groups: containers, basic elements and drawing elements (or drawables). 3.1.1 Containers A container is a graphical element that can host other elements in its area of the screen. When this happens, we call the container element the parent, and the element contained the child. Since a container can be both parent and child, we can form a complete hierarchical structure of graphical elements with the form of a tree, whose root (which is not really an element) we will refer to as the Simulation View. 3.1. 
GRAPHICAL INTERFACES 79 As the first child of this root, we will usually create what we call the main window, which is the window that first appears on the computer screen when we run the simulation as an application, or the one that appears embedded in the HTML page, when we run the simulation as an applet. The elements that need to interact with the rest of windows of the operating system are containers of a special group generically called windows. There exist two concrete types of these: frames and dialogs. See the corresponding entry in the reference pages. There are, in turn, two main families of containers: containers that can host basic elements and other containers, and containers for drawing elements. The first group is used, together with basic elements, to configure the skeleton of the simulation’s view, providing the user with elements to control the simulation and to modify the value of the variables of the model. In this group of containers we list windows, toolbars, standard panels, split panels, and tabbed panels. The second group is used, together with drawing elements, to create animations or to visualize graphs of the simulated phenomenon. The group comprises twodimensional drawing panels, plotting panels (2D panels with coordinate axes), and three-dimensional drawing panels. The Layout property Maybe the most important property of a container of the first group mentioned above is the layout. When an element is added to a container, the parent assigns to the child appropriated position and size according to the available space, the child requirements, and of, so to call it, the container’s own policy of space distribution or layout. Some layouts allow the child to choose its relative position inside the parent, but in most of them position is a consequence of the “order of birth” of the child. Containers of the second group mentioned above define their own system of coordinates that allows their children (which must be drawing elements) to specify their position using user coordinates. For this reasons these containers have no layout property. We need to admit that the first time a new user is faced with layouts, they always seem an unnecessary nuisance: “Why can’t I just place a child exactly here with a fixed, prescribed size?”. The answer appears naturally when the user resizes or adds new elements to a view (which happens rather frequently). Then, the original sizes and positions are almost always inadequate. Using layout policies in the containers of our view instructs them how to proceed in these situations to distribute the new available space in a form that is congruent with the previous one. After a bit of practice, the use of layout policies becomes not only simple, but it even appears as the natural way of designing the interface. 80 CHAPTER 3. BUILDING A VIEW Even when Java provides a great deal of layout policies (and allows the programmer to create new ones), we will only use the following layouts: Flow It distributes the children in one row, from left to right, in a way much similar to how words appear in a paragraph. Children can be left, right, or center justified. Border Maybe the most frequently used layout, distributes its children in one of five possible locations: up, down, left, right and center, from which each child must choose a different one. Grid The second most popular, places its children in a rectangular table with as many equally sized rows as columns as indicated. 
Horizontal box This distribution works similarly to a grid with only one row, except that it doesn't force all its children to have the same size.

Vertical box This works as a single-column grid, with children of possibly different heights.

For all these layouts it is possible to provide two additional parameters that indicate how much the container should separate its children from one another, both horizontally and vertically.

3.1.2 Basic elements

The group of basic elements is made of a set of interface elements that can be used to decorate the view, to visualize and edit variables of the model, and also to invoke control actions (that is, methods that manipulate the model). You will find the elements in this group rather familiar, since they appear in practically all computer programs that you may use. The group includes labels, buttons, sliders, text fields, etc. These basic elements can be added to any of the containers of the first group of Subsection 3.1.1, but cannot host other elements. That is, they are not containers themselves. There are two exceptions to this assertion. The tab of basic elements labeled "Menu" contains all the elements you need to create menus for your view. In this tab, you will also find the "MenuBar" and "Menu" types, which are, strictly speaking, containers.

3.1.3 Drawing elements

The set of drawing elements is one of the main contributions of the Open Source Physics project to Ejs. It consists of a set of view elements that can be included in dedicated containers (those of the second group of Subsection 3.1.1) to create animated graphics that visualize the different states of the model. These animated graphics can range from the simple to the sophisticated, and be drawn in two or three dimensions, including particles, arrows, springs, GIF images, polygons, vector fields, contour maps, three-dimensional bodies, and even more.

3.1.4 Creation of the view

Thus, finally, the creation of the view consists in generating a structure, or tree, of elements that serves our purposes. This tree must include elements that visualize the state of the model, elements for the interaction of the user with the simulation, and containers that host all other elements.

3.2 Association between variables and properties

As we mentioned above, graphical elements have certain internal values, which we call properties, that can be customized to make the element behave in a particular way. Since we are mainly interested in the dynamical visualization of the state of our models and in providing the required interactivity to our simulations, what we really want is to use our model variables as values for some of the properties and as arguments for the actions of the view elements. This process, which we call associating or linking objects of the model with properties of the view, is what turns our simulation into a really dynamic, interactive application.

In traditional programming, these links are created using calls to predefined methods of the graphic components. The main problem is that, since the standard graphical library was created, as we said before, to support any general use, the methods we are talking about are of rather low level, which demands high technical skills from the user.
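To make this contrast concrete, here is a minimal, purely illustrative sketch (plain Swing code, not code generated by Ejs; the class name and the variable amplitude are hypothetical, chosen only for illustration) of the kind of low-level wiring needed to keep a single model variable and a single slider synchronized by hand:

import javax.swing.*;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

public class ManualLinkSketch {
  static double amplitude = 1.0;                     // the "model variable"

  public static void main(String[] args) {
    JFrame frame = new JFrame("Manual link");
    final JSlider slider = new JSlider(0, 500, 100); // sliders work with integers...
    final JLabel label = new JLabel("amplitude = " + amplitude);
    slider.addChangeListener(new ChangeListener() {
      public void stateChanged(ChangeEvent e) {
        amplitude = slider.getValue() / 100.0;       // ...so we convert by hand: view -> model
        label.setText("amplitude = " + amplitude);   // and refresh the display: model -> view
      }
    });
    frame.getContentPane().add(slider, java.awt.BorderLayout.CENTER);
    frame.getContentPane().add(label, java.awt.BorderLayout.SOUTH);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.pack();
    frame.setVisible(true);
  }
}

Similar code would be needed for every property of every element of the view, and it would have to be written and maintained by hand.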
Easy Java Simulations dramatically simplifies this situation, in the first place, by adapting the graphical elements it offers to the use we expect of them, but also allowing us to establish the required links through a simple process of edition of the properties of each element. In this process, the only thing we need to do is to indicate in the so-called panel of properties of the element, which object of the model must be associated to which property. In most occasions this can simply be done by choosing from a list of possible values the one we want for the link. With this information, Ejs will generate the code required to ensure that, later, when the simulation runs, all the necessary calls to the low-level routines are done correctly for this connection between model and view to work properly. 82 CHAPTER 3. BUILDING A VIEW Linking is a two-way, dynamic process. With this we mean that at any time during the execution of the simulation, the element property will hold the value of the model variable to which it has been associated. Thus, if the evolution of the model changes this value, the element will automatically reflect the change. But also, in a reciprocal way, if a property changes as result of the interaction of the user with the view (by introducing a value, moving a slider,. . . ), the variable of the model will immediately receive the new input value. The configuration of the so-called action properties of the elements with code expressions that modify model variables or that use model methods, allows in turn the control of the simulation by the user, who will be able to execute the code interactively (by clicking a button for instance). This mechanism turns out to be, as we will see, both simple and effective to create advanced user interfaces for our simulations. 3.3 How a simulation runs Once we have established the associations between model and view that we need, the simulation will be ready to be run. With this information, Ejs generates the code necessary for the correct behavior of the future simulation. We can now complete the description of the execution of a simulation generated by Ejs that we started in Subsection 2.1.4. When a simulation runs, this is exactly what takes place behind the scenes: 1. The variables are created and their values set according to what the initialization states. 2. The constraint equations are evaluated, since the initial value of some variables may depend on the initial values of others. 3. The view of the simulation is created and it appears on the computer screen. All the associations that we may have created are used to instruct the view elements how to correctly visualize the state of the model, and how to properly respond to user interaction. At this moment, the model is found to be in its initial state, the view reflects this state, and the simulation waits until an evolution step takes place or the user interacts with it. 4. In the first case, the evolution equations are evaluated and, immediately after, the constraints. A new state of the model at a new instant of time is then reached. The view is then updated through the links with the model. 3.4. INTERFACE OF EASY JAVA SIMULATIONS FOR THE VIEW 83 5. In the second case, if the user changes a variable associated to a model variable, this is consequently modified, or, if the user invokes a control action, the corresponding code is executed. In any of both cases, constraint equations are evaluated immediately after. 
We then obtain a new state of the model at the same instant of time and the view is updated using the links with the model. 3.4 Interface of Easy Java Simulations for the view The interface of Ejs for this panel is shown in Figure 3.1. Figure 3.1. Ejs interface for the view. The left-hand side of the working area of Ejs is now occupied by a panel that displays the (initially empty) tree of elements that forms the view. The node called Simulation View shown corresponds to the root of this tree. When we add new elements to our view, this panel will be displaying them reflecting the corresponding tree-form structure. The right-hand side of the working area displays a panel with three sections in vertical. Each section contains a set of icons, grouped according to the classification that we did in Section 3.1. Each icon represents one of the types of elements that we can use to create the view. Whenever the number of icons is high, the section groups them in subpanels with identification tabs. You can explore the icons by placing the mouse on top of each of them and waiting, without clicking, for a second, until a small tip appears indicating the name of the type and a short description of its function. See Figure 3.2. Figure 3.2. Detail of the information for the type “Button”. 84 CHAPTER 3. BUILDING A VIEW 3.4.1 Types of elements We make in this paragraph a short introduction of the main types of elements offered by Ejs. Please notice that we don’t cover all the types, but leave the complete list for the reference you will find on the Web pages for Ejs. Containers Frame The main characteristic of frames is that they know how to deal with the windows environment of your operating system. In particular they can be minimized and maximized in the usual way. Each view must have at least a first element of this type in which to place other elements, we will call this first frame the main window. Dialog Window dialogs, or simply dialogs, are also independent windows, but, differently to frames, they can not be minimized (though they can be hidden). Besides this, they have the ability to always display on top of the frame that precedes them in the tree of elements, which can be useful in simulations with more than one windows. Panel Panels are the most basic type of container, and hence the first candidates to use when we just want to group other elements. SplitPanel This is a container that has two clearly separated resizable areas. TabbedPanel This is a container that displays only one of its several children at a time, organizing them using tabs. DrawingPanel This is a basic container for two-dimensional drawings. PlottingPanel Similar to the previous one, it adds a system of cartesian or polar axes. DrawingPanel3D This is a three-dimensional version of DrawingPanel. Basic elements Button Element in form of a button, used to invoke actions. CheckBox A button that allows to select one of two possible boolean states, true or false. 3.4. INTERFACE OF EASY JAVA SIMULATIONS FOR THE VIEW 85 RadioButton Similar to the previous one, it has the particularity that it works in group with all other radio buttons in the same container. Then, selecting any of the buttons, automatically unselects all the others. ComboBox An element in form of a list that allows selecting one of several options. Slider Allows to visualize and modify a numerical value, within a given range, in a graphical way . NumberField Allows to visualize and modify a numerical value using the keyboard. 
TextField Allows to visualize and modify a value of type String using the keyboard.

Label Used to write labels for decorative or informational purposes.

Drawables

Particle Draws an interactive ellipse or rectangle in a given position and with a given size.

Arrow Draws an interactive arrow or segment.

Spring Draws an interactive spring.

Image Draws an interactive GIF image or animated image.

Text Draws an interactive text.

Trace Draws a polygonal line created by accumulation of successive points, thus forming the trace of a motion.

Sets Draw sets of several of the corresponding elements.

Polygon Draws a polygonal line from the coordinates of its vertices.

Surface Draws a parametric surface from the coordinates of the points of a regular grid on the surface. There exist particular elements for some surfaces such as spheres, cylinders and other bodies.

ContourPlot Draws a contour map of a scalar field.

GridPlot Alternative (simpler) visualization of a scalar field.

VectorField Visualizes a vector field.

SurfacePlot Draws in 3D the graph of a scalar field.

3.4.2 Adding elements to the view

The creation of new view elements follows these four steps.

Choosing the type of element to create For this, just click on the icon for the type you want to create. When you do this, the background of the icon will change color, indicating that it has been selected. Moreover, the cursor will change to a hand when you are on top of an icon, and to a magic wand when the icon is selected. See Figure 3.3.

Figure 3.3. Choosing the icon for an element of type "Frame".

Choosing the parent for the new element That is, the container that will host the new element. For this, click with the magic wand on a container element (that you created previously) in the tree of elements. If the new element is a frame or a dialog, which require no parent, you need to click directly on the root node, the one called "Simulation View". This is precisely the case that Figure 3.4 illustrates.

Choosing a name for the new element Each element needs a name. Ejs will propose one based on the type of element that you are creating (see Figure 3.5); you can either accept it or choose a more representative name according to the role of the element in the interface. Recall, however, the naming conventions that we established in Subsection 2.3.5.

Figure 3.4. Choosing the parent for the new element.

Figure 3.5. Choosing a name for the new element.

If necessary, choosing a position for the new element This is only necessary when the parent is using the border layout policy, as we explained in Subsection 3.1.1. In this case, a dialog such as the one shown in Figure 3.6 will appear, offering one of the five possible positions: up, down, left, right, and center. Choose the one you want and click "OK". (This figure doesn't correspond to the previous one, since a frame doesn't need to choose a position.)

Figure 3.6. Choosing a position for the new element.

And this is all! The element is created, placed at the corresponding position, and the tree of elements is updated to include the new element.

3.4.3 Modifying the tree of elements

Once we have created the tree of elements for our view, or even while we are still building it, we may want to modify its structure. This can be necessary to correct any possible mistake or to improve the initial configuration with new functionality.
For this, each element in the tree has a popup menu that can be invoked by right-clicking with the mouse on the element. Figure 3.7 shows the view of the example of Section 3.6 with the popup menu for one of its elements. This menu can vary a little bit depending on the precise element that we choose, but the image shows a typical case. (Because of its special nature, the popup menu for the root of the tree contains only the "Paste" option.)

Figure 3.7. Popup menu for a new element.

The first entry in the menu is the one most frequently used. (Actually, it is so frequently used that Ejs provides a shortcut for it: if you double-click on an element of the tree, this option is selected even when the popup menu is not shown.) This option allows us to modify the properties of the element and will be described in detail in the next section. The second option allows us to change the name of the element, and the third one, its parent. In both cases, a dialog window will appear to allow us to choose a different name or parent, respectively.

In a second group, there appear the options that allow us to change the position of the element inside its parent. A label tells us the current position of the element. This position may have been chosen by ourselves (if the layout policy of the parent is border) or be "imposed" by the parent according to the order of creation of its children. In the first case, to change the position we will need to use the option called "Change position". In the second case, this option will be disabled and we will need to use the other two options to change the relative order of the element with respect to its "brothers". The tree of elements will always reflect the actual order.

There exists a final group of options, cautiously separated from the others, that is used to cut, copy, paste, and remove the element. Be careful, Ejs will not ask for confirmation if you choose any of these options! (And there is no "Undo", either.)

A special option, only for frames

A special option, which only appears in menus for elements of type "Frame", is the one called "Main Window". Perhaps you noticed that, although all frames share the icon of the "Frame" class, the icon that appears associated to the first of the frames of a tree looks slightly different. This has a special meaning. When the simulation is run in the form of an applet, that is, inside an HTML page (see Chapter 4), one of the windows of the view appears inside this page, while all others will appear separately. We call this window the main window. A second property of this window is that, when the simulation runs as an independent application, closing the main window will end the simulation. The special icon identifies this window, and the option "Main Window" of the menu for a frame element (see Figure 3.8) will allow you to choose which of the frames (if there is more than one) will be the main window.

Figure 3.8. Detail of the "Main Window" option.

3.5 Editing properties

When we create new elements, they appear with default values for their properties. As we said already, properties are internal values that determine how the elements look and behave. We can modify these properties using the panel for editing the properties of each element. For this, we need to select the first of the options of the popup menu for the element, "Properties", or double-click on the element's node in the tree of elements. Figure 3.9 shows the panel of properties for an element of the class "Button".
Figure 3.9. Panel of properties for an element of type "Button".

3.5.1 Choosing constant values

The way to modify a property is, in principle, straightforward: you just need to click on the corresponding entry field and type the desired value for the property. However, it is sometimes difficult to remember the correct format for some of these values, especially if the property is one of the technical ones (such as color, layout or font, for instance). To help with this, the input fields of the properties have a button to their right that helps us choose the right value. (There exists a second button, which we will discuss in a minute.) If the icon that appears on this button indicates that there is no specialized editor to facilitate the task, most likely you don't need one either, since this is a property that requires just a numeric or text field for which you can simply type the value you want. If, on the contrary, the icon indicates that an editor is available, then Ejs provides an editor specialized in this property. We recommend that you use it. Figure 3.10 shows the specialized editors for the layout, color and font properties, respectively, from left to right. All of them work in a natural way.

Figure 3.10. Specialized editors for the layout, color and font properties.

Many of the elements, and particularly drawables, are interactive. This means that (when enabled) their values, positions, sizes, etc. can be changed interactively. In this case, interacting with the element (for instance, dragging it with the mouse) changes its properties. This is particularly useful to easily position or size windows and also to create drawings interactively.

3.5.2 Association with variables and use of methods of the model

As we said in Section 3.2, properties can also be set through their association with variables of the model, with Java expressions that use them, or, in the case of action properties, using Java sentences that involve variables and methods from the model. These associations are the essence of the interactivity of the simulation.

This process is also easy to accomplish. You just need to type the variable or the Java expression in the corresponding property field. To facilitate this task, though, there exists a second column of buttons, one for each property, in the table of properties. These buttons display one of two icons. If you click on a button displaying the variable-link icon, a dialog window will appear, listing all the variables of the model that can be associated to the property that you are editing. It is possible that not all variables of the model are listed, since, for instance, a property of numeric type cannot be associated to a variable of type String. Choose among the offered variables the one you want and click "Ok".

Thus, for instance, if we click on this second button for the property "Enabled" of the panel in Figure 3.9, whose value needs a boolean, we will obtain a list of all boolean variables of the model. If our model only defines the boolean isVisible, we will obtain the list of Figure 3.11. Observe that, as this example shows, this list can also include, besides the variables that our model defines, some variables and methods predefined by Ejs. These are displayed in a different color.

Figure 3.11. List of all possible boolean variables.

(In this concrete example, isPlaying is a boolean variable that is true if the evolution of the simulation is playing and false if it is not. The variable isPaused is the negation of isPlaying.
Finally, isApplet() is a method that returns a true boolean if the simulation runs as an applet, and false if it runs as an application.) Any connection between a model variable and an element’s property works right from the start. With this we mean that the connection will of course work when the simulation runs, but also under Ejs. You can appreciate that properties associated to model variables use the values of the variables specified in the corresponding “Value” cells of the table of variables. If you change any of these values, the view will readily reflect this change. But the connection also works the other way round. If you interact with a view element and change one of its connected properties, the value in the “Value” cell will be updated too. This can be useful to set initial conditions of your simulation in an interactive 92 CHAPTER 3. BUILDING A VIEW way. There is only one exception for this rule. If the “Value” column for the variable affected by this interaction contains an expression (instead of a constant value), Ejs will ask you for confirmation before replacing the expression with the constant value obtained from the interaction. Finally, if the button on the second column exhibits the icon , this tells us that the property is one of the so-called action properties. That is, that it can be used to define an action that will be invoked when the user interacts with the element in a given way (which depends on the type of element and on the property chosen). If we click on a button with this icon, a dialog will display all custom methods defined by the model (see Subsection 2.7.1), together with some of the predefined methods defined by Ejs (Subsection 2.7.3) in different color, to help us choose one of them. Thus, if the model defines the methods action1() and action2(), and we click on the second button of the property “Action” of Figure 3.9, the result will be like in Figure 3.12. Figure 3.12. List of model methods for an action. The action can consist in a simple call to any of the model or predefined methods or can include directly Java sentences that perform the action. It can even consist of a combination of both. Clicking on the first icon to the right of the input field of an action property brings in a simple editor specialized on edition of Java code that will help us write our code more comfortably. 3.5.3 The color codes of the property fields Property fields, that is, the text fields where you type in the values for the properties, implement a simple color code. 3.5. EDITING PROPERTIES 93 Fields for properties that are not action properties are always displayed in white background. When you edit them, however, their background turns yellow, so to warn you that you are modifying them, but that the value has not yet been applied to the property. Only when you hit return, thus accepting the value, gets the background white again. For this reason, don’t leave yellow backgrounds behind! Actions properties are different. Since their values are only really used when the simulation is generated, whatever you type in the corresponding field is immediately valid. For this reason, their background never turns yellow. However, since (differently to all other properties) the code you write for an action property can span more than one line, whenever this happens, the background of the field changes color to a sort of dim green, to warn you that there are possibly more lines than those you can see in the field. You can therefore leave green backgrounds behind. 
And we recommend that you always use the editor provided to view and modify the corresponding code.

3.5.4 Special case: properties of type String

When we assign manually the value of a property, there may exist a small ambiguity in the case of properties that require text values. Indeed, suppose that we edit the "Title" property for a frame and write in it the value "graph", and suppose that we have defined a variable of String type in our model which is called precisely graph. In this situation, what should Ejs do? Should it use the text "graph" for the title of the frame, or should it use the value of the variable graph for it? Even if you think that the example is a bit forced, it sometimes happens. In principle, Ejs will almost always do what we expect it to do, but if this is not the case, we need to solve the problem either by changing the name of the variable of the model or by using the following additional rules:

• If we write the text between quotes (simple or double), we force the element to use the text literally. In our example, we should write either 'graph' or "graph".

• If we write the text between per cent characters, %, we force the element to consider the text as the name of a variable. In our example, we would need to write %graph%.

The use of these rules helps Ejs to completely resolve the ambiguity. The specialized editors make automatic use of these rules.

3.6 Learning more about view elements

Well, this is actually all you need to know, in general, about building views with Easy Java Simulations! Of course, you will need to learn more about the different types of elements that exist and how they can be used, together with the possible values of their properties, to achieve different visualizations and interactive capabilities. But this is learned in a more effective and more pleasant way through examples. For this reason, we end here the general description and refer to the examples distributed with Ejs to see practical cases of use. Recall also that on the Web server of Ejs you will find a set of HTML pages describing all the elements that Ejs offers. Do not hesitate to explore these pages on your own to learn more about a given type of element.

We end this chapter with a concrete example of use that also has interest in itself. Although, usually, the model of the simulation contains the greatest part of its dynamic information, the view we build with Ejs can have a life of its own. Its degree of utility will depend on the capabilities of the graphical elements that we use to build it. We will illustrate this life of the view by constructing a simple function plotter using an element called "AnalyticCurve" as the main component of the view. This plotter will visualize the graph of a function f(x) which depends on three parameters. To reinforce all that we explained in this chapter, we will guide you through the whole process.

Main window

Let's start with an empty simulation. For this, click the "New" icon in Ejs' taskbar; this will delete any previous simulation. Now, choose the panel for the "View" and create an element of type "Frame". For this, click on the first icon of the subpanel for containers, and click with the magic wand on the root node of the tree of elements, the one called Simulation View. Accept for the new element the proposed name Frame. The result will look as in Figure 3.13. Now edit the properties of the frame to change its title to Function plotter.
The panel of properties will look as in Figure 3.14. Notice that, as we said in Subsection 3.5.3, when you modify a field, it takes a yellow background (except for action properties). Don’t forget to hit the return key of your keyboard to accept the value, so that the background looks white again. Only then will the change really affect the element’s property. 3.6. LEARNING MORE ABOUT VIEW ELEMENTS 95 Figure 3.13. Initial tree of elements and the non-edited frame. For the figures of this chapter, though, we have left the fields we edited with the yellow background, so that you can easily distinguish which fields have been modified. Figure 3.14. Panel of properties for the main window. In this new frame, which has by default a border layout, we will create two children. The one at the center will be a plotting panel and the one in the Down position will hold the controls for the parameters of the function. Plotting panel Select the icon for the type “PlottingPanel”, , and click with the magic wand on our frame. Accept the proposed name and place the new panel on its parent’s center. A big panel with axes will be created and will occupy the interior of the frame. Edit the properties of the panel to change its “Title” to Graph of the function, its “Title X” to x, and its “Title Y” to f(x). 96 CHAPTER 3. BUILDING A VIEW Now, from the subpanel of drawable elements, select the tab header labeled as “Bodies”. From the icons in the corresponding subpanel, select the one for the type “AnalyticCurve”, and create an element of this type as child of the plotting panel we created above. The name suggested by Ejs is acceptable. The element “AnalyticCurve” creates the graph of a curve in space (that is a set of points (x,y,z)) from the analytic formula provided to compute these points. In this sense, it is one of the elements offered by Ejs that incorporates more computational power independently of the model. For more information about the element “AnalyticCurve” consult its reference page. Edit the properties of this new element as shown in Figure 3.15. With this we are asking the element to display a blue colored graph made of 50 points with coordinates (x,%Fx%), where these expressions for the X and Y coordinates are interpreted as functions of the “Variable” x which varies in the interval from 0 to 2*π. This finally results in the graph of the function of variable x, given by the expression contained in the text variable Fx. Figure 3.15. Panel of properties for the analytic curve. The tree of elements for our simulation will reflect the changes and will now look as Figure 3.16 shows. You will notice that the plotting panel now displays a horizontal blue line. This is the graph of the function we are plotting. The reason that it is just a horizontal line is that we haven’t yet defined the text variable Fx. Control panel In order to define this variable, as well as other parameters that may help defining the function, we will create a panel that will hold several controls. In the first place, select the “Panel” icon, , in the panel of containers, and click with the magic wand on Frame to create an element of type “Panel” as second child of it. The name 3.6. LEARNING MORE ABOUT VIEW ELEMENTS 97 Figure 3.16. Tree of elements (under construction) and panel displaying the graph. suggested by Ejs, Panel, is acceptable. Place the panel in the “Down” position of the frame. We won’t need to edit the properties of this element. 
Notice that, although the panel appears in the tree of elements, it is not visible in the frame itself. This is because the panel is still empty, and it therefore doesn’t claim any screen space from its parent. As soon as we add children that do claim for space, the situation will change. The two children that we will add to the panel will be again elements of type “Panel”. Since this icon is probably still selected, create two new elements of this type as children of Panel with names PanelFx and PanelParameters. Place them in the positions “Up” and “Center”, respectively. Even when the frame still looks the same, the tree of elements should look like in Figure 3.17. Figure 3.17. Tree of elements We will only need to modify the layout property under construction. of the element PanelParameters so that it takes the form of a grid with 0 rows and 1 column (that is, a column as long as needed). For this type grid:0,1 in the “Layout” text field of the panel of properties for PanelParameters, or use the specialized editor for layouts by clicking the auxiliary edition button, , for this property. Now, create a label, , on the left side of the panel PanelFx (the proposed name will do) and edit its “Text” property typing f(x)= in the corresponding field. In the center of this panel, create a text field, , called FieldFx and edit its properties as in Figure 3.18. With this we are instructing the text field to associate whatever we 98 CHAPTER 3. BUILDING A VIEW write on it with the variable Fx. We also give an initial value to this variable using the property called “Value”, typing in it the function we want to visualize, using the correct Java syntax: a*Math.sin(b*x+c). Figure 3.18. Properties for FieldFx. Observe that the function we are suggesting (of course, you can use any other one) contains three variables a, b and c, that have not been defined (x in the expression refers to the independent variable with respect to which the graph is built). We will need to give non-zero values to these parameters so that the curve is interesting. We will do this using the PanelParameters panel. Choose now the element of type “Slider”, , and create three such sliders in this panel with names SliderA, SliderB and SliderC. Since the layout of the parent places the elements in form of a column, you will not be prompted to choose the position for the new elements. This takes us to the final structure of the tree of elements for our view, which you can see in the left image of Figure 3.19. We will still need to customize the sliders, but since the creation of new elements is limiting the available space for the original plot (and the customization we foresee for the sliders will make these even bigger), we will resize the main window. For this, edit again the properties for Frame and change its size to 300,500. The final result should look as in the right-hand side image of Figure 3.19. To finish the example, edit first the properties of SliderA as in Figure 3.20. The most important fact of this customization is that the slider will allow us to modify the variable a in a range from 0.0 to 5.0, giving to a an initial value of 1.0. (The value a=0.00 for the property “Format” doesn’t assign to a any value, it just indicates how the value of a should be displayed; in this case, with two decimal figures.) Finally, edit the properties of SliderB exactly as those above but changing a by b, and those of SliderC as shown in Figure 3.21. Using the plotter The example is now complete. (Don’t forget to save it to disk!). 
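A remark before running it: whatever is typed in the f(x)= field must be a valid Java expression, just like the initial value a*Math.sin(b*x+c). Purely as illustrative suggestions (any combination of x, the parameters a, b, c, and the methods of Java's Math class will do), expressions such as

a*Math.exp(-c*x)*Math.sin(b*x)     (a damped oscillation)
a*x*x + b*x + c                    (a parabola)

are equally acceptable.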
Now, if you run it, the graph displayed in the upper panel will correspond to the sine function. Play with the sliders for a, b, and c to see the effect of each of the parameters on the graph of the curve. You can also introduce different expressions in the text field for the function to plot (respecting the correct Java syntax), which may or may not use the parameters. See Figure 3.22.

Figure 3.19. Final tree of elements (left) and frame still under construction.

Figure 3.20. Properties for SliderA.

Figure 3.21. Properties for SliderC.

Figure 3.22. Graphs of two functions using our plotter.

We would like to emphasize that this example shows how the view can have a life of its own, independently of the model. In our example, in fact, we created no model at all, and all the variables have been autonomously created by the view. However, if you now create the variables a, b, and c in the table of variables of the model and modify them in the body of it, the variables of the view will always use the corresponding values, thus automatically establishing a connection between them thanks to their common name. A corollary of this is that we can create simulations in which the view defines and uses variables that don't need to have been declared by the model. However, these variables are usually of an auxiliary nature; any really interesting behavior will undoubtedly be based on the definition and modification of variables in the model.

Exercises

The subdirectory examples/Manual/View of your Simulations directory contains the complete example of this chapter under the name FunctionPlotter.xml.

Exercise 3.1. Create a simple model for the example of Section 3.6 that defines the variables a, b, and c, and that gives them an interesting use. For instance, one in which the evolution consists in increasing, 20 times per second, the variable c by, say, 0.1 units. The result must be a traveling wave that moves to the left. (A sketch of such an evolution page is shown after these exercises.)

Exercise 3.2. If you look again, in Figure 3.15, at the table of properties of the element AnalyticCurve, you will observe that you can provide an analytic expression for the three coordinates X, Y, and Z. This means that you can easily create a plotter for parametric curves. That is, for curves in the plane of the form (x(t), y(t)) (or of the form (x(t), y(t), z(t)) for curves in space). Create a plotter for planar (or spatial, if you prefer) parametric curves starting from the example in Section 3.6.

Exercise 3.3. If you feel inclined to work in three dimensions, create a plotter for functions of two variables f(x, y) using the element "AnalyticSurface". (You will need to use "DrawingPanel3D" instead of the two-dimensional "PlottingPanel".)

Exercise 3.4. If you still feel inclined to it, create a plotter for parametric surfaces (x(u, v), y(u, v), z(u, v)) in general, and use it to display different surfaces such as a sphere, a cylinder, a hyperbolic paraboloid, a Möbius band, etc.

Exercise 3.5. Create a model for the simulations of the two previous exercises that allows you to animate the resulting graphs to simulate waves in two and three dimensions, or dynamic changes in the surfaces.

You will find the solution to some of these exercises in the directory cited above.
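As announced above, here is a sketch of one possible solution to Exercise 3.1 (only a hint, using the variable names of the example; the value of 20 frames per second is set in the evolution panel in the usual way). The evolution page of such a model needs little more than the single Java statement:

c = c + 0.1;   // shift the phase; the linked view elements update automatically

Repeated 20 times per second, this produces the traveling wave described in the exercise.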
CHAPTER 4 Using the simulation c 2005 by Francisco Esquembre, September 2005 As soon as you create your own simulations, you will most likely be interested in using them in your lectures or sharing them with your colleagues. You may even want to publish them on the Internet! This chapter will teach you how to do this. 4.1 Using your simulations Simulations created with Easy Java Simulations can be run in three different forms. The first form is using Ejs itself, in a way similar to what we did in Chapter 1. The second form consists in executing the simulation as an applet, using a Web browser. The third form consists in running the simulation as an independent Java application. The first option has an important pedagogical advantage: the user can see how the simulation was created, learning from the knowledge that you used when building the simulation. This offers, in our opinion, a great added value. But we must admit that it also has a small inconvenience: the user must have Ejs installed in her computer and need to know how to use it, at least in a basic way. Even when we may agree that Ejs is a wonderful tool1 which, furthermore, is free for distribution, you, most likely, don’t want that all your users need to install it and 1 We, proud authors, do think so!^ ¨ 103 104 CHAPTER 4. USING THE SIMULATION learn its use just to be able to run a simulation. Fortunately, simulations created with Ejs are, once generated, independent from it. Thus, when this situation appears, we recommend to run the simulation in form of Java applet. A Java applet is a special type of application that has been designed to be run inside a Web browser. The browser loads an HTML file (HTML stands for Hyper Text Markup Language) which indicates the browser that it must run the required applet inside its own window. This HTML file and the corresponding applet can be located on your hard disk or in an Internet location and be served to you through the network by a so-called Web server, a computer that runs this special service. A second reason for this recommendation is that the HTML pages that Ejs creates for the applet can be modified so that to complete the narrative that you included in the introduction for the simulation. This allows you to create a complete set of pedagogical units that can be used for your teaching duties. Finally, it is also possible to include in this narrative controls of a simple script language called JavaScript that integrate the text with the control of the simulation in a very natural way. 2 The third form of execution of simulations, as independent Java applications, is also possible without installing Ejs. The user only needs to have a Java Virtual Machine (such as that provided by the Java Runtime Environment) installed in her computer. Simulations running as applications can be installed on the hard disk, or can also be served through the network by a Web server using Java Web Start technology. We describe this technology further below. An important reason to use this third form is when you want to run (independently of Ejs) a simulation that saves data to the hard disk. As we explained in Subsection 2.7.3, simulations created with Ejs can, if the author decided to offer this possibility, read data from the hard disk or from the Web, independently of the form in which they run. 
However, to first create this data, it is necessary to be able to write on disk, which, for security reasons included in Web browsers, is only possible when you run the simulation from within Ejs or as an independent application. (In this second case, the applications must be installed in your hard disk. Java Web Start includes a security system similar to that of applets.) Distribution of a simulation within Ejs Distributing a simulation created with Ejs for other people to use it with Ejs is immediate: you just need to provide your user with the simulation file (the one with 2 Although it is out of the scope of this manual the complete description of the format of HTML files and of JavaScript, we will provide a basic introduction which is enough to illustrate their use with Ejs. You can surely find appropriated books on these topics in your usual technical bookstore. 4.2. FILES GENERATED FOR A SIMULATION 105 extension .xml) together with any auxiliary files, such as images or data, that the simulation may need. Hence, for instance, for the simulation of Chapter 1 (which uses no additional data file), you’ll only need to provide the file called Spring.xml. Your user will need to know how to start Ejs, load this file, and work with it, similarly to what you did in that chapter. The rest of this chapter is devoted to explain how to run the simulations in any of the other two forms, as well as to teach you how to correctly distribute the simulations created with Ejs independently from it. Copyright and disclaimer This is a good moment to remind you of the copyright restrictions about Easy Java Simulations and the simulations created with it. Good news! There are very minor restrictions. Here is a copy of the conditions of use of Ejs. Easy Java Simulations is the exclusive copyright of its author, Francisco Esquembre, who distributes it under a GNU GPL license. Examples of simulations included with Ejs are copyright of their authors. Easy Java Simulations and its JAR library files can be copied and distributed without limit and without previous permission, provided this is done for non-commercial reasons. In any case, the reference to the author that appears in the program must always be preserved. Authors can distribute any simulation that they create using Easy Java Simulations, provided a reference to this fact is included, together with a link to the official Web page for Easy Java Simulations,. It is not necessary to include this in all pages with a simulation on it of an educational unit; including a reference for the whole set in a clearly visible place will suffice. Any publication that results from the use of Easy Java Simulations must refer to the official Web server for Ejs,. Finally, a short disclaimer. Although Ejs has gone through a long sequence of tests, Easy Java Simulations is provided ’as is’. That is, we refuse any responsibility for any problem or damage caused by the use of the software. 4.2 Files generated for a simulation We need to describe briefly what files are created when we run a simulation with Ejs, right before the simulation actually appears on the computer screen. We will illustrate the process using the example of Chapter 1. 106 CHAPTER 4. USING THE SIMULATION Recall that the name of that example was Spring.xml. After running the example you will find the following files in your Simulations directory: Spring.java and SpringApplet.java These are the files that contain the Java code for the simulation and its applet. 
Recall that Ejs is, after all, a code generator: it collects all the information that you provide to it in its different panels and generates the necessary Java code. There is a long way from the information you provide to the final result, but Ejs takes care of everything automatically. The Java code contained in these files unveils some of our secrets and you can inspect it if you want. However, only a Java programmer with particular needs would be interested in modifying the code directly. You can therefore, after all, safely instruct Ejs to delete automatically these files after using them. Actually, it is very likely that you don’t find these files in your Simulations directory, to begin with. This is because Ejs is configured by default to delete them after generating the simulation. We mention them here because they are important to understand how Ejs works. See Subsection 4.3 to learn how to configure Ejs to not to delete them. spring.jar This is the final file produced by the Java compiler when it processes the previous files. Well. . . , not quite. Compilation can produce a lot of small files that Ejs groups on a single file that it then compresses to facilitate the distribution. This file is, for our example, precisely spring.jar. This file is a (Java) self-executable file. Thus, if your system has the Java Runtime Environment installed, you can, most likely, run the simulation by double-clicking on the JAR file. (Read however, the distribution notes of Section 4.6.) Spring.html This is the main HTML file generated by Ejs in order to run the simulation in form of a Java applet. 3 We say that this is the main HTML file because, depending on the configuration options of Ejs (see Subsection 4.3) and on the number of pages of introduction that we wrote for the simulation, Ejs generates a set of more or less HTML pages that all start with the name Spring. We will describe in more detail the contents of this main HTML file in Section 4.4. Spring.bat This is, finally, an auxiliary file that can be used to run the simulation as an application, if double-clicking the JAR file fails. It will be discussed in detail in Section 4.5. 3 It is possible that Ejs is configured not to generate HTML files. Although it is not by default. See Subsection 4.3. 4.3. EJS CONFIGURATION OPTIONS 107 You will also find in your working directory the library subdirectory, other directories with examples, and, probably, also some other files which resulted from the previous execution of other simulations. The library subdirectory contains Java libraries that are necessary for the execution of the simulation and they will need to be distributed along with the files for the simulation itself, as we will see in Section 4.6. 4.3 Ejs configuration options The behavior of Ejs can be modified a little bit using the configuration panel displayed in Figure 4.1. Because some of the options affect the files that Ejs creates along with the simulation, we describe all these options here. Figure 4.1. Ejs’ configuration options. We access this panel by clicking on the icon in Ejs’ taskbar. The options are grouped in blocks and their values are saved at the end of every session. The first block allows us to configure the initial aspect of Ejs. The second block helps us determine which files are generated for the distribution of the simulation from a Web server. The final block concerns the creation of other generated files for the simulation. 108 CHAPTER 4. 
USING THE SIMULATION Options which affect the appearance of Ejs The first option in the first block lets us to choose the location in which Ejs appears at the beginning of a session. The options are the center of the screen, the upper-left corner of it, or the current position. This last option allows us to select any location on the screen. The second option lets us choose the default font for the code editors in Ejs. The third option allows us to select a predefined file as basis for future simulations. This can be useful if, for instance, all your simulations happen to have a similar basic structure for the view. In this case, you can create this structure beforehand, save it to a file, and indicate the name of this file in this configuration field. This way, every time you click on the “New” icon of Ejs’s taskbar, Ejs will remove the current simulation and will automatically load this file. This feature can be of special interest to facilitate your students the creation of standard views. A fourth option in the first block tells Ejs whether it should show or hide the so-called hidden pages of the model. See Section 2.2 for more details about what this means. The final option in this first block enables Ejs to connect to a special type of network server that can be used to exchange simulation files among users. This is still an experimental development. But, typically, this will enable a new icon in Ejs’ taskbar that you can click to connect to the server specified in the corresponding “URL” field. We will provide more details about this in Ejs’ home page,, when it is ready for general use. Options for files for Web-based distribution of simulations As mentioned already, Ejs creates a set of HTML pages, all starting with the name of the simulation, with the purpose of including in them all the information provided by you in the “Introduction” panel, together with an extra page that embeds the simulation itself in form of a Java applet. It also creates, finally, a master page that organizes the structure of all other pages and serves as entry point for the set. The first option in this block allows you to choose the appearance of this master HTML page: either using frames with the table of contents on the left, similarly but with the table of contents located at the top, everything in one single page (which we only recommend if the introduction is really small), or even not to create HTML pages at all. This last possibility can help you keep your Simulations directory clean if you plan to use your simulations always from within Ejs, or run them as applications. The option labeled “HTML page <body” refers to the possibility of adding a parameter of your choice to the <body> tag of all HTML pages generated by Ejs. 4.4. RUNNING THE SIMULATION AS AN APPLET 109 This can be useful, for instance, to let you choose your favorite background color for the pages, or to include a background image in them. A third option in this same block lets you choose whether you want Ejs to generate a JNLP file for distribution using Java Web Start. In this case, you’ll need to provide the URL from which you will be serving the JNLP file. See Section 4.6 for more details. Finally, there is an option in this block that allows you to create a separate HTML file for use with eMersion. eMersion (see) is a collaborative tool created at the EPFL in Lausanne, Switzerland. Although this is still an experimental development, Ejs generates eMersion-enabled simulations. 
The use of Ejs with eMersion will be described in a separate document, once it is ready for general use. Options for the creation of other generated files Finally, we find a block with three options. The first option asks us whether or not we want Ejs to generate an execution BAT file for running the simulation from a system prompt. Typically, you will uncheck this option if running under Windows and Mac OSX operating systems, and will leave this checked for Linux (which are usually not configured by default to run JAR files by double-clicking on them). The second option in this block allows you to instruct Ejs to automatically delete the Java files that it creates when generating the simulation, after processing them. This can help you keep your Simulations directory clean. The final option lets you tell Ejs to generate all the files for a simulation in a subdirectory of the Simulations directory. This option can help you keep your Simulations directory organized and clean, but has a small inconvenience. Ejs can’t guarantee that all the data files that your simulation needs are copied to this subdirectory. For this reason, if you check this option, you’ll need to make sure by yourself that all data (specially image) files are present in this subdirectory, copying them by yourself if necessary. 4.4 Running the simulation as an applet As we said above, one of the possible ways to run a simulation created with Ejs, independently of it, is through the corresponding HTML page. When a simulation is successfully generated, Ejs creates (except if configured otherwise) a set of HTML pages for it. One of these pages, the one we called master HTML page, will be named exactly as the simulation, if only it will have the extension .html. 110 CHAPTER 4. USING THE SIMULATION You can run the simulation by simply loading this master HTML page in your favorite Web browser. We need to make an important remark, though. Both Ejs and the simulations it generates use Java components that require version 1.4 or later of Java. (Current release, at the time of this writing, is 1.5.0 04.) For this reason, your Web browser will need to support this version in order to successfully run the simulations. Unfortunately, even when Java is a well established standard, different commercial interests of the software companies involved, caused that, nowadays, some browsers only support, by default, versions of Java that we could call prehistorical. The good news is that updating your browser to more recent versions is not only easy, but also free. All you need to do is to download an updated version of the so-called Java plug-in and install it. If you followed the installation instructions of Section 1.2, then your browser has most likely already been updated. If, on the contrary, your browser doesn’t visualize properly the simulations generated by Ejs, please refer to the installation instructions for Ejs, which you will find in the Web server for Ejs. (If you got Easy Java Simulations from a CD ROM, please look there for this information, too.) You can also find the necessary software in the Web server of Sun Microsystems,. The recommended Java version is 1.5.0 04 or later. 4.4.1 The content of the HTML files Ejs generates an HTML file for each of the pages of the introduction, which contain basically what you wrote in them. Ejs creates also a master page that organizes all other pages by using frames, one of which displays a table of contents. 
Finally, Ejs generates an HTML page that includes the simulation in form of Java applet. The name of this page is that of the simulation followed by the word Simulation and by the extension .html. Thus, for instance, the file created for the example of Chapter 1 is called SpringSimulation.html. We now describe the contents of this page. 4 The file SpringSimulation.xml has three different parts, that we list separately. The result of loading this file on a Web browser is shown in Figure 4.2. The first part of the file consists of a simple header, necessary in all HTML pages: <html> <head> <title> Home page for Spring</title> 4 In case you configured Ejs to generate a single HTML page, the content of this file will appear inside that page. 4.4. RUNNING THE SIMULATION AS AN APPLET 111 Figure 4.2. HTML page with the simulation. </head> <body bgcolor=’#C8DFD0’> This header declares the file as an HTML page, gives it a title, and opens the section <body> of the page, where the real content is. You will recognize in this header the name of the simulation in the <title> tag, and the value of the <body> tag indicated in Ejs’ configuration panel. The second is the most important part of the page, since, after a short introductory message (which you can remove, if you wish): The simulation’s view should appear right under this line.<br> we find the <applet> tag, used to embed the simulation in the HTML page: <applet code="spring.SpringApplet.class" codebase="." archive="_library/ejsBasic.jar,spring.jar" name="Spring" id="Spring" width="315" height="248"> </applet> 112 CHAPTER 4. USING THE SIMULATION This instruction tells the browser to load the applet called SpringApplet, which Ejs generated for our simulation. The tag also tells the browser to display the simulation with the size it had in our view. In this case, 315 pixels wide and 248 pixels high. Please pay attention to the second line of the <applet> tag above. The parameter called archive lists the files that the applet will need to run correctly. In this case, these are the JAR files generated for the simulation, spring.jar and the basic library of Ejs, ejsBasic.jar. Their location is indicated relatively starting from the directory indicated by the parameter codebase which, in this case, is “.”, that is, the same directory where the HTML page is. For this reason, the library directory must always be present in the Simulations directory. However, you can easily modify, if you want, any of these parameters, codebase or archive, in order to place your JAR files in any other location. Pay attention too, finally, to the third line of the <applet> tag. This line provides a name for the simulation inside the HTML page through the use of the parameters name and id, 5 which we will use in the next paragraph. 4.4.2 Control of the simulation using JavaScript The third part of the content of the page is made of the following lines: <!--- Finally the JavaScript buttons ---> <br><hr width="100%" size="2"><br> <p>You can control it using JavaScript. 
For example, using buttons:</p> <p> <input type="BUTTON" value="Play" onclick="document.Spring._play();";> <input type="BUTTON" value="Pause" onclick="document.Spring._pause();";> <input type="BUTTON" value="Reset" onclick="document.Spring._reset();";> <input type="BUTTON" value="Step" onclick="document.Spring._step();";> </p><p> <input type="BUTTON" value="Slow" onclick="document.Spring._setFPS(1);";> <input type="BUTTON" value="Fast" onclick="document.Spring._setFPS(10);";> <input type="BUTTON" value="Faster" onclick="document.Spring._setFPS(1000);";> </p> </body> </html> in which you will observe that the name of the simulation appears several times, exactly as indicated by the parameters name and id above. 5 We need both parameters for compatibility reasons with the two main browsers. 4.4. RUNNING THE SIMULATION AS AN APPLET 113 The lines with the </body> and </html> tags simply close the page. The most interesting lines are those with the <input> tag, that show how we can define JavaScript buttons that interact with the simulation. JavaScript is a simple script language supported by most Web browsers that allows a basic level of programming inside an HTML page. Even when we do not describe JavaScript in detail here, we will show how to use it to control the simulation in an additional way to the use of its user interface. JavaScript can access the simulations created with Ejs to: • Invoke any of the predefined methods of Ejs. • Invoke any of the custom public methods that we may have defined in the model. • Set or read the value of any of the variables of the model. JavaScript control using buttons The procedure is always similar to the one shown above. We first need to include an entry button using the <input> tag: <input type="BUTTON" value="Play" onclick="document.Spring._play();";> where the value of the parameter value indicates the text that will be displayed by the button in the page (in our case, Play), and the value of the parameter onclick, the method that must be invoked . In this case, the method is the predefined method play(). The prefix document.Spring must be included exactly as shown, since it identifies the object (Spring) in this page (document) that defines the method. The result is that clicking the button labeled Play of Figure 4.2 will play the simulation. Thus, for instance, if we had a public method called myMethod(), defined in the panel of custom methods of Ejs for the model of this simulation, we could include a JavaScript button that will invoke it, using the following <input> tag (notice the inclusion of the word _model): <input type="BUTTON" value="Invoke my method" onclick="document.Spring._model.myMethod();";> JavaScript control using hyperlinks An alternative way of using JavaScript is through the HTML tag for hyperlinks, <a>. The same example as above would now read as follows: 114 CHAPTER 4. USING THE SIMULATION To invoke my method, <a href="JavaScript:document.Spring._model.myMethod();"> click here </a> Methods with parameters If the method that you want to invoke accepts parameters (such as setDelay(int delay) or readState(String filename), for instance) simply include these parameters between the parentheses of the method’s invocation. 
In the particular case that the parameter is a text, you will need to delimit it using simple quotes, as in: <input type="BUTTON" value="Read State" onclick="document.Spring._readState(‘url:data.dat’);";> This example requires that we previously create the file data.dat (invoking the method saveState("data.dat")) which must be located in the same directory of the hard disk or Web server from which the HTML file for the simulation has been loaded. Warning: The example may not work with some configurations when reading the simulation from the hard disk, depending on how strict the security policy of your Web browser is. Reading data from a Web server should cause no problems, though. A particularly interesting method is setVariables(String command), that can be used to set the value of one or more variables of the simulation. Thus, to set the initial conditions x = 0.5 and vx = 0 for the spring, we would use the instruction: <input type="BUTTON" value="Initial conditions" onclick="document.Spring._setVariables(’x=0.5; vx=0;’);";> Of course, the model must define the corresponding variables. Notice how the individuals instructions are separated by semicolons. 4.4.3 Passing parameters to the applet Every applet has a built-in way of reading parameters from the <applet> tag. For this, you need to give the parameter a name and a value and include a <param> line between the lines which contain the <applet> and </applet> keyword. For instance, suppose we want to pass our simulation the parameter called MyParameter with the value 10. The corresponding applet tag for the Spring simulation would be: 4.5. RUNNING THE SIMULATION AS AN APPLICATION 115 <applet code="spring.SpringApplet.class" codebase="." archive="_library/ejsBasic.jar,spring.jar" name="Spring" id="Spring" width="315" height="248"> <param name="MyParameter" value="10"> </applet> Note the inverted commas that surround the parameter’s name and value. To get the value of this parameter in the simulation, you’ll need to invoke, in any suitable part of it, the predefined method String _getParameter(String name) (see Subsection 2.7.3). It is important to notice that this method returns a String. You should take care of properly extracting the information, if this is a numeric value, from the returned string. Notice also that, if the parameter is not defined, the _getParameter() method returns a null string. For the example above, a correct (and safe) way of processing a parameter would be as follows: double value = 0.0; // default value for the variable String parameter = _getParameter ("MyParameter"); if (parameter!=null) value = Double.valueOf (parameter); Finally, note that you can define as many parameters and include as many <param> lines in the <applet> tag as you need. 4.4.4 Writing your own HTML code Once we have seen the basic content that your HTML needs in order to include the simulation as an applet, and the format of the possible JavaScript commands to control it, you can modify your HTML page in any way you find convenient. In particular, if you know how to write HTML code or you have an specialized editor, you can enrich your Web pages for the simulation with new narrative or with detailed instructions of use of the simulation. 4.5 Running the simulation as an application The third of the forms in which you can run a simulation is, as we said in the introduction for this chapter, as an independent Java application. For this, however, your computer must have a Java Virtual Machine installed. 
If you followed the installation instructions of Chapter 1, then there is one already installed in your 116 CHAPTER 4. USING THE SIMULATION computer. If you didn’t, please read the installation instructions for Ejs (see Section 1.2) or get a Java Virtual Machine from the Web server of Sun Microsystems,. The recommended version is, again, 1.5.0 04 or later. The file you need to run the simulation as an application is Spring.bat. If we inspect its contents, 6 we find the following lines: "C:\Program files\Java\jdk1.5.0_04\jre\bin\java" -jar itt.jar This single line calls the run-time engine of Java (which in our system is in the directory C:\Program files\Java\jdk1.5.0 04\jre\bin) telling it to run the simulation contained in the self-executable JAR file spring.jar. Recall that the file is self-executable, but that it requires the library subdirectory to be present in the same directory as spring.jar. This dependence is specified in the so-called manifest file inside the spring.jar file. Advanced programmers can edit this manifest to change the location of the library directory. If you execute the file Spring.bat in the standard way for your operating system, a window will appear displaying the view of the simulation. 4.5.1 Running the simulation using Launcher Launcher and LaunchBuilder are two tools, created by Doug Brown and included in the Open Source Physics library distributed with Ejs, that can be used to organize a set of self-executable JAR files, and to provide an end-user with a single window from where to run them. We don’t describe any of these tools in detail here (interested readers can read the OSP Guide –consult) since, actually, they are very easy to use. But we need to mention that Easy Java Simulations includes both tools to help you organize the simulations you create with it. This is specially useful if you want to distribute a number of simulations in a single directory. These tools can help you provide a single, documented, and easy to use, entry point to all of them. For this, once you have generated the simulation, or simulations, you want, start LaunchBuilder using the button provided by Ejs’ console or running one of the script files LaunchBuilder.bat (for Windows) or LaunchBuilder.sh (for Mac OS X and Linux). This creates automatically in your Simulations directory two files called Launcher .bat and Launcher .osp which can be used to organize your simulation JAR files. 6 We illustrate here only the file which corresponds to Windows operating system. The files for other operating systems are equally simple. 4.6. DISTRIBUTION OF SIMULATIONS 117 Actually, after creating the files, LaunchBuilder appears on the screen to help you customize this organization. (Again, the use of LaunchBuilder is rather natural, but you can refer to the OSP Guide for more information.) See Figure 4.3. Figure 4.3. LaunchBuilder interface for the simulations of Chapter 1. Once the edition is finished, you can run your simulations from Launcher by double clicking the Launcher .jar file that has been now created in the Simulations directory. The interface for Launcher is displayed in Figure 4.4. Figure 4.4. The final Launcher interface for the same simulations. The Launcher .jar and the Launcher .osp files can be distributed together with the simulations (see below) to help your end users run the simulations. 
4.6 Distribution of simulations As we have said several times, simulations created with Ejs are independent of it, if only they need some libraries that include the graphic elements and other required 118 CHAPTER 4. USING THE SIMULATION Java classes. These libraries are not part of the standard Java distribution, because they have been specially created for Ejs to simplify the construction of simulations and to help add specific functionality without much user effort. Because of this, when you distribute your simulations, you need to remember to include these libraries in the distribution package. This, fortunately, is very easy to do. To distribute any simulation created with Ejs, you just need to provide the files generated by Ejs for the simulation, as we saw in Section 4.2, together with the library directory (with all its contents). Also, if you designed your simulation to use additional files (such as GIF images, sound, or other data files), you’ll need to provide these too. Simulations created with Ejs can be distributed using any of the following ways: From a Web server You’ll need to copy the JAR, HTML, and auxiliary files for the simulation in a directory of your server. Copy also the library directory into the same directory as your simulation on the Web server. In a CD-ROM This procedure is similar to the previous one. Just copy the JAR, HTML, and auxiliary files, and the library directory to the CD. As mentioned above, you can also include properly configured copies of the Launcher .jar and Launcher .osp files, so that your users can use them to run the simulations as applications. Using Java Web Start This is a technology created by Sun to help deliver Java applications from a Web server. The programs are downloaded from a server at a single mouse click, and they install automatically and run as independent applications. Easy Java Simulations is prepared to help you deliver your simulations using this technology. More precisely, Ejs can automatically generate a Java Web Start JNLP file for your simulation. For this, in Ejs’ configuration options (see Subsection 4.3) click on the “Create JNLP file for Java Web Start” check box and type in the file below it the URL of the directory (only the directory!) from which you will be serving this JNLP file. Now, when you generate the simulation, a new file with the name of the simulation, but with the extension jnlp, will be created. Copy this file along with the other files generated for the simulation to the directory of the Web server you specified. Finally, your server will need to report, for any file with the extension jnlp, a MIME type of application/x-java-jnlp-file. Not all servers are configured to do this by default. The way to do this depends on your browser, but, for instance, on the Apache server you just need to stop the server, edit the conf/mime.types file to add a line like the following: application/x-java-jnlp-file JNLP 4.6. DISTRIBUTION OF SIMULATIONS 119 and re-start the server again. If you distribute more than one simulation on a CD or from the same Web server, you can install a single copy of the library directory, as long as all the execution files for your simulations correctly point to that directory. This can be achieved by editing the <applet> tags (for applets) or the manifest files of the JAR files (for applications) in a convenient way. However, we recommend you to take the easy way out and just copy all the simulations in the same directory. 
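Returning for a moment to the Java Web Start option above: the JNLP file is generated for you, but it can help to know roughly what such a descriptor contains. The sketch below is only illustrative; the codebase URL, title and class names are placeholders, and the file Ejs actually writes for your simulation will differ.
<?xml version="1.0" encoding="utf-8"?>
<jnlp spec="1.0+" codebase="http://www.example.org/simulations/" href="Spring.jnlp">
  <information>
    <title>Spring</title>
    <vendor>Your name here</vendor>
  </information>
  <resources>
    <j2se version="1.5+"/>
    <jar href="spring.jar"/>
    <jar href="_library/ejsBasic.jar"/>
  </resources>
  <!-- the main class below is a placeholder; use whatever Ejs generates -->
  <application-desc main-class="spring.SpringApplet"/>
</jnlp>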
Finally, don’t forget to tell your users to install a Java Virtual Machine, or the necessary Java plug-in for the Web browser. The recommended version is 1.5.0 04 or later. Exercises Exercise 4.1. If you have access to a Web server, place one of the simulations we created on the previous chapters in it. Check that the simulation can be executed using a Web browser (with the corresponding plug-in). Exercise 4.2. Modify the HTML for the simulation you published in this Web server, introducing in it some JavaScript controls, using both buttons and hyperlinks. Exercise 4.3. Create a complete didactic unit on resonances from the improved simulation of the spring that we created in Chapter 1. Distribute the resulting didactic unit on a CD or from a Web server.
I was wondering about what to use and any tutorials on linux network programing
I was wondering about what to use and any tutorials on linux network programing
Type on Linux command prompt:
man 2 socket
man 7 socket
or type man socket -a and 'q' as many times as needed to get an interesting page. sys/socket.h is the header needed.
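Not a full tutorial, but to give a feel for what those man pages describe, here is a minimal TCP client sketch in C (error handling trimmed; the address and port are placeholders):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* create a TCP socket */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* fill in the server address (placeholder IP and port) */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* connect, send a few bytes, close */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }
    const char *msg = "hello\n";
    send(fd, msg, strlen(msg), 0);
    close(fd);
    return 0;
}
The man pages above (socket(2), connect(2), send(2)) document each of these calls.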
There are also libraries to use sockets which work both in windows and unix like Common C++.
Socket programming HOWTO is Python related document.
Beej's Guide to Network Programming:
I am not using Dev-C++.
#!/usr/bin/env python
import sys;file=open(sys.argv[0]);print file.read();file.close()
C++ Makes you Feel Better
"Gravity connot be held reponsible for people falling in love"--Albert Einstein
Dispute Resolution Service
The Milestone Payment System offers protection to both clients and freelancers by giving equal control over created payments for awarded projects. With that said, in the event that your project does not go as planned, we have a Dispute Resolution Service available to you, which allows contesting the return (for clients) or release (for freelancers) of in progress Milestone Payments (meaning, those Milestones that are not yet released).
In all circumstances, we strongly encourage you to resolve project issues or disputes between yourselves rather than use this service. It is provided only as a last resort should you be unable to reach a mutual agreement.
If you get to a point where you need to use it, you can find the Dispute option in the dropdown button across the Milestone Payment that you wish to dispute. Clicking Dispute will direct you to the Disputes page where you can specify the exact amount you will be disputing from the Milestone Payment and your reason for filing a dispute.
4 Dispute Stages
We follow a systematic process in resolving disputes.
● Stage 1. This is the stage where the dispute was initiated and you are waiting for the other party to either accept your offer or challenge your claim.
● Stage 2. This is when both parties already entered the dispute. During this stage, both are encouraged to negotiate and settle the dispute.
● Stage 3. The dispute enters this stage if either party refuses to settle the dispute and when one pays the arbitration fee. This is your last chance to resolve the dispute between yourselves and also your final chance to submit evidence to support your case.
● Stage 4. Once both pay the arbitration fee, the case is transferred to our Dispute team. The team will review the dispute and they will be the ones to issue a final verdict. Dispute verdicts are final and irreversible.
Note: Users should continue to be responsive, and both should pay attention to an ongoing dispute as there is a time window set in most stages. If you fail to take action, you will automatically lose the dispute and the amount under dispute will be transferred to the other party’s account.
Read more about our Dispute Resolution Service policy here.
package org.netbeans.lib.cvsclient.command.editors;

import java.io.*;
import java.util.*;

import org.netbeans.lib.cvsclient.command.*;

/**
 * Data object created by parsing the output of the Editors command.
 * @author Thomas Singer
 */
public class EditorsFileInfoContainer extends FileInfoContainer {

    private final String client;
    private final Date date;
    private final File file;
    private final String user;

    EditorsFileInfoContainer(File file, String user, Date date, String client) {
        this.file = file;
        this.user = user;
        this.date = date;
        this.client = client;
    }

    public File getFile() {
        return file;
    }

    public String getClient() {
        return client;
    }

    public Date getDate() {
        return date;
    }

    public String getUser() {
        return user;
    }
}
Start ArangoDB on Google Kubernetes Engine (GKE)
In this guide you’ll learn how to run ArangoDB on Google Kubernetes Engine (GKE).
Create a Kubernetes cluster
In order to run ArangoDB on GKE you first need to create a Kubernetes cluster.
To do so, go to the GKE console. You’ll find a list of existing clusters (initially empty).
Click on CREATE CLUSTER.
In the form that follows, enter information as seen in the screenshot below.
We have successfully run clusters with 4 1-vCPU nodes or 3 2-vCPU nodes. Smaller node configurations will likely lead to unschedulable Pods.
Once you click Create, you'll return to the list of clusters and your new cluster will be listed there. It will take a few minutes for the cluster to be created.
Once your cluster is ready, a Connect button will appear in the list.
Getting access to your Kubernetes cluster
Once your cluster is ready you must get access to it.
The standard Connect button provided by GKE will give you access with only limited permissions. Since the Kubernetes operator also requires some cluster-wide permissions, you need “administrator” permissions.
To get these permissions, do the following.
Click on Connect next to your cluster. The following popup will appear.
Click on Run in Cloud Shell. It will take some time to launch a shell (in your browser).
Once ready, run the gcloud command that is already prepared in your command line.
You should now be able to access your cluster using kubectl. To verify, try a command like:
kubectl get pods --all-namespaces
Installing kube-arangodb
You can now install the ArangoDB Kubernetes operator in your Kubernetes cluster on GKE.
To do so, follow the Installing kube-arangodb instructions.
Deploying your first ArangoDB database
Once the ArangoDB Kubernetes operator has been installed and its Pods are in the Ready state, you can launch your first ArangoDB deployment in your Kubernetes cluster on GKE.
To do so, follow the Deploying your first ArangoDB database instructions.
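For orientation, the deployment itself is described by an ArangoDeployment custom resource. The exact manifest to use comes from those instructions; a minimal sketch (the name and settings below are only examples) looks roughly like:
apiVersion: "database.arangodb.com/v1alpha"   # version may differ; follow the linked instructions
kind: "ArangoDeployment"
metadata:
  name: "example-arangodb-cluster"
spec:
  mode: Cluster              # Single and ActiveFailover modes also exist
  externalAccess:
    type: LoadBalancer       # works on GKE, see the note below
Apply it with kubectl apply -f and wait for the pods to reach the Ready state.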
Note that GKE supports Services of type LoadBalancer.
I would like to create toggles that look like regular buttons, or would like regular buttons to act as Toggles using the new Unity3D GUI (post 4.6) I'm on version 5.1.3. I've been searching and cannot find any documentation or tutorials that show how to do this in the new GUI. All Unity3D documentation I've found only shows how to do this in the legacy UI. I've already gone through this as well, in hopes of finding the answer: and the 2nd and 3rd part as well, but it doesn't show what replaces the toggle Using button style...
Hopefully a simple solution. Thanks in advance!
I've figured out how to make a Button act like a Toggle with the below code in my UIManager C# Script, and assigning (in the INSPECTOR) the GameObject "Pressed" to the UIManager script. HOWEVER: The button still appears like a button. When I press it, it momentarily changes color, but does not stay the same color. I am thinking I need to fetch a different image from within the script?
using UnityEngine;

public class TabletUI : MonoBehaviour
{
    public GameObject Pressed;

    private bool toggle = false;

    // Hooked up to the Button's OnClick event; flips the "Pressed" object on and off
    public void PB_1()
    {
        if (!toggle)
        {
            Pressed.SetActive(true);
            toggle = true;
            Debug.Log("Sirens ON");
        }
        else
        {
            Pressed.SetActive(false);
            toggle = false;
            Debug.Log("Sirens OFF");
        }
    }
}
Answer by AnnaB417
·
Mar 16, 2016 at 09:52 PM
Ultimately, the TOGGLE UI element seems to replace the Toggle usingButton function. I've now got my project set up and working as intended by using 2D sprites and "sprite swap" in the Toggle transitions. So I've got one set of images to look like buttons off (not lit up) and another set of images that look like buttons turned on (indicator lights on). The Unity GUI interface sets up the Toggle as a Bool, so when clicked it returns "is on" for true, and unclicked for false. No addition scripting
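For completeness, the scripting side of that setup is tiny. A rough sketch (the class and field names here are made up) of reacting to the Toggle's state from code with UnityEngine.UI:
using UnityEngine;
using UnityEngine.UI;

public class SirenToggle : MonoBehaviour
{
    public Toggle toggle;   // drag the UI Toggle here in the Inspector

    void Start()
    {
        // isOn reflects the checked state; Sprite Swap handles the visuals
        toggle.onValueChanged.AddListener(isOn =>
            Debug.Log(isOn ? "Sirens ON" : "Sirens OFF"));
    }
}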
In this post I start taking cue from the EventSourcing example and discuss some strategies of improving some aspects of the domain model. This is mostly related to raising the level of abstraction at which to program the solution domain. Consider this post to be a random rant to record some of my iterations in the evolution of the domain model.
The code snippets that I present below may sometimes look out of context since they're part of a bigger model. The entire prototype is there in my github repository ..
Story #1 : How to kill a var
Have a look at the event store that I implemented earlier ..
class EventStore extends Actor with Listeners {
private var events = Map.empty[Trade, List[TradeEvent]]
def receive = //..
//..
}
With an actor based implementation the mutable var events is ok since the state is confined within the actor itself. In another implementation where I was using a different synchronous event store, I had to get around this mutable shared state. It was around this time Tony Morris published his Writer Monad in Scala. This looked like a perfect fit .. Here's the implementation of the abstraction that logs all events, shamelessly adopted from Tony's Logging Without Side-effects example ..
import TradeModel._
object EventLog {
type LOG = List[(Trade, TradeEvent)]
}
import EventLog._
case class EventLogger[A](log: LOG, a: A) {
def map[B](f: A => B): EventLogger[B] =
EventLogger(log, f(a))
def flatMap[B](f: A => EventLogger[B]): EventLogger[B] = {
val EventLogger(log2, b) = f(a)
EventLogger(log ::: log2 /* accumulate */, b)
}
}
object EventLogger {
implicit def LogUtilities[A](a: A) = new {
def nolog =
EventLogger(Nil /* empty */, a)
def withlog(log: (Trade, TradeEvent)) =
EventLogger(List(log), a)
def withvaluelog(log: A => (Trade, TradeEvent)) =
withlog(log(a))
}
}
and here's a snippet that exercises the logging process ..
import EventLogger._
val trd = makeTrade("a-123", "google", "r-123", HongKong, 12.25, 200).toOption.get
val r = for {
t1 <- enrichTrade(trd) withlog (trd, enrichTrade)
t2 <- addValueDate(t1) withlog (trd, addValueDate)
} yield t2
Now I can check what events have been logged and do some processing on the event store ..
// get the log from the EventLogger grouped by trade
val m = r.log.groupBy(_._1)
// play the event on the trades to get the current snapshot
val x =
m.keys.map {t =>
m(t).map(_._2).foldLeft(t)((a,e) => e(a))
}
// check the results
x.size should equal(1)
x.head.taxFees.get.size should equal(2)
x.head.netAmount.get should equal(3307.5000)
Whenever you're appending data to an abstraction consider using the Writer monad. With this I killed the var events from modeling my event store.
Story #2: Stateful a la carte
This is a story that gives a tip for handling changing state of a domain model in a functional way. When the state does not change and you're using an abstraction repeatedly for reading, you have the Reader monad.
// enrichment of trade
// Reader monad
val enrich = for {
taxFeeIds <- forTrade // get the tax/fee ids for a trade
taxFeeValues <- taxFeeCalculate // calculate tax fee values
netAmount <- enrichTradeWith // enrich trade with net amount
}
yield((taxFeeIds map taxFeeValues) map netAmount)
val trd = makeTrade("a-123", "google", "r-123", HongKong, 12.25, 200)
(trd map enrich) should equal(Success(Some(3307.5000)))
Note how we derive the enrichment information for the trade keeping the original abstraction immutable - Reader monad FTW.
But what happens when you need to handle state that changes in the lifecycle of the abstraction ? You can use the State monad itself. It allows you to thread a changing state across a sequence transparently at the monad definition level. Here's how I would enrich a trade using the State monad as implemented in scalaz ..
val trd = makeTrade("a-123", "google", "r-123", HongKong, 12.25, 200).toOption.get
val x =
for {
_ <- init[Trade]
_ <- modify((t: Trade) => refNoLens.set(t, "XXX-123"))
u <- modify((t: Trade) => taxFeeLens.set(t, some(List((TradeTax, 102.25), (Commission, 25.65)))))
} yield(u)
x ~> trd == trd.copy(refNo = "XXX-123")
In case you're just wondering what's going on above, here's a bit of an explanation.
init initializes the state for Trade. init is defined as ..
def init[S]: State[S, S] = state[S, S](s => (s, s))
modify does the state change with the function passed to it as the argument ..
def modify[S](f: S => S) = init[S] flatMap (s => state(_ => (f(s), ())))
We apply the series of modify calls to enrich the trade. The actual threading takes place transparently through the magic of comprehensions.
There is another way of securely encapsulating stateful computations that allow in-place updates - all in the context of functional programming. This is the ST monad which has very recently been inducted into scalaz. But that is the subject of another post .. sometime later ..
Hot answers tagged google
2
There is an extension in the Chrome extension market which directly downloads PDFs instead of opening them. "Direct Pdf Download" Link is :-
2
This response is a special service from ICANN. It’s about namespace collisions between internal networks and the internet. Why this particular warning comes up for .google, I don’t know. As far as I know, a .google TLD has not been announced, yet it is listed in the list of known TLDs. Of course, if it’s an active TLD, you’ll simply get a NXDOMAIN, domain ...
1
Unfortunately, this is a Google problem and not yours, so there is no simple fix. Google is detecting your IP incorrectly. Google has a page to report this and they say it might take up to a month for them to do so. Here's the Google main article: Here's the report page: ...
1
If it's just happening on one computer, you can change your search settings location:
1
No, there isn't such a log specifically for Google Drive. However, there's a security log for the Google account in general, showing all devices that have logged in to the account. In you can find a device list and security events.
1 ...
This article is for all those times when the data you’re getting isn’t the data you’re wanting.
Allow me to outlay a contrived example, based on a few true stories.
Let’s say I’m building a new website in an organisation that has been around for a while. There are already some REST endpoints lying around, but they’re not exactly tailor made for what I want to build.
I’m going to zero in on the endpoints I need just to authenticate a user and get some details about them:
: this will authorise a user and return a token
/auth
: this will return basic information about the user
/profile
: this will return all unread notifications for the user
/notifications
For the use of my app, let’s imagine that I will never want these separately, so ideally they’d be a single endpoint.
But my problems reach further than simply having too many endpoints. The data that’s returned by these endpoints is a little bit terrible.
For example, the /profile endpoint was written in simpler times, and written in NotJavaScript, so it’s got prop names only a mother could love:
{ "Profiles": [ { "id": 1234, "Christian_Name": "David", "Surname": "Gilbertson", "Photographs": [ { "Size": "Medium", "URLS": [ "/images/david.png" ] } ], "Last_Login": "2018-01-01" } ] }
Gross.
And the /notifications response makes the /profile response look like Shirley Temple.
{
  "data": {
    "msg-1234": {
      "timestamp": "1514761200",
      "user": {
        "Christian_Name": "Alice",
        "Surname": "Guthbertson",
        "Enhanced": "True",
        "Photographs": [ { "Size": "Medium", "URLS": [ "/images/alice.png" ] } ]
      },
      "message_summary": "Hey I like your hair, it rea",
      "message": "Hey I like your hair, it really goes nice with your eyes"
    },
    "msg-5678": {
      "timestamp": "1514761200",
      "user": {
        "Christian_Name": "Bob",
        "Surname": "Smelthsen",
        "Enhanced": "False",
        "Photographs": [ { "Size": "Medium", "URLS": [ "/images/smelth.png" ] } ]
      },
      "message_summary": "I'm launching my own cryptocu",
      "message": "I'm launching my own cryptocurrency soon and many thanks for you to look at and talk about"
    }
  }
}
Oh my, that list of messages is an object, not an array. And there’s the same complex representation of a user, oh and that timestamp appears to show seconds since the start of 1970 — surprise!
If I was to draw a picture of this situation, it would look a helluva lot like this, with red signifying the crappy state of the data.
Now, I don’t have to do anything about this situation. I can just load these three APIs and use the data that I’m given. If I want to render the user’s full name on the screen, then I’ll combine
andand
Christian_Name
when I need to.when I need to.
Surname
(A side note about names. The idea that a person’s name can be split into two parts — and that the first part can be used as an informal name — is a very Western concept. If you want to be internationally loved, I urge you to consider a person’s name a single string, and do away with any assumptions about how you can break this up into smaller chunks for varying levels brevity and informality.)
Back to my non-ideal data structure. The obvious problem with concatenating data in my UI code is that I might need to repeat this action in a few places. A small problem if I do it a small number of times, a big problem if I do it a big number of times. That’s some wise shit right there.
Problem two is that I’m creating complexity in my UI code. In my opinion, UI code should be as simple and as readable as humanly possible. The more inline data manipulation I have, the greater the complexity, and — as I’ve said before — complexity is where the bugs hide.
The third problem is about types. For one thing, in the above examples, message IDs are strings, but user IDs are numbers — there’s nothing technically wrong with that, but it’s confusing. And what about that date! And that profile photo mess — all I want is the URL of the photo, darn it.
If I manipulate this data as I pass it down into my UI code, then in any given modules, it will be unobvious exactly what it is that I’m dealing with. Changing the shape and type of my data as I pass it around is mental overhead that I could be living without.
(I could implement a static typing system to address this problem, but strongly typed bad code is still bad code.)
So, having spent what feels like a really long time trying to convince you of a problem, let’s talk solutions.
First prize: change the APIs
If the current state of the APIs has no real reason to be so terrible, then creating a /v2 set of endpoints that better suit current requirements might be the best solution.
Green = real nice data structure
When starting a new project with imperfect APIs, I will always ask if this is an option. But there’s occasionally a very good reason that APIs are the way they are, or it’s simply out of scope to change the way they work.
In which case, I will try for …
Second prize: a BFF
Ah, the good old BFF. The backend-for-the-frontend.
With a BFF, I can abstract away those goofy general-purpose REST endpoints, and provide the payload perfection that my new frontend deserves.
A caption for the image above
A BFF’s raison d’être (sorry) is to please the frontend. Maybe it serves up more REST endpoints, or a GraphQL service, or web sockets or whatever. The point is, it’s there to serve the needs of the UI.
My favourite architecture (yes I have a favourite) is a NodeJS BFF where frontend developers can roam free and create the perfect APIs for the frontend they are building. Ideally, it sits right within the same repo as the rest of the frontend code, so sharing logic between the frontend and the backend is easy (e.g. validating submitted data).
This also means that doing a task that requires frontend changes and API changes will take place in the same repo. Small things, but nice things.
In the scenarios where this isn’t possible, then I must settle for …
Third prize: The BIF
A Backend In the Frontend takes the same logic that you might have in a BFF (combine APIs and clean data), and moves it into the frontend code.
(Is BIF a stupid name? Yes. Did this even need a name? No. Has this been done before, 20 years ago, and already has a name? Yes, that seems to be how these things go.)
I like the font style of these image captions. Small, but not too small. Light, but not too light.
What is this really?
As suggested in the title, this is a pattern — a way of thinking about and organising code. It doesn’t remove any logic, it just separates logic of one type (modifying data structures) from logic of a different type (rendering UI).
This must be the ‘separation of concerns’ I keep hearing about.
Now, although this is nothing earth shattering, I’ve seen it not done plenty of times, so I figure maybe some people would find it helpful to see it all laid out.
You can think of the BIF code as code that you could one day lift out and put on a Node server and everything would still work just the same. (Or even move it into a private npm package that’s shared between multiple frontends in your organisation — exciting!)
You will recall from earlier that I’ve got two problems: too many API calls, and a data structure I don’t like.
These will be handled by one file each and so, for this contrived example, my entire BIF will consist of just two files (and a test).
First …
Combining API calls
Making multiple API calls in my app code is not such a big deal. But still, I’d like to abstract these out so that I can make a single ‘request’ (from my app to the BIF) and get back the data in the perfect shape.
Of course, I can’t help the fact that I must make three HTTP requests, but the app doesn’t need to know about this.
My BIF is going to expose its API as functions. So when my app wants to get some user data, I will call getUser(), which will return the user data.
import parseUserData from './parseUserData'; import fetchJson from './fetchJson'; export const getUser = async () => { const auth = await fetchJson('/auth'); const [ profile, notifications ] = await Promise.all([ fetchJson(`/profile/${auth.userId}`, auth.jwt), fetchJson(`/notifications/${auth.userId}`, auth.jwt), ]); return parseUserData(auth, profile, notifications); };
First I make a request to the auth service to get a token I can use to authorise the user (let’s not worry about how this happens, we’re in imagination land).
Once I have that token I can make two calls simultaneously to get the profile and notification data.
Look at how nice async/await with Promise.all and destructuring syntax is. Hubba hubba. If async/await could get enough time off work, I’d take it on a romantic trip around the world.
OK so that was step one, to abstract away the fact that this is three network requests. But you will have noticed the call to parseUserData ..
Let’s talk about that.
Cleaning the data
I have one recommendation that I think makes all the difference when first implementing a BIF — particularly on a new project: forget about the data you get for a moment and think about what data your app needs.
Don’t be clever and try and plan for how the app might behave in 2021. Just make it ideal for today (over-planning for the future is the #1 cause of over-complexity).
OK, getting back to it. I know what the incoming data looks like from those three APIs, and I know what the ideal output of this parsing process should look like.
It would seem that this is one of those rare cases where TDD actually makes sense. So, let’s write a big long test!
import parseUserData from './parseUserData';

it('should parse the data', () => {
  const authApiData = {
    userId: 1234,
    jwt: 'the jwt',
  };

  const profileApiData = {
    Profiles: [
      {
        id: 1234,
        Christian_Name: 'David',
        Surname: 'Gilbertson',
        Photographs: [
          {
            Size: 'Medium',
            URLS: ['/images/david.png'],
          },
        ],
        Last_Login: '2018-01-01',
      },
    ],
  };

  const notificationsApiData = {
    data: {
      'msg-1234': {
        timestamp: '1514761200',
        user: {
          Christian_Name: 'Alice',
          Surname: 'Guthbertson',
          Enhanced: 'True',
          Photographs: [
            {
              Size: 'Medium',
              URLS: ['/images/alice.png'],
            },
          ],
        },
        message_summary: 'Hey I like your hair, it rea',
        message: 'Hey I like your hair, it really goes nice with your eyes',
      },
      'msg-5678': {
        timestamp: '1514761200',
        user: {
          Christian_Name: 'Bob',
          Surname: 'Smelthsen',
          Enhanced: 'False',
        },
        message_summary: 'I\'m launching my own cryptocu',
        message: 'I\'m launching my own cryptocurrency soon and many thanks for you to look at and talk about',
      },
    },
  };

  const parsedData = parseUserData(authApiData, profileApiData, notificationsApiData);

  expect(parsedData).toEqual({
    jwt: 'the jwt',
    id: '1234',
    name: 'David Gilbertson',
    photoUrl: '/images/david.png',
    notifications: [
      {
        id: 'msg-1234',
        dateTime: expect.any(Date),
        name: 'Alice Guthbertson',
        premiumMember: true,
        photoUrl: '/images/alice.png',
        message: 'Hey I like your hair, it really goes nice with your eyes',
      },
      {
        id: 'msg-5678',
        dateTime: expect.any(Date),
        name: 'Bob Smelthsen',
        premiumMember: false,
        photoUrl: '/images/placeholder.jpg',
        message: 'I\'m launching my own cryptocurrency soon and many thanks for you to look at and talk about',
      },
    ],
  });
});
I have a special button on the side of my computer labelled “make tests work” — so I just hit that and the following code magically appears in my editor:
const getPhotoFromProfile = profile => { try { return profile.Photographs[0].URLS[0]; } catch (err) { return '/images/placeholder.jpg'; // default to placeholder image } }; const getFullNameFromProfile = profile => `${profile.Christian_Name} ${profile.Surname}`; export default function parseUserData(authApiData, profileApiData, notificationsApiData) { const profile = profileApiData.Profiles[0]; const result = { jwt: authApiData.jwt, id: authApiData.userId.toString(), // IDs should always be strings name: getFullNameFromProfile(profile), photoUrl: getPhotoFromProfile(profile), notifications: [], // notifications array should always exist, even if empty }; Object.entries(notificationsApiData.data).forEach(([id, notification]) => { result.notifications.push({ id, dateTime: new Date(Number(notification.timestamp) * 1000), // date from server was seconds since epoch, not milliseconds name: getFullNameFromProfile(notification.user), photoUrl: getPhotoFromProfile(notification.user), message: notification.message, premiumMember: notification.user.Enhanced === 'True', // honestly, who does this? }) }); return result; }
I tell you what, (if I may be serious for a moment) there is something really satisfying about pulling out 200 little data-modification snippets scattered all about an application, putting all that logic in one file, and having it properly unit tested and commented.
I mentioned earlier that a BFF is my preferred method for combining/cleaning data, but there’s one area in which the BIF has it beat: the data returned to the app can have JavaScript objects that aren’t supported in JSON, like Date or Map (the most underused thing in JavaScript).
So, in this case, I’m converting the date that came from the server (expressed in seconds since 1970) into a JavaScript Date object.
And that’s all there is to it.
Is this something I should think about doing?
Attractive and good at questions!
I’d suggest this: go cruising through your UI code with the following thoughts rattling about in your mind:
- Am I combining properties together that are never used separately (e.g. first name and last name)?
- Am I putting up with PascalCase? (You deserve better.)
- Are IDs sometimes strings, sometimes numbers, or otherwise confusing?
- Are dates sometimes a Date object, sometimes a number, and maybe even a string?
- Am I often checking if a property exists and is an array before looping over it to render UI? Could that not be an array — sometimes empty — in all cases?
- Am I sorting/filtering an array that ideally would have been sorted/filtered correctly in the first place?
- Am I checking for the existence of a property and falling back to some default value if it doesn’t exist (for example a placeholder image URL)?
- Am I using properties with different (perhaps old) terminology, mixed in with properties that use different terminology, but they represent the same thing?
- Am I passing around properties of an object that are never used, because they came in from the API? Is the extra clutter when inspecting what’s going on making me sad or anxious?
If one or two of these things are happening in your code, I wouldn’t fix what ain’t broke and I thank you for reading this far.
But if you’re fiddling with data all over the shop, and it makes your code harder to think about, or harder to test, or more likely to harbour bugs, then I reckon you should get yourself a BIF.
One last word of praise for the humble BIF is that for an existing application it’s easy to implement in little steps. You can set up a parseData function that mostly just passes data straight through. Then bit by bit, you can move logic out of your UI code and into this function — perhaps something to do on Tidy Fridy, or TODO Toosday.
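If it helps to picture that first step, a pass-through version of that function might be nothing more than this sketch, with whatever your API already returns flowing straight through:
// Step one of a gradual migration: hand the data straight back,
// then pull snippets of data manipulation in here over time.
export default function parseUserData(apiData) {
  return {
    ...apiData,
  };
}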
Thanks for reading. Seriously, I mean it.
And for all my cryptocurrency readers that aren’t web developers that have been wondering for the last 9 minutes what on earth is going on: hi!
Hello all, header files seemed pretty simple when i first learned about them. until i actually tried to use my own.
The program that i am attempting to create should simply add 1 to 24, but it is using a function found in a header file as a test.
The problem is that i get no errors, but the addition does not happen. This is confusing me quite a bit, and it may be the fact that it's 10 to one in the morning, but i just can't figure it out.
Here is the header file (count.h):
int count (int x) { x = x + 1; return x; }
and the main file:
int main () { #include <iostream> #include "count.h" using namespace std; int num; num = 24; cout << num << endl; count (num); cout << num << endl; cin.get(); }
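For what it's worth, a version of that program that does print 25 could look like the sketch below: the includes move above main, and since count takes its argument by value, the returned value has to be used (or the parameter passed by reference).
// count.h
int count (int x) { x = x + 1; return x; }

// main.cpp
#include <iostream>
#include "count.h"
using namespace std;

int main ()
{
    int num = 24;
    cout << num << endl;   // prints 24
    num = count (num);     // keep the returned value
    cout << num << endl;   // prints 25
    cin.get();
}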
Possible Duplicate:
How can I get generic Type from a string representation?
I have the type name only as the string "List<String>" and want to get the corresponding Type with Type.GetType(). Neither Type.GetType("List<String>") nor Type.GetType("System.Collections.Generic.List.String") (or [String]) finds the type, and typeof is no use here because the name is only known at run time.
You can't get it from "List<String>", but you can get it from Type.GetType:
Type type = Type.GetType("System.Collections.Generic.List`1[System.String]");
You're lucky in this case - both List<T> and string are in mscorlib, so we didn't have to specify the assemblies.
The `1 part specifies the arity of the type: that it has one type parameter. The bit in square brackets specifies the type arguments.
Where are you getting just List<String> from? Can you change your requirements? It's going to be easier than parsing the string, working out where the type parameters are, finding types in different namespaces and assemblies, etc. If you're working in a limited context (e.g. you only need to support a known set of types) it may be easiest to hard-code some of the details.
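One hard-coded shortcut along those lines, assuming you only need a known set of element types, is to skip string parsing entirely and close the generic type in code:
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Build List<string> from the open generic List<>
        Type open = typeof(List<>);
        Type closed = open.MakeGenericType(typeof(string));

        Console.WriteLine(closed); // System.Collections.Generic.List`1[System.String]
    }
}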
John is a member of the NestJS core team
So you've noticed the cool ways in which you can dynamically configure some of the out-of-the-box NestJS modules like @nestjs/jwt, @nestjs/passport and @nestjs/typeorm, and want to know how to do that in your own modules? This is the right article for you! 😃
Intro
Over on the NestJS documentation site, we recently added a new chapter on dynamic modules. This is a fairly advanced chapter, and along with some recent significant improvements to the asynchronous providers chapter, Nest developers have some great new resources to help build configurable modules that can be assembled into complex, robust applications.
This article builds on that foundation and takes it one step further. One of the hallmarks of NestJS is making asynchronous programming very straightforward. Nest fully embraces Node.js Promises and the async/await paradigm. Coupled with Nest's signature Dependency Injection features, this is an extremely powerful combination. Let's look at how these features can be used to create dynamically configurable modules. Mastering this skill lets you build modules that are re-usable in any context. This enables fully context-aware, re-usable packages (libraries), and lets you assemble apps that deploy smoothly across cloud providers and throughout the DevOps spectrum - from development to staging to production.
Basic Dynamic Modules
In the dynamic modules chapter, the end result of the code sample is the ability to pass in an options object to configure a module that is being imported. After reading that chapter, we know how the following code snippet works via the dynamic module API.
import { Module } from '@nestjs/common'; import { AppController } from './app.controller'; import { AppService } from './app.service'; import { ConfigModule } from './config/config.module'; @Module({ imports: [ConfigModule.register({ folder: './config' })], controllers: [AppController], providers: [AppService], }) export class AppModule {}
If you've played around with any NestJS modules - such as @nestjs/typeorm, @nestjs/passport or @nestjs/jwt - you'll have noticed that they go beyond the features described in that chapter. In addition to supporting the register(...) method shown above, they also support a fully dynamic and asynchronous version of the method. For example, with the @nestjs/jwt module, you can use a construct like this:
@Module({ imports: [ JwtModule.registerAsync({ useClass: ConfigService }), ] })
With this construct, not only is the module dynamically configured, but the options passed to the dynamic module are themselves constructed dynamically. This is higher order functionality at its best. Configuration options are provided based on values extracted from the environment by the ConfigService, meaning they can be changed completely external to your feature code. Compare that to hardcoding a parameter, as with ConfigModule.register({ folder: './config'}), and you can immediately see the win.
In this article, we'll further explore why you may need this feature, and how to build it. Make sure you have a firm grasp on the concepts in the Custom providers and Dynamic modules chapters before moving on to the next section.
Async Options Providers Use-case
The section heading above is quite a mouthful! What the heck is an async options provider?
To answer that, first consider once again that our example above (the ConfigModule.register({ folder: './config'}) part) is passing a static options object in to the register() method. As we learned in the Dynamic modules chapter, this options object is used to customize the behavior of the module. (Review that chapter before proceeding if this concept is not familiar). As mentioned, we're now going to take that concept one step further, and let our options object be provided dynamically at run-time.
Async (Dynamic) Options Example
To give us a concrete working example for the remainder of this article, I'm going to introduce a new module. We'll then walk through how to use async options providers in that context.
I recently published the @nestjsplus/massive package to enable a Nest project to easily make use of the impressive MassiveJS library to do database work. We'll study the architecture of that package and use parts of it for the code we analyze in this article. Briefly, the package wraps the MassiveJS library into a Nest module. MassiveJS provides its entire API to each consumer module (e.g., each feature module) through a db object that has methods like:
- db.find() to retrieve database records
- db.update() to update database records
- db.myFunction() to execute scripts or database procedures/functions
The primary function of the @nestjsplus/massive package is to make a connection to the database, and return the db object. In our feature modules, we then access the database using methods hung off the db object, like those shown above. Right off the bat, it should be clear that in order to establish a database connection, we need to pass in some connection parameters. In our case, with PostgreSQL, those parameters would be something like:
{ user: 'john', password: 'password', host: 'localhost', port: 5432, database: 'nest', }
What we quickly realize is that it's not optimal to hard-code those connection parameters in our Nest app. The conventional solution is to supply them through some sort of Configuration Module. And that's exactly what we can do with Nest's @nestjs/jwt module as shown in the example above. That, in a nutshell, is the purpose of async options providers. Now let's figure out how to do that in our Massive module.
Coding for Async Options Providers
To begin with, we can imagine supporting a module import statement using the same construct we found in @nestjs/jwt, like this:
@Module({ imports: [ MassiveModule.registerAsync({ useClass: ConfigService }) ], })
If this doesn't look familiar, take a quick look at this section of the Custom providers chapter. The similarities are deliberate. We're taking inspiration from the concepts learned from Custom providers to build a more flexible means of supplying options to our dynamic modules.
Let's dig in to the implementation. This is going to be a long section, so take a deep breath, maybe refill your coffee cup, but don't worry - we can do this! 😄. Bear in mind that we are walking through a design pattern that, once you understand it, you can confidently cut-and-paste as boilerplate to start building any module that needs dynamic configuration. But before the cutting and pasting starts, let's make sure to understand the template so we can customize it to our needs. Just remember that you won't have to write this from scratch each time!
First, let's revisit what we expect to accomplish with the registerAsync(...) construct above. Basically, we're saying "Hey module! I don't want to give you the option property values in code. How about instead I give you a class that has a method you can call to get the option values?". This would give us a great deal of flexibility in how we can generate the options dynamically at run-time.
This implies that our dynamic module is going to need to do a little more work in this case, as compared to the static options technique, to acquire its connection options. We're going to work our way up to that result. We begin by formalizing some definitions. We're trying to supply MassiveJS with its expected connection options, so first we'll create an interface to model that:
export interface MassiveConnectOptions { /** * server name or IP address */ host: string; /** * server port number */ port: number; /** * database name */ database: string; /** * user name */ user: string; /** * user password, or a function that returns one */ password: string; /** * use SSL (it also can be a TSSLConfig-like object) */ ... }
There are actually more options available (we can see what they are from examining the MassiveJS connection options documentation), but let's keep the choices basic for now. The ones we modelled are required to establish a connection. As a side note, we're using JSDoc to document them so that we get a nice Intellisense developer experience when we later use the module.
The next concept to grapple with is as follows. Since our consumer module (the one calling registerAsync() to import the MassiveJS module) is handing us a class and expecting us to call a method on that class, we can surmise that we'll probably need to use some sort of factory pattern. In other words, somewhere, we're going to have to instantiate that class, call a method on it, and use the result returned from that method call as our connection options, right? Sounds (kind of) like a factory. Let's go with that concept for now.
Let's describe our prospective factory with an interface. The method could be something like createMassiveConnectOptions(). It needs to return an object of type MassiveConnectOptions (the interface we defined a minute ago). So we have:
interface MassiveOptionsFactory { createMassiveConnectOptions(): Promise<MassiveConnectOptions> | MassiveConnectOptions; }
Nice! We can return the options object directly, or return a Promise that will resolve to an options object. Nest makes it super easy to support either. Hence, we now see the "async" part of our async options provider coming into play.
Now, let's ask the following: what mechanism is going to actually call our factory function at run time, take the resulting options object, and make it available to the part of our code that needs it? Hmmm... if only we had some general purpose mechanism. Maybe a feature that we could register an arbitrary object (or function returning an object) with at run-time, and then have that object passed into a constructor. Anybody got any ideas? 😉
Well of course, we've got the awesome NestJS Dependency Injection system at our disposal. That seems like a good fit! Let's figure out how to do that.
The recipe for binding something to the Nest IoC container, and later having it injected, is captured in an object called a provider. Let's whip up an options provider that will meet our needs. If you need a quick refresher course on custom providers, go ahead and re-read the Custom providers chapter now. It won't take long. I'll wait right here.
OK, so now you remember that we can define our options provider with a construct like the following. We already have an intuition that we need a factory provider, so this seems like the right construct:
{
  provide: 'MASSIVE_CONNECT_OPTIONS',
  useFactory: // <-- we need to get our options factory inserted here!
  inject: // <-- we need to supply injectable parameters for useFactory here!
}
Let's tie a few things together. We're in pretty deep, so now's a good time to do a quick refresher on the big picture, and assess where we're at:
- We're writing code that constructs, then returns, a dynamic module (our registerAsync() static method will house that code).
- The dynamic module it returns can be imported into other feature modules, and provides a service (the thing that connects to the database and returns a db object).
- That service needs to be configured at the time the module is constructed. A more helpful way to say this is that the service depends on a dynamically constructed options object.
- We're going to construct that configuration options object at run-time, using a class that the consuming module hands us.
- That class contains a method that knows how to return an appropriate options object.
- We're going to use the NestJS Dependency Injection system to do the heavy lifting to manage that options object dependency for us.
OK, so we're working on steps 4, 5, and 6 right now. We're not yet ready to assemble the entire dynamic module. Before we do that, we have to work out the mechanics of our options provider. Returning to that task, we should be able to see how to fill in the blanks in the skeleton options provider we sketched out earlier (see the lines annotated
<-- we need to... above). We can fill in those values based on how the
registerAsync() call was made:
@Module({
  imports: [
    MassiveModule.registerAsync({
      useClass: ConfigService,
    }),
  ],
})
Let's go ahead and fill them in now based on what we know. We'll sketch a static version of the object, just to see what we're trying to generate dynamically in the code we're about to write:
{
  provide: 'MASSIVE_CONNECT_OPTIONS',
  useFactory: async (configService) => await configService.createMassiveConnectOptions(),
  inject: [ConfigService]
}
So we've now figured out what our generated options provider should look like. Good so far? It's important to remember that the
'MASSIVE_CONNECT_OPTIONS' provider is just fulfilling a dependency inside the dynamic module. Now that I mention it, we haven't really looked at the service that depends on the
'MASSIVE_CONNECT_OPTIONS' provider we're working so hard to supply. Let's connect a few more dots there and take a quick moment to consider that service. The service -- the one that connects and returns a
db object -- is, predictably enough, declared in the
MassiveService class. It's surprisingly straightforward:
@Injectable()
export class MassiveService {
  private _massiveClient;

  constructor(@Inject('MASSIVE_CONNECT_OPTIONS') private _massiveConnectOptions) {}

  async connect(): Promise<any> {
    return this._massiveClient
      ? this._massiveClient
      : (this._massiveClient = await massive(this._massiveConnectOptions));
  }
}
The
MassiveService class injects the connection options provider and uses that information to make the API call needed to asynchronously create a database connection (
await massive(this._massiveConnectOptions)). Once made, it caches the connection so it can return an existing connection on subsequent calls. That's it. That's why we're jumping through hoops to be able to pass in our options provider.
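To see where this lands in a consumer, here's a minimal sketch of a feature-module service using the exported MassiveService. The PersonService name and the SQL are hypothetical, not part of the article's code; db.query() is a standard MassiveJS call.

// Hypothetical consumer: injects the exported MassiveService and uses the
// cached db object it returns.
import { Injectable } from '@nestjs/common';
import { MassiveService } from './massive.service';

@Injectable()
export class PersonService {
  constructor(private readonly massiveService: MassiveService) {}

  async findAll(): Promise<any[]> {
    // connect() returns the cached db object after the first call
    const db = await this.massiveService.connect();
    return db.query('SELECT * FROM person');
  }
}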
We've now worked out the concepts and sketched out the piece parts of our dynamically configurable module. We're ready to start assembling them. First we'll write some glue code to pull this all together. As we learned in the dynamic modules chapter, all that glue should live in the module definition class. Let's create the
MassiveModule class for that purpose. We'll describe what's happening in this code just below it.
@Global()
@Module({})
export class MassiveModule {
  /**
   * public static register( ... )
   * omitted here for brevity
   */

  public static registerAsync(
    connectOptions: MassiveConnectAsyncOptions,
  ): DynamicModule {
    return {
      module: MassiveModule,
      providers: [
        MassiveService,
        ...this.createConnectProviders(connectOptions),
      ],
      exports: [MassiveService],
    };
  }

  private static createConnectProviders(
    options: MassiveConnectAsyncOptions,
  ): Provider[] {
    return [
      {
        provide: 'MASSIVE_CONNECT_OPTIONS',
        useFactory: async (optionsFactory: MassiveOptionsFactory) =>
          await optionsFactory.createMassiveConnectOptions(),
        inject: [options.useClass],
      },
      {
        provide: options.useClass,
        useClass: options.useClass,
      },
    ];
  }
}
Let's get a firm handle on what this code does. This is really where the rubber meets the road, so take time to understand it carefully. Consider that if we were to insert a logging statement displaying the return value from the following call:
registerAsync({ useClass: ConfigService });
we'd see an object that looks pretty much like this:
{
  module: MassiveModule,
  providers: [
    MassiveService,
    {
      provide: 'MASSIVE_CONNECT_OPTIONS',
      useFactory: async (optionsFactory: MassiveOptionsFactory) =>
        await optionsFactory.createMassiveConnectOptions(),
      inject: [ConfigService],
    },
    {
      provide: ConfigService,
      useClass: ConfigService,
    },
  ],
  exports: [MassiveService],
}
This should be pretty recognizable, as it would plug right into a standard
@Module() decorator to declare module metadata (well, all except for the
module property, which is part of the Dynamic module API). To describe it in English, we're returning a dynamic module that declares three providers, exporting one of them for use in other modules that may import it.
The first provider is obviously the MassiveService itself, which we plan to use in our consumer's feature modules, so we duly export it.
The second provider ('MASSIVE_CONNECT_OPTIONS') is only used internally by the MassiveService to ingest the connection options it needs (notice that we do not export it). Let's take a little closer look at that useFactory construct. Note that there's also an inject property, which is used to inject the ConfigService into the factory function. This is described in detail in the Custom providers chapter, but basically, the idea is that the factory function takes optional input arguments which, if specified, are resolved by injecting a provider from the inject property array. You might be wondering where that ConfigService injectable comes from. Read on 😉.
Finally, we have a third provider, also used only internally by our dynamic module (and hence not exported), which is our single private instance of the ConfigService. So, Nest is going to instantiate a ConfigService inside the dynamic module context (this makes sense, right? We told our module to useClass, which means "create your own instance"), and that will be injected into the factory.
If you made it this far - congrats! That was the hardest part. We just worked out all the mechanics of assembling a dynamically configurable module. The rest of the article is gravy!
One other thing that should be obvious from looking at the generated
useFactory syntax above is that the
ConfigService class must implement a
createMassiveConnectOptions() method. This should be a familiar pattern if you're already using some sort of configuration module that implements various functions to return options of a particular shape for each service it gets plugged into. Now perhaps you can see a little more clearly how this all fits together.
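For illustration, a minimal ConfigService sketch that satisfies this requirement might look like the following. The environment variable names and the './interfaces' import path are assumptions, not something the article prescribes.

// Hypothetical ConfigService implementing the factory method our dynamic
// module expects. Environment variable names are illustrative only.
import { Injectable } from '@nestjs/common';
import { MassiveOptionsFactory, MassiveConnectOptions } from './interfaces';

@Injectable()
export class ConfigService implements MassiveOptionsFactory {
  // Builds the connection options from environment variables
  createMassiveConnectOptions(): MassiveConnectOptions {
    return {
      host: process.env.DB_HOST || 'localhost',
      port: parseInt(process.env.DB_PORT || '5432', 10),
      database: process.env.DB_NAME || 'nest',
      user: process.env.DB_USER || 'john',
      password: process.env.DB_PASSWORD || 'password',
    };
  }
}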
Variant Forms of Asynchronous Options Providers
What we've built so far allows us to configure the
MassiveModule by handing it a class whose purpose is to dynamically provide connection options. Let's just remind ourselves again how that looks from the consumer perspective:
@Module({
  imports: [
    MassiveModule.registerAsync({ useClass: ConfigService }),
  ],
})
We can refer to this as configuring our dynamic module with a
useClass technique (AKA a class provider). Are there other techniques? You may recall seeing several other similar patterns in the Custom providers chapter. We can model our
registerAsync() interface based on those patterns. Let's sketch out what those techniques would look like from a consumer module perspective, and then we can easily add support for them.
Factory Providers: useFactory
While we did make use of a factory in the previous section, that was strictly internal to the dynamic module construction mechanics, not a part of the callable API. What would
useFactory look like when exposed as an option for our
registerAsync() method?
@Module({
  imports: [
    MassiveModule.registerAsync({
      useFactory: () => {
        return {
          host: "localhost",
          port: 5432,
          database: "nest",
          user: "john",
          password: "password",
        };
      },
    }),
  ],
})
In the sample above, we supplied a very simple factory in place, but we could of course plug in (or pass in a function implementing) any arbitrarily sophisticated factory as long as it returns an appropriate connections object.
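A factory can also declare its own dependencies. We haven't added support for the inject and imports options yet (we will when we generalize registerAsync() below), but as a preview, a consumer that lets the factory receive an injected ConfigService might look roughly like this. Treat it as a sketch under those assumptions, not the canonical API.

// Hypothetical: an async factory that itself injects another provider.
// ConfigModule/ConfigService stand in for whatever configuration mechanism you already use.
@Module({
  imports: [
    MassiveModule.registerAsync({
      imports: [ConfigModule], // module that exports ConfigService (assumed)
      useFactory: async (configService: ConfigService) =>
        configService.createMassiveConnectOptions(),
      inject: [ConfigService],
    }),
  ],
})
export class AppModule {}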
Alias Providers: useExisting
This sometimes-overlooked construct is actually extremely useful. In our context, it means we can ensure that we re-use an existing options provider rather than instantiating a new one. For example,
useClass: ConfigService will cause Nest to create and inject a new private instance of our
ConfigService. In the real world, we'll usually want a single shared instance of the
ConfigService injected anywhere it's needed, not a private copy. The
useExisting technique is our friend here. Here's how it would look:
@Module({
  imports: [
    MassiveModule.registerAsync({ useExisting: ConfigService }),
  ],
})
Supporting Multiple Async Options Providers Techniques
We're in the home stretch. We're going to focus now on generalizing and optimizing our
registerAsync() method to support the additional techniques described above. When we're done, our module will support all three techniques:
- useClass - to get a private instance of the options provider.
- useFactory - to use a function as the options provider.
- useExisting - to re-use an existing (shared, SINGLETON) service as the options provider.
I'm going to jump right to the code, as we're all getting weary now 😉. I'll describe the key elements below.
@Global()
@Module({
  providers: [MassiveService],
  exports: [MassiveService],
})
export class MassiveModule {
  /**
   * public static register( ... )
   * omitted here for brevity
   */

  public static registerAsync(
    connectOptions: MassiveConnectAsyncOptions,
  ): DynamicModule {
    return {
      module: MassiveModule,
      imports: connectOptions.imports || [],
      providers: this.createConnectProviders(connectOptions),
    };
  }

  private static createConnectProviders(
    options: MassiveConnectAsyncOptions,
  ): Provider[] {
    if (options.useExisting || options.useFactory) {
      return [this.createConnectOptionsProvider(options)];
    }
    // for useClass
    return [
      this.createConnectOptionsProvider(options),
      {
        provide: options.useClass,
        useClass: options.useClass,
      },
    ];
  }

  private static createConnectOptionsProvider(
    options: MassiveConnectAsyncOptions,
  ): Provider {
    if (options.useFactory) {
      // for useFactory
      return {
        provide: MASSIVE_CONNECT_OPTIONS,
        useFactory: options.useFactory,
        inject: options.inject || [],
      };
    }
    // for useExisting and useClass
    return {
      provide: MASSIVE_CONNECT_OPTIONS,
      useFactory: async (optionsFactory: MassiveConnectOptionsFactory) =>
        await optionsFactory.createMassiveConnectOptions(),
      inject: [options.useExisting || options.useClass],
    };
  }
}
Before discussing the details of the code, let's cover a few superficial changes to make sure they don't trip you up.
- We now use the constant MASSIVE_CONNECT_OPTIONS in place of a string-valued token. This is a simple best practice convention, covered at the end of this section of the docs (a one-line sketch of such a constants file appears just after this list).
- Rather than listing MassiveService in the providers and exports properties of the dynamically constructed module, we promoted them up to live in the @Module() decorator metadata. Why? Partly style, and partly to keep the code DRY. The two approaches are equivalent.
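For reference, that constant could live in a tiny constants file like this (the file name is just a suggestion):

// massive.constants.ts
export const MASSIVE_CONNECT_OPTIONS = 'MASSIVE_CONNECT_OPTIONS';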
Fully Understanding the Code
You should be able to trace the path through this code to see how it handles each case uniquely. I highly recommend you do the following exercise. Construct an arbitrary
registerAsync() registration call on paper, and walk through the code to predict what the returned dynamic module will look like. This will strongly reinforce the patterns and help you firmly connect all the dots.
For example, if we were to code:
@Module({
  imports: [
    MassiveModule.registerAsync({ useExisting: ConfigService }),
  ],
})
We could expect a dynamic module to be constructed with the following properties:
{
  module: MassiveModule,
  imports: [],
  providers: [
    {
      provide: MASSIVE_CONNECT_OPTIONS,
      useFactory: async (optionsFactory: MassiveConnectOptionsFactory) =>
        await optionsFactory.createMassiveConnectOptions(),
      inject: [ConfigService],
    },
  ],
}
(Note: because the module is preceded by an @Module() decorator that now lists MassiveService in its providers and exports, our resulting dynamic module will also have those properties. Above we are just showing the elements that get added dynamically.)
Consider another question. How is the
ConfigService available inside the factory for injection in this
useExisting case? Well - heh heh - that's kind of a trick question. In the sample above, I assumed that it was already visible inside the consuming module -- perhaps as a global module (one declared with
@Global()). Let's say that wasn't true, and that it lives in
ConfigModule, which has not registered ConfigService as a global provider. Can our code handle this? Let's see.
Our registration would instead look like this:
@Module({
  imports: [
    MassiveModule.registerAsync({
      useExisting: ConfigService,
      imports: [ConfigModule],
    }),
  ],
})
And our resulting dynamic module would look like this:
{
  module: MassiveModule,
  imports: [ConfigModule],
  providers: [
    {
      provide: MASSIVE_CONNECT_OPTIONS,
      useFactory: async (optionsFactory: MassiveConnectOptionsFactory) =>
        await optionsFactory.createMassiveConnectOptions(),
      inject: [ConfigService],
    },
  ],
}
Do you see how the pieces fit together?
Another exercise is to ponder the difference in the code paths when you use
useClass vs.
useExisting. The important point is how we either instantiate a new
ConfigService object, or inject an existing one. It's worth working through those details, as the concepts will give you a full picture of how NestJS modules and providers fit together in a coherent way. But this article is already too long, so I'll leave that as an exercise for you, dear reader. 😄
If you have questions, feel free to ask in the comments below!
Conclusion
The patterns illustrated above are used throughout Nest's add-on modules, like
@nestjs/jwt,
@nestjs/passport and
@nestjs/typeorm. Hopefully you now see not only how powerful these patterns are, but how you can make use of them in your own project.
As a next step, you may want to consider browsing through the source code of those modules, now that you have a roadmap. You can also see a slightly evolved version of the code in this article in the @nestjsplus/massive repository (while you're there, maybe give it a quick ⭐ if you like this article 😉). The main difference between the code in this article and that repo is that the production version needs to handle multiple asynchronous options providers, so there's a tiny bit more plumbing.
Now you can confidently start using these powerful patterns in your own code to create robust and flexible modules that work reliably in a wide variety of contexts.
As a final bonus, if you're building an Open Source package for public use, just combine this technique with the steps described in my last article on publishing NPM packages, and you're all set.
Feel free to ask questions, make comments or suggestions, or just say hello in the comments below. And join us at Discord for more happy discussions about NestJS. I post there as Y Prospect.
Discussion
In the section about Supporting Multiple Async Options Providers Techniques, in the first block of code, I think there is a slight mistake in the code.
This part:
    providers: [this.createConnectProviders(connectOptions)],
should be:
    providers: this.createConnectProviders(connectOptions),
Good catch! Thanks for reporting!
Thanks for this post @johnbiundo 👏
It surely provides a good example of how to develop our own dynamic modules. I'm wondering how you would handle multiple database connections with this
MassiveModule? Wouldn't you have to provide multiple
MassiveService instances? If so, how would we inject the right one at the right place?
@fwoelffel Thank you for your feedback!
Right, I didn't really cover that in this article, but good question. The Massive library uses pg-promise under the covers, and the db object I briefly mention actually represents a connection pool. So the recommended pattern for Massive (and any pg-promise library) is to use a singleton connection object which manages the connection pool under the covers. The full Massive integration library I built has additional options to configure the connection pool size and some other parameters to let you fine-tune things.
Oh, and there's a tiny bit more plumbing in the full library. Basically, it builds an injection token representing the shared connection object, and to use it in any module, you just inject that token. I tried to cover that in the @nestjsplus/massive docs, but let me know if it's still not clear. I felt like this detail - while relevant to understanding the full MassiveJS library - was a bit too distracting to cover in this article, but I'm not surprised you picked up on it!
Hope that answers the question!
Great article! I think it's worth showing the interface of MassiveConnectAsyncOptions, as I feel it's a missing piece.
@gimboya, Thank you for the feedback.
I am in the midst of a large project so won't have time to update the article at the moment, but appreciate your suggestion. I'm sure you probably found it, but just in case, you can view the interface here: github.com/nestjsplus/massive/blob.... Here's what it looks like:
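A sketch of that interface, reconstructed from the options used throughout this article (the exact published version in @nestjsplus/massive may differ in details):

// Reconstructed sketch of MassiveConnectAsyncOptions; treat details as approximate.
import { ModuleMetadata, Type } from '@nestjs/common/interfaces';

export interface MassiveConnectAsyncOptions
  extends Pick<ModuleMetadata, 'imports'> {
  inject?: any[];
  useClass?: Type<MassiveOptionsFactory>;
  useExisting?: Type<MassiveOptionsFactory>;
  useFactory?: (
    ...args: any[]
  ) => Promise<MassiveConnectOptions> | MassiveConnectOptions;
}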
Yup, got it! And very much appreciate it. I am excited that I am starting to grasp dynamic modules.
Such a great article, thanks!
Glad you found it helpful @mustapha !
Thank you for this!
Your NestJS module for Massive was actually my introduction to MassiveJS, which was incredibly impressive. It really does hit the sweet spot for database interaction - avoiding the heaviness of an ORM while still providing some elegant interactivity.
I was wondering, do you have a preferred approach for utilizing TypeScript with Massive in your NestJS projects?
Thank you for the great article. I spent a lot of time trying to create a shared module in the async style.
Glad it helped! Thanks for the feedback.
when using an import (say HttpModule) from within the register.
How would you achieve passing your options parameters to the HttpModule import on the registerAsync method?
Thanks for sharing. I have translated it into Chinese. Can I post it on my blog?
#include "stdafx.h"
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
int _tmain() {
// declare variables
int principal = 1000;
double rate = .03;
int years = 1; //counter
double balance = 0.0;
// Begin loop
do{
balance = principal * pow(1 + rate, years);
cout << "year " << years << ":" << endl;
cout << " $" << balance << endl;
years += 1; //update years counter
} while (years < 6);
system("pause");
return 0;
}
//This is the end of the initial program...
I need to modify the program to use annual interests rates of 3%, 4%, and 5%.
After that I need to modify the program to make it so the user enters the dollar amount of the deposit, the interest rate or rates, and the number of years.
I'm a design student at New England Tech but before we can go full design they want to make sure we know for sure if we want to be a designer or a programmer and so we need to take C++ classes which I suck at so any help would be appreciated.
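Not a complete solution, but here is a minimal, portable sketch of the second modification (user-supplied deposit, rate, and years); for the first one, you would wrap the inner loop in an outer loop over an array of rates such as {0.03, 0.04, 0.05}. Variable names are just suggestions.

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double principal, rate;
    int years;

    // Let the user supply the deposit, annual interest rate, and term
    cout << "Deposit amount: ";
    cin >> principal;
    cout << "Annual interest rate (e.g. 0.03): ";
    cin >> rate;
    cout << "Number of years: ";
    cin >> years;

    // Compound-interest balance at the end of each year
    for (int year = 1; year <= years; ++year) {
        double balance = principal * pow(1 + rate, year);
        cout << "year " << year << ": $" << balance << endl;
    }
    return 0;
}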
The C# team posts answers to common questions and describes new language features.
Roslyn not only makes dynamic code evaluation a lot easier, it also provides the necessary APIs to implement a fully featured scripting experience. This post covers the basics of the Scripting API in Roslyn. For an example of how Roslyn can be used to implement a rich environment for entering and executing code at runtime take a look at the C# Interactive window implemented in the Roslyn CTP. The C# Interactive window provides syntax highlighting, intellisense, and even refactoring for code supplied at runtime.
This post uses C# as an example because that is what Roslyn supports for scripting in this first CTP.
Let’s start by looking at a very simple example. Let’s just execute some code dynamically. To do that all we need is an instance of the ScriptEngine class. Specifically we need the C# version defined in Roslyn.Scripting.CSharp. All we have to do is create an instance of ScriptEngine and call the Execute method with the code to be evaluated. The Roslyn scripting version of Hello World is as simple as:
using Roslyn.Scripting.CSharp;
namespace RoslynScriptingDemo
{
class Program
{
static void Main(string[] args)
{
var engine = new ScriptEngine();
engine.Execute(@"System.Console.WriteLine(""Hello Roslyn"");");
}
}
}
The solution needs references to Roslyn.Compilers and Roslyn.Compilers.CSharp.
Notice, we did not have to use APIs that take MSIL or that carefully build up a tree of code that then needs to be compiled and executed. We can simply send the ScriptEngine text snippets of the code we write all the time. The ScriptEngine takes care of the rest to execute it.
Unfortunately the code above does have a few limitations. The first of which is that there’s no cumulative execution context, so if we define a variable as part of one code snippet and then try to access the variable from another code snippet, we will get an exception. For example, if we change the body of Main to something like this:
var engine = new ScriptEngine();
engine.Execute(@"var a = 42;");
engine.Execute(@"System.Console.WriteLine(a);");
We will get a compilation error from the scripting engine as the statements are treated as separate compilations. Fortunately, the Scripting API provides the Session type that can be used as an execution context when calling Execute. A session stores the context of a series of submissions, so by providing a session we can execute the code above and get the expected result.
using Roslyn.Scripting.CSharp;
using Roslyn.Scripting;
namespace RoslynScriptingDemo
{
class Program
{
static void Main(string[] args)
{
var engine = new ScriptEngine();
var session = Session.Create();
engine.Execute(@"var a = 42;", session);
engine.Execute(@"System.Console.WriteLine(a);", session);
}
}
}
We can even define a new method, store it in the session, and then invoke it like this.
engine.Execute(@"void Foo() { System.Console.WriteLine(""Foo""); }", session);
engine.Execute(@"Foo();", session);
However, we still cannot interact with the hosting application. Obviously scripting is of limited use if the user supplied code cannot interact with the application somehow. To address this we need to provide a host object to the session.
The code submissions will automatically have access to any public members of the host object. To supply a host object, we pass in an instance of any type we would like to use as an interface between the host application and the session. Furthermore, we have to make the ScriptEngine aware of this type as well. To do that we pass in a reference to the assembly defining our host object type. With that the setup code now looks as follows:
var hostObject = new HostObject();
var engine = new ScriptEngine(new[] { hostObject.GetType().Assembly.Location });
var session = Session.Create(hostObject);
The host object can be of any type. The ScriptEngine binds any free identifier to public members of the type, as if the type's member names were globally accessible from the script. For the sake of this demo, let’s just make it as simple as possible.
public class HostObject
{
public int Value = 0;
}
With the code above, we can now access the Value field on our hostObject instance like this:
engine.Execute(@"Value = 42;", session);
engine.Execute(@"System.Console.WriteLine(Value);", session);
Notice how the instance's members are implicitly available within the session. We simply access any public member on the instance.
The examples we have been looking at so far are really simple, so let’s round off this introduction by adding some functionality that resembles what an application might do with the scripting API. Imagine we have a Report type like the one listed below.
// This will act as our host object
public class Report
{
// Internal state
private readonly List<int> values = new List<int>();
private int result;
// Encapsulate the internal data structure
public void AddValue(int value)
{
values.Add(value);
}
// Allow read-only enumeration of values
public IEnumerable<int> Values { get { return values; } }
// User code will provide the implementation for these methods
public Action GetValues;
public Func<int> CalculateResult;
// This method may be called by both the host application and the user code
public void PrintResult()
{
var get = GetValues;
if (get != null)
{
get();
}
var calc = CalculateResult;
if (calc != null)
{
result = calc();
}
Console.WriteLine("The result of the calculation is {0}", result);
}
}
The class stores an internal list of numbers and a result. It provides an AddValue method to populate the list and a read-only iterator. The result can be printed by calling the PrintResult method. User-supplied code determines how the list is actually populated and how the result is calculated.
The Report class exposes two delegate types, GetValues and CalculateResult for this purpose. When PrintResult is called it invokes the delegates to populate the list of values and calculate the result. Our simple application may look something like this:
static void Main(string[] args)
{
var report = new Report();
var engine = new ScriptEngine(new[] {
"System.Core", report.GetType().Assembly.Location });
var session = Session.Create(report);
engine.Execute(@"using System.Linq;", session);
engine.Execute(
@"GetValues = () => { for(int i = 1; i <= 3; i++) AddValue(i); };",
session);
engine.Execute(@"CalculateResult = () => Values.Sum();", session);
report.PrintResult();
}
Admittedly this is still a simple example, but it does illustrate how script code can be used to customize our host application.
Notice how we can assign anonymous methods to the delegate members, and the host application can call them. Also, notice how we can access the LINQ extension method Sum in the user supplied code. To enable this we need to let ScriptEngine reference System.Core and import the System.Linq namespace.
That completes the overview of how to use Roslyn to add scripting support to an application. For more information on the Roslyn Scripting API please see the walkthroughs and the Roslyn Project Overview.
Yet another virus development kit (VDK) for dynamic virus injection through uncompiled/unverified strings.
Please tell me I'm not the only person on planet Earth that figured that one out.
It's a TOOL - in the sense of something which can be used to provide useful features (like scripts in a desktop application). Do not tell me that you never used a scripting language in any of your apps. And before you say NO, many apps cannot afford to sandbox scripts because that would simply cancel the whole utility of scripts in the first place. Unless you embed scripts in document files, read them from the net, or do some other foolish thing like that, there are no security issues, since the user executing scripts could just fire up a compiler himself and have an easier life.
Simple and elegant explanation. I was always hoping for something like Roslyn to execute dynamic code.
@script kiddie
I think you'll find that Roslyn will execute dynamic code inside a sandbox which will protect the OS and only give access to a few features as/when/if the code is digitally signed and trusted by the host.
In the latest CTP (September 2012) I found that the above examples don't work as the Execute and various other actions have been moved to the Session Type rather than the ScriptEngine.
I am just starting to look at Roslyn for my next-generation product because of the scripting capability. I have written a scripting language for my commercial ERP application, and one of the things that always concerned me is that malicious scripts could easily be fired and then destroyed, covering up their tracks. Does Roslyn support "script signing", similar to "code signing", to ensure the script is from a trusted developer/source? If not, then what are some recommended ways to lock down an application that supports C# scripting using Roslyn?
Hi all;
I want your help and comment in this link:
stackoverflow.com/.../how-i-can-execute-c-sharp-code-using-roslyn-in-asp-net-web-form
Thank you a lot
Any way of stopping the execution by passing a CancellationToken?
Thanks for explaining this so clearly. Very helpful.
Insights for your business
Blue Fish Guide to Tracking ROI in Marketing
When you decide you are ready for your business to grow, you need to invest in marketing. But like any investment, you need to be sure you can determine its return (ROI – Return on
Featured
Don’t Replicate Magento 1 in Magento 2, Think about your MVP
Better performance and scalability! Streamlined checkout! More mobile-friendly and better content management! B2B enhancements! Chances are you’ve already heard about all the new and improved
Using a Customer Journey Map to Define Your Business
What is a Customer Journey Map? A customer Journey Map is an end-to-end timeline visualization of a hypothetical customer’s interactions with a business. While many
Why Does ECM Implementation Take SO Long?!
Once you’ve committed to implementing an ECM for your company, a few questions may arise during the process. You might wonder why a place to
Seilevel And Blue Fish Join Forces
For 18 years Seilevel has been helping Fortune 1000 companies figure out what to build and Blue Fish has been helping those same customers build …
What B2B ECommerce Companies Need to Know About the New Federal Sales Tax Laws
Last June the Supreme Court decided to allow state and local governments to impose sales taxes on online businesses which sell to customers located within …
Case Study: Premier Research Labs
The Golden Years For decades, Texas Supplements has enjoyed a sterling reputation as one of the nation’s leading nutraceutical manufacturers. Thanks to their high quality …
Introducing the new CalOptix.com
CalOptix provides leading brands and innovative products in the optical accessories and over the counter eye wear market to thousands of optical retailers and mass …
2017 Sales Tax Changes
Happy New Year! With the new year comes new sales and use tax rules. New sales and use tax rules are in effect as of …
New Features in Ephesoft Transact 4.1
At the Ephesoft Innovate conference in October, 2016, the Ephesoft team highlighted the new features that would be part of the Ephesoft 4.1 release. Ephesoft …
Sales Tax Nexus: Everything you want to know
What is Sales Tax Nexus? Sales Tax Nexus is also called “sufficient physical presence” and is a legal term that refers to the requirement for …
Online Sales Tax Showdown
2017 will mark the 25th anniversary of a U.S. Supreme Court decision that has exempted many online retailers from having to collect sales tax on …
Related Products Rules in Magento 2
Related Products, Up-sells, and Cross-sells are a powerful tool in Magento. Do you find yourself spending a lot of time managing Product Relationships for your …
Blue Fish goes live with Junior.Club
Junior.Club is a subscription golf club program for kids. The program offers high quality Callaway golf clubs to junior golfers at an affordable monthly cost. …
Advanced Shipping Rules in BigCommerce
Recently, one of our BigCommerce clients asked us to help them solve some of their advanced shipping problems. Up until that point, they were using …
Stencil & CircleCI
Stencil is a newly introduced framework that’s greatly improving the development process for BigCommerce stores. Developers are finding that Stencil is giving them powerful tools …
Casting pointers to references
Casting a pointer (like Foo *) to a reference (like Foo &) via reinterpret_cast or a C-style cast probably doesn't do what you want. [*]
I came across one such casting bug today, and wondered what the compiler actually emits for it.
As it turns out, GCC warns when you cast a pointer to its corresponding ref type:
test.cpp:12:23: warning: casting ‘int*’ to ‘int&’ does not dereference pointer
Unfortunately, if you cast it to a corresponding const ref type it stays silent. Consider this snippet of C++ code:
#include <stdio.h>

extern int SomeGlobal;

void DumpValue(const int &value)
{
    printf("%d\n", value);
}

int main()
{
    int *pval = &SomeGlobal;
    DumpValue((const int &) pval);
    return 0;
}
Note that the correct approach is to use the deref operator (*) on pval to turn it into an int &, which is compatible with the const int & signature of DumpValue.
After a quick give-me-the-assembly command line sequence:
g++ -o test.o -c test.cpp
objdump -d -r test.o   # Get assembly with inline linker relocation directives.
We can see the resulting x64 assembly:
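Roughly, the relevant part of the listing looks like this (a reconstruction consistent with the walkthrough below; the instruction offsets match the discussion, but byte encodings and relocation details are approximate):

  2d:  movq   $0x0,-0x8(%rbp)      # relocation: R_X86_64_32S  SomeGlobal
  35:  lea    -0x8(%rbp),%rax
  39:  mov    %rax,%rdi
  3c:  callq  <_Z9DumpValueRKi>    # DumpValue(int const&)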
Walking through it step by step:
Instruction 2d is placing the address of SomeGlobal into the stack frame, at location -0x8(%rbp). [†] It currently has $0x0 as a value, with a note for the linker to replace that with the address for SomeGlobal when the linking process figures out where SomeGlobal lives.
Instruction 35 computes the address of that stack slot with a lea instruction (which is like a fancy-pants add).
Instructions 35 and 39 make that address of the stack slot into the first argument (%rdi) to DumpValue.
So, the argument won't contain the address of SomeGlobal, like we were hoping to provide to DumpValue, but the stack slot address instead. [‡] The cast resulted in a pointer to its operand — the behavior that you would expect if you took a value type and casted it to a ref, like so:
#include <stdio.h>

struct MyStruct {
    int foo, bar;
};

void DumpValues(const MyStruct &ms)
{
    printf("%d %d\n", ms.foo, ms.bar);
}

int main(void)
{
    MyStruct ms = {42, 1024};
    DumpValues(reinterpret_cast<const MyStruct &>(ms));
    return 0;
}