CONTENTS
JULY / AUGUST 2018 | Volume 2 No. 1
SPOTLIGHT
28-30 Life's A Beach
When’s the last time you went to the beach? If your answer is “it’s been a while,” it’s high time to rediscover the reason so many vacationers head to our beautiful Pinellas County beaches. Sugar sand, blue water, seabirds and dolphins, and spectacular sunsets are only minutes away. We offer a guide, along with beaches where you can frolic with your pooch. We’ve also tossed in some ideas for surviving the summer, from a spin at a local ice rink to our favorite beach bars.
36-39 Summer Craft Cocktails
There’s nothing like a cool, refreshing cocktail to take the edge off a hot summer day or night. We reached out to a number of St. Pete establishments with a reputation for concocting adventurous and, above all, tasty craft cocktails. And, lucky you, they agreed to give us their recipes. Get out your shaker and make some ice, it’s sipping time!
CONNECT WITH SPL
Find us online: StPeteLifeMag.com | Facebook.com/StPeteLifeMag | Twitter: @StPeteLifeMag
Advertising: 813.447.9900 | bdrake@stpetelifemag.com
StPeteLifeMag.com
July/August 2018
Editorial: editor@stpetelifemag.com 260 1st Ave. S. Suite 200-151 St. Petersburg, FL 33701
LIVE
WORK
14-15 Neighborhoods
The small downtown neighborhood known as Historic Roser Park has a fascinating story to tell. Tucked into a hilly enclave, it’s filled with history and a supportive group of homeowners.
42 Dining Out - Yeoman's Cask & Lion
Look for the beefeaters standing guard out front and you'll know you're there. The British have arrived to deliver standard dishes like Bangers and Mash and Shepherd's Pie in a super groovy atmosphere.
52 Beauty & The Burg
Cindy Stovall dishes on the latest St. Pete arts scene news.
22-23 French American School Of Tampa Bay
A multicultural and arts-forward school is coming to St. Petersburg this fall. We interview the founder and learn why this unique education brings a fresh perspective to the academic scene in Tampa Bay.
10-11 Rising Tide
Shared work space is thriving around Tampa Bay. We take a look at Rising Tide Innovation Center and other co-working spaces in St. Pete.
56 Conversations
We chat with St. Pete entrepreneurs who are making a mark on the local scene from a businessman to a boutique and winery owner to the dynamic duo behind a networking pop-up group for badass working women.
43 Dining Out - Ford's Garage
Fill up your tank and have fun doing it at the kitschy new Ford's Garage, where you'll dine in a modern-day version of a 1920s filling station. It's a hoot to check out the old cars, gas pumps and memorabilia while you're munching on a burger.
70 Kindness - Davion Graduates
Against all odds, St. Pete teen Davion Only-Going graduates from high school. His plea to be adopted was heard across the nation.
46-47 Fashion - Flirty & Fabulous
Show a little skin, embrace bright colors and make this the season for stepping out in style.
PLAY
45 Sips & Suds
Chilled sparkling wines are just the thing for relaxing poolside this summer.
76-80 SPL Scene
The social scene, galas, grand openings and fundraisers around town.
62-63 Travel - Key West
Experience a relaxing and fun getaway to Florida's Southernmost Island.
50 Can I Get A Witness
St. Pete's veteran photographer and activist Herb Snitzer has captured urban life and jazz culture for more than five decades. Cindy Stovall interviews him about his life and the current exhibit of his work at the Museum of Fine Arts.
48 Fashion - Staying Cool
Men can be casual and fashion-forward this summer with a colorful short-sleeve shirt and short pants.
64-65 Travel - NYC
Cindy Cockburn knows the Big Apple. She gives us insider tips on where to stay and dine.
WELCOME TO SPL

One year ago, we had a vision. We had watched St. Pete's exciting transformation over the past decade, marveled at its growth and the entrepreneurial spirit that has become a hallmark of this fine city. We saw changes that would bring new residents and businesses, excitement and a newfound identity, while preserving our rich history. As we observed this growth, with forward-thinking city leaders forging new ground, we decided to jump in and become entrepreneurs ourselves.

We would create a lifestyle magazine directed to the very core of what makes St. Pete our 'Burg: our entrepreneurs and dreamers, our residents, businesses, organizations and charities, our arts community, our medical community, our colleges and universities, our fabulous restaurants and breweries and bars and festivals. All these components that make St. Petersburg so unique were the impetus behind a publication for all of them: St. Pete Life.

As we mark our one-year anniversary, we thank all of our supporters who joined us in our decision to bring a fully focused, targeted, high-quality magazine to St. Petersburg. We also thank you for the kudos and words of encouragement in the past year. But in order to succeed, we need your continued support. Please consider advertising in an upcoming issue, so we can spread your message to area residents and continue to cover topics of interest for both locals and visitors.

It is clear that St. Pete Life has an avid following. In response to overwhelming requests, we have added a mail subscription offer for those not currently receiving the magazine in one of our targeted zip codes. You can sign up for bi-monthly mail delivery by going to our website at stpetelifemag.com and clicking on Subscribe.
St. Pete Life Magazine is a bi-monthly print publication distributed by mail to 20,000 of St. Pete's most discriminating readers, as well as at key local outlets. Follow us daily online and on social media at StPeteLifeMag.com. There you'll find the magazine's features, archived articles, digital-only content and promotions.
PUBLISHER/CEO
Beth Ann Drake

EDITOR
Marcia Biggs

ART DIRECTOR
Alicia Brown

ADVERTISING
Annette Mensch, Account Executive

CONTRIBUTORS
Cindy Cockburn, Travel/Features
Kevin Godbee, Dining/St. Petersburg Foodies
Sips and Suds
Jose Martinez, Men's Fashion
Megan Simons, Women's Fashion
Cindy Stovall, Arts Writer
Edith Swierzbinski

CO-FOUNDER/BUSINESS MANAGER
Ralph Zuckerman

Dorian Photography
Valerie Bogle

In this issue, we celebrate all that our steamy, sultry summers offer us here in the Sunshine City. When's the last time you headed to the beach for a sunset dinner? Or kicked back with an astounding craft cocktail created by one of our local mixologists at a bar or restaurant? Look for some encouragement within these pages.

With this issue, we also announce our partnership with St. Pete Catalyst, the new daily business news platform covering many of the city's remarkable entrepreneurs, leaders and business engines who are giving St. Pete its spark. To see what they are up to, check out stpetecatalyst.com.

We encourage you to follow us on Facebook, as we share fun and interesting events happening around the city between issues. As always, we welcome your comments. Here's hoping you'll stay cool this summer.

Beth Ann Drake
Publisher
bdrake@stpetelifemag.com

Marcia Biggs
Editor
editor@stpetelifemag.com
Cover Photo by Barry Lively. Fashion by Pippa Pelure Boutique
Sunset On St Pete Beach Photo by Kathleen M. Finnerty
CONCIERGE CONSULTATION + CREATION + CONSTRUCTION
AT GRAND KITCHEN + BATH OUR TEAM IS WITH YOU EVERY STEP OF THE WAY DURING YOUR HOME RENOVATION. ENSURING CUSTOMER SERVICE AND CRAFTSMANSHIP ARE OUR TOP PRIORITIES TO CREATE THE PERFECT SPACES THAT ENHANCE THE LIVES OF YOUR ENTIRE FAMILY ...EVEN HIS!
2600 4th Street N, St. Petersburg | grandkitchen.com | 727.327.3007 | LIC# CGC1521771
+ CABINETS & COUNTERTOPS + PLUMBING & FIXTURES + FLOORING & TILE + LIGHTING & DECORATIVE HARDWARE + OUTDOOR KITCHENS + DESIGN CONSULTATION + EXPERIENCED CONTRACTOR + OVER 40 YEARS OF EXPERIENCE
BUSINESS
Rising Tide Innovation Center Charting a course in St. Pete’s growing climate of shared work space
PHOTOS/RISING TIDE INNOVATION CENTER
BY MARCIA BIGGS

Sunlight filters through the windows as a few people tap, tap, tap on their laptops and life unfolds on the downtown streets below. Just before a holiday weekend, it is the quiet before the storm at Rising Tide Innovation Center. Just two weeks ago, nearly 200 people, including St. Pete Mayor Rick Kriseman and St. Pete Chamber President Chris Steinocher, gathered here to celebrate the official opening of St. Pete's newest co-working space. Every day now, the queries come in, a good sign for two law partners who decided to leave their Tampa office space and cross the pond for a new beginning.

The vision of law partners Leigh Fletcher and Tina Fischer, Rising Tide occupies some 8,000 square feet on two floors of the historic McCrory Building in the 400 block of Central Avenue. After working in a downtown Tampa high-rise for more than a decade, the dynamic duo made the move to downtown St. Pete last fall, where they felt a groundswell of support and energy rising.
“Tina already lived here, and most of my friends and family live here,” explained Fletcher. “We wanted to do something bigger than just a law firm. Our lease was up and we were looking at our options. Since most of our clients are national, we were able to work anywhere. Knowing St. Pete's entrepreneurial spirit and innovative community, we knew this was where we wanted to be.”

They made the move to the new fourth-floor space in September, coincidentally the same week that Hurricane Irma was bearing down on Florida. The name Rising Tide proved appropriate, because “a rising tide lifts all boats,” says Fletcher. In early May, the third floor became available and they jumped on it. Within 30 minutes of deciding to take the third floor, a call came in from iSocrates, an educational software firm relocating from Connecticut and seeking temporary office space. “That confirmed our decision to expand to the additional floor,” said Fischer.
BUSINESS
Rising Tide founders Leigh Fletcher and Tina Fischer celebrate their grand opening in May with Mayor Rick Kriseman and other city dignitaries.
Work spaces on the 4th floor enjoy expansive views of downtown St. Petersburg.
After a whirlwind 10 days of contractors, designers, movers and installers all pitching in at breakneck speed, the two floors were transformed into a light, airy and modern co-working center with a sophisticated design focused on natural elements: brick walls, custom-designed wood desks, tables and chairs, and glass partition walls, all complemented by furnishings and art in hues of blue and green (ocean and land).
Members may select from dedicated and temporary workspaces, with high-tech conference capabilities and collaborative spaces available. Monthly fees range from $45 to $325, covering everything from a “hot desk” that allows the worker to come and go from communal desks, to a dedicated desk and file cabinet or a private office space. A first-rate free snack bar and gourmet coffee are always available, and added benefits include a business mailing address, reception services and copy and fax machines.
In addition to the iSocrates group, the Rising Tide ranks include local businessman Mario Farias, the Latin Film Festival and tech firm WB3 – and, of course, the law offices of Fletcher and Fischer. They are currently welcoming community groups, small businesses and non-profits to use some spaces for receptions and other functions. Rising Tide Innovation Center is located at 433 Central Avenue, St. Pete. To learn more, go to risingtidecowork.com or call (727) 877-8230.
BUSINESS
The Co-Work Revolution Has Arrived

The idea of offering entrepreneurs, start-ups and established small businesses a home or on-the-go office is catching on in cities around the globe. Most co-work spaces offer basic services such as a mailing address, high-speed internet, conference space and board rooms, copy and fax service, and a desk or private office available on a monthly basis. Membership fees come in various levels depending on choice of amenities; most are monthly, but some may have daily drop-in fees and long-term office rentals.

In St. Pete, the five-story Station House started the wave when it opened in 2014 after extensive renovations to a 100-year-old fire station in the heart of downtown. Station House offers members more than co-work space, calling itself a “networking mecca” with monthly breakfasts, business workshops, and arts and social events. The website claims more than 150 businesses as members, many with the “instinctive desire for community and social interaction.” A largely millennial base shares ideas at the coffee house, networks at happy hours at Ichicoro Ane restaurant, enjoys regular yoga classes, or attends social events in 4,000 square feet of event space on the upper floors. stationhousestpete.com

Station House owner Steve Gianfilippo is opening a Station House in Tampa's Hyde Park Village in 2019, occupying 30,000 square feet of co-work and social space in the heart of the village.

Over in Tampa, the sprawling new Bay 3 co-working space in the Armature Works building in the Heights takes the industrial warehouse style to new heights. The open two-story, 11,000-square-foot space sports steel beams, open ductwork, modern lighting and floor-to-ceiling windows in a redeveloped part of the old Tampa Electric streetcar trolley barn. (The new Heights Public Market, a busy food court, occupies another part of the building.) Here, long communal tables mix with small work nooks, desks, private offices and even a living room area.
Like the other co-work spaces, Bay 3 offers members services from high-speed internet to mailing addresses, a copy center and conference rooms. armatureworks.com/work-space/ On a smaller scale, the new Arts Business Incubator recently opened by Creative Pinellas allows local arts and non-profit
organizations a chance to co-work in a communal space in Largo (formerly the Gulf Coast Museum of Art). Incubators seek to provide additional support for start-ups, with mentors and experts in business, technology, investing and other areas sharing a broad variety of skills and acumen. creativepinellas.org/incubator

TEC Garage is a tech business accelerator, part of the Tampa Bay Innovation Center, located at the St. Petersburg College Downtown campus. Geared toward startups with a strong business coaching attitude, TEC Garage has been connecting young companies and entrepreneurs with capital and coaching since 2014. The incubation program provides clients with access to networks, experts, international markets, industry peers, market research, service providers, university support systems and funding. The staff and mentors represent a broad range of skills and technical backgrounds. To co-work here, participants must be a manufacturing or tech company. tecgarage.org

The Greenhouse, an incubation and learning center for St. Pete-area startups, provides business owners and entrepreneurs with the education, resources and assistance necessary to thrive in the local economy. Staffed by an expert team from the City of St. Petersburg and the St. Petersburg Area Chamber of Commerce, the Greenhouse does not require membership to attend its extensive list of workshops, which are often free or low-fee. Topics range from how to apply for a small business loan to internet marketing and sales, software management, starting a non-profit, and taxes. stpetegreenhouse.com
PHOTOS/ERNESTO GARCIA
Sartorial Inc. Style For The Classic Male
• Made-To-Measure Suits, Sports Jackets and Shirts • Van Laack Shirts - A True Luxury Shirt • Raleigh and Slate Jeans - Finest Material Jeans • Strong Body, MPG and Viori Luxurious Athleisure Wear • Mizzen + Main Moisture Wicking, Four-Way Stretch Shirts • 2UNDR™ High Performance Men's Athletic Underwear • Exotic Leathers and Goodyear Welted Shoes • Stylish Tuxedo Rentals
400 Beach Drive • Downtown St. Petersburg • 727-290-6783 | customerservice@sartorialinc.com

NOW OPEN! MERMOSA WINERY + BOUTIQUE, formerly CERULEAN BLU, is St. Pete's first Winery + Boutique specializing in refreshing sparkling wines made from organic grapes. Sip our Refreshing Sparkling Wines, Shop the best resort wear fashions and Suit Up with Tampa Bay's Swimwear Fit Experts. Cheers!
Daily Wine Tastings • Bubbly Tastings • Sip + Shop + Suit Up
Ribbons & Ties Children’s Boutique
Ribbons and Ties Children's Boutique brings to you only the best in quality clothing and accessories from around the world. Children's Clothing • Swimwear • Christening Gowns • Baby Bags • Accessories • Events • Photography

Paintings and Sculptures by World-Renowned Artist WYLAND
Also featuring James Coleman, Arcade Latour, Gaylord Ho and many more Artists and Jewelry Designers.
727.201.8285 400 Beach Drive NE
Monday - Friday 10:00am - 7:00pm Saturday 9:30am - 7:00pm Sunday 12:00pm - 6:00pm
727.289.9501
400 Beach Dr NE • Suite 150
NEIGHBORHOODS
Historic Roser Park
Tucked into a hilly enclave, this downtown neighborhood is a historic gem
PHOTOS/KAI WARREN
BY MARCIA BIGGS

What just might be St. Pete's best-kept secret is really as old as, well, the hills that are a part of it. In Historic Roser Park, a jumble of historic homes both large and small are set into hillsides and on hilltops. Once the site of Tocobaga Indian mounds now long buried beneath the streets, this 270-acre district just south of downtown St. Petersburg is home to a fiercely loyal community: a mix of creatives and professionals, artists and entrepreneurs, young and old, single and married, straight and gay, all linked together by a love of historic charm and old-fashioned neighborliness.

The Historic Roser Park neighborhood lies hidden in the shadow of Bayfront Medical Center's mass of hospital buildings and parking garages. The steeply descending brick street of Roser Park Drive leads into a surprising landscape of majestic oaks and towering palms, along the winding Booker Creek, which is dotted with benches and blooming bougainvillea. The
neighborhood is home to a wide variety of architectural styles and types, including Frame Vernacular, Bungalow, Prairie, Foursquare, Craftsman, Mediterranean Revival, Colonial Revival, Neoclassical, and Tudor Revival. But as lots and older homes are sold, more new homes are being built. Like downtown St. Petersburg, the Historic Roser Park neighborhood is experiencing a renaissance.
The Roser Park Vision
Roser Park was the vision of wealthy Ohio developer Charles M. Roser. He began work on his idyllic “suburb” in 1911, purchasing a 10-acre citrus grove south of the city and eventually adding more land along Booker Creek. Brick was a rare and expensive material in those days, but Roser insisted upon its abundant use. The subdivision soon expanded to 80 unique and beautiful homes. The first residential subdivision to be established outside of the downtown St. Petersburg business district, Roser Park was an
early “streetcar suburb” conveniently located along the downtown trolley line.
To this day, the neighborhood Roser created is a veritable living museum of post-Victorian architectural style with close to 150 residences. Thanks to many years of determination and hard work by members of its neighborhood association, Historic Roser Park became the city's first Local Historic District in 1987. In 1998, it was listed on the National Register of Historic Places for its significance in community planning and development, architecture, and landscape architecture. The association works closely with the City of St. Petersburg to document and manage the Roser Park “vision” using a Neighborhood Plan.

Like many older urban neighborhoods, Roser Park fell into decline. It was in the early 1970s that people began purchasing the vacated, rundown historic homes at rock-bottom prices. Ron Motyka was one of them. “History is my passion,” confesses the schoolteacher and resident historian. He moved into a large, stately historic home in need of some TLC and jumped in head first, delving into archives and records, determined to restore not only his home but some of the neighborhood's former glory. He worked diligently with the city to obtain the historic district designation, and later the national one. With funding from a St. Petersburg neighborhoods grant, he set about establishing an Outdoor Museum.

The display route initially wound along Booker Creek but has since been expanded to most of the other streets in the neighborhood. The 28 plaques, mounted on 3-foot-tall wrought-iron poles, describe the history and development of Roser Park, and some show early 20th-century photos of the neighborhood, half of which has disappeared in the face of urban development. The district retains many of its original design features, including rusticated block retaining walls, brick streets, original hexagon sidewalk pavers and granite curbstones. Roser Park's neighborhood association continues to work to restore other original and period features, such as vintage street lighting and signage. It seems fitting that the grand centenarian oak spreading its arms over Booker Creek is called Charles (named after the founder). And yes, there actually is a park in Roser Park. The city has preserved 8 acres along the creek where a walking path offers a splendid reprieve and an almost idyllic sense that you are, indeed, walking through history. To see a 6-minute “Living Local” online video produced by the City of St. Petersburg profiling Historic Roser Park, go to stpete.org/neighborhoods/neighborhood_profiles.php

Downtown Nearby

Deb Camfferman has been with the Historic Roser Park Neighborhood Association since moving here two decades ago. She chaired the volunteer committee that put on the 2018 Tour of Homes on March 31, welcoming some 275 visitors. She loves the neighborhood for its proximity to downtown and the close-knit community. “I can ride my bike to downtown in a matter of minutes,” she says. It's the kind of neighborhood where potluck Porch Parties are a regular thing and a missing dog results in an all-out emergency alert. “Everyone here is very friendly, very social. We all watch out for our neighbors,” explains Camfferman, whose coral-pink 1926 Mediterranean Revival home perched above Booker Creek is something of a landmark.

Adam Gyson, the current president of the neighborhood association, lives with his wife, Sarah, and young daughter in a 1925 Dutch Colonial home. Coming from a condo lifestyle downtown, they embraced the idea of living in a historic city neighborhood, with the restaurants and museums of downtown close by. “When we first moved in seven years ago, we were one of only a few younger homeowners,” says Gyson. “Now I'm seeing more younger couples moving in and more young children.”
NEIGHBORHOOD
Yard & Garden

Roll Out The Barrel
It's the rainy season, and smart 'Burgers know they can recycle rainwater to feed their yards and gardens by using a rain barrel. Learn how to make one at the UF/IFAS Extension Florida-Friendly Landscaping workshop on August 3 at Weedon Island Preserve Cultural and Natural History Center. The one-hour seminar starts at 10:30 am, with advance registration required. Homeowners will learn how to save rainwater in a recycled plastic drum for use in plant beds and potted plants. Each attendee will receive instructions on how to build and install a rain barrel, or you have the option of purchasing a completed rain barrel in advance of the workshop. Free to attend; $40 includes a rain barrel. For more information or to register, contact Brian Niemann at (727) 453-6524 or bniemann@pinellascounty.org
Dress Up Your Yard
Treehouse Gallery is a treasure trove of colorful, kitschy, fun summer furnishings for the garden and patio. Discover loads of painted pottery, planters, and yard art, along with imported furniture, art and accessories from around the world. Treehouse specializes in one-of-a-kind old-world, vintage, rustic and salvaged pieces from Asia, India and Mexico. Located at 2835 22nd Avenue N. (727) 328-3606 treehousegallery.com
Summer Fertilizer Ban
St. Pete residents are reminded that Pinellas County's fertilizer ordinance prohibits the sale or application of fertilizers containing nitrogen and/or phosphorus between June 1 and Sept. 30. Phosphorus cannot be used at any time of the year unless a soil test confirms that it is needed. The nitrogen/phosphorus ban aims to prevent fertilizer runoff from entering storm drains and lakes, ponds, rivers, Tampa Bay and the Gulf of Mexico. Excess nitrogen and phosphorus can cause harmful algae blooms that lower oxygen levels and
lead to fish kills. Pinellas County is one of more than 90 Florida communities that have summertime fertilizer restrictions. Recent data shows that the ordinance is having a positive impact on the aquatic environment. Pinellas County Environmental Management recommends the
following Florida-friendly lawn care best practices to keep a healthy landscape during the summer:
• Look for products with “0-0” as the first two numbers on the fertilizer label.
• Apply iron to keep lawns green during the summer without increasing growth.
• Use compost to enrich soil.
• Set lawn mower blades between 3½ and 4 inches for St. Augustine turf to encourage deep roots that resist fungus and pests.
• Buy plants adapted to Florida's hot and humid climate and plant them in places that suit their sun and water needs.
For more tips on caring for your summer lawn and garden, along with Florida-friendly landscaping information, go to befloridian.org
Mosquito Madness
Pinellas County Mosquito Control asks all citizens to do their part to reduce the mosquito population. Remember that mosquitoes need only ¼ to ½ inch of standing water for larvae to survive. Mosquito Control technicians are treating known breeding areas and responding to calls from citizens, so report any pond that appears to be a breeding ground or property that seems highly active. To make a report or inquiry about mosquito control in your area, call (727) 464-7503.
What you can do:
• Empty water from flower pots, garbage cans, recycling containers, and buckets - any item that can hold water. Check for standing water under houses, near plumbing drains, under air conditioner drip areas, and around septic tanks and heat pumps.
• Flush birdbaths weekly and keep rain gutters cleared of debris. These are two major breeding grounds for mosquitoes.
• Get rid of bromeliads in your landscape. Mosquitoes love them because they hold standing water.
CONVERSATIONS
Desiree Noisette
Mermosa Winery & Boutique
PHOTO/BARRY LIVELY
BY CINDY COCKBURN

When I told a girlfriend I had to go on a cruise leaving from the South of France a few years ago, she literally drove me straight to a downtown boutique called Cerulean Blu for bikini shopping. On behalf of shopaholics everywhere, we say congrats to Desiree Noisette on rebranding her store concept and creating a new shopping experience at Mermosa. At her 400 Beach Drive location, daily wine tastings add to the pleasure of shopping. Her lines include a variety of tropical offerings, from the fabulous bathing suits she is known for to great gift ideas. Check out www.mermosa.com.
What is your background?
Prior to starting Mermosa, I sat at the helm of Cerulean Blu (now merged with Mermosa). My background was as a lawyer, but I must admit I'm really thriving in my role as an entrepreneur because it gives me the opportunity to explore possibilities, create jobs and implement industry-disruptive business processes. I am a recovering lawyer, swimwear fit expert, clothing designer, mom of two boys and now maker of the perfect day-to-night sparkling wine brand, Mermosa.

How did the concept for Cerulean Blu first begin?
The concept came after a girls' night out at Red Mesa Cantina. We spent the evening eating nachos, drinking margaritas and complaining about swimwear shopping. The next day when I woke up, I knew there could be a better way for women to go through the swimwear shopping experience without feeling so vulnerable, overwhelmed and exhausted. I started writing a business plan, and a few months later I quit my job and started Cerulean Blu, the place where women feel beautiful in swimwear, with curated swimwear designed for real women's bodies, expert fit specialists to guide the way, and giant dressing rooms with good lighting and a vacation-mode feel.

Please explain the unique concept behind Mermosa.
Mermosa is a metamorphosis of Cerulean Blu into the ultimate sip-and-shop experience with our own signature line of sparkling wines created for day-to-night. Wine has always been a part of our vacation-mode experience with Cerulean Blu, but it was a challenge to find the right wine to serve during the day in our hot tropical climate. The sparkling wines I tried over the years tasted too sweet, too tart or artificial. So finally my creative juices kicked in, and next thing you know I'm on a plane to Oregon wine country and our kitchen is transformed into a science lab with beakers and CO2 tanks.
We created a mermaid-inspired sparkling wine brand and introduced two in May. Mersecco uses organic grapes and is dry with no residual sugar. It is clean, crisp and refreshing. Mermosa also uses organic grapes and has just a splash of organic orange
and pineapple. We have beautiful white-painted bottles with gold mermaid labels and plan to put the wine in cans later this summer. Both are available for purchase in elegantly packaged bottles in-house and online. The grapes are sourced from Washington State, the wines are blended in Oregon's Willamette Valley, and we finish the bottling process right here in downtown St. Pete. I'm also working on a few other flavors – Mergarita, Mertini, Merjito and C, by Mermosa.

What inspires you?
I subscribe to the mantra that if you're not moving, you're dying. It's hard not to be inspired in St. Pete, where the community is full of creatives who mentor, encourage and support small businesses. Over the years I've explored new concepts – a resortwear line, a design studio and an
expansion, among other things. Some things worked, some didn't, but they were all good lessons that helped me prepare and build a network of collaborators, including fellow serial entrepreneurs and St. Pete enthusiasts Jon LaBudde and Beth England, for my biggest idea yet: Mermosa.

What are some of your favorite spots for dining in St. Pete?
Number one is BRUNCH! We are the brunch capital of Florida. With so many options it's hard to pick, but one of my go-to spots is Red Mesa Cantina. My husband and I love to combine Artwalk with dinner at Brick & Mortar and salsa dancing at Ceviche, then later coffee and dessert at Cassis. The perfect date night, worth every penny for the babysitter.
Season Tickets on Sale Now
18/19 Season
We’ll save you a great seat!
Beethoven’s Symphony No. 5 Disney in Concert – Tale as Old as Time The Planets Broadway Tonight Music of John Williams Gershwin’s Rhapsody in Blue Tchaikovsky’s Violin Concerto and much more!
FloridaOrchestra.org | 727.892.3337 or 1.800.662.7286
July/August 2018
19
6/19/2018 3:07:10 PM
StPeteLifeMag.com
BUSINESS
French American School of Tampa Bay PHOTOS / FRENCH AMERICAN SCHOOL
Willy and Elizabeth LeBihan are bringing the French American School of Tampa Bay to St. Petersburg after running the school in Maine for 17 years.

BY MARCIA BIGGS

This fall, area children will be offered the unique experience of learning through a multicultural education at the French American School of Tampa Bay (FASTB). The immersion school will be spearheaded by Willy and Elizabeth LeBihan, who bring 17 years of experience running L’Ecole Française du Maine: The French School of Maine. The French American School of Tampa Bay will offer children from pre-school through grade 12 the opportunity to benefit from both the French and American education systems. As a native of France, Willy brings knowledge of the French education system, while Maine-born Elizabeth integrates the American perspective. Both have graduate degrees and extensive professional experience as teachers and administrators. FASTB teaching emphasizes the importance of art, music, language and outdoor play as part of a successful learning environment. While a traditional curriculum of math, science,
and history is taught, a strong cultural arts program will help in the process of learning a second language, says Elizabeth. “In the US school system, art and music were the first programs to get cut from curriculums,” she says. “As teachers, we see the arts as an integral part of an education, not ‘extras.’ Children are not getting enough creative time or physical activity. At our school they will get it all – music, art, science, math, creative play and physical activities – these are all skills we try to develop.” The LeBihans are no strangers to St. Petersburg. With family living in the area, Elizabeth has been visiting since she was a child, and the couple has spent many family vacations here with their three children. They now reside in St. Petersburg and are excited to start their new school. After two years of preparation and searching for just the right location, the LeBihans selected a former day care center on
62nd Avenue near the Gateway area. The school will be 5,500 square feet with 7 convertible classrooms. This summer, the structure is being completely renovated and a new playground installed in the adjacent green space. The school will be affiliated with the Mission Laïque Française, renowned for successfully managing French schools abroad (more than 55,000 students in 41 countries); the French Ministry of Education and its prestigious international network of 400 schools; and the Association of French Schools in North America, which comprises nearly 60 primary, elementary and secondary schools in the United States and Canada, all accredited by the French Ministry of National Education. FASTB is going through the process of accreditation by the Florida Department of Education. What if your child does not speak French? The LeBihans estimate that nearly one-third of the students at their Maine school are American with no prior French skills, one-third are French, and one-third come from international backgrounds. “We think our market is really the American families who want their child to interact, learn another language and culture,” says Elizabeth. The French language is not taught through textbook learning at FASTB, but through conversation during school activities. Early childhood is the easiest time for a person to become bilingual, she adds. “It’s much easier to learn a second language when young; particularly in pre-school and the younger grades, they can just soak it up.” “In America, we teach languages in middle school or high school, and that is the worst time to learn since we have already developed our English language skills,” she says. “The earlier the better. We do not teach the language, we live the language. We do not teach French, we teach in French. And this actually helps to improve English language skills.
After 17 years of teaching in Maine, we are now finding that our students in college are excelling in English, scoring very high in writing and language skills.” Why St. Pete for the new school? “The way St. Pete is growing and developing is really exciting,” says Elizabeth. “The French government felt a real need for a school in this area; the nearest ones are in Atlanta and Boca Raton/Miami. There’s a growing French community in Tampa Bay, and many businesses find it easier to recruit international talent if their children can attend a bilingual school. If families are here for only a couple of years, their children can assimilate better back into their schools in Europe.” The French American School of Tampa Bay opens in the fall of 2018. The school is located at 2100 62nd Ave. N. in St. Pete. For more information, go to fastb.org, follow the school on Facebook at facebook.com/TheFrenchAmericanSchoolofTampaBay or call 727-800-2159 to arrange a campus visit.
SPL NEWS
St Pete Paws
More attention will be given to our four-legged residents under a new city initiative announced by Mayor Kriseman in June. The city aims to partner with the animal welfare and veterinary communities to create plans to make St. Petersburg a leading pet-friendly city. St. Pete becomes the second city in Florida to join the Better Cities for Pets campaign. The city has developed a program called St. Pete Paws, which will encourage environments where pets are welcome and owners are supported in providing safe, secure care.
“… already an incredibly supportive community for pets, but we’re aiming to do even more.”
St. Petersburg already has plentiful pet-friendly amenities, from hotels and restaurants to kennels, doggie daycares, veterinarians and fire stations outfitted with special airway equipment for pets.
These efforts are positively impacting pets. Pinellas County’s live release rate – measured by adoptions, transfers and other factors – reached a six-year high of 73 percent in 2017. In addition, trap-neuter-vaccinate-return (TNVR) programs that started in 2014 are keeping community cats healthier and limiting the number of litters born into community cat colonies.
“Pets rely on us to keep them healthy, and this initiative makes pet health and happiness a shared responsibility for our community,” said Dr. Jimmy Barr, Chief Medical Officer.
SPOTLIGHT
Life’s A Beach
BY MARCIA BIGGS

When’s the last time you went to the beach? If your answer is “it’s been a while,” it’s high time to rediscover the reason so many vacationers head to our beautiful Pinellas County beaches – they are world-class travel destinations! Sugar sand, blue water, seabirds and dolphins, and spectacular sunsets are only minutes away. Now stop making excuses and beat the heat by heading to one of our many beaches. If you are new to the area, welcome to paradise. Here’s our guide to some favorite places to lay your beach towel:
Honeymoon Island State Park
Nature lovers will appreciate the beauty of Honeymoon Island, an undeveloped 385-acre barrier island off Dunedin with 4 miles of white, sandy beaches, hiking trails and great bird-watching opportunities along the 2.5-mile Osprey Trail. Florida’s most-visited state park draws those who are looking to explore the real Florida. Be sure to stop in at the Nature Center upon arriving, where you can learn about the island’s critters, birds and history. The South Beach Pavilion and Café Honeymoon provide food and beverages, and rent beach umbrellas and chairs,
kayaks and bicycles. floridastateparks.org/park/Honeymoon-Island Admission fee: $8 per vehicle; $4 single occupant vehicle; $2 pedestrians, bicyclists, extra passengers; $4 per vehicle sunset fee starting one hour prior to sunset.
Caladesi Island State Park
Sister island to Honeymoon Island State Park, Caladesi is one of the few completely natural barrier islands along Florida’s Gulf Coast. Ranked as one of the Top 10 Beaches of 2016 by Dr. Stephen Leatherman (“Dr. Beach”), the 3-mile-long island is accessible only by boat. Get here by taking a short ferry ride from Honeymoon Island or rent a kayak from the outfitter along the Dunedin causeway. A paddling trail winds through the mangroves. For boaters, the park has a marina with electric and water hookups. Come prepared by bringing beach chairs and a cooler, although the concession offers great burgers and sandwiches. Ferry service is provided by the Caladesi Island Ferry ($14 adults, $7 children 6-12, free for ages 5 and younger); (727) 734-1501. floridastateparks.org/park/Caladesi-Island
Admission fee: $8 per vehicle; $4 single occupant vehicle; $2 pedestrians, bicyclists, extra passengers; $4 per vehicle sunset fee starting one hour prior to sunset.
Clearwater Beach
With its wide sugar-sand beaches, lively restaurants and bars, and kid-friendly water sports, it’s no surprise that fun-loving Clearwater Beach was voted the No. 1 Beach in America in 2018 by TripAdvisor. There’s something for everyone here from millennials to couples and families. Stroll the half-mile Beach Walk, check out the surf shops and cafes, sail away on a pirate ship, go dolphin watching, wine and dine on a sunset dinner cruise or head out on a fishing charter. Don’t miss a grouper sandwich at one of four Frenchy’s seafood restaurants or a beachfront margarita at Palm Pavilion. For the best beach sunset view, you can’t beat the rooftop bar at Jimmy’s Crow’s Nest at Pier House 60. Sunsets at Pier 60 are nightly affairs with crafters, buskers and live music on weekends.
Sand Key
If you prefer a little solitude with spectacular Gulf views, quiet and romantic Sand Key Park is just minutes from the hustle and bustle of Clearwater Beach. Pack a picnic, grab a fishing pole or just kick back for a lazy afternoon at this 95-acre county park with its wide, sandy beach, picnic pavilions and bathhouses. Walking trails and a boardwalk wind through a salt marsh where you can spot herons, egrets and other wading birds. Shelling can be excellent just after a storm. Parking fee: $5
Indian Rocks Beach and Indian Shores
High-rise condominiums share the beach with small hotels and motels, beach cottages, family restaurants, and neighborhood bars, but you can still find some 20 beach access points that lead to the white sandy beaches here. Come early to grab a parking spot at Indian Rocks public beach with its restrooms and bathhouse. Discover the tiny Seaside Seabird Sanctuary in Indian Shores, home to injured pelicans, birds of prey and other shorebirds. When hunger and thirst strike, you’ll find lots of options like Lulu’s Oyster Bar and Tap House and Guppy’s on the Beach.
Beach Guide

Honeymoon Island State Park
Caladesi Island State Park
Clearwater Beach
Sand Key
Belleair Beach
Indian Shores & Indian Rocks Beach
North Redington Beach & Redington Shores
Madeira Beach
Treasure Island
St. Pete Beach
Pass-A-Grille Beach
Shell Key, Fort De Soto & Egmont Key
Treasure Island
Known for three miles of sparkling sand and sea, Treasure Island draws locals to popular beach bars like Sloppy Joe’s at the Bilmar Beach Resort, or Caddy’s and the Ka-Tiki Lounge, which offer live music most evenings on Sunset Beach. Gator’s Café and Saloon on John’s Pass is where you can sit on the outdoor deck and watch the boats go by. A popular drum circle gathers on the beach each Sunday, an hour before sunset, with bongo and bucket drummers, hula hoopers and fire dancers of all ages.
St. Pete Beach
Settle in at a tiki bar, check out surf shops or indulge in a romantic oceanfront dinner – it’s all easy to do at St. Pete Beach. Some of the best beach bars can be found here, like Jimmy B’s, the Undertow and Harry’s. Stroll the Corey Avenue District’s boutiques and galleries and stop for a bite to eat. For some great people watching, enjoy a cocktail poolside at the Loews Don CeSar Hotel, the five-star 1928 “Pink Palace.”
Pass-A-Grille Beach
A quiet respite can be found beyond the sea oat dunes at Pass-A-Grille, just south of St. Pete Beach. When you get hungry, grab a snack at the beachfront Paradise Grille or walk across the street to the legendary Hurricane restaurant for a grouper sandwich (the sunset views from its rooftop can’t be beat). Stroll the galleries along the Eighth Avenue Historic District or head to the Merry Pier for some fishing (rod rentals available) or to find boat excursions for dolphin-watching, snorkeling or exploring Shell Key, a small island that is both a bird refuge and a great shelling spot.
Fort De Soto Park
This gorgeous 1,136-acre county park at the southern tip of the Gulf beaches immerses visitors in an unspoiled Florida barrier island habitat. Fort De Soto can’t be beat as an all-day getaway with grandma and the kids. There’s plenty to do with two white, sandy beaches, 15 picnic shelters, a 7-mile paved trail for biking, a historic fort to explore, kayak and bike rentals and two fishing piers. Catch the ferry to Egmont Key for a chance to do some snorkeling and tour a working lighthouse. A 238-site family campground here is one of the prettiest on the Gulf Coast. Parking fee: $5. pinellascounty.org/park/05_Ft_DeSoto.htm
Ready for a Weekend Getaway?
To find a beach condo, cottage or hotel room, try one of the many online search engines such as hotels.com, expedia.com, trivago.com and vrbo.com. Another good resource is VisitStPeteClearwater.com, where you’ll find an extensive directory of beach lodging.
Say ‘No Straw, Please’

Americans use 500 million plastic straws each day, which means you’ll likely use over 35,000 of these little suckers in your lifetime. And it’s their small size that makes them unrecyclable. Almost a third of all plastics produced escape collection systems and wind up in our oceans, where they can be swallowed by marine life. The Sunshine City’s business community has shown an overwhelming commitment to turning the tide on single-use plastic pollution along our coast, with nearly 150 businesses joining the campaign to provide straws only upon request. Look for the No Straws sign and please do your part by saying “No straw, please” when you drink or dine out. To see a list of establishments that have joined the campaign, go to nostrawsstpete.com. Now stop sucking, please.
Beach Bars

You won’t be at the Palm Pavilion for long before you realize you’ve discovered paradise. Locals love this open-air beachfront eatery not just for the sunset celebrations but also for the laid-back, down-home feel (not to mention the extensive menu).

Head to Clearwater Beach for Shephard’s Tiki Beach Bar and Grill, one of the liveliest beachfront spots you’ll find in the St. Pete/Clearwater area. Check out the outdoor tiki bar and The Wave, a nightclub featuring DJs and bottle service that’s geared toward a dressed-up younger set. Dance the night away while a live band plays on a massive outdoor stage.

A quintessential beachfront spot, Undertow Beach Bar has a great selection of import and craft beers. The oversized wooden beach chairs facing the beach volleyball court are a huge hit, and you can catch sunsets from the rooftop deck.

The rustic-feeling PCI Beach Bar is a spring break hub in St. Pete Beach. Old license plates line the ceilings, you can write on the wooden bar and there’s good live music daily.

Music. Mai-tais. Sunsets. You can’t go wrong with a trip to Jimmy B’s Beach Bar, where most bands play classic rock covers to the delight of flip flop-wearing dancers. There are multiple bars, and a boardwalk leads you out to the beach.

… find on the open-air second floor are priceless. Legend has it that Gators Cafe and Saloon is the longest waterfront bar in the world. It has three floors and is the place to watch the game – especially if you’re a (University of Florida) Gators fan. You may even spot a dolphin or two passing by on the water as you sip the night away.

visitstpeteclearwater.com
No Sweat!

Why drink wine from a sweaty plastic cup at the beach when you can sip from an insulated stemless wine cup or champagne flute? We love the variety of SWIG vacuum-insulated drinkware created for all your drink needs. The stainless steel exterior comes in an array of eye-catching colors and will keep your hands dry, while copper-coated insulation keeps your beverage cold. The line includes bottles, tumblers, lowball glasses and (our fave) the Combo Cooler, a coozie that converts into a cup by adding a slide-closure lid. Pick from 20 colors like pearl, rose gold, coral, orange and ocean blue; prices start at $19.95. Go to swiglife.com to shop.

Take A Sipp

Perfect for the patio or poolside, Sipp mixes organic ingredients to create unique layers of blended fruits and herbs in a refreshing sparkling beverage. Certified USDA Organic, the non-alcoholic beverages come in bottles and cans in six yummy flavors – Zesty Orange, Ginger Blossom, Mojo Berry, Lemon Flower, Ruby Rose and Summer Pear – all lightly sweetened with agave. If you feel like going wild, they make great cocktail mixers. Find Sipp at Target stores; for recipes for a Zesty Orange Margarita, Lemon Flower Martini or Ginger Pineapple Moscow Mule, go to haveasipp.com.
Take A Spin

Ok, so you’re not Tara Lipinski or Johnny Weir. If you’ve ever thought about donning a pair of skates and taking to the ice, sign up for a trial ice skating class at one of three area ice rinks. The Learn to Skate trial class is $21, which includes a 30-minute class, 15 minutes of practice time and rental skates. If you want to continue, sign up for figure skating or hockey classes. If you’d rather just try it on your own (a great date or family fun activity), go to tampabayice.com to find rink locations and public skate schedules.

Clearwater Ice Arena, 14044 Icot Blvd., (727) 536-5843
Tampa Bay Skating Academy – Countryside Mall, 27001 US Hwy 19 North, (727) 723-7785
Tampa Bay Skating Academy – Oldsmar, 255 Forest Lakes Blvd., (813) 854-4010
Most rinks offer daily public skating sessions starting at $8.50 for admission and $4.25 for skate rentals.
Dog Beaches & Parks

Feeling like a romp? You can bring your furry friend to one of these area dog beaches and parks. At Fort De Soto Park in Tierra Verde, a beautiful stretch of shoreline is reserved just for dogs. In addition, the Paw Playground offers two fenced-in play areas where large and small dogs can run free. There are even amenities like cooling stations with showers and dog-level water fountains. The 1,136-acre county park has been named one of Petside.com’s Top 10 Dog-Friendly Beaches and one of the South’s Best Dog Parks by Southern Living magazine. Honeymoon Island State Park in Dunedin has award-winning beaches, one just for dogs. The park’s Osprey Trail is a great nature walk and allows dogs on a leash. Other dog parks: North Shore Park in downtown St. Petersburg, Sand Key Park in Clearwater, A.L. Anderson Park in Tarpon Springs, John Chesnut Sr. Park in Palm Harbor, Walsingham Park in Largo and Boca Ciega Millennium Park in Seminole.
Save Your Skin, Save The Planet

Research shows that some 90 percent of sunscreens are taking a serious toll on the environment, contributing to the bleaching and inevitable death of coral. The culprits include the common sunscreen ingredients oxybenzone and octinoxate, which have proven toxic to living coral. Help the planet while protecting and moisturizing your skin with the all-natural and deliciously fragrant Florida Salt Scrubs line of skin care products. Based on locally sourced Atlantic sea salt and made in the Sunshine State, the line includes scrubs for hands, feet and body, plus soaps, spray lotion and bath bombs in yummy tropical scents like grapefruit, mango and coconut. We love the spray sunscreen, which is reef-safe (PABA- and oxybenzone-free), made with coconut oil, and comes in SPF 15, SPF 30 and SPF 50. Don’t forget to protect your lips when you’re out in the sun: Florida Kiss lip balms are all SPF 15 and come in orange, coconut, mango and key lime flavors. Find the entire line of products at floridasaltscrubs.com.
Did you know… A lovely beach with white powder sand and palm trees is within walking distance of the downtown St. Pete core. Grab your beach chair and head to North Shore Park at 901 North Shore Dr. NE; the 33-acre complex includes the beach, a playground, a dog park, restrooms and pools. Spa Beach Park is now closed due to pier construction.
Shake it, Baby

Nothing beats the heat like a tropical – and oh, so tasty – craft cocktail
It’s summer. It’s hot. It’s time for a refreshing adult beverage, something fruity, or fizzy, or tropical. You know you want it. Yes, we love our margaritas, our mojitos, and our Moscow mules. Florida favorites all. But here in the ’Burg, it’s fair to say that the craft cocktail has risen to new heights on the popularity meter, thanks to some of the most talented mixologists on the planet. We reached out to a number of establishments with a reputation for concocting adventurous and,
above all, tasty craft cocktails. One thing we learned is that an essential tool for almost all of them is a cocktail shaker, so if you attempt to create these at home, you’ll need one. You’ll also need a coil-rimmed strainer, and the appropriate glass is, of course, essential (or so say the bartenders). Note that most bars use specific brands of alcohol and liqueurs to get the precise meld of flavors. Substitute at your own risk.
Simple Syrup

A key ingredient in most fruity summer cocktails is simple syrup. If you can boil water, you can make simple syrup. Although different ratios can be used depending on how sweet you want it, the rule of thumb is equal parts granulated sugar and water. Here’s how to make a basic simple syrup: Heat 1 cup water in a saucepan, gradually stir in 1 cup granulated sugar, bring to a boil and stir until dissolved. For a thicker syrup, lower the heat and stir a few minutes longer. Let cool before using. Pour into a jar and keep refrigerated.
Strawberry Social
Birch & Vine at the Birchwood

1.5 oz. Grey Goose Le Citron
1 oz. St. Germain Elderflower Liqueur
1 oz. strawberry simple syrup (see below)
½ oz. fresh lemon juice
Splash of Prosecco; slapped mint leaf for garnish

Combine all ingredients except Prosecco in a shaker with ice. Shake vigorously. Strain into a martini or coupe glass and top with Prosecco. Slap a mint leaf to release the oils and place on top for garnish.

Strawberry Simple Syrup
1 pint strawberries
1 liter sugar
1 liter hot water

Mash the strawberries into the liter of sugar until the sugar is completely saturated. Add the liter of hot water and let sit in the refrigerator overnight. Strain the syrup the next day through a fine strainer. Will last up to 30 days refrigerated.
Rum Punch
Paul’s Landing at the Renaissance Vinoy

1 ½ oz. Hibiscus Infused Paul’s Landing Rum
½ oz. Abuelo Rum
¼ oz. Benedictine
½ oz. Demerara Syrup
1 oz. lime juice
Lime wheel garnish; jerk seasoning for the rim

Combine all ingredients in a shaker, add ice and shake 12 times. Strain into a rocks glass over fresh ice rimmed with jerk seasoning and garnish with a lime wheel. Note: Demerara Syrup is a simple syrup made with darker demerara sugar.
Lemon Ginger-Tini
400 Beach

2 oz. citrus vodka
Squeeze of lemon
Squeeze of lime
Splash of simple syrup
Ginger beer, to top

Place all ingredients except ginger beer in a shaker and shake over ice. Pour into a martini glass, top with ginger beer and serve with a slice of lemon.
Goblin Punch
Ichicoro Ane

1 oz. Agricole Rum (preferably Paranubes from Mexico)
¾ oz. dry vermouth (preferably Cocchi Americano)
½ oz. Velvet Falernum (available at liquor stores)
½ oz. grapefruit juice
½ oz. cinnamon syrup (see below)
¾ oz. lemon juice

Combine all ingredients in a cocktail shaker. Add cubed ice and shake hard for 8 seconds. Strain over pebble ice into a hurricane glass. Garnish with a mint sprig, torched cinnamon stick and cocktail umbrella.

Cinnamon Syrup
2 cups water
2 cups granulated sugar
2 cinnamon sticks

Bring water to a high simmer in a sauce pot (do not allow the water to boil). Whisk in granulated sugar until fully dissolved. Place cinnamon sticks into the pot, cover and reduce heat to low. Allow contents to simmer for 45 minutes. Remove from heat, strain out the cinnamon sticks and store the syrup in a sealable container in the refrigerator. The syrup has a shelf life of 2 weeks.
Bubbly Redhead
Sea Salt

7-8 mint leaves
1 lime wedge
¾ oz. raspberry puree or 10-12 fresh raspberries
¾ oz. simple syrup
1 oz. elderflower liqueur
Prosecco

Muddle the mint and lime in the bottom of a pint glass. Add ice, then the remaining ingredients. Toss gently once with a shaker on top or stir with a long-handled spoon. Top with Prosecco.
The Devereaux
Callaloo

1 oz. Bulleit Rye
¼ oz. St. Germain elderflower liqueur
Squeeze of ¼ fresh lemon
Champagne, to top

Place ingredients in a shaker and shake over ice. Strain into a snifter glass over ice and top with champagne.
Eternal Sunshine
Mandarin Hide

The key to this cocktail is the layering of ingredients. Mandarin Hide will be serving Eternal Sunshine as a specialty menu item during July and August.

2 oz. Ketel One Botanical Mint & Cucumber Vodka
1 oz. fresh lemon juice
¾ oz. agave nectar
1 ½ oz. hibiscus tea
Prosecco and soda water
Cucumber for garnish

With a vegetable peeler, peel one slice of cucumber lengthwise and place it around the inside of an empty tall, thin glass. Pour the hibiscus tea into the bottom of the glass and then add ice. In a cocktail shaker, combine the vodka, lemon and agave and shake. Slowly pour into the glass over a small spoon to layer the cocktail on top of the hibiscus tea; the slower you pour, the easier it is to layer. Top with a splash of Prosecco and soda water.
Calm Before the Storm
The Mill

1 slice fresh pineapple
1 oz. Mount Gay dark rum
1 oz. Oak & Palm Coconut Rum by St. Pete Distillery
1 oz. orgeat (almond syrup)
A few drops of bitters

Muddle the pineapple and add to a shaker. Shake all ingredients over ice and strain into a tall glass.
Sunshine Skyway Martini
Hybar at Hyatt Place

2 oz. St. Pete Distillery Gin
1 oz. mango puree
½ oz. simple syrup
Splash of Tipplers Orange Liqueur

Shake all ingredients with ice and strain into a martini glass.
Dive In

Sweet, sexy swimwear is all the rage. From bikinis to one-pieces, the variety of styles to choose from seems endless. Be sure to add accessories, from hats and bags to sunglasses and sandals, plus a matching cover-up to go from beach to bar. Fashions from Mermosa Winery and Boutique.
DINING OUT
200 1st Ave. S., St. Petersburg
yeomanscaskandlion.com/st-pete/
(727) 513-9367
Yeoman’s Cask & Lion
Traditional British pub fare is a royal treat for the ‘Burg
PHOTO/MARCIA BIGGS
Some locals cringe at restaurant theme concepts and think of them as tourist traps. Think that if you want, but you will certainly be wrong and missing out on some mighty tasty eats. I’m letting Yeoman’s slide on the themed decor, as they do it in a fun, tongue-in-cheek way that has a more grown-up appeal. Speaking of vibe, they actually have a guy, Scott Estes, who is the VP of Vibe. On the wall, visitors are greeted by one-of-a-kind, hand-painted caricatures of British celebs such as Ozzy Osbourne, Queen Elizabeth, Paul McCartney and boxer Lennox Lewis. Yeoman’s is definitely family- and kid-friendly. The point is to get over any aversion to a themed restaurant. The first few things that popped into my mind for British pub grub were fish and chips, bangers and mash, and Scotch eggs. I didn’t even look at the menu. I just ordered those as soon as I sat down, and yes, of course, they are on the menu. The Fish and Chips are made with North Atlantic cod, deep fried in the house’s signature beer batter and served with chips (fries), coleslaw, lemon and tartar sauce. The portion is huge! It’s definitely one of the best we’ve had: the seasoning is perfect, it’s crisp and crunchy on the outside, and moist and flaky on the inside. The accompanying fries and slaw are very good, and the tartar sauce is made from scratch in-house. I loved everything about this dish.
PHOTO /KEVIN GODBEE
Bangers & Mash
I loved the Bangers and Mash: fresh sage sausage and homemade mashed potatoes with a hearty Guinness and onion gravy, served with peas and carrots. It’s great comfort food. These sausages have a very fine casing and filling compared to Italian sausage, with a mild but appealing flavor. The Scotch Egg, drizzled with a spicy mayo sauce, was one of the best I’ve ever had. One of the surprise stars of the show was the Philly Steak Mac & Cheese, homemade and loaded with shaved rib eye steak. This was sinfully delicious! The chewiness of the macaroni, the gooeyness of the cheese and the well-seasoned meat were extremely satisfying. I’ve been on a shrimp and grits kick lately, so I decided to try the Shrimp and Chips: a half pound of Gulf shrimp, hand battered and deep fried, served with coconut-curry sauce, cocktail sauce, chips (fries) and coleslaw. It’s a nice change of pace from Fish and Chips. We can’t wait to go back to try the Shepherd’s Pie, Tikka Masala and English Breakfast. Speaking of breakfast, Yeoman’s opens at 10 am on Sunday for brunch, with bottomless mimosas for $9.95.
DINING OUT 4306 1st Avenue S. St. Petersburg fordsgarageusa.com (727) 295-3673
Ford’s Garage
Fill up your tank at this fun vintage-auto theme eatery PHOTO /MARCIA BIGGS
BY KEVIN GODBEE Occupying the space formerly home to the Rowdies Den, Ford’s Garage shares a kitchen with Yeoman’s Cask & Lion, but with a completely different menu and theme (both are owned by Tampa-based 23 Restaurant Services). This is the ninth Ford’s Garage location in Florida. Like Yeoman’s, Ford’s Garage is a themed restaurant: a 1920s garage with fuel pumps, Model “A” Fords and engines, where the napkins are garage rags and the female waitstaff are reminiscent of Rosie the Riveter. Some people roll their eyes at a themed restaurant, but Ford’s does it in a fun, non-cloying way. More important, the food is actually quite good, with high quality and generous portions. We started with the Giant Funnel Tower of Jumbo Piston Onion Rings. These things are huge! Served on an upside-down flexible oil funnel, they are crispy, a little bit sweet, and definitely filling; share them with at least two or three people if you want to have room for anything else. Other appetizers include Deep Fried Dill Pickle Planks and Buffalo Chicken Dip. The Model “A” is my kind of burger. Check out these ingredients: Black Angus with Aged Sharp Cheddar Cheese, Applewood Smoked
Bacon, Pico de Gallo, Arugula, Lettuce, Tomato, Red Onion, and a Fried Egg on a Brioche Bun. The BBQ Brisket Burger is a half pound of Grilled Black Angus topped with Bourbon BBQ Sauce. I was told the Hickory Smoked Brisket Burger is very popular; it has Applewood Smoked Bacon, Red Onions, Shredded Cheddar Cheese, and Crispy Onion Straws. Ford’s is a burger joint, so why would I order chicken unless it’s wings? I never would have ordered chicken, but I’m glad I took VP of Operations Billy Diamond’s advice. “This one’s a star,” he said, referring to the Chicken Henry. He was right. The Chicken Henry is a grilled marinated chicken breast topped with a lemon butter sauce, goat cheese, basil, and sun-dried tomatoes, served with white cheddar mashed potatoes and green beans. The goat cheese and sun-dried tomatoes really make this item special. Ford’s Garage is your neighborhood burger and beer joint, where everyone is welcome. Kevin Godbee covers the local food scene at stpetersburgfoodies.com.
SIPS AND SUDS
Food Bites

Pier Restaurant Announced
Fans of Beach Drive’s Birch & Vine restaurant and Canopy rooftop bar will be happy to learn that the Birchwood’s Chuck Prather has been approved to bring an upscale restaurant, a rooftop tiki bar and a casual walk-up cafe selling coffee, ice cream and to-go foods to the new Pier District. The city will lease the property to Prather in the highly anticipated 26-acre, $76 million Pier District. Doc Ford’s Rum Bar & Grille was named earlier this year as another restaurant tenant, with Florida owner/author Randy Wayne White at the helm. The projected opening date for the new pier is fall 2019.
Dining Deal
Early birds can catch a great deal for a light dinner at 400 Beach this summer. Between 3 and 6 pm, enjoy a three-course meal from a select menu for only $14.95. Entrees include shrimp scampi, miso-glazed salmon, shrimp linguini, sirloin and mushrooms, chicken tenderloin skewers and vegetable stir fry. Happy hour Monday through Friday from 3 to 6 pm also can’t be beat, with $3 house wines and $6 bar bites. Beam me up, Scotty!

A Star is Born
Healthy dining just got better in the ’Burg with the opening of Rawk Star Café, a gluten-free, organic, raw vegan restaurant. Don’t let the raw vegan part fool you, though. This is creative culinary genius at work that even a carnivore will enjoy. (Dehydrators are used to give an element of heat.) Located next to the Chihuly Collection at 740 Central Ave., the bright, cheery café offers delicious vegan burgers and wraps, salads, and tacos, along with breakfast and dessert selections. Stop by for a super healthy smoothie, kombucha or coffee. Ooooohmy.

Cooking With Class
All thumbs with a knife? Popcorn and spaghetti your main home dishes? The Chef’s Hat wants to help you reclaim your cooking creativity and give you the confidence to try new recipes, new techniques, and new cuisines. Chef Ivan Jeanblanc and his culinary team will present pop-up cooking classes and full-day boot camps that include dining in a restaurant-style atmosphere and a fun social setting. Chef Ivan studied at the Culinary Institute of America and in programs throughout France. The themed cooking classes take place in the newly renovated Open House, a brand-new event space at 1113 Central Ave., where Ricky P’s formerly resided. For more information on upcoming classes, go to achefhat.com.
Summer Bubblies Sparkling wines are a perfect choice to beat the heat
EDITH SWIERZBINSKI Pop goes the cork! Time to celebrate sparkling wine on an everyday basis. Crisp, fresh, lively, aromatic and approachable, with aromas of apples, pears and peaches, oh my. It’s also very food friendly: perfect to accompany brunch, cake or barbecue.

Bubbles in your mixed drink? Yes, please. Cocktails featuring sparkling wine have made quite an impression. From the traditional French 75 to sassing up a classic mojito, sparkling wine recipes are here to stay.

A few of my favorites: Anna de Codorniu Brut Rose - 70% Pinot Noir and 30% Chardonnay. Produced authentically, Methode Traditionelle, using the finest estate-grown fruit. This cava is well-balanced and refreshing, featuring aromas of summer berries followed by apple, cherry and strawberry flavors and a creamy texture. Great for mimosas, and it will pair equally well with smoky poultry, barbecue or chocolate. $14

Zardetto Prosecco DOC Brut - 100% Glera. Elegant and refined, with fine bubbles. Aromas of white flowers, apricot and herbs create a delicately soft bouquet. The palate is comprised of citrus, orange blossoms and stone fruits; fresh citrus and floral notes linger on the finish. It’s an ideal aperitif, the perfect choice for celebrations large and small, and, not to mention, an excellent partner in sparkling cocktails. $15

Champagne Lallier Grand Reserve Grand Cru Brut - 65% Pinot Noir and 35% Chardonnay. Aromas of delicate buttery brioche, apples, lemon meringue and hints of apricot. Fantastic balance, creamy texture and crisp minerality. Charcuterie, lobster and soufflé will make for a memorable pairing. $46

So, go ahead, add some bubbles to your everyday. It’s happiness in a bottle with a pop! Edith Swierzbinski is owner of 4th and Vine, a boutique wine shop in St. Petersburg.
FASHION
Flirty & Fabulous Step out and have fun with your summer style BY MEGAN SIMONS When it comes to summer fashion, we need to be… daring! Yes, daring, while at the same time having lots of fun showing more skin and wearing bright colors and bold patterns. Let’s start with possibilities -- what we imagine as the perfect style for hot and steamy days in the tropics. Start with nautical navy and white, add Lawrence of Arabia white in flowing linens, then khaki and beige in military styles from African safaris. Finally, there are the predictable palm trees and hummingbirds printed in countless blends and vibrant colors. What if we bravely and, of course, stylishly combined all of these ideas into one creative summer wardrobe that makes you feel like … you? Does that sound intimidating? Let me walk you through it. The easiest way to start creating a summer-friendly outfit is to pick a loosely fitted white dress in a super-light fabric with natural texture. Then take inspiration from the exotic islands and wild forests, where mystery has an erotic tone, and introduce a floral accent to your white dress. Now we almost have it: add eye-catching accessories like metallic sandals and a large straw hat, and you can step out with effortless, sensual style.
“Life is too short to wear boring clothes!" That idea can be expressed in many ways, since summer is the time of year to have the most fun with fashion. If you are not a dress lover, use the same approach and pair a bright tropical-print top with a bottom that gives you options: white or brightly colored trousers to refresh your look, or loose, lightweight khaki pants. You can also reverse the concept with a solid top and a wildly printed bottom; you get the idea. This summer’s trends offer fashionable and functional pieces; a light, intriguing duster paired with a fitted dress or wide-leg pants is one of them. Don’t shy away from stripes, either. Whether in nautical navy and white or in a bright palette of super energetic orange, yellow, red and green, they will give you a style that’s easier to pull off than you think.
Dress from Allure Showroom Colors of Fashion
PHOTO/BARRY LIVELY
One more beach-season idea is worth joyfully exploring: the single, multi-use item that completes your seashore ensemble. Choose one piece that is bright, lightweight and easy to throw on, one that does double duty as a beach cover-up but can also be rocked as a dress for a waterfront cocktail party or a dinner out in the city. Plus, if you’re staying at a resort and need to walk through common areas, a cover-up is a must. In the end, it is you who sets your own inimitable trend and style. Choose sensibly, have lots of fun in the process, and keep in mind what Coco Chanel said: “Life is too short to wear boring clothes!” Megan Simons is owner and CEO of Capota Trends and owner of Pippa Pelure, a women’s fine clothing and accessories boutique in downtown St. Petersburg.
Striped cover-up caftan from Pippa Pelure.
Made In The Shade Stay cool and look stylish in a tropical shirt BY JOSE MARTINEZ
As our Florida summer takes hold, many of us plan to beat the heat in the next few months sprawling on our best-in-class St. Pete/Clearwater beaches or lounging by the pool. One of the most important aspects of a cool summer is wearing something comfortable and effortless that can transition from day to night and from beach to bar without feeling out of place. The first must-have is a short-sleeve patterned shirt. These are not your father’s gaudy collared Hawaiian or bowling shirts that never fit particularly well. Today’s versions come in lightweight cotton, linen or rayon, in flowered prints and bold patterns, with a tailored fit profile that demonstrates a sense of style, confidence and daring in the wearer. Typically they are button-down and can be worn with shorts or jeans, and sometimes even under a lightweight suit or your core denim jacket. Another stylish and popular option is shorts built for instant comfort. They combine the right fabric and wash formula, or a performance material (35% tencel, 35% cotton, 30% polyester), and at around 7 ounces they can be significantly lighter than jeans or cargo pants. Another benefit of these shorts is that they couple very well with stylish short-sleeve
patterned shirts and can even be dressed up with a long-sleeve dress shirt. You have many options when it comes to footwear for summer. Espadrilles, the ultimate chill summer shoe, have been enjoying a resurgence for the past few years. They harmonize with great shorts and an Irish linen shirt for a classic summer look. Deck shoes are another option that can go with just about anything to fit your summer style. Lastly, a great, well-made flip-flop with an activity-specific design will facilitate a summer of memories. Some styles feature non-scuffing rubber with memory-foam toe-post grips to keep the blisters away, or a beveled sole edge to save you from tripping into the awkward flip-flop face plant. Many are also made with firm arch supports and insoles that channel away moisture and improve foot grip. So whatever you choose this summer, surviving the heat is a little easier when you’re confident you look good! Jose Martinez is owner of Sartorial, Inc., a menswear boutique in downtown St. Petersburg.
EVENTS
Summer of Soul
Our favorite cool cat, “Palladium Paul” Wilborn, is bringing a Florida-driven “Summer of Soul” to the Palladium with a series of bluesy bands and soulful singers performing at the intimate Side Door lounge. The season starts July 27 with jazz sax man Jeremy Carter and continues through September 15 with Sarasota Soul Diva Lauren Mitchell. Others in the series include Little Jake and the Soul Searchers (Aug. 4), Damon Fowler (Aug. 17), Selwyn Birchwood (Sept. 7-8), Rev. Billy C. Wirtz (Aug. 12), The Dukes of Juke and Blue Dice (July 28), among others. Tickets are on sale at the Palladium box office, 253 Fifth Avenue N. or at mypalladium.org.
Cool Art Show The 30th annual Cool Art Show presents the work of nearly 75 fine artists and craftsmen when it comes to the historic Coliseum July 14-15. The fact that it will be held in air-conditioned comfort in July is also pretty cool, we think. Look for high-quality creations in paint, wood, ceramics, photography, metal, glass, fiber, mixed media and jewelry. The Coliseum is located at 535 4th Avenue North. Show hours are 10 am-5 pm Saturday and Sunday; admission and parking are free. The juried show is sponsored by PAVA, the Professional Association of Visual Artists. pava-artists.org
ARTS & CULTURE
Can I Get A Witness Herb Snitzer’s life of activism is chronicled in a thoughtful collection of his photographs
BY CINDY STOVALL The history of jazz and the history of civil rights in the United States are undeniably and profoundly intertwined. The images caught on film by iconic photographer Herb Snitzer over a six-decade career are sublime proof of this premise. Snitzer and his wife, artist Carol Dameron, call St. Pete home, but their lives and careers have spanned the world, giving them a global perspective on social justice and humanity. I had the pleasure of speaking with them recently in their South St. Petersburg home, lovingly called “Art on Alcazar” for all the salon gatherings they hope to hold in the future. We were joined by Robin O’Dell, curator of the photographic collection at the St. Petersburg Museum of Fine Arts. She and Snitzer, who have worked together before, collaborated closely to bring the museum’s exhibit, “Can I Get a Witness,” to life.
Herb Snitzer’s work is best known for its documentation of jazz culture and its biggest stars. Some of the most famous images of icons like Nina Simone, Miles Davis, John Coltrane, Dizzy Gillespie, and Louis Armstrong are attributable to Snitzer. He knew and was close to many of them, particularly Simone, with whom he traveled over a 30-year period. His work has appeared in many publications, including Look and Life magazines; he has published books, shown his work all over the world, and is generally considered a master in his field. But what Robin O’Dell discovered as she studied Snitzer’s work for exhibition was the pervasive message of activism and the profound sense of social justice that permeates his archive. “I have always considered my work, even the jazz images, a metaphor for civil rights and equality,” says Snitzer. “It suggests
a view of oppressed communities like African Americans and the LGBT community that I believe have been overlooked and, in many cases, ignored.” Snitzer shared his friend Nina Simone’s views on bigotry and injustice and shot many images of protests and rallies, some of which are present in the MFA exhibit. “In 1958, as I was first establishing my career in New York City, I attended an NAACP event and took a photo of a small boy who was looking right at me. It was a personal moment between us and I saw his pain. It represents a desire for freedom that stays with me, and it remains one of my personal favorites.” Visitors to the museum can see this photo in “Can I Get a Witness.” Snitzer still wonders what happened to that boy. “He would be in his 60s now. Unbelievable.” Robin O’Dell said the theme of the exhibit materialized as she reviewed a huge archive of Herb Snitzer’s historic images. “It was an honor to put this exhibit together, and I wanted to create a collection that was reflective of his whole career. As I studied more and more photographs, what became evident was Herb’s sense of social justice. I knew that had to be the focus.” “Whenever I plan an exhibit, especially one that involves a single artist,” the curator explains, “I always look to achieve several things. Of course, I want to show the depth and breadth of the work, but I also want it to tell a story.” How did she come up with the name “Can I Get a Witness”? “It was just the feeling that I had,” O’Dell replied. “These photos were a sort of testimonial to the struggle, and Herb was witness to all of it. It seemed to fit.” Other considerations make this exhibit work, such as its local ties. The opening of “Can I Get a Witness” was in conjunction with Pride month, and there are several photographs depicting our own St. Pete Pride parade, one of the largest such celebrations in the Southeast.
This unique collection of photographs also tells us some things about its artist that we, as neighbors and fellow citizens, may not have known – like the fact that Herb Snitzer was co-founder and headmaster of a progressive school for children that taught art and social consciousness for 13 years. Or that he
actively served on the board of the NAACP in St. Petersburg for five years and received a lifetime achievement award from the organization, which honored his service and a career spent documenting the harsh realities of cultural evolution. Artist Carol Dameron, Snitzer’s wife, was present during some of these dramatic times caught on film. She recalls the scene one year outside Fort Benning, GA, at the annual protest against the “School of the Americas,” or “School of Assassins,” as it came to be known. “Some 16,000 people would come each year, led by people like Martin Sheen and Susan Sarandon, to protest this program. There was one protestor dressed in a shroud with a stark white face that really struck me. The image Herb took, featured in the museum exhibit, captured the moment perfectly.” O’Dell describes a favorite photo in the exhibit, one of the large-scale images of a drag queen in a St. Pete Pride parade. “She was just so joyous and beautiful with all her rhinestones,” O’Dell recalls. “We later found out that she was very well known, and even a contestant on RuPaul’s Drag Race, named Coco Montrese.” Snitzer adds, “It would not have been possible to see an image like this in a museum 50 years ago.” There is still a nod to Snitzer’s more well-known photographs in the collection, for example, a mural of jazz legend Louis Armstrong. “Louis Armstrong did not know his father and his mother was a prostitute,” Snitzer recounts. “He was taken in and cared for by a Jewish family, and he was given a Star of David as a gift. He wore it his whole life and was buried with it.” Armstrong’s bands were consistently diverse, an apparent homage to the diversity in his own life. “He was the personification of grace, dignity, and everything that is good about America,” said Snitzer.
When asked what he’d most like to be known for, Herb Snitzer does not hesitate to say, “For my sense of social justice.” I don’t think it’s an exaggeration to say that “Can I Get a Witness” will go a long way in solidifying that perception. Cindy Stovall writes about the arts for St. Pete Life.
Keeping up with St. Petersburg Arts, Theater Events, Performances and Personalities
BY CINDY STOVALL “Hot town, summer in the city.” Those lyrics always come to mind as July and August hit Florida. The temps here in the ‘Burg might be soaring, but the fact is that the art scene is scorching all year long. There is a wealth of shows, exhibits, and events for you and your family to enjoy as you consider what to do with your summer breaks. Here is just a small sampling. Contact me with your art happenings at cstovall5@gmail.com, and don’t forget to listen to my Beauty & The ‘Burg podcast at heliumradio.com.
On Stage
Summer is a great time to check out theater offerings beyond the mainstage. For example, American Stage’s “And Beyond” series offers great programming at affordable, family-friendly pricing. Look for shows like The Delicious Beats, a 1960s multi-media extravaganza playing at 7 pm August 25 with special guest comedian G. David Howard. Or get your laugh
on with Hawk & Wayne’s Karaoke-Prov the first Sunday of each month at 7 pm. Gavin Hawk and Ricky Wayne, both California transplants, are two of our most talented, award-winning actors, improv artists, and performers. Treat yourself and discover alternative theater. americanstage.org The Tampa Bay Theatre Festival returns for its 5th year August 31 through September 2. Organized by not-for-profit theater company RL Stage Inc., the festival takes place over three days in multiple venues throughout the Bay area. Attendees will experience powerful, informative workshops led by trained professionals, in addition to scene, monologue, and short play competitions. A full schedule of theatrical entertainment, including the staging of submitted plays, parties, and networking opportunities, can be found at tampabaytheatrefestival.com.
Museum Happenings The Florida Holocaust Museum doesn’t only represent somber themes. Survival, hope and joy are ultimately the message, and music is a huge part of that. This summer FHM welcomes a nationally lauded show that the whole family will enjoy. “Bill Graham and the Rock & Roll Revolution” is the first comprehensive retrospective about the life and career of renowned music industry impresario Bill Graham. Recognized as one of the most influential concert promoters in history, Graham launched the careers of countless rock and roll legends in the 1960s at his famed Fillmore Auditorium. He conceived rock and roll as a powerful force for supporting humanitarian causes and was instrumental in the production of milestone benefit concerts such as Live Aid (1985) and Human Rights Now! (1988). As a promoter and manager, he
worked with iconic artists including the Grateful Dead, Jefferson Airplane, Janis Joplin, Jimi Hendrix, Santana, Fleetwood Mac, the Who, Led Zeppelin, the Doors, and the Rolling Stones. The exhibit opens on August 18 and runs through Feb. 10, 2019. flholocaustmuseum.org In addition to the stunning exhibit “Clyde Butcher: Visions of Dali’s Spain,” summer brings some fun family events to the Dali Museum. For example, the showing of Cult Classics on the Dali lawn (first Thursdays) is a great free event. Food trucks and refreshments are available, and the films show rain or shine! The August 2 film is “Die Hard,” with doors opening at 7 pm and the film starting at 8:30. “DillyDally with Dali” happens every Saturday at 11:45 am through the summer and lets children discover the creative world of Dali through games, puzzles, and arts and crafts activities that educate and encourage family interaction. thedali.org
ARTS & CULTURE
Galleries & Alternatives Florida CraftArt, 501 Central Ave., has been a mainstay of St. Pete’s cultural life since 1986. Representing the fine craft art of some 300 Florida artisans, this eclectic gallery and studio space is home to multiple events, both in-house and throughout the community. Here’s a summer sample: You still have time to catch the beautiful exhibition Dolls and Where They Live. This fascinating collection of artist-made dolls ranges from sweet to gothic and includes pieces from the National Institute of Doll Artists. But you’d better hurry! The exhibit runs through July 28. Florida CraftArt is also the starting point of one of the fastest-growing activities in the Central Arts District: mural tours. Walk or bike to view the ever-growing collection of large-scale murals that now dot and define the downtown St. Pete landscape. You’ll see and learn so much, so schedule your tour today! Walking tours are on Saturdays from 10-11:30 am. Bike tours are the first Saturday of each month from 9-11 am. Go to floridacraftart.org for more info or to reserve your spot on a mural tour. The Morean Arts Center continues to celebrate its centennial with another milestone event. “One Hundred: 2018 Members Show” literally commemorates the 100th such show. Morean members have been invited to submit a piece with “one hundred” as the theme, interpreted in any way they choose. This exhibit opens on July 14 and is free and open to the public, so bring the family and enjoy! Check moreanartscenter.org for associated events. Enjoy your summer! ‘Til next time … Cindy Stovall’s Beauty & the ‘Burg podcast covering the arts in St. Petersburg airs on Wednesdays at 6 pm at heliumradio.com. Archives of previous shows are available.
Dali’s Spain
The “Ansel Adams” of Florida landscapes has captured the sublime beauty of Dalí’s homeland in exquisite detail in Clyde Butcher: Visions of Dali’s Spain. The iconic photographer’s large scale, super-sharp monochromatic images explore three areas in the Catalonia region where Dali resided and served as inspiration for a number of his paintings.
The thoughtfully curated collection takes viewers on a journey through the region, showing comparisons of landscapes and geologic formations to select Dali works along the way. An informative 11-minute documentary introduces the viewer to Clyde Butcher, showing his evolution over the decades, starting in California and moving to his epic photographs of Florida, where he spent decades capturing images in Big Cypress and the Everglades. His attention to nature, particularly natural light and shadows, and to the sky and clouds, enabled Butcher to find new aspects and subtleties within Dalí’s Spain. Butcher’s images show that Dalí’s painted landscapes were not a product of his imagination, but were indeed very real. In conjunction with Dali’s Spain, the museum will be hosting several events. Butcher himself will host book signings on Sept. 29 and Nov. 3. Check the online events schedule for added programs, from photography workshops to an environmental talk about “Real Florida” with author Jeff Klinkenberg on August 9. Clyde Butcher: Visions of Dali’s Spain will be on exhibit through November 25, 2018. For more information, go to thedali.org.
Libertine Contemporary Fine Art Gallery 200 Central Ave., Suite 111 St. Pete Libertinefineart.com
Gallery Spotlight
Stroll into the lobby level of the downtown Priatek building and you are immersed in contemporary fine art. For years, gallery owner Darin Kucera has featured internationally known as well as local St. Pete artists, some of whom you may meet while browsing their works. From paintings and sculptures to lithographs and photography, the selection is diverse.
Artist Timothy Raines
Featured artists: • Timothy Raines is known for his representations of autos, teams and emblems created from iconic drops of paint • Ari Robinson is local to St. Pete, creating paintings and sculptures through form, color and energy • Jason Brueck is pioneering digital collage with cultural symbolism • Shane Bowden is an Australian-born modern artist working in paintings and prints • Zeus is a London artist known for his ’80s-inspired graffiti art
Artist Shane Bowden
Artist Ari Robinson
CONVERSATIONS
Phil Yost
Compass Land & Title LLC St. Pete resident Phil Yost loves our town. He started Compass Land & Title over 10 years ago, and his company quickly grew from two employees and 900 square feet on Central Avenue to 4,000 square feet in a towering building at 360 Central Avenue. A grand opening for the new office was held in January, with friends and VIPs cheering his success. His office overflows with artwork and surfboards. Today the founder of a respected title agency, Phil oversees seven employees and handles real estate transactions for clients throughout Florida. The industry award-winner manages an experienced team that takes pride in maintaining a small “boutique” firm downtown. Tell us about the local real estate scene from the eyes of the president of a title company. Is the local real estate scene booming? Yes, residential is really robust right now in St. Pete. The number of construction cranes operating downtown is very telling of the amount of residential property being constructed in our downtown. The popularity of downtown, Snell Isle, the Northeast, and Kenwood, in addition to a lack of inventory, is creating all kinds of new residential opportunities for our city. What is the status of the commercial real estate scene? St. Petersburg has a large need for Class “A” office space with accompanying parking in our downtown. The prospect that the 80-acre Trop site will become available for redevelopment is a unique opportunity for our city. The 2017 federal tax law, which will allow this project to be in a low-tax Opportunity Zone, makes it very attractive to new developers from around the world. What are you most excited by re: the growth here in St. Pete? I am really enthused with how much of an emphasis we as a city have put on the arts. As I have traveled, it always has jumped out that the difference between a good city and a great city is its support of the arts and its public green space. St.
Petersburg has such fabulous public space and a community that supports the arts. I truly feel we are the best city in the state of Florida. My current passion project is the work I am doing for a development project here in St. Petersburg that would bring in international art and cinema lovers. Are you finding a lot of new real estate buyers are from outside the country? The State of Florida reported an all-time high for visitors in the first quarter of 2018 at 33.2 million. Yes, we have several clients who specialize in selling to buyers from very specific countries.
56
StPeteLifeMag.com
July/August 2018
I hear all the time what a bargain St. Pete is for international buyers. We understand 10.7 million visitors came from overseas and 3.5 million from Canada in the first quarter.
You are a great example of yet another successful St. Pete entrepreneur - have you always thought of yourself as a "non-corporate" kind of guy? I graduated from FSU about the same time we were going through a mini-recession; that, coupled with a strong leaning toward entrepreneurial pursuits, led me to become a business owner. I have been a business broker, an indoor rock climbing gym owner, a restaurant owner, a construction company operator, a licensed realtor for 20 years, a food court franchisor, and a paddleboard manufacturer in South Africa - to name a few of the ventures that led me to owning a title company. I love being in business and mentoring all of the amazing people I have gotten the pleasure to work with over the years. I wake up every morning excited about getting to work, so the spark that motivates me is as strong as it was when I first struck out on my own.
Why do you love living here - has the sunshine always been in your DNA? Driving over the Skyway Bridge last weekend, my wife Valerie and I spent much of the trip discussing how lucky we are to live in paradise. A sunset at Pass-a-Grille compares with any exotic location on the planet. Our art scene and dining have become world class.
What are your passions? I am motivated by the following: travel with my amazing wife Valerie, stone crab season, daily fellowship with my work crew at the Vinoy, boat days to Egmont Key and Anna Maria Island, collecting great art, weekends on Longboat Key, the dessert room at Euphemia Haye, and the occasional surf trip to Cocoa Beach or Jupiter.
You are a foodie and wine connoisseur - where do you love to dine in St. Pete? Lunch: LaV, Moon Under Water, Blake's Crabs, Locale Market, Chief's Creole, Cassis, and Alesia.
Dinner: Il Ritorno, Rococo Steak, Sea Salt, Ruth's Chris, Reading Room, Maritana Grille at the Don Cesar. Brunch at the Vinoy and happy hour at the municipal snack bar on Pass-a-Grille.
What's your definition of relaxation? A destination where no shirt is required, and I only wear board shorts and flip-flops for a week!
CONVERSATIONS
Tanisha Chea Tala Baby
At 36 weeks pregnant, Tanisha Chea learned that she wouldn't be returning to the corporate world. After 11 years in corporate America, Chea had been laid off from her position as VP of Marketing at Carrabba's Italian Grill. Naturally, with C-suite experience and an MBA under her belt, Chea began building out a business plan for Tala Baby - an inspired baby line that teaches the morals, values, and character traits that she and her husband believe in. Born out of a need she saw for her own children, Chea did what every good entrepreneur does - filled a need and solved a problem. Now she's expanding her line to toddlers, Mommy & Me apparel, and wall prints.
What do you do at Tala Baby? I do everything - I create the brand, I create the garments and the designs, I take all of the orders, I sell, I package, I ship.
Why do you do it? I wanted my boys to have clothing that taught them the good morals, values, and character traits that my husband and I believed in. So instead of just creating it for them, I made it for others as well. The name Tala comes from the Icelandic language and means "to speak." It's about speaking positivity into the lives of your babies. Every onesie comes with a Tala card that has a story about the trait and the animal that's on the onesie.
How did you get started? I started this company when I was 36 weeks pregnant with twins, and I now have four-month-old twin boys. It really started because I was doing a lot of online shopping and couldn't find what I was looking for, for my boys. I wanted more for them than onesies that said "Future Heartbreaker."
What's a common misconception or unknown aspect of what you do? The most common misconception is that I just found these clothes online and I'm reselling them, because I think a lot of people do that these days, especially with baby clothes. I actually created these myself, inspired by my babies. To know that this has been, from conception - from start to finish - thought of in my own brain and done with my own two hands is the message I want to get out to people.
What's the most challenging part of your job? People think that it's just clothes, but the Tala card that comes along with it is really a big piece of it.
What's the most valuable piece of business advice/insight that's helped you? I have a corporate background, and I've parlayed all of that into being an entrepreneur. Some of the best business advice has come from my 11 years in corporate America in marketing - from my packaging, to my branding, to reusability.
Find the Tala Baby line at talababy.com. Content provided by the Catalyst, St. Pete's daily business platform. Subscribe for free at StPeteCatalyst.com.
Ryan Griffin
Attorney. Entrepreneur. Influencer.
Ryan is a true community asset in the classic sense. He generously gives of his time, expertise and money on civic initiatives like Grow Smarter. He brings value in jobs and service through his entrepreneurial endeavors. He serves clients, righting business wrongs, as an attorney. This combination of contributions is important and greatly appreciated. The greatest treasure he delivers to St. Pete, however, is culture: our culture. Mandarin Hide, Souzou and the new Trophy Fish are St. Pete culture factories, pumping out the intangible sweetness that sustains our je ne sais quoi. His establishments are unique and varied, each delivering the complex nuances of a passionate owner. That they all came from the same person is impressive. And he's just getting started. Not yet 40, Griffin has more projects close to launch and more planned to follow those. As neighborhoods wind their way through the treacherous journey to gentrification, it's influence like Griffin's that will protect that quality in St. Pete that's difficult to articulate.
Organizations involved in: the Chamber of Commerce Executive Committee, as well as Vice-Chair of the Grow Smarter strategy.
What gets you out of bed every day? Every day is a new day for me. I don't do the same thing every day. There's a lot of excitement with it, a lot of stress. I like to make new accomplishments, and at the end of the day when the dust settles, I like to see if I won more than I lost.
Why St. Pete? St. Pete is the perfect-sized city to do some really spectacular things. It's up and coming, with unique character. A very distinct place and people. It's a place you can make an impact.
What is one habit that you keep? Sundays are holy to me in the sense that I stay away from emails and technology and from getting immersed in too much work. I use that as a day to spend with my wife.
Who are some people that influence you?
As a lawyer, I have a partner, Guy Burns, who is a very successful lawyer. He's got a great balance and way with people. For restaurants, design-wise, Philippe Starck and his designs - I like his whimsical nature, keeping everything fun. Besides that, every entrepreneur that I see. They are always giving me creative ideas. When I see people working hard, that motivates me, pushes me.
What is one piece of insight - a book, methodology, practice - that you would share with our readers? You never know how much sand is in the hourglass. Don't leave anything for tomorrow that you can do today. Take risks.
What is the one thing you wish you knew about your work 3 years ago? For the restaurants, it's about the people. When you boil everything down, it's the people who are part of your team. That translates to the law world as well.
What's next? Working on our fish restaurant, which opens soon. And Mandarin Heights, in Tampa, partnered with Bodega. That's an exciting project in the Seminole Heights area. I've got a couple other projects after that - a fish market and some other cool concepts.
Content provided by the Catalyst, St. Pete's daily business platform. Subscribe for free at StPeteCatalyst.com.
The Doyenne
Maghan Morin and Jeanine Suah
There's not much difference between Maghan Morin and Jeanine Suah's professional and personal lives. They live for their work, and they believe in what they do. The co-founders and directors of the Doyenne identified and solved a problem: where can women business professionals come together in a stress-free work environment to get things done and network with one another, outside of the typically male-dominated workplace? As successful businesswomen, they'd each wondered the same thing individually. Together, they dreamed up a combined professional development platform and co-working space designed specifically for women. Following its inaugural "pop-up" event in December, interest and membership in the Doyenne have been growing exponentially - which means that Morin and Suah's idea, conceived over coffee, crullers and a pair of well-worn laptops, is quite likely to impact the way business is done in St. Petersburg in the future.
Why do the Doyenne? Maghan: To make a difference, in whatever I do. Right now, to make a difference in St. Petersburg as a black woman entrepreneur. I can make a difference in my life, and in my family's lives. My vision for what I do is bigger than St. Pete - so I can make a difference within the world.
Jeanine: To make people feel good, and smile, and feel loved. We're hustling our butts off because it's what fulfills us.
How did you get started? Jeanine: Not wanting to live for anybody else's dream but my own. There's just something more than working for somebody who doesn't always appreciate it. That's not a great feeling.
What's the most challenging part of your hustle? Maghan: Raising money, absolutely. Also, always thinking of new ways to be engaging - to keep women interested, and keep them coming.
Jeanine: It’s hard to have a concept and not be able to put it into effect right now. But it requires patience. What’s the most valuable piece of business advice/insight that’s helped you? Maghan: To always keep going, no matter what. Whether it’s someone who doesn’t answer your email, or blows you off on an email, or you don’t get the client that was about to sign … keep going. Because there’s always going to be something, as long as you’re persistent. Jeanine: My business philosophy: Always give more in value than you receive in payment. Even if you haven’t got the money. Megan and I are very much like, ‘All right, we don’t have it, but we’ll give it.’ And by the grace of God, and by the universe just being in our favor, it comes right back. And we’re like ‘Why did we even worry to begin with? Why St. Pete? Maghan: I don’t see many people of color who are growing large businesses and succeeding, especially in the downtown area. That’s why Jeanine and I are very adamant about having the Doyenne in downtown St. Pete. I really want to show the black children here that yes, you can be successful in St. Petersburg. And you can come back here after college, or after traveling out of the country. St. Pete is a place you can be successful, as long as you work hard towards it. Jeanine: It’s a very vibrant community which prides itself on diversity, and there are a lot of diverse groups here. But it shouldn’t be an anomaly to see people of color, or blackowned businesses, downtown. This city is too great to not have something like this downtown. I feel like this is one of my homes, even though I’ve only been here for two years. And it’s the way that the city has treated me. Keep updated on the Doyenne pop-ups in St. Pete by going to the Facebook page at facebook.com/TheDoyenneDTSP Content provided by the Catalyst, St. Pete’s daily business platform. Subscribe for free at StPeteCatalyst.com
SPL NEWS
Free Museum Passes
Got a Pinellas County library card? If so, you can get free admission passes to the Florida Holocaust Museum, the Great Explorations Children’s Museum, and the Museum of Fine Arts thanks to a new partnership program with the Pinellas Public Library Cooperative. Up to two adults and three children can receive free family admission with the checkout of a museum pass from a Pinellas County library. Libraries will have two museum passes each. Passes are available
for checkout on a first-come, first-served basis. You may check the availability of passes in the PPLC online catalog or by contacting your local library. Check out a pass in person at your local library circulation desk and take the printed receipt with you. Present the receipt to the museum staff upon entry. There is a checkout limit of one pass per family. Passes may be checked out by adult cardholders and are valid for one visit within the 7-day loan period.
Considering Solar Residents of St. Petersburg are invited to learn more about the benefits of joining a solar co-op on July 19, when a public information session will be offered by Solar United Neighbors at 6 pm at Campbell Park Recreation Center, 601 14th Street
South. The co-op organizes 50 to 100 neighbors into a group and gives members support through each stage of the solar process. Co-op members leverage bulk-purchasing power to get discounted pricing and a quality installation, while
still signing individual contracts that ensure the right system for your home. To learn more about the co-op forming in St. Pete or the solar co-op model, go to solarunitedneighbors.org/stpete.
Your Contemporary Summer of Beautiful Skin
Injectables • Microdermabrasion • PDO Threads Chemical Peels • Photorejuvenation IPL Hair Removal • Microneedling Medical Grade Products
Beauty in the hands you can trust.
4965 Central Ave. • 727-323-8074 Dr. Mona Mangat
First time customer? Mention this ad and receive 10 units of Botox for $75.
Mon-Sat 10-6 Sun 12-5 7211 S Tamiami Trail Sarasota 941-923-2569 copenhagen-imports.com
FURNITURE+LIGHTING+ACCENTS
Smart Homes On The Rise St. Pete-based Salt Palm Development was recently certified as the first B Corp real estate development company in the southeastern United States. B Corps are certified for-profit companies that are required to meet rigorous standards of social and environmental performance, accountability, and transparency. Think of it as a USDA Organic certification for businesses. Salt Palm’s first multi-unit development, Sabal Smart Homes, is located in downtown St. Pete at 532 4th Avenue
South. The project consists of two high efficiency four-story buildings — each containing four townhomes. The high-tech, high-efficiency units include a spacious rooftop covered lanai and open deck, with such options as rooftop alfresco kitchen, electric car charger, and a connected-home “Einstein Package.” Salt Palm has pledged to give at least half of its profits toward community improvement endeavors such as beautification projects. For more information on Sabal Smart Homes, go to saltpalm.com.
USFSP News USF St. Petersburg recently signed a collaboration agreement to become the first American higher education institution to form a relationship with South East European University in Macedonia. The agreement promotes joint educational and research activities between the two
universities, such as allowing for the exchange of visiting scholars to participate on joint projects. It also begins discussions for developing a future undergraduate and graduate exchange program that would offer
students from both universities the opportunity to study in a different country and experience that nation's culture. South East European University, established in 2001, is the first and only private-public, not-for-profit university in Macedonia.
Give Your Child The Gift of Language Language. Culture. Diversity. Discovery.
A child is best equipped to learn and become fluent in a second language between the ages of 3 and 8, in an immersion setting. We offer authentic, full-time French immersion for preschool and elementary grades. No prior French language education is required. Contact us today to explore an education immersed in language, culture & ideas for your child.
2100 62nd Ave N. St. Pete • fastb.org • 727.800.2159
TRAVEL
Key West Getaway Latitude determines attitude when you vacation on Florida’s Southernmost Island
BY BETH ANN DRAKE
Bustling with tourists, scooters and revelers, Mallory Square has become a mecca of restaurants, bars, museums and attractions. But at the northern tip of Key West, 24 North Hotel is the perfect destination for a "shoes off" vacation. Complete with a resort-style pool and the Toasted Coconut tiki bar for drinks and snacks, stress levels decline upon arrival at this Highgate property.
24 North Hotel has forged unique partnerships with local artisans and brands, creating a one-of-a-kind, authentic Key West experience for guests. Rooms showcase imagery by local photographer Jorge de la Torriente and reclaimed touches including burlap bed runners and throw pillows by local artisan Noelle Rose. This is the savvy traveler's destination away from the fast pace of Duval Street. Bicycles are one of the best ways to explore Key West, and cruisers are available for hotel guests. For a quick ride to Mallory Square, however, the hotel's Old Town Shuttle runs every half hour until 11 p.m. and is by far the best way to get to where the action is.
24 North Hotel, 3820 North Roosevelt Blvd., Key West, FL 33040. (305) 320-0940. Rates start at $199 a night with complimentary parking. 24northhotel.com
Tour A Rum Distillery
Have A Smoke
Key West’s first legal rum distillery hosts tours and hands-on mojito making classes. Owner Paul Menta (kiteboarder, chef and author) opened the distillery in 2013 at the old Coca-Cola bottling facility location. Their original white rums are distilled six times from Florida sugar cane with infusions of coconut, vanilla crème brulee and real key lime. After sailing to Cuba and learning the true art of making mojitos, Paul brought the tradition back to Key West where you can make your own right there in the distillery. Free tours are offered Monday through Saturday. keywestlegalrum.com
The oldest cigar manufacturer in the Keys, the Rodriguez family brought their Cuban heritage to the States after the Cuban government nationalized their private plantation. In 1971 they opened in Key West, keeping the traditional Cuban manufacturing process and sourcing tobacco from private plantations worldwide. Touring the factory is a special treat, with several generations of the family working side by side. Spend five minutes with Danny Rodriguez and you will understand the pride and tradition his family carries on. Conversations usually include good cigars and rare rum, ending with a hands-on lesson rolling your own cigar. rodriguezcigarskeywest.com
Invested in St. Pete. We attribute our success to the success of our clients. For almost three decades we have been Tampa Bay’s leading community bank. When we put our community first, everyone wins. Scott Gault I Market President 727.502.8401 I sgault@bankoftampa.com 200 Central Avenue I St. Petersburg
TheBankofTampa.com COMMERCIAL BANKING
Scott Gault, Pinellas Market President & Dianne Cohrs, Private Relationship Manager
PERSONAL BANKING
WEALTH MANAGEMENT*
TRUST SERVICES*
* Wealth Management and Trust Services are not FDIC insured, not deposits or other obligations of the Bank and may lose value.
New York City Sizzles In Summer BY CINDY COCKBURN Looking back at my childhood growing up in and around New York City, Manhattan was the equivalent of today's Disney World. If every kid had access to Radio City, Broadway shows and museums like I did, there would be no need for an Epcot or Magic Kingdom.
As a grown-up, I became an uptown girl, preferring caviar at the Russian Tea Room and shopping at Bloomingdale's a few blocks from my apartment on East 64th Street. The city continues to offer amazing adventures. Yes, I do recommend visiting Manhattan in the heat of the sizzling summer. The locals have escaped to the Hamptons or Newport, so the best culinary and cultural options are yours to discover.
Finding a Room Where to stay? It depends on your passions. The West Side is closest to Times Square and the theater district. Downtown is casual and trendy, with fun hotels located in Soho, the Village and Tribeca. New York has over 115,000 hotel rooms and is in the biggest sustained hotel-building surge since the 1920s and '30s; the rise of Airbnb hasn't made a dent. Summer rates are discounted at most NYC hotels. If you only need a standard single room and stay away from the Upper East Side, rates start around $300 a night. Or, live like royalty at The Ritz-Carlton overlooking Central Park and pay at least $900 a night. Best bet: look online for the best prices, deals and packages, and call ahead. After August, room rates can jump to over $400 a night for a bare-bones room once the new "season" comes along in September. One of the most affordable hotels is the Hudson at 358 West 58th Street, overflowing with Europeans and very tiny rooms, but with an outdoor café just a block from Central Park and the shops of Columbus Circle. I try to avoid the larger convention hotels and prefer a smaller hotel where the staff smiles, says "welcome back" and knows my name. Best-kept secret: the European-style Hotel Elysee at 60 East 54th Street, where you can relax in a second-floor "living room style" den with complimentary breakfast, tea throughout the day and a wine happy hour every evening.
My new favorite hotel is the Benjamin, at 125 East 50th Street at Lexington Avenue. They have refrigerators in each room and a popular restaurant off the lobby called the National. You're close to all that Fifth Avenue offers, from St. Patrick's Cathedral to the Museum of Modern Art, and a quick walk to the theater district.
Dining Out Be proactive with your dining plans; the best restaurants are usually booked months in advance. Stroll around town and you'll find outdoor cafes every few blocks for al fresco dining. The West 79th Street Boat Basin Café overlooks the Hudson. Every summer, Rockefeller Center replaces its winter ice skating rink with the Rock Center Café, a beautiful garden and bar for relaxing in the sun. For a romantic dinner, the Upper East Side has my heart, but venturing over to Brooklyn at least once is a must, since the River Café on Water Street should be on every food lover's bucket list. The stunning views of the water and the
Brooklyn Bridge are totally seductive at night. The fresh menu ingredients, ambiance, wine list and outstanding service are worth the trip. The piano bar in the intimate lounge is a plus, before or after dinner. Some guests book a year in advance for a special Saturday night celebration here. If you are fortunate enough to have a very close friend score a reservation and drive you over the Brooklyn Bridge to the restaurant like I did - congrats! Otherwise, jump in one of the 65,000 vehicles affiliated with Uber in the city for your romance fix.
Celebrity Watching Dress up and enjoy a pre-dinner cocktail at the St. Regis Hotel at 1 East 55th Street in midtown and think of Dali and his wife Gala sipping champagne in the King Cole bar. Walk literally across the street for an intimate “who’s who” dining experience at Ralph Lauren’s Polo Bar serving people-watching and classic dishes. Warning: You can’t just pop into the tiny, intimate bar and check out the country club atmosphere complete with polo mallets. You need a dinner reservation first. At least two months in advance. Last time we were there, Regis Philbin and Kathie Lee Gifford were holding court over dinner with Savannah and Hoda. A woman named Trump was sipping champagne.
Outdoor cafes line the streets in New York City.
When booking your flight, remember LaGuardia Airport is closer to Manhattan, but it's under major construction. Kennedy Airport is farther from midtown. Newark is closer to the west side of the city. It's all about logistics, from uptown to downtown and everywhere in between across five boroughs.
Cindy's Favorites
Best Breakfast: Sarabeth's, with outdoor seating and the best pancakes in town, overlooking Central Park on 59th Street.
Best Outdoor Dining: Bar Boulud, overlooking Lincoln Center at 1900 Broadway, with Chef Daniel Boulud presiding.
Best Italian: NELLO, outside dining in the heart of the Upper East Side with fine Northern Italian cuisine in a casual yet sophisticated (yes, very expensive) ambiance, at 696 Madison Avenue.
Best Seafood: Marea on Central Park South, filled with male executives at lunch; very high-end Italian seafood and fresh pastas from Michael White in a chic setting at 240 Central Park South.
Best French: Le Bernardin at 155 West 51st Street, very oh-la-la.
Reasonably Priced: Becco at 355 West 46th Street for family-style Italian pre-theater.
Best Rooftop Bar: Bar 13, downtown in Union Square at 1335 East 13th Street, with two floors and a newly remodeled rooftop deck.
Hotel Elysee in midtown is a quaint place to stay.
NEIGHBORHOODS
Tropical Oasis In The Heart of St. Pete PHOTOS/KELLER WILLIAMS
We all know we live where others vacation. This home, nestled on a quarter acre in the heart of St. Pete, makes life a year-round tropical retreat. Why not have a custom tiki hut and swim-up bar in the backyard? Take evening entertainment outdoors, relax by the pool or fire bowls, and enjoy multiple water fountains. The outdoor kitchen makes cooking a breeze, and guests can sip cocktails at the bar on underwater stools.
Just blocks from the Snell Isle bridge, the home is only a short bike ride from downtown. Perfect for a large family, it offers 5 bedrooms and 4 baths plus two bonus rooms to spread out in. And if you want to take the party inside, the custom kitchen is ideal for the aspiring chef. This home is offered for $749,000 by Denise Antonewitz, Keller Williams Realty, (727) 204-3138.
Style and service with North America's #1 choice.
Call now for your free in-home consultation!
727.522.6695 BudgetBlinds.com
Blinds • Shutters • Shades • Drapes • Home Automation
25% OFF
*
Enlightened Style Products For St. Pete Life Readers
©2018 Budget Blinds, LLC. All Rights Reserved. Budget Blinds is a trademark of Budget Blinds, LLC and a Home Franchise Concepts Brand. Each franchise independently owned and operated. *Must notify sales representative when booking appointment and must present magazine upon arrival.
Smart Home Technology
Home security is changing just as rapidly as smartphone technology. While most security systems only protect your home, Select SmartHome systems protect your home and your lifestyle. A single 7-inch LCD touchscreen control panel and a single mobile app keep you connected to your home 24/7. Here are a few advantages you have at your fingertips:
• Lock and unlock your door remotely, set scenes to control your heating/cooling and lighting, or check on your pets and children using interior cameras - all from one app.
• The term "porch pirates" has become all too common, even after the holiday delivery season has ended. Never worry about packages getting stolen off your porch again. A Skybell doorbell camera allows you to see who's at the front door. Need to let in the dog walker? Use the camera to verify their identity, then the two-way voice function to let them know you've just remotely unlocked the door for them.
THE ENDLESS SEARCH FOR THE PERFECT TITLE COMPANY IS OVER. Compasslandandtitle.com St. Pete - Tampa
• Going on vacation and worried about flooding in your home during potential storms? Use flood sensors combined with a WaterCop valve. In the case of flooding, your system will shut off the water supply and notify you immediately so you can call for help.
• Want to know who just disarmed the security system? The panel captures a photo whenever the system is disarmed and sends it to you.
From any device, whether smartphone, laptop or desktop computer, you can control your home environment easily. Lighting, temperature, camera surveillance and locks can all be remotely activated. And whether you need to check on children or fur babies, today's technology makes it simple. To learn more about Select SmartHome systems, call 1-844-735-3285 or go to selectsmarthome.com.
This portable Wi-Fi interior camera is small enough to be placed in a hidden spot and features night vision. The camera interfaces with your home security system and connects to an app on your smartphone for up-to-the-minute viewing.
District on 9th Townhomes
ICON Residential is building its third luxury urban townhome community, The District on 9th. The sister community to The Arlington St. Pete and Uptown Kenwood is set to open its model in Fall 2018. The townhome community is located just outside the heart of downtown. The District on 9th will offer four units with unique live-work floor plans, giving homeowners the option to occupy the space for their own office/retail use or to rent out the additional work space for income.
The work space totals around 800 square feet and includes a restroom and a private street-level entrance with abundant windows for a retail store/office front entry. Designed to give homeowners the equity/rental option, The District on 9th's limited live-work units are a perfect fit for those looking to capitalize on their new home's prime urban location. The separate entry gives homeowners options: live upstairs and use the first-floor work area for business, lease the work area out, or use the work area for business and rent out the upstairs living space. The District on 9th will have a total of 34 luxury townhomes with four unique floor plans ranging from 1,500 to 2,600 square feet, featuring a sleek and contemporary design. For more information, go to iconresliving.com.
KINDNESS
Against All Odds
A life-long foster child who made national news reaches a new milestone PHOTO/CHRIS DAVIS PHOTOGRAPHY
Five years ago, the likelihood of Davion Only-Going graduating high school was low. He had lived in the foster care system his entire life and never had a forever family to support him through the normal challenges of school.
But, years of shifting homes and schools and very little familial support left Davion far behind in school. He struggled until his new mother discovered MYcroSchool Pinellas Charter High School.
In 2013, when Davion was only 14 years old, he did the most extraordinary and heartbreaking thing a child in his situation could do. With the support of his case worker, he stood up at the podium in church and begged the audience for a family. The plea was recorded on video and then went viral. His story swept the web and then national TV news. He went on “The View” to tell his story to increase the odds of finding a family.
MYcroSchool charter high schools are nonprofit, tuition-free high schools in Florida. They offer flexible schedules so students can work at their own pace and learn the skills needed to finish high school. They have small class sizes that allow teachers to focus on students like Davion who need help the most. This individual attention ensures students ages 16- 21 are not left behind and obtain skills to be ready for the future. Graduates go on to enroll in college, start a career, and/or join the military.
Thousands of requests came into the agency overseeing his adoption, and a minister in Ohio asked Davion to come live with him. But that did not work out, and he was sent back to Florida and into foster care once again. Now 16, Davion understood that the likelihood of being adopted was low, and so did the caseworker who had been by his side since he was 7. Davion then took another brave step and asked his caseworker, Connie Going, to adopt him. Connie was a single mom with three kids who didn't know if she would be enough for him. But without hesitation her family decided that Davion would have the family he had begged for, and in April of 2015 the adoption was final.
Without the support that Connie Going and Davion received, it is hard to imagine how he would have been able to accomplish this unlikely goal. Against all odds, Davion and more than 50 former at-risk youth celebrated their graduation on June 7 at the Palladium. With a diploma in hand and the support of his forever family, Davion is now planning to attend St. Petersburg College in Fall 2018 and eventually a college specializing in the Culinary Arts. To learn more about Pinellas MYcroSchool, go to PinellasMYcroSchool.org or call (727) 825-3710.
BUSINESS
The Importance of Banking Local
ADVERTORIAL
Scott Gault, Pinellas County Market President at The Bank of Tampa

For The Bank of Tampa, a community bank with a more-than-30-year history in the Tampa Bay area, being part of the "Go Local" movement is essential to doing business. The bank, which is privately owned by its staff, directors and clients, prides itself on being a relationship bank that understands that when the Tampa Bay community does well, everyone wins. We had the opportunity to catch up with Scott Gault, Pinellas County market president at The Bank of Tampa, and he shared with us the importance of banking local with a community bank.

What is a community bank, anyway?
In the simplest of terms, a community bank derives funds from and lends to its local community. A community bank like The Bank of Tampa specializes in relationship banking, rather than transactional banking. For us, profits are driven through long-term, multi-account relationships that suit the needs of each individual client or business. Larger banks, which often operate on a transactional model through economies of scale, make money on volume through standardized accounts and automated service.

What is the benefit to banking with a community bank?
There are endless benefits to banking with a community bank. I usually like to speak to benefits that don't always immediately come to mind - like the fact that at The Bank of Tampa, our leaders are based in the areas where we operate. Decisions are therefore made locally, allowing quick decision-making without bureaucracy. At community banks, loan approvals and other key decisions are made by people who live in the community, have face-to-face relationships with their clients, and understand the local economy and its needs.
What do community banks bring to the economy?
When you partner with a community bank like The Bank of Tampa, you are investing in your local economy. Your dollar in a community bank will likely go to underwriting a local business or be invested in U.S. government-backed securities. Community banks tend to obtain deposits from local individuals and businesses and lend them out to local borrowers. More or less, your money stays local.

Community banks have a critical role in keeping local economies vibrant. One of the key ways we do that is by lending to creditworthy borrowers in the Tampa Bay area. Because we can respond to lending requests with agility, thanks to our knowledge of client needs, we are able to help facilitate a growing economy by partnering with businesses and enabling them to buy new equipment, add employees and make investments in their future. At a glance, those effects may appear modest. However, when you multiply them across the thousands of community banks in the country, you can see the impact we can make on the national economy.

Community banks are also intimately tied to the prosperity of the local community. When a community prospers, everyone benefits. That is why so many local banks are involved in their communities. In 2017 alone, The Bank of Tampa donated more than $745,000 to local community organizations. Our employees get out there and get involved. In fact, The Bank of Tampa supports more than 200 community organizations, either through financial giving or volunteer support. We want to ensure we're giving back to the community that gives so much to us.
Jaime McKnight, Realtor
727-430-2491 cell 727-821-3322 office Jaime.McKnight@FloridaMoves.com
Operated by a subsidiary of NRT LLC
280 Beach Dr, NE St. Petersburg, FL 33701 ColdwellBankerHomes.com
Wood • Carpet • Tile • Vinyl Plank Backsplashes • Fireplaces • Bathrooms 727-864-0077 1110 Pinellas Bayway S., Suite 105 Tierra Verde, FL 33715 Lic# C-10946
Beverly DiMarino
Donnie B’s Local Eatery & Spirits
Where the Locals Hang Out • Most Eclectic Menu on The Suncoast • Happy Hour Daily Until 6 p.m. • Amazing Gulf View & Sunsets • Nightly Food Specials • Late Night Drink Specials • Kitchen Open Until Midnight
727-289-6249
First Vice President Market Manager
1840 4th Street North St. Petersburg, Florida 33704 Phone: 727.394.3165 Fax: 727.394.3180 Cell: 727.631.9067 Email: kawright@valleynationalbank.com
Kathy Wright
NMLS ID #157479

Home Decor and More
1110 Pinellas Bayway S., Tierra Verde

AVenable Consulting
Interior/Exterior Color Consults • Merchandising • Interior Decorating • Art & Sculpture
annette@avenableconsulting.com • 727.439.8116
EVENTS
Good Burgers
Celebrate our local do-gooders at the 6th annual Good 'Burger Awards, presented by the St. Petersburg Area Chamber of Commerce on August 30 from 6 to 9 pm at the State Theater. Nominations are closed, but the drum roll has started for this evening of fun, camaraderie and salutations to local businesses, organizations, and individuals who have made a positive impact in St. Petersburg. Tickets are $15 before August 10, then $20 for members and $25 for non-members. Purchase tickets at stpete.com/goodburger.html.
Start Your Ovens
Think you make great cupcakes? Novice and professional bakers in all age groups (even kids) can show off their stuff in the Morean Arts Center's 2018 Best Cupcake in St. Petersburg contest. Specialty categories include gluten-free and vegan. Registration closes August 20; the limit is 100 contestants. The official tasting takes place on August 25 at the Morean Center for Clay. For entry forms and more information, go to moreanartscenter.org/calendar/cupcake-contest
Need some shopping inspiration? Visit THE MUSEUM STORE for jewelry, games, toys, apparel, home goods, books, men’s gifts, ladies’ accessories, local & international art, and so much more. Open 7 days a week.
255 Beach Drive NE
|
mfastpete.org
Named “BEST MUSEUM GIFT SHOP” by Tampa Bay Magazine, 2018
EVENTS
Hit Musical Returns
In 1999, the Pinellas County Millennium Celebration commissioned Bill Leavengood and Lee Ahlin to create a new musical based on the life and times of J. E. "Doc" Webb, the colorful St. Petersburg entrepreneur called the "P.T. Barnum of Retail." In 1925, Webb opened a tiny drug store on 9th Street and over the next five decades transformed it into a shopping complex covering 10 city blocks. In its heyday, Webb's City attracted thousands of visitors per day. The 2000 world premiere was presented at both Ruth Eckerd Hall and the Mahaffey Theater, where it broke their box office records, selling over 10,000 tickets. A September 2017 production was such a success, despite Hurricane Irma, that producers, sponsors, and performers have come on board again to revive it once more. The Palladium Theater and Will Knot Di Artists have teamed with The St. Petersburg Museum of History to present this multimedia musical set for September 21-23 at the Palladium. Tickets are on sale at the Palladium box office, at (727) 822-3590, or visit mypalladium.org
Al Lang Concert
Veteran rock band Counting Crows will celebrate 25 years of making music by hitting the road for a worldwide "25 Years and Counting" tour, stopping at Al Lang Stadium on Tuesday, July 31. Tickets are $32-$125 at ticketmaster.com or mahaffey.com.
DON’T FORGET! Get the Drain Unclogged!
SEWER DRAIN CLEANING WITH FREE CAMERA INSPECTION
$93 OR FREE
We’ll open the drain or it’s FREE!* Plus it’s Guaranteed for 1 Year! *Some restrictions apply. Call for details. Cannot be combined with other offers. Expires one month from date of publication.
727.266.6767 • Schedule online at RayDuncanPlumbingInc.com
SPL SCENE
St. Pete Pride Parade
PHOTOS/ST PETE PRIDE
SPL SCENE
Crown Eurocars Mercedes-Benz Grand Opening
Earlier this year, local artists submitted a painting in the Art Ties Us contest. The community voted online for their favorite piece and now the big reveal takes place. On July 27, the Art Ties Us Gala will present the top artworks during a fundraising evening to be held from 7 to 10 pm in the Grand Ballroom at The Birchwood, 340 Beach Dr. NE. The tribute to the arts and artists of St. Petersburg will feature food from six top restaurants, live music, art auctions, and an open bar; proceeds will benefit The Public Art Project founded by Derek Donnelly. Special guest Duncan McClellan will present awards to the 1st, 2nd and 3rd place winners, as well as this year's Enso award. Tickets are $125 and can be purchased at ArtTiesUs.com.
Copa Caliente
Those Queens are at it again! The glitzy St. Pete Glitter Queens 7th Annual Royal Ball is being held September 28 at 7 pm at the St. Petersburg Marriott Clearwater. "Copa Caliente: A Night in Old Havana" is the theme for this year's evening of fun and fund-raising for underserved children's charities in Pinellas County. The evening includes dinner, dancing (of course), and a silent auction. The Glitter Queens have donated over $302,000 to local women's and children's charities since 2012. For more information or tickets, go to stpeteglitterqueens.org
SPL SCENE
A new 28,000-square-foot regional skatepark opened with an official ribbon cutting on June 2. The park has features for all levels, including a street course, a snake run, an intermediate bowl, and a 12-foot-deep bowl designed for advanced skateboarders and BMX riders. Part of the St. Petersburg Parks & Recreation Department, the skatepark is located at 600 12th Street S.
ST. PETE'S BUSINESS DAILY
FREE
LIFETIME SUBSCRIPTION
BRING YOUR NEWS, YOUR PERSPECTIVE AND YOUR SPARK TO THE ST. PETE CATALYST
PHOTOS/CITY OF ST. PETERSBURG
Skatepark Ribbon Cutting
SPL SCENE
Heels To Heal
The 5th Annual Heels to Heal fashion show fundraiser was held May 11 at the Renaissance Vinoy.
PHOTOS/BARRY LIVELY
SPL SCENE
Fashion Funds The Cure
Over $500,000 was raised for the National Pediatric Cancer Foundation on May 19 at Fashion Funds the Cure Tampa, hosted by Jesse Palmer.
It is with much pleasure that we introduce to you the winners of the Teachers ROCK $20,000 Teacher Appreciation Dream Wedding, Kaylan Figueroa & Will Burnham. Kaylan and Will are both from the Tampa Bay area, and Kaylan is a 5th grade teacher at Riverhills Elementary Magnet School in Hillsborough County. On behalf of Old McMicky's Farm, our sponsors, media and vendor partners, and all of our staff, customers and supporters of the Farm, thank you to all of our community's teachers, school staff and administrators for your huge contribution in helping to shape the lives of our next generation of children.

Thank you to our Sponsors: DeBartolo Family Foundation, Pepin Family Foundation, Avalon Building Corporation

Thank you to our Media Partners: Tampa Bay Times, Newspapers in Education, Beasley Media Group, B98.7 FM, St Pete Life Magazine

Thank you to our amazing vendor partners for donating their great services:
Catering - Whaley's Blazin BBQ
Wedding Planning - Exquisite Events
Rings - Gold & Diamond Source
Cake - Chantilly Cakes
Bar Service - Spunky Spirits
Floral - Tampa Wedding Studio
Limo Services - Showtime Transportation
DJ - Events Done Right
Photography - Darin Crofton Photography
Bridal Gown - Truly Forever Bridal
Tuxedo - Sacino's Formal Wear
Linens - Connie Duglin Linens
Officiant Services - A Beautiful Wedding in Florida
Videography - Viegas Photography
Invitation - Eva Lu Designs
Atlas Results
Hummingbird Consulting
A bi-monthly magazine about St. Petersburg and downtown St. Pete (Florida). We curate the best of the city's arts, culture, nightlife, shopp...
Published on Jul 1, 2018
NAME
acct - switch process accounting on or off
SYNOPSIS
#include <unistd.h>

int acct(const char *filename);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

acct(): _BSD_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)
DESCRIPTION
The acct() system call enables or disables process accounting. If called with the name of an existing file as its argument, accounting is turned on, and records for each terminating process are appended to filename as it terminates. An argument of NULL causes accounting to be turned off.
CONFORMING TO
SVr4, 4.3BSD (but not POSIX).
NOTES
No accounting is produced for programs running when a system crash occurs. In particular, non-terminating processes are never accounted for. The structure of the records written to the accounting file is described in acct(5).
SEE ALSO
acct(5)
COLOPHON
This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/intrepid/man2/acct.2.html | CC-MAIN-2014-15 | refinedweb | 109 | 51.75 |
CSharp on RPi
Latest revision as of 12:49, 11 March 2013
C# can be used, in both compiled and interactive mode, on the Raspberry Pi via Mono. However, different Linux distributions differ in how floating-point parameters are passed to functions, resulting in some complications for JIT-compiled languages like Mono. See below for details.
Hard/Soft Float ABI Issue
There are two different versions of the ABI (application/binary interface) calling convention for ARM. The "hardfloat" convention means that floating-point parameters may be passed between functions in the floating-point registers. The "softfloat" convention means that floating point parameters may only be passed using integer registers.
Mono for ARM is written to use the softfloat ABI (which is the standard on iPhone and Android, for example). Unfortunately, the standard Raspberry Pi distributions (such as Raspian) use the hardfloat convention. On such a distro, Mono will install and run, but some functions generate incorrect results, and many apps will simply crash. See the official bug reports: [1], [2], and [3].
If you have Mono installed, you can test whether you are affected by this problem by executing the "csharp" command, and entering the code "Math.Pow(2,4)". If this returns 16, your installation is working correctly. If it returns some other number (such as 0.000746999867260456), then your installation has tripped over the hardfloat/softfloat issue.
Solution 1: Use a Softfloat Distro
To use the standard builds of Mono requires a distribution compiled with the softfloat ABI. This does not mean it can't use hardware floating-point operations; it's only the passing of floating-point values to functions that matters. In practice, though, current distributions built with the "soft float" settings do not make use of the floating-point hardware at all. This avoids the problem, but also reduces performance on floating-point operations substantially.
On the Raspberry Pi Downloads page, select a "soft float" distribution, such as "Soft-float Debian 'wheezy'."
Solution 2: Patch Mono
A patch to make Mono work on hardfloat ARM is available here [4]. This patch is for Mono 2.x; you can check what version of Mono you have with "mono --version".
In addition, it may be that this patch is not right or is incomplete for armv6 (the chip used in the Raspberry Pi). More details are needed.
Installation
To use C#, you must first install the mono-complete package. Under Debian:
sudo apt-get install mono-complete
Under Arch:
sudo pacman -S mono-complete
Usage
To use the interactive C# environment [5], simply type "csharp" at a command prompt. Enter some C# code, and it will be executed or evaluated immediately. Press Ctrl+D (or enter "quit;") to exit.
To compile, use the "mcs" (Mono C Sharp) command [6], specifying the name of your source file. This produces a new file with a ".exe" extension; run this executable with the "mono" command. For example:
mcs HelloWorld.cs
mono HelloWorld.exe
Examples
Hello World (Interactive)
The following shows how to launch the C# interactive environment and get it to print "Hello world!"
$ csharp
Mono C# Shell, type "help;" for help

Enter statements below.
csharp> print("Hello world!");
Hello world!
csharp>
Hello World (Compiled)
Put the following code in a text file called "HelloWorld.cs" (using your favorite text editor; "nano" [7] comes standard). Compile with "mcs HelloWorld.cs" and run with "mono HelloWorld.exe".
using System;

public class HelloWorld
{
    public static void Main()
    {
        Console.WriteLine("Hello world!");
    }
}
Serial Port (Interactive)
This example shows creating the serial port by name (the standard name for the UART on the expansion header is /dev/ttyAMA0), checking its status, opening it, and finally writing some data.
csharp> using System.IO.Ports;
csharp> SerialPort sp = new SerialPort("/dev/ttyAMA0", 9600);
csharp> sp.IsOpen;
false
csharp> sp.Open();
csharp> sp.IsOpen;
true
csharp> sp.WriteLine("Hello world!");
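Reading back from the port is symmetric. A hypothetical continuation of the session above (same port object; the timeout value is arbitrary):

```csharp
csharp> sp.ReadTimeout = 1000;        // give up after 1000 ms instead of blocking forever
csharp> string line = sp.ReadLine();  // reads characters up to the next newline
csharp> sp.Close();
```

If nothing arrives before ReadTimeout elapses, ReadLine() throws a TimeoutException.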
Introduction to SourcePawn
From AMWiki.
Non-Programmer Intro
This section is intended for non-programmers. If you're still confused, you may want to pick up a book on another language, such as PHP, Python, or Java, to get a better idea of what programming is like.
Comments
Note any text that appears after a "//" is considered a "comment" and is not actual code. There are two comment styles:
- // - Double slash, everything following on that line is ignored.
- /* */ - Multi-line comment, everything in between the asterisks is ignored. You cannot nest these.
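As a quick illustration (the variable names here are made up):

```sourcepawn
new health = 100;    // a single-line comment; everything after // is ignored

/* A multi-line comment:
   everything between the markers is ignored,
   and they cannot be nested. */
new armor = 50;
```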
Many of the language's design decisions were made by ITB CompuPhase. It is designed for low-level embedded devices and is thus very small and very fast.
Assignment
Variables can be re-assigned data after they are created. For example:
new a, Float:b, bool:c;

a = 5;
b = 5.0;
c = true;
Arrays
An array is a sequence of data in a sequential list. Arrays are useful for storing multiple pieces of data in one variable, and often greatly simplify many tasks.
Usage
Using an array is just like using a normal variable. The only difference is the array must be indexed. Indexing an array means choosing the element which you wish to use.
For example, here is an example of the above code using indexes:
new test[3];

test[0] = 5;
test[1] = 16;
test[2] = 100;
Strings
Strings are declared almost equivalently to arrays. For example:
new String:message[] = "Hello!";
new String:clams[6] = "Clams";
These are equivalent to doing:
new String:message[7], String:clams[6];

(The string "Hello!" is six characters plus a terminating null, which is why message needs seven cells.)
Characters
A character of text can be used in either a String or a cell. For example:
new String:text[] = "Crab";
new letter = 'C';
new clams[] = "Clams";                        //Invalid, needs String: type
new clams[] = {'C', 'l', 'a', 'm', 's', 0};   //Valid, but NOT A STRING.
Functions
There are five types of functions:
- native: A direct, internal function provided by the application.
- public: A callback function that is visible to the application and other scripts.
- normal: A normal function that only you can call.
- stock: A normal function provided by an include file. If unused, it won't be compiled.
- forward: This function is a global event provided by the application. If you implement it, it will be a callback.
All code in Pawn must exist in functions. This is contrast to languages like PHP, Perl, or Python, which let you write global code. That is because Pawn is a callback-based language; it responds to actions from a parent application, and functions must be built to handle those actions. Although our examples often contain free-floating code, this is purely for demonstration purposes. Free-floating code in our examples implies the code is part of some function.
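To illustrate the difference, here is a sketch of a normal function next to a public callback (the callback name is hypothetical):

```sourcepawn
// A normal function: only this script can call it.
AddTwoNumbers(first, second)
{
    return first + second;
}

// A public function: visible to the application, which can call it
// as a callback when the corresponding event fires.
public OnSomethingHappened()
{
    new total = AddTwoNumbers(2, 3);   // total = 5
}
```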
Natives
Natives are builtin functions provided by the application. You can call them as if they were a normal function. For example, SourceMod has the following function:
native FloatRound(Float:num);
It can be called like so:
new num = FloatRound(5.0); //Results in num = 5
Expressions
An expression is a combination of values and operators that evaluates to a single result:

new a = 3 + 4; //Evaluates to 7
As noted, expressions can contain variables, or even functions:
new a = 5 * 6;
new b = a * 3; //Evaluates to 90
new c = AddTwoNumbers(a, b) + (a * b);
Increment and Decrement
The increment (++) and decrement (--) operators can be placed before the variable (preincrement, predecrement) or after the variable (postincrement, postdecrement). The difference is in the order of execution. For example:
new a = 5;
new b = a++;
new c = ++a;
In this example, b will be equal to 5, after which a will be incremented to 6. But then a will be incremented to 7, and c will receive the value of 7.
Truth Operators
As noted earlier, expressions can either be true or false depending on if they're non-zero or zero. This is especially useful with truth operators. There are five important truth operators:
- && - Tests whether two expressions are both true.
- || - Tests whether one of two expressions is true.
- ! - Flips the truth value of an expression.
- == - Tests whether two expressions are numerically equivalent.
- != - Tests whether two expressions are numerically inequivalent.
There are also some mathematical truth operators (L is left hand, R is right hand):
- > - True if L is greater than R
- >= - True if L is greater than or equal to R
- < - True if L is less than R
- <= - True if L is less than or equal to R
For example:
(1 || 0);     //Evaluates to true because the expression 1 is true
(1 && 0);     //Evaluates to false because the expression 0 is false
(!1 || 0);    //Evaluates to false because !1 is false.
(1 != 3);     //Evaluates to true because 1 is not equal to 3
(3 + 3 == 6); //Evaluates to true because 3+3 is 6.
Note that these operators do not work on arrays. That is, you cannot compare strings or arrays using ==.
L-Values and R-Values
For example:
new a = 5;
In this example a is an l-value and 5 is an r-value.
The rules:
- Expressions are never l-values.
- Variables are both l-values and r-values.
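So, for instance:

```sourcepawn
new a;
a = 5;       // valid: a is an l-value
a = a + 2;   // valid: a + 2 is an expression, used here as an r-value
//5 = a;     // invalid: the literal 5 is not an l-value
```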
Conditionals
Conditional statements let you only run code if a certain condition is matched.
If Statements
An if statement runs its block of code only when its condition is true; optional else blocks handle the remaining cases:

if (a > 5)
{
    /* Code */
}
else
{
    /* Code */
}
Switch Statements
A switch statement compares a value against a list of cases. If a single case matches, its code is run, and the switch is then immediately terminated.
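A sketch of the shape (illustrative only; PrintToServer is a SourceMod native):

```sourcepawn
new weapon = 2;

switch (weapon)
{
    case 1:
    {
        PrintToServer("knife");
    }
    case 2:
    {
        PrintToServer("pistol");   // this case matches; the switch then ends
    }
    default:
    {
        PrintToServer("unknown");
    }
}
```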
Loops
Loops allow you to conveniently repeat a block of code while a given condition remains true.
For Loops
For loops are loops which have four properties:
- The begin statement, which is run before the first loop occurs.
- The condition statement, which checks whether the next loop should run.
- The iterator statement, which is run after each loop runs.
- The code block itself, which is what's run every loop.
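For instance, summing an array with the four parts labeled (an illustrative sketch):

```sourcepawn
SumArray(const array[], count)
{
    new sum;
    //   begin      condition  iterator
    for (new i = 0; i < count; i++)
    {
        sum += array[i];   // the code block, run every loop
    }
    return sum;
}
```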
While Loops
While loops are less common than for loops, but they are actually the simplest possible loop, with only two properties:
- The condition, which is checked before each loop.
- The code, which is what's run each loop.
As long as the condition expression remains true, the loop will continue. Here is an example of the previous for loop as a while loop:
SumArray(const array[], count)
{
    new i, sum;
    while (i < count)
    {
        sum += array[i];
        i++;
    }
    return sum;
}
Break and Continue
The break statement exits a loop early, and continue skips to the next iteration. For example:

/* Searches an array for a value and returns
 * its index, or -1 if it was not found. */
SearchInArray(const array[], count, value)
{
    new index = -1;
    for (new i = 0; i < count; i++)
    {
        if (array[i] == value)
        {
            index = i;
            break;
        }
    }
    return index;
}
SumEvenNumbers(const array[], count)
{
    new sum;
    for (new i = 0; i < count; i++)
    {
        /* If divisibility by 2 is 1, we know it's odd */
        if (array[i] % 2 == 1)
        {
            /* Skip the rest of this loop iteration */
            continue;
        }
        sum += array[i];
    }
    return sum;
}
Scope
Scope refers to the visibility of code. That is, code at one level may not be "visible" to code at another level. For example:
new A, B, C;

Function1()
{
    new B;
    Function2();
}

Function2()
{
    new C;
}

Here the global variables A, B, and C are visible everywhere. The B declared inside Function1() and the C declared inside Function2() are local: each exists only within its own function, and Function1()'s B shadows the global B there.
Dynamic Arrays
Dynamic arrays are arrays which don't have a hardcoded size. For example:
Function1(size)
{
    new array[size];
}

The array's size is not known until Function1() is called; it is allocated at run time from the size parameter.
Extended Variable Declarations
Variables can be declared in more ways than simply new.
decl
The decl keyword creates a variable without initializing it. This is faster for large arrays and strings, because new always zeroes every cell first:

decl String:buffer[512];
Notes
This example is NOT as efficient as a decl:
new String:blah[512] = "a";
Even though the string is only one character, the new operator guarantees the rest of the array will be zeroed as well.
Also note, it is invalid to explicitly initialize a decl:
decl String:blah[512] = "a";
The above code will not compile, because the purpose of decl is to avoid any initialization.
static
The static keyword is available at global and local scope. It has different meanings in each.
Global static
A global static variable is only visible from within the file that declares it.
Local static
A local static variable is a global variable that is only visible from its local lexical scope. For example:
MyFunction(inc)
{
    static counter;
    return (counter += inc);
}
MyFunction(inc)
{
    if (inc > 0)
    {
        static counter;
        return (counter += inc);
    }
    return -1;
}
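Given the function above, state persists between calls:

```sourcepawn
new a = MyFunction(5);    // returns 5
new b = MyFunction(5);    // returns 10 (counter kept its value)
new c = MyFunction(-1);   // returns -1 (inc is not > 0; counter untouched)
```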
I have pushed a custom image on dockerhub lets say for this example the image is test:latest. My kubernetes deployment is using this image and I was expecting a new deployment of the pods when I push the new version of the image.
I've created my deployment like this:
kubectl run sample-app --image=test:latest --namespace=sample-app --image-pull-policy Always
But this is not happening. What am I doing wrong here?
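For context: imagePullPolicy: Always only controls the pull performed when a Pod is created or restarted - nothing in Kubernetes watches the registry, so pushing a new test:latest does not by itself redeploy running Pods. A Deployment manifest roughly equivalent to the command above might look like this (names taken from the question; the label key is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  namespace: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sample-app
  template:
    metadata:
      labels:
        run: sample-app
    spec:
      containers:
        - name: sample-app
          image: test:latest
          # Pull on every Pod (re)creation -- this does NOT watch the
          # registry; a new push takes effect only when Pods are recreated,
          # e.g. via `kubectl rollout restart deployment/sample-app`.
          imagePullPolicy: Always
```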
Editor's note: this document is out of date and remains here for historic interest. See Synopsis 5 for the current design information.
A summary of the changes in Apocalypse 5:
Unchanged features
- Capturing: (...)
- Repetition quantifiers: *, +, and ?
- Alternatives: |
- Backslash escape: \
- Minimal matching suffix: ??, *?, +?
Modifiers
- The extended syntax (/x) is no longer required...it's the default.
- There are no /s or /m modifiers (changes to the meta-characters replace them - see below).
- There is no /e evaluation modifier on substitutions; use s/pattern/$( code() )/ instead.
- The /g modifier has been renamed to :e (for :each).
- Modifiers are now placed as adverbs at the start of a match/substitution:
@matches = m:ei/\s* (\w*) \s* ,?/;
- The single-character modifiers also have longer versions:
:i    :ignorecase
:e    :each
- The :c (or :cont) modifier causes the match to continue from the string's current .pos:
m:c/ pattern /    # start at end of
                  # previous match on $_
- The new :o (:once) modifier replaces the Perl 5 ?...? syntax:
m:once/ pattern / # only matches first time
- The new :w (:word) modifier causes whitespace sequences to be replaced by a \s* or \s+ subpattern:
m:w/ next cmd = <condition>/
- Same as:
m/ \s* next \s+ cmd \s* = \s* <condition>/
- The new :uN modifier specifies the Unicode level:
m:u0/ .<2> /    # match two bytes
m:u1/ .<2> /    # match two codepoints
m:u2/ .<2> /    # match two graphemes
m:u3/ .<2> /    # match language dependently
- The new :p5 modifier allows Perl 5 regex syntax to be used instead:
m:p5/(?mi)^[a-z]{1,2}(?=\s)/
- Any integer modifier specifies a count. What kind of count is determined by the character that follows.
- If followed by an x, it means repetition:
s:4x{ (<ident>) = (\N+) $$}{$1 => $2};
# same as:
s{ (<ident>) = (\N+) $$}{$1 => $2} for 1..4;
- If followed by st, nd, rd, or th, it means find the Nth occurrence:
s:3rd/(\d+)/@data[$1]/;
# same as:
m/(\d+)/ && m:c/(\d+)/ && s:c/(\d+)/@data[$1]/;
- With the new :any modifier, the regex will match every possible way (including overlapping) and return all matches.
$str = "abracadabra";
@substrings = $str =~ m:any/ a (.*) a /;
# br brac bracad bracadabr c cad cadabr d dabr br
- The :i, :w, :c, :uN, and :p5 modifiers can be placed inside the regex (and are lexically scoped):
m/:c alignment = [:i left|right|cent[er|re]] /
- User-defined modifiers will be possible
m:fuzzy/pattern/;
- Single letter flags can be ``chained'':
s:ewi/cat/feline/
- User-defined modifiers can also take arguments:
m:fuzzy('bare')/pattern/;
- Hence parentheses are no longer valid regex delimiters
Changed metacharacters
- A dot . now matches any character including newline. (The /s modifier is gone.)
- ^ and $ now always match the start/end of a string, like the old \A and \z. (The /m modifier is gone.)
- A $ no longer matches an optional preceding \n, so it's necessary to say \n?$ if that's what you mean.
- \n now matches a logical (platform independent) newline, not just \012.
- The \A, \Z, and \z metacharacters are gone.
New metacharacters
- Because /x is default: # now always introduces a comment.
- Whitespace is now always metasyntactic, i.e. used only for layout and not matched literally (but see the :w modifier described above).
- ^^ and $$ match line beginnings and endings. (The /m modifier is gone.)
- . matches an ``anything'', while \N matches an ``anything except newline''. (The /s modifier is gone.)
Bracket rationalization
- (...) still delimits a capturing group.
- [...] is no longer a character class.
- It now delimits a non-capturing group.
- {...} is no longer a repetition quantifier.
- It now delimits an embedded closure.
- You can call Perl code as part of a regex match.
- Embedded code does not usually affect the match - it is only used for side-effects:
/ (\S+)  { print "string not blank\n";
           $text = $1; }
  \s+    { print "but does contain whitespace\n" }
/
- It can affect the match if it calls fail:
/ (\d+) {$1<256 or fail} /
- <...> are now extensible metasyntax delimiters or ``assertions'' (i.e. they replace (?...)).
Variable (non-)interpolation
- In Perl 6 regexes, variables don't interpolate.
- Instead they're passed ``raw'' to the regex engine, which can then decide how to handle them (more on that below).
- The default way in which the engine handles a scalar is to match it as a \Q[...] literal (i.e. it does not treat the interpolated string as a subpattern).
- In other words, a Perl 6: / $var /
is like a Perl 5: / \Q$var\E /
- (To get regex interpolation use an assertion - see below)
- An interpolated array:
/ @cmds /
is matched as if it were an alternation of its elements:
/ [ @cmds[0] | @cmds[1] | @cmds[2] | ... ] /
- And, of course, each one is matched as a literal.
- An interpolated hash matches a /\w+/ and then requires that sequence to be a valid key of the hash.
- So:
/ %cmds /
- is like:
/ (\w+) { fail unless exists %cmds{$1} } /
Extensible metasyntax (<...>)
- The first character after
<determines the behaviour of the assertion.
- A leading alphabetic character means it's a grammatical assertion (i.e. a subpattern or a named character class - see below):
/ <sign>? <mantissa> <exponent>? /
- The special named assertions include:
/ <before pattern> /   # was /(?=pattern)/
/ <after pattern> /    # was /(?<=pattern)/ # but now a real pattern!
/ <ws> / # match any whitespace
/ <sp> / # match a space char
- A leading number, pair of numbers, or pair of scalars means it's a repetition specifier:
/ value was (\d<1,6>) with (\w<$m,$n>) /
- A leading $, @, %, or & interpolates a variable or subroutine return value as a regex rather than as a literal:
/ <$key_pat> = <@value_alternatives> /
- A leading ( indicates a code assertion:
/ (\d<1,3>) <( $1 < 256 )> /
- Same as:
/ (\d<1,3>) {$1<256 or fail} /
- A leading { indicates code that produces a regex to be interpolated into the pattern at that point:
/ (<ident>) <{ cache{$1} //= get_body($1) }> /
- A leading [ indicates an enumerated character class:
/ <[a-z_]>* /
- A leading - indicates a complemented character class:
/ <-[a-z_]> <-<alpha>> /
- A leading ' indicates an interpolated literal match (including whitespace):
/ <'match this exactly (whitespace matters)'> /
- The special assertion <.> matches any logical grapheme (including a Unicode combining character sequence):
/ seekto = <.> / # Maybe a combined char
- A leading ! indicates a negated meaning (a zero-width assertion except for repetition specifiers):
/ <!before _>   # We aren't before an _
  \w<!1,3>      # We match 0 or >3 word chars
/
Editor's note: this document is out of date and remains here for historic interest. See Synopsis 5 for the current design information.
Backslash reform
- The \p and \P properties become intrinsic grammar rules (<prop ...> and <!prop ...>).
- The \L...\E, \U...\E, and \Q...\E sequences become \L[...], \U[...], and \Q[...] (\E is gone).
- Note that \Q[...] will rarely be needed since raw variables interpolate as eq matches, rather than regexes.
- Backreferences (e.g. \1) are gone; $1 can be used instead, because it's no longer interpolated.
- New backslash sequences, \h and \v, match horizontal and vertical whitespace respectively, including Unicode. \s now matches any Unicode whitespace character.
- The new backslash sequence \N matches anything except a logical newline; it is the negation of \n.
- A series of other new capital backslash sequences are also the negation of their lower-case counterparts (for example, \X[...] matches anything but the specified hex character).
Regexes are rules
- The Perl 5 qr/pattern/ regex constructor is gone.
- The Perl 6 equivalents are:
rule { pattern }   # always takes {...} as delimiters
rx/ pattern /      # can take (almost any) chars as delimiters
- If either needs modifiers, they go before the opening delimiter:
$regex = rule :ewi { my name is (.*) };
$regex = rx:ewi/ my name is (.*) /;
- The name of the constructor was changed from qr because it's no longer an interpolating quote-like operator.
- As the syntax indicates, it is now more closely analogous to a sub {...} constructor.
- In fact, that analogy will run very deep in Perl 6.
- Just as a raw {...} is now always a closure (which may still execute immediately in certain contexts and be passed as a reference in others)...
- ...so too a raw /.../ is now always a regex (which may still match immediately in certain contexts and be passed as a reference in others).
- Specifically, a /.../ matches immediately in a void or Boolean context, or when it is an explicit argument of a =~.
- Otherwise it's a regex constructor.
- So this:
$var = /pattern/;
no longer does the match and sets $var to the result.
- Instead it assigns a regex reference to $var.
- The two cases can always be distinguished using m{...} or rx{...}:
$var = m{pattern};   # Match regex, assign result
$var = rx{pattern};  # Assign regex itself
- Note that this means that former magically lazy usages like:
@list = split /pattern/, $str;
are now just consequences of the normal semantics.
- It's now also possible to set up a user-defined subroutine that acts like grep:
sub my_grep($selector, *@list) {
    given $selector {
        when RULE { ... }
        when CODE { ... }
        when HASH { ... }
        # etc.
    }
}
- Using {...} or /.../ in the scalar context of the first argument causes it to produce a CODE or RULE reference, which the switch statement then selects upon.
Backtracking control
- Backtracking over a single colon causes the regex engine not to retry the preceding atom:
m:w/ \( <expr> [ , <expr> ]* : \) /
(i.e. there's no point trying fewer <expr> matches, if there's no closing parenthesis on the horizon)
- Backtracking over a double colon causes the surrounding group to immediately fail:
m:w/ [ if   :: <expr> <block>
     | for  :: <list> <block>
     | loop :: <loop_controls>? <block>
     ] /
(i.e. there's no point trying to match a different keyword if one was already found but failed)
- Backtracking over a triple colon causes the current rule to fail outright (no matter where in the rule it occurs):
rule ident {
    ( [<alpha>|_] \w* ) ::: { fail if %reserved{$1} }
    | " [<alpha>|_] \w* "
}
m:w/ get <ident>? /
(i.e. using an unquoted reserved word as an identifier is not permitted)
- Backtracking over a <commit> assertion causes the entire match to fail outright, no matter how many subrules down it happens:
rule subname { ([<alpha>|_] \w*) <commit> { fail if %reserved{$1} } }
m:w/ sub <subname>? <block> /
(i.e. using a reserved word as a subroutine name is instantly fatal to the ``surrounding'' match as well)
- A <cut> assertion always matches successfully, and has the side effect of deleting the parts of the string already matched.
Named Regexes
- The analogy between sub and rule extends much further.
- Just as you can have anonymous subs and named subs...
- ...so too you can have anonymous regexes and named regexes:
rule ident { [<alpha>|_] \w* }
# and later...
@ids = grep /<ident>/, @strings;
- As the above example indicates, it's possible to refer to named regexes, such as:
rule serial_number { <[A-Z]> \d<8> }
rule type { alpha | beta | production | deprecated | legacy }
in other regexes as named assertions:
rule identification { [soft|hard]ware <type> <serial_number> }
Nothing is illegal
- The null pattern is now illegal.
- To match whatever the prior successful regex matched, use:
/<prior>/
- To match the zero-width string, use:
/<null>/
Hypothetical variables
- In embedded closures it's possible to bind a variable to a value that only ``sticks'' if the surrounding pattern successfully matches.
- A variable is declared with the keyword let and then bound to the desired value:
/ (\d+) {let $num := $1} (<alpha>+) /
- Now $num will only be bound if the digits are actually found.
- If the match ever backtracks past the closure (i.e. if there are no alphabetics following), the binding is ``undone''.
- This is even more interesting in alternations:
/ [ (\d+)      { let $num   := $1 }
  | (<alpha>+) { let $alpha := $2 }
  | (.)        { let $other := $3 }
  ] /
- There is also a shorthand for assignment to hypothetical variables:
/ [ $num := (\d+) | $alpha := (<alpha>+) | $other := (.) ] /
- The numeric variables ($1, $2, etc.) are also ``hypothetical''.
- Numeric variables can be assigned to, and even re-ordered:
my ($key, $val) = m:w{ $1:=(\w+) =\> $2:=(.*?)
                     | $2:=(.*?) \<= $1:=(\w+) };
- Repeated captures can be bound to arrays:
/ @values:=[ (.*?) , ]* /
- Pairs of repeated captures can be bound to hashes:
/ %options:=[ (<ident>) = (\N+) ]* /
- Or just capture the keys (and leave the values undef):
/ %options:=[ (<ident>) = \N+ ]* /
- Subrules (e.g. <rule>) also capture their result in a hypothetical variable of the same name as the rule:
/ <key> =\> <value> { %hash{$key} = $value } /
Return values from matches
- A match always returns a ``match object'', which is also available as (lexical) $0.
- The match object evaluates differently in different contexts:
- in boolean context it evaluates as true or false (i.e. did the match succeed?):
if /pattern/ {...} # or: /pattern/; if $0 {...}
- in numeric context it evaluates to the number of matches:
$match_count += m:e/pattern/;
- in string context it evaluates to the captured substring (if there was exactly one capture in the pattern) or to the entire text that was matched (if the pattern does not capture, or captures multiple elements):
print %hash{$text =~ /,? (<ident>)/}; # or: $text =~ /,? (<ident>)/ && print %hash{$0};
- Within a regex, $0 acts like a hypothetical variable.
- It controls what a regex match returns (like $$ does in yacc).
- Use $0:= to override the default return behaviour described above:
rule string1 { (<["'`]>) ([ \\. | <-[\\]> ]*?) $1 }
$match = m/<string1>/;   # default: $match includes
                         # opening and closing quotes
rule string2 { (<["'`]>) $0:=([ \\. | <-[\\]> ]*?) $1 }
$match = m/<string2>/;   # $match now excludes quotes
                         # because $0 explicitly bound
                         # to second capture only
Matching against non-strings
- Anything that can be tied to a string can be matched against a regex. This feature is particularly useful with input streams:
my @array := <$fh>;          # lazy when aliased
my $array is from(\@array);  # tie scalar
# and later...
$array =~ m/pattern/; # match from stream
Grammars
- Potential ``collision'' problem with named regexes
- Of course, a named ident regex shouldn't clobber someone else's ident regex.
- So some mechanism is needed to confine regexes to a namespace.
- If subs are the model for rules, then modules/classes are the obvious model for aggregating them.
- Such collections of rules are generally known as ``grammars''.
- Just as a class can collect named actions together, so a grammar can collect named rules.
- Like classes, grammars can inherit:
grammar Letter { rule text { <greet> <body> <close> }
rule greet :w { [Hi|Hey|Yo] $to:=(\S+?) , $$}
rule body { <line>+ }
rule close :w { Later dude, $from:=(.+) }
# etc. }
grammar FormalLetter is Letter {
rule greet :w { Dear $to:=(\S+?) , $$}
rule close :w { Yours sincerely, $from:=(.+) }
}
- Inherit rule definitions (polymorphically!)
- So there's no need to respecify body, line, etc.
- Perl 6 will come with at least one grammar predefined:
grammar Perl { # Perl's own grammar
rule prog { <line>* }
rule line { <decl> | <loop> | <label> [<cond>|<sideff>|;] }
rule decl { <sub> | <class> | <use> }
# etc. etc. }
- Hence:
given $source_code { $parsetree = m/<Perl::prog>/; }
Transliteration
- The tr/// quote-like operator now also has a subroutine form.
- It can be given either a single PAIR:
$str =~ tr( 'A-C' => 'a-c' );
- Or a hash (or hash ref):
$str =~ tr( {'A'=>'a', 'B'=>'b', 'C'=>'c'} );
$str =~ tr( {'A-Z'=>'a-z', '0-9'=>'A-F'} );
$str =~ tr( %mapping );
- Or two arrays (or array refs):
$str =~ tr( ['A'..'C'], ['a'..'c'] );
$str =~ tr( @UPPER, @lower );
- Note that the array version can map one-or-more characters to one-or-more characters:
$str =~ tr( [' ', '<', '>', '&'], ['&nbsp;', '&lt;', '&gt;', '&amp;'] );
Back to: Java Tutorials For Beginners and Professionals
Looping Statements in Java with Examples
In this article, I am going to discuss the Looping Statements in Java with Examples. Please read our previous article, where we discussed Decision Making Statements in Java with examples. At the end of this article, you will understand what Looping Statements are and their types, with examples.
What are Looping Statements in Java?
Looping in programming languages is a feature that facilitates the execution of a set of instructions/functions repeatedly while some condition evaluates to true.
Types of looping statements in Java:
There are three types of looping statements in Java. They are as follows:
- For loop
- While loop
- Do while loop
For Loop in Java:
The Java for loop repeats the execution of a set of Java statements. A for loop executes a block of code as long as some condition is true. The syntax of the for loop is given below.
for(initialization; condition; increment/decrement)
{
Statement(s);
}
The initialization expression initializes the loop and is executed only once, when the loop begins. The termination condition is evaluated on every iteration, and if it evaluates to false, the loop is terminated. And lastly, the increment/decrement expression is executed after each iteration through the loop.
Flow chart of for loop:
Sample Program for For Loop in Java:
class ForLoopDemo {
    public static void main(String args[]) {
        for (int num = 1; num <= 5; num++) {
            System.out.println(num);
        }
    }
}
Output:
1
2
3
4
5
Enhanced for loop in Java:
Enhanced for loop provides a simpler way to iterate through the elements of a collection or array. It is less flexible, and should be used only when you need to iterate through the elements sequentially without knowing the index of the element currently being processed.
Also, note that the loop variable is effectively read-only when the enhanced for loop is used: reassigning it inside the loop does not change the corresponding array element, so you can't update values through it as you can with an index-based loop. The syntax of the enhanced for loop in Java is given below.
for(T element : Collection object / array)
{
Statement(s);
}
Example to Understand Enhanced for loop in Java:
public class EnhancedforloopDemo {
    public static void main(String args[]) {
        String array[] = {"Ron", "Harry", "Hermoine"};
        // enhanced for loop
        for (String x : array) {
            System.out.println(x);
        }
        /* equivalent for loop
        for (int i = 0; i < array.length; i++) {
            System.out.println(array[i]);
        }
        */
    }
}
Output:
Ron
Harry
Hermoine
While Loop in Java:
The while statement or loop continually executes a block of statements while a particular condition is true. The while statement continues testing the expression and executing its block until the expression evaluates to false. The syntax of the while loop in Java is given below.
while(condition)
{
Statement(s);
}
Flow chart of While Loop:
Example to Understand while Loop in Java:
class whileLoopDemo {
    public static void main(String args[]) {
        int x = 1;
        // Exit when x becomes greater than 4
        while (x <= 4) {
            System.out.println("Value of x: " + x);
            // Increment the value of x for the next iteration
            x++;
        }
    }
}
Output:
Value of x: 1
Value of x: 2
Value of x: 3
Value of x: 4
Do-while Loop in Java:
The difference between do-while and while is that do-while evaluates its expression at the bottom of the loop instead of the top. Therefore, the statements within the do block are always executed at least once.
Note that the do-while statement ends with a semicolon. The condition expression must be a boolean expression. The syntax to use the do-while loop is given below.
do{
Statement(s);
}
while(condition);
Flow chart of the do-while loop:
Example to Understand do-while Loop in Java:
class dowhileloopDemo {
    public static void main(String args[]) {
        int x = 21;
        do {
            // This line is printed even though the condition is false
            System.out.println("Value of x: " + x);
            x++;
        } while (x < 20);
    }
}
Output: Value of x: 21
In the next article, I am going to discuss Branching Statements in Java with examples. Here, in this article, I have tried to explain Looping Statements in Java with examples, and I hope you find it useful. I would welcome your feedback — please post your questions or comments about this article.
Introduction
Python is a great choice for anyone wanting to play with the increasingly popular ZIP or GZIP (not covered here) file formats, and as usual Python makes it surprisingly fun/easy!
Don’t believe me?
In this article we’ll look at creating, extracting, and adding to Zip archives using Pythons standard zipfile module and defining a set of functions you can use with your own Zip files; ending with an example which recursively scans a Zip file and sub-archives.
This does require some prior knowledge of Python, so if you have never used Python before you should read Vikram Vaswani’s Python 101 before reading this.
Creating Our Zip File
Lets jump right in and create our Zip file, then add a few sample files to it.
[code]
>>> import zipfile
>>> zip = zipfile.ZipFile('Python.zip', 'w')
>>> zip.write('file1.txt')
>>> zip.write('file2.gif')
>>> zip.close()
[/code]
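As an aside (not from the original listings): zipfile can also write a member directly from a string with writestr(), which is handy when the data isn't already in a file. A minimal sketch in modern Python — the io.BytesIO buffer and the decode() call are Python 3 idioms rather than the Python 2.3 style used in this article:

```python
import io
import zipfile

# Build a Zip entirely in memory; writestr adds a member
# straight from a string, no file on disk required.
buf = io.BytesIO()
zf = zipfile.ZipFile(buf, 'w')
zf.writestr('greeting.txt', 'hello from a string')
zf.close()

# Read the member straight back out of the same buffer.
zf = zipfile.ZipFile(buf, 'r')
greeting = zf.read('greeting.txt').decode('utf-8')
zf.close()
```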
So we should have a small Zip containing two files (file1.txt and file2.gif) sitting in our current working directory. Easy enough and pretty neat overall. How about something a little more interesting? Adding all the .txt files in a directory to our archive, perhaps?
[code]
#!/usr/bin/env python
import os, zipfile

def zipdir(path, extension, zip):
    for each in os.listdir(path or os.curdir):
        if each.endswith(extension):
            try:
                zip.write(os.path.join(path, each))
            except IOError:
                pass

if __name__ == '__main__':
    zip = zipfile.ZipFile('Python.zip', 'w')
    zipdir('', '.txt', zip)
    zip.close()
[/code]
Still pretty simple. This example basically defines a new user function named zipdir(), which follows three steps:
- Loop though a list of all the names in our directory.
- If each ends with the given extension, try and write it to zip.
- If an IOError is raised, skip this name and move on to the next one (this could happen if you have a folder ending with .txt).
There is a problem with this one though: because ZipFile is a file-based object, data already in our Zip gets wiped when we start writing again, just like with a normal file. Luckily this also means we can use other flags besides write; to show this we'll add a few more files to our Zip using append.
[code]
>>> import zipfile
>>> zip = zipfile.ZipFile('Python.zip', 'a')
>>> zip.write('file.txt')
>>> zip.write('file.gif')
>>> zip.write('folder/file.html')
>>> zip.close()
[/code]
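To make the difference between the two flags concrete, here is a small self-contained sketch (modern Python, using a temporary directory and writestr() so it stands alone — neither appears in the original listings): 'w' truncates the archive just like an ordinary file opened for writing, while 'a' preserves what's already there.

```python
import os
import tempfile
import zipfile

path = os.path.join(tempfile.mkdtemp(), 'Python.zip')

# First pass: create the archive with one member.
zf = zipfile.ZipFile(path, 'w')
zf.writestr('one.txt', 'first')
zf.close()

# 'a' keeps the existing members and adds to them...
zf = zipfile.ZipFile(path, 'a')
zf.writestr('two.txt', 'second')
zf.close()
zf = zipfile.ZipFile(path, 'r')
after_append = zf.namelist()
zf.close()

# ...while 'w' starts the archive over, discarding both earlier files.
zf = zipfile.ZipFile(path, 'w')
zf.writestr('three.txt', 'third')
zf.close()
zf = zipfile.ZipFile(path, 'r')
after_rewrite = zf.namelist()
zf.close()
```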
So we’ve seen how to create a Zip file and we’ve added a set of files to it using write and append flags, what’s next?
Going Full Monty with the Zip File
You guessed it, we're going to unzip them. (Using our file.txt and file.gif sample files again just to make things easier to follow.)
[code]
>>> import zipfile
>>> zip = zipfile.ZipFile('Python.zip', 'r')
>>> file('file.txt', 'w').write(zip.read('file.txt'))
>>> file('file.gif', 'wb').write(zip.read('file.gif'))
>>> zip.close()
[/code]
Note: Images are binary; I’ve used the ‘wb’ (write binary) flag for the second file although this may not always be necessary.
Ok we just extracted two files from our Zip, and in only five lines! And this example is fine if you know the names of the files you want to extract, but what if you don’t?
[code]
#!/usr/bin/env python
import zipfile
def inzip(filename, zip):
return filename in zip.namelist()
if __name__ == ‘__main__’:
zip = zipfile.ZipFile(‘Python.zip’, ‘r’)
inzip(‘file.txt’, zip)
zip.close()
[/code]
Short and sweet just like its name, this function simply returns True or False (True in the example above) if the string filename is in the list of files in the zip.
The namelist() method (along with its brothers and sisters) provides information about a Zip file, namelist() itself returns a list of all the files within a Zip. For example:
[code]
>>> zip.namelist()
[‘1.txt’, ‘2.txt’, ‘3.txt’, ‘file.gif’, ‘file.txt’, ‘folder/file.html’, ‘folder/’]
>>>
[/code]
You’ve checked the contents of the file and you want to get extracting. Rather than sitting there typing names into your Python shell one by one (which lets face it is pretty boring), I’m going to show you how.
[code]
#!/usr/bin/env python
import zipfile

def unzip(zip):
    for name in zip.namelist():
        file(name, 'wb').write(zip.read(name))

if __name__ == '__main__':
    zip = zipfile.ZipFile('Python.zip', 'r')
    unzip(zip)
    zip.close()
[/code]
This is fine for flat Zip files (those without subfolders), but it'd just barf all over the screen if we passed a name that included a nonexistent directory to file(). There are two choices:
- Remove everything before the filename — simple, yes, but you could end up with two files named the same, and we all know what happens next.
- We can create all the directories before unzipping our file, which is a lot safer, though requires a little more work…
Of course we’re going for the second choice, not only is it the most interesting but also the most Pythonic!
To borrow from another TV snake (Black Adder) “I have a cunning plan!”
[code]
#!/usr/bin/env python
import os, zipfile

def unzip(path, zip):
    isdir = os.path.isdir
    join = os.path.join
    norm = os.path.normpath
    split = os.path.split
    for each in zip.namelist():
        if not each.endswith('/'):
            root, name = split(each)
            directory = norm(join(path, root))
            if not isdir(directory):
                os.makedirs(directory)
            file(join(directory, name), 'wb').write(zip.read(each))

if __name__ == '__main__':
    zip = zipfile.ZipFile('Python.zip', 'r')
    unzip('', zip)
    zip.close()
[/code]
Don’t panic! This is a little more advanced than the other functions we’ve created so far and there’s actually quite a lot going on inside it so we’ll go though step by step; you might have noticed the os module sitting at the core of this example too.
The first part of this function is pretty strange as functions go; basically all it does is create some local copies of some of the functions from os.path (to improve performance).
Next we loop through each of the names in zip.namelist(), skipping any name that is a directory (i.e. ends with a forward slash):
[code]
>>> for each in zip.namelist(): print each
1.txt
2.txt
3.txt
file.gif
file.txt
folder/file.html
folder/
[/code]
The path is split from the filename and assigned to root, name. Our next line creates a variable named directory that holds the new path for the file, which is simply path and root joined.
Note: This won’t work with absolute paths like C:FolderFolderFile.ext; in this case the file should be extracted to that location (tested on windows). For this example I’m assuming that absolute paths won’t be used.
All we do then is check if the directory tree does NOT already exist before attempting to create it and extracting our file. Overall, it’s a very small function (especially compared to some other languages).
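For what it's worth, later versions of the zipfile module (Python 2.6 onwards) grew extract() and extractall() methods that do this directory creation for you — the hand-rolled function above is essentially what they automate. A sketch in modern Python:

```python
import io
import os
import tempfile
import zipfile

# Build an archive with a nested member.
buf = io.BytesIO()
zf = zipfile.ZipFile(buf, 'w')
zf.writestr('folder/file.html', '<html></html>')
zf.close()

# extractall() creates 'folder/' under dest automatically.
dest = tempfile.mkdtemp()
zf = zipfile.ZipFile(buf, 'r')
zf.extractall(dest)
zf.close()
extracted = os.path.exists(os.path.join(dest, 'folder', 'file.html'))
```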
Listings in the Key of Zip
Finally we're going to look at a function that uses recursion to move through a Zip file and its sub-archives, returning a complete list of all non-Zip files. But what's the point in this? Let's say, for instance, that you want to count the number of files in a Zip — this way all you have to do is call len() on our function's result. Enough talk, let's see this function in action.
[code]
#!/usr/bin/env python
import os, zipfile

def rezipe(path, files=None):
    if files is None:
        files = []   # avoid a shared mutable default argument
    zip = zipfile.ZipFile(path)
    for name in zip.namelist():
        if name.endswith('.zip'):
            file(name, 'wb').write(zip.read(name))
            rezipe(name, files)
            os.remove(name)
        elif not name.endswith('/'):
            files.append(name)
    return files

print len(rezipe('Python.zip'))
[/code]
But wait, there’s… uhm… nevermind. That’s it. Sorry if I hyped that last one up a bit.
Unlike our other examples, rezipe() opens the Zip file itself instead of using one we've already opened. It then loops through zip.namelist(), and if name ends with .zip we extract it to the current working directory and call rezipe() on it, removing it when rezipe() is complete. The next part simply says: if name isn't a Zip file or a folder, append it to the end of our list.
If you're anything like me, by now I'm sure you can see the potential this little guy (in particular) has and what this means for your Zip files!
If you’ve found this article interesting and you want to learn more about Python or some of the subjects covered here: – Python homepage – Python tutorial – Python module index – The zipfile module
Note: All the examples shown and discussed in this article were tested on Windows XP with Python 2.3 and are meant only as examples.
ConfigObj allows you to read and write config files, from Python, with the greatest of ease. Usually this means single line commands. For this reason it is also handy for storing data in a human readable way [1]. ConfigObj has gone through several permutations. It can read config files with sections, like windows INI files, and makes it very easy for users to write config files for programs that use it.
In a nutshell, if you give it a filename it will read all the keyword/value pairs from it. You access or change the values by treating it like a dictionary. You can then call the write method to save any changes made. Keywords are case insensitive, but ConfigObj preserves the original case when writing out. It is ease of use that gives ConfigObj the advantage over ConfigParser, but ConfigObj also has the following features :
- Ease of use - dictionary like access
- Sections are optional - flat config files if you want them
- Multiple (list) values for keywords
- Easy to create, modify, and write files
- Human readable
- Quoting is optional [where unambiguous]
- Will understand several keyword/value dividers
- Many options for parsing and writing
- Powerful integrated Validation scheme
- Lines (including lists) can be split over more than one line
- Support for unicode and input/output encoding
- String interpolation - similar to ConfigParser
- Optional attribute access for shorthand access to keywords/sections
Feedback on this module, including this document, is very much welcomed. Support for ConfigObj, along with bug reports, comments, etc, happens at the PythonUtils Mailing List. Bugs do pop up from time to time - but if you have problems I can usually sort them out within a few days.
ConfigObj needs python 2.2 or higher.
This documentation is a guide on how to use ConfigObj in your programs. You may also be interested in the ConfigObj API Docs.
validate.py is maintained at the Validate Home. There may be additional documentation or resources relating to it. The Pythonutils Package is a python package that includes ConfigObj and all dependencies. It comes with a setup.py and is simpler if you use ConfigObj regularly.
from configobj import ConfigObj
config = ConfigObj(infile=[], indict={}, **keywargs)

from configobj import ConfigObj
config = ConfigObj(filename)
You create a basic configobj by giving it a filename. You can also pass in any options you want to set. The options can either be in the form of keyword arguments, or as a dictionary. A useful default dictionary is available to modify - but we'll look more at the options later.
ConfigObj then reads the file and parses the values from it. Values are always read in as strings - or lists of strings. The file can either be a straightforward config file with just keywords and values - in which case we call it a flatfile. Alternatively it can be divided into sections. Each section then has it's own set of keywords. See the paragraph on config files to see the difference.
You can then access the keywords in the same way as you access a dictionary :
value1 = config['keyword1']
value2 = config['keyword2']
.
.
You can even use the normal dictionary method to change the values, and then write it out using the write method :
config['keyword1'] = value1
config['keyword2'] = value2
.
.
config.write()
ConfigObj inherits from the built-in dictionary type - so all the normal dictionary methods are available. Keywords in ConfigObj are case insensitive. This is done using a class called caseless. If you ever need a case insensitive dictionary or list, you can use these ! ConfigObj does preserve your original casing when it writes back out. [2]
This means that the following two lines are equivalent :
print config['FISH'] print config['fish']
If the config file has sections in it, then each section will be a dictionary of keywords and values.
print config['section1']
{'keyword1': 'value 1', 'keyword2': 'value 2', 'keyword3': 'value 3'}
You can create new sections in the following two ways :
config['section 2'] = None
# this is the same as
config['section 2'] = {}

config['section 2'] = {'keyword1': 'value 1', 'keyword2': 'value 2', 'keyword3': 'value 3'}
You access values in a section with :
value1 = config['section 1']['keyword 1']
value2 = config['section 1']['keyword 2']
You can also use ConfigObj to just read in some values from a file, and then just update those values in the file. This is done using a configspec when you read a file and the writein method to write it out again. But we'll see more about those later.
You can even create a completely empty configobj from scratch :
config = ConfigObj()
config.filename = filename
config['keyword1'] = value
config['keyword2'] = value
.
.
config.write()
It can then be read back in with :
config = ConfigObj(filename)
Easy hey !! This will be a flatfile by default. The same applies if you specify a filename which doesn't yet exist. To make it a configobj with sections you need to specify :
config = ConfigObj(flatfile=False)
The last thing I'll mention when covering the basics is list values in config files. Values are always strings - if you want integers, or anything else, you can do the conversion yourself ! They can however be comma separated lists of strings. The following three lines all represent the same list :
'keyword' = [value1, value2, value3]
'keyword' = (value1, value2, value3)
'keyword' = value1, value2, value3
These are read in, and turned into lists using the listquote module. If you pass in a list to a configobj then it will be written out as a list. This includes nested lists !! (lists of lists !! - unless you turn recursivelist off - see the options section). This means that the following line is perfectly valid :
config['keyword'] = [ value1, [value2, value3], value4 ]
The next few sections cover the options and more complex possibilities in greater detail. If you just want a list and brief description of all the options, methods and attributes associated with a configobj object then go to the summary section.
Before we look at config files themselves, which are easy enough, I'll mention the different types of object that ConfigObj can read a config file from. You create a new ConfigObj using config = ConfigObj(infile). infile can be a list of lines, a filename, a dictionary, or a StringIO instance. Because we support StringIO objects, it can actually use any object with the 'getvalue', 'seek', 'readlines', and 'writelines' methods. Beware that if you use cStringIO [3], you won't be able to use the 'write' or 'writein' methods. Whatever infile is (other than a dictionary), each line will be 'rstripped' to remove any trailing '\n'.
If infile is a filename, the file will be automatically read - and can be automatically written to. ConfigObj will attempt to work out for itself whether the file is a flatfile or a sectionfile. You can override this by passing in the 'flatfile' argument. If the file doesn't exist, ConfigObj will assume that you want a flatfile unless you tell it otherwise.
If infile is a list of lines (unicode strings or normal strings, possibly with encoding specified) then the write and writein methods will always return lists of lines. You can change this by setting config.filename
Passing in a dictionary is a convenient way of quickly creating a config file. In this case, ConfigObj will also attempt to work out whether the dictionary represents a flatfile or not. For it to be a flatfile, every member must be a string, tuple, or list. For it to be a sectionfile every member must be a dictionary - a section. If you mix types, or have members of other types, a TypeError will be raised and the object won't be created. If you pass in the wrong value for flatfile (or a configspec that tests for the opposite of the dictionary) then a ValueError will be raised. Currently the configspec isn't used when you create a ConfigObj from a dictionary - this is likely to change. Also input encodings aren't used when a dictionary is passed in. This could create problems when you try to output. For the moment, if you want an output encoding then make sure you have already converted keys and values to unicode.
There are two types of config files that ConfigObj can read.
Basic files with just keyword = value pairs - we call these 'flatfiles'. Each keyword must be unique. The second type are slightly more complicated - we'll call these 'sectionfiles'.
These are config files with sections. Each section is separated with a section marker : [section 1]
Each section has its own list of keyword = value pairs. Keywords that appear in one section can be repeated in another section. Any keywords that appear at the start - before any section marker - go in the 'empty section'.
Example 'flatfile' (normal config file) :
# this is a comment line
; so is this
# The initial comments in a file are preserved as
# config.initial_comment (a list of lines)
'keyword 1' = 'value 1'     # this is a comment for keyword1
keyword2=value2
keyword3 : value3           ; this is a comment for keyword3
'keyword 4' value 4         # quotes are optional if unambiguous
keyword5, value5            # notice the different sorts of dividers
"keyword '6'" : "value6"
"keyword 7" "value 7"
'keyword 8' 'A very long entry for value 8 \
    so long that it goes over onto two lines'
# this comment is not on a keyword line - so gets ignored
'keyword 9' '&mjf-quot;value 9&mjf-quot;&mjf-lf;'
'keyword 10' [value1, 'value 2', "value3", [value1, (value1, value2)]]
'keyword 11' = (value1, value2, value3)
'keyword 12' = value1, value2, value3, \
    value4, value5
/* This marker starts a multi line comment.
It must be the first thing on the line.
The last few comment lines are preserved as
config.final_comment (a list of lines)
This is all still a comment, but it ends, here. */
It looks a bit of a mess, but I've tried to illustrate the variety of ways to separate keyword and value. It looks a lot neater if you stay consistent within a file ! Keywords can be quoted with either single quotes ' or double quotes ". The quoting is optional; but if the keyword name has spaces in it then it must be quoted.
You can also have comments on the same line as keywords and values; perhaps to explain what that keyword is for. Comments on the same line as a keyword are preserved. They can also be modified and will be written back out if the config file is saved. See the section on comments for more details.
In keyword 9, you can see the strange values '&mjf-quot;' and '&mjf-lf;'. These are what ConfigObj uses to escape double quotes and line feeds. If a value just uses " it can be quoted with ' (and vice-versa). If it uses both then it will be quoted with " and any double quotes inside the value will be translated into '&mjf-quot;'. Any line-feeds in a value will automatically be translated into '&mjf-lf;' when written out. These special escapes will automatically be translated back when read in. This means you can use them yourself when writing config files by hand.
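The escape translation described above can be sketched in a few lines. This is an illustrative modern-Python sketch, not ConfigObj's actual code; the helper names are invented for the example.

```python
def escape_value(value):
    # Translate double quotes and line feeds into ConfigObj's escapes.
    return value.replace('"', '&mjf-quot;').replace('\n', '&mjf-lf;')

def unescape_value(value):
    # Reverse the translation when reading a value back in.
    return value.replace('&mjf-quot;', '"').replace('&mjf-lf;', '\n')

escaped = escape_value('say "hello"\nworld')
print(escaped)   # say &mjf-quot;hello&mjf-quot;&mjf-lf;world
print(unescape_value(escaped) == 'say "hello"\nworld')   # True
```

Because the translation is its own inverse, values round-trip safely through a write and a read.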
Keywords 10-12 show lists of values. Individual members of the list can also be lists - ConfigObj preserves the list structure when it reads it back in. This can be useful for preserving data in a config file. A list can be just comma separated values or enclosed between '(..)' or '[..]'. Obviously nested lists must be between brackets.
ConfigObj will also allow multi-line comments using /* long comments */. This allows you to put an introduction or a description into your config files - or even embed other data structures ! These will only function properly if the first marker /* is the first thing on the line and the last one */ is the last thing on a line. This allows for a much simpler implementation !
Keywords 8 and 12 illustrate that you can split lines over two lines by ending the line with \. This just joins the two lines together. Because whitespace isn't significant in lists, you can indent the entries. For a single string value, the extra whitespace would get included. We don't support RFC822 style continuations like ConfigParser does. This is because it is incompatible with allowing nested sections, based on indentation. At some point ConfigObj will allow multiple levels of sections (subsections), where whitespace will be significant. The current method of splitting a long line will still work. The way to include a '\n' in a value is to use the escape '&mjf-lf;'.
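The line-joining behaviour can be sketched like this. A rough modern-Python sketch with an invented helper name, not ConfigObj's implementation; it assumes (as for lists) that leading whitespace on a continuation line is not significant.

```python
def join_continuations(lines):
    # Join any line ending with a backslash onto the following line.
    joined = []
    buffer = ''
    for line in lines:
        stripped = line.rstrip()
        if stripped.endswith('\\'):
            # Drop the backslash and keep accumulating.
            buffer += stripped[:-1]
        elif buffer:
            # Leading whitespace on the continuation is not significant here.
            joined.append(buffer + stripped.lstrip())
            buffer = ''
        else:
            joined.append(line)
    return joined

print(join_continuations(["'keyword 12' = value1, value2, \\",
                          "    value3"]))
# ["'keyword 12' = value1, value2, value3"]
```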
Example 'sectionfile' :
'keyword1' 'value 1'
'keyword2' 'value 2'
'keyword3' 'value 3'
[section 1]    # comments on section marker lines are discarded
'keyword1' 'value 1'    ; other comments will be preserved, as with flatfiles
'keyword2' 'value 2'
'keyword3' 'value 3'

; empty lines
# and ones with just comments on
; are ignored

['section 2']
'keyword1' 'value 1'
'keyword2' 'value 2'
'keyword3' 'value 3'
["section 3"]
'keyword1' 'value 1'
'keyword2' 'value 2'
'keyword3' 'value 3'
[ section 4 ]
keyword1 = 'value 1'
As you can see the keyword names can be repeated across the sections. Sections must be unique though, and only defined once in a file. The keywords that appear at the top of the file, before a section marker, are put in a special section called the 'empty section'. This is accessed by using '' as the key.
Parsing the above file would result in :
config = ConfigObj(linelist)
print config
{'': {'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'},
 'section 4': {'keyword1': 'value 1'},
 'section 1': {'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'},
 'section 2': {'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'},
 'section 3': {'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'}}
so the empty section is :
print config['']
{'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'}
section 1 is accessed with :
print config['section 1']
{'keyword3': 'value 3', 'keyword2': 'value 2', 'keyword1': 'value 1'}
Each section is a dictionary of keywords/values. So you access individual keywords in a section using :
print config['section 1']['keyword1']
value 1
You can also use ordinary dictionary methods on each section - like has_key, update, get and setdefault etc.... In actual fact each section is a caselessDict rather than a normal dictionary. This means that keywords in a section are case insensitive too - it also means that sections have a couple of extra methods associated with caselessDict that ordinary dictionaries don't. See the docs in the caseless module if you are curious.
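To illustrate the case-insensitive lookup described above, here is a minimal modern-Python sketch of the idea - not the actual caselessDict from the caseless module (which also preserves the original case of keys and adds extra methods).

```python
class CaselessDict(dict):
    # A minimal case-insensitive dictionary: keys are normalised to
    # lower case on every access. Illustrative only.
    def __setitem__(self, key, value):
        dict.__setitem__(self, key.lower(), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, key.lower())

    def __contains__(self, key):
        return dict.__contains__(self, key.lower())

d = CaselessDict()
d['Keyword1'] = 'value 1'
print(d['KEYWORD1'])        # value 1 - lookup ignores case
print('keyword1' in d)      # True
```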
Note that the last section, 'section 4', doesn't have quotes inside the section marker. The leading and trailing spaces in the name are removed. This means that [section 4] and [ section 4 ] are identical !!
You can create a new section either by giving a configobj a dictionary of keyword/values or by initialising a new, empty section by passing it None. Both of the following work :
config['section 2'] = None
config['section 2'] = {'keyword1': 'value 1',
                       'keyword2': 'value 2',
                       'keyword3': 'value 3'}
Configspecs are optional. They can be used for several purposes :
There are various attributes that affect their behaviour, including how your ConfigObj handles errors. These attributes can be set when the ConfigObj is created and/or changed afterwards. If you don't provide a configspec then it will be set to None - this means parse/write everything from the file.
When first reading a config file, a configspec is used to specify exactly which keywords/sections should be read from a file. Any missing ones will have a default value filled in. This avoids key errors if your user provides an incorrectly filled in config file and your program tries to access values that ought to be there. It also allows you to just read a subset of keywords from a larger config file. Alternatively they could be used to specify just a subset of values to write out or the order you want them to be written out in. Normally a dictionary is an unordered object. When writing out a ConfigObj there is no guarantee about what order the keyword-values will get written out in. A configspec allows you to present a consistent and readable config file for your user.
Validation is a bit more complicated. This allows you to use another object - an instance of the Validator class - to check that all the entries make sense. This uses the validate module written by Michael Foord and Mark Andrews. The validate module is available separately, but is also distributed with ConfigObj. See the Validation section for further details. Also see the Other Methods section for details of the relevant 'parseconfigspec', 'stripconfigspec', and 'validate' methods.
It does mean that a configspec can either be a list of keywords, or it can be a full configspec with validation information. We will refer to the simpler configspecs as 'simple configspecs' or 'stripped configspecs'. The more complicated ones we will call 'full configspecs' or 'configspecs with validation info'.
You pass in a configspec when you create a ConfigObj, by using the configspec option. config = ConfigObj(filename, configspec=configspec). The configspec you pass in can either be a filename, StringIO instance, or a list of lines. If you have specified an encoding, that will be used to decode the configspec.
When reading a file, a simple configspec is just a list of which keywords to parse (one per line). If the configspec is for a 'sectionfile' then it contains section markers and keywords. The ConfigObj will only read keywords that are present in the configspec, and ignore additional values. When writing a file it specifies the order to write out the keywords. Inside a configspec, you can specify that the configobj is to parse/write everything from a particular section - by listing just the section marker and no keywords.
By default ConfigObj will work out for itself whether the file it is reading is a flatfile or has sections. It first checks the configspec, if there are any sections in it, it decides it's supposed to be parsing a 'sectionfile'. You can override the default behaviour and tell ConfigObj to expect a flatfile or sectionfile - but passing in a configspec that contradicts that setting is a guaranteed way to cause problems. Alternatively passing in a configspec that looks like a flatfile and giving it a sectionfile is another sure way to disaster !
The way this works is easy in practice. Before we look at the settings that affect its behaviour, I'll show you some examples. One configspec for a flatfile and one for a sectionfile configobj.
flatfile configspec :
keyword1
keyword2
keyword3
keyword4
sectionfile configspec :
keyword1
keyword2
keyword3
keyword4
[section1]
keyword1
keyword2
keyword3
keyword4
[section2]
keyword1
keyword2
keyword3
keyword4
[section3]
keyword1
keyword2
keyword3
keyword4
When a config file is read it will only read in values that are in the configspec. Any keywords that are in the configspec but are missing from the config file will be set to a default value. This is normally None rather than an empty string (''). This means you can tell which values were missing from the file by testing if they are None. (A blank value of '' may be a perfectly valid entry in a config file, but to have the keyword missing might not be !!). You can change this default value with the 'default' keyword when you create the configobj.
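The default-filling behaviour described above amounts to a dictionary merge. A modern-Python sketch of the idea (the names here are illustrative, not ConfigObj's internals):

```python
configspec = ['keyword1', 'keyword2', 'keyword3']   # what we expect
parsed = {'keyword1': 'value 1'}                    # what the file supplied
default = None                                      # ConfigObj's usual default

# Every keyword in the configspec gets an entry; missing ones get the default.
config = {key: parsed.get(key, default) for key in configspec}

print(config['keyword1'])           # value 1
print(config['keyword2'] is None)   # True - this keyword was missing
```

Testing for None rather than '' is what lets you distinguish "missing from the file" from "present but blank".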
As I mentioned earlier you can also specify everything from a particular section by providing just the section marker. For example :
[section 1]
[section 2]
[section 3]
Means parse everything from 'section 1', 'section 2' and 'section 3'. This isn't quite the end of the story though....
You can see in the first sectionfile configspec that some of the keywords occurred before the first section marker. These obviously relate to keywords in the 'empty section'. There is a marker that means the 'empty section'. Unsurprisingly it's : [] or [ ] (whitespace in markers is removed). This is so that you can specify 'parse everything in the empty section'. To parse everything from the empty section and section 1-3 the config spec becomes :
[ ]
[section 1]
[section 2]
[section 3]
You can also explicitly specify keywords in the empty section using this marker :
[ ]
keyword1
keyword2
Means keywords 1 and 2 from the empty section. In fact the empty section marker is optional in this case - any keywords before the first section marker are automatically in the empty section.
The empty section marker can only be used at the start of a configspec and must never be used inside a config file. Using it anywhere else in a configspec will raise an EmptyError - "Empty section defined in the wrong place." What effect this has on your program is determined by the settings you passed in to the exceptions keyword. The default is to print the error and carry on - see the exceptions section.
It's worth noting that there are a few errors that can be raised from a configspec - from badly built lines to duplicate keywords to badly built section markers. Whilst these aren't necessarily fatal, they're guaranteed to confuse your user who will think there is something wrong with his config file... when in fact it's your fault. configspecs are so simple that there is no need to make mistakes !!
Two attributes of ConfigObj are particularly relevant to configspecs :
configspec_only
    Raise errors if there are extra/missing values from the configspec ?
    default : False
write_full
    Write out all values and sections (even if missing from the configspec) ?
    default : True
You can either set these with keywords when you create the object or by setting the attribute directly. e.g. config.write_full = False
If you want your config file to contain all the keywords defined in your configspec, and nothing more, then set configspec_only to True. This causes a conspecerror to be raised if there are any extra/missing keywords or sections. It applies both when reading and when writing a configobj. How ConfigObj responds to this error is determined by your exceptions settings ! See the exceptions section. If it's not set then missing keywords will have the default set when reading (or not be output when writing) and extra ones will be ignored (when reading) or output (when writing - but also dependent on the write_full attribute).
Unlike the configspec_only keyword, the write_full keyword is only relevant when writing files out. The default is for the write and writein methods to write out all the values in the config object. This means that if you have added values and they aren't in the configspec, they will simply be added at the end (or the end of that section). If you only want to write out the ones in the configspec then set write_full to False.
If you pass in a configspec when you read a file, that configspec will be preserved in the object - as the attribute config.configspec. If you later use the write method without passing in a different configspec then the original will be used. At any time you can update the configspec by changing config.configspec - or even just setting it to None (which means write everything).
In order to help you create a configspec, a configobj has a method called makeconfigspec. You call it without arguments and it returns a full configspec for that configobj. Obviously the ordering of keywords/sections will be random (except the empty section always appears first).
configspec = config.makeconfigspec()
If you have a full configspec (with validation info) and you want to turn it into a simple configspec, you can use the stripconfigspec method. This also removes all comments etc.
stripped_spec = config.stripconfigspec(configspec)
As we've mentioned, specifying which keywords to read, and the order to write them, isn't the whole story with configspecs. They can also be used for validating your config files. It may be important to your program that the user supplies a valid IP address as one of the options, or that his shoe size is specified as a number. The Validation interface provides a way for you to automatically specify these checks. It uses a new Validator class created by Mark Andrews and myself for this purpose, although it has wider application than just ConfigObj.
This system allows you to create a config file that specifies a check for each keyword/value in your ConfigObj. This config file is your configspec. The check is specified as a function call with parameters, using basically the same syntax as Python. You register the functions you want to use with your Validator instance and then call the validate method of your ConfigObj. This walks the configspec and returns a list of keywords, or (section, keyword) tuples, that failed. If they all pass then it returns True. Validator can also use regular expressions (which you give a name to) and test for a match. validate.py comes with a battery of functions and regular expressions included and examples of how to use them.
from validate import Validator
val = Validator()

def test_range(value, min, max):
    value = float(value)
    min = float(min)
    max = float(max)
    return (min <= value <= max)

val.functions['range'] = test_range
print val.test('range(20, 50)', '30')
True
In the above example we define a function and set it in the 'functions' attribute of val. We give it the name 'range'. [5] Our actual test is the string 'range(20, 50)'. This means that any value tested will be tested by calling the range function with min set to '20' and max set to '50'. Because we are parsing from strings the values will always be passed in as strings. The actual value being tested is always passed as the first argument to the function. In this case '30' is between '20' and '50' so it returns True. Normally you won't directly call the test method, but call config.validate(val).
You can pass named keyword arguments using this system. There is a standard test called 'none' which always passes. There is also a standard test called 'multiple' that allows you to choose several tests - which are all parameters to the multiple function. This can be an 'and' test (where all the tests must pass) or an 'or' test, where a single pass will suffice.
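As a rough illustration of how an 'and'/'or' style 'multiple' check works, here is a simplified modern-Python stand-in - not the validate module's actual implementation, and the test functions are invented for the example.

```python
def multiple(value, mode, *tests):
    # Run every test on the value; combine the results with 'and' or 'or'.
    results = [test(value) for test in tests]
    return all(results) if mode == 'and' else any(results)

def is_digit(value):
    return value.isdigit()

def in_range(value):
    return 20 <= float(value) <= 50

print(multiple('30', 'and', is_digit, in_range))   # True - both tests pass
print(multiple('99', 'or', is_digit, in_range))    # True - the digit test passes
print(multiple('99', 'and', is_digit, in_range))   # False - out of range
```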
See the examples, and the functions used, for more details. [6] The Validator class has many more methods and can be used in situations other than just with ConfigObj. Because the configspec is easily expressed as a config file it is convenient to use ConfigObj to parse it though.
An example full configspec will look something like :
test1 = range(25, 75)
test2 = length(3)
[fish]
fish1 = upper
fish2 = upper()
[eggs and ham]
eggs = multiple('and', none, "frange(1.1, 2.3)", digit)
ham = option('bum', 'bim', 'bam')
[out]
oforder = string
This specifies validation info for several keywords, over several sections. For example the 'test1' keyword in the empty section must have a value between 25 and 75. The 'ham' keyword in the 'eggs and ham' section must be either 'bum', 'bim', or 'bam'. The 'fish1' keyword in the 'fish' section must be upper case. The 'eggs' keyword in the 'eggs and ham' section must pass three tests. One of these, 'none' is an automatic pass - so it's a pretty dumb test.
Using the configspec is done in the same way as with a stripped configspec. It can be used for all the same purposes. :
from validate import Validator
val = Validator()
config = ConfigObj(filename, configspec=filename2)
print config.validate(val)
True
Having created a ConfigObj instance, you get it to validate itself by passing the 'validate' method your Validator instance. For flatfiles it returns either True (everything passed) or a list of keywords that failed. For sectionfiles it returns a list of tuples (section, keyword).
When you give ConfigObj a configspec it first checks to see if it contains validation information. If parsing it fails, then it assumes it doesn't ! [7] This means a badly built configspec will cause weird errors ! It parses the configspec as a ConfigObj instance, saved at config.__configspec__. It then strips the validation information and saves the stripped configspec at config.configspec. The functions to do this are available to you as methods of ConfigObj.
For example, if you are confident your configspec is valid, you can call :

config.__configspec__, config.configspec = config.parseconfigspec(configspec)
When you call the validate method, only sections/keywords with an entry in config.__configspec__ will be checked. Any entries in config.__configspec__ that don't exist (raise KeyError) will be marked as fail. If your functions raise exceptions the test will be marked as a fail (so check them carefully).
You can also manually parse and add your configspec for validation. If you do this you must use the lists=False option. (Because the function calls look like badly built lists unless they have quotes round them). Whether you do it manually or when the ConfigObj is created, you have all the usual ConfigObj methods available on your configspec :
config.__configspec__ = ConfigObj(filename, lists=False)
config.__configspec__['section 1']['value 1'] = 'frange(0, 1.0)'
config.__configspec__.write()
The ConfigObj methods and attributes relevant to validation are : the parseconfigspec method, the stripconfigspec method, the validate method, and the __configspec__ attribute.
We've basically covered this method. :
config.validate(val)
If there is no config.__configspec__ attribute it returns None. If every test passes it returns True. Otherwise it returns a list of failed tests. A list of keywords for a flatfile, and a list of (section, keyword) tuples for a sectionfile. Missing keywords, or tests that raise exceptions, are counted as fails. If you parsed your config file with the configspec and there were missing keywords, these will have been given the default value - usually None. Some tests expect a string, i.e. they call upper or some other string method. These will raise an AttributeError if passed None and therefore automatically (and probably rightly) fail. Your functions could automatically fail if passed None - or you could parse with 'configspec_only' on.
We've also covered this attribute. __configspec__ is your configspec turned into a ConfigObj. This is used by the validate method to do the validation.
This method takes a full configspec and parses it, then strips it. It calls 'ConfigObj' and then 'stripconfigspec'.
__configspec__, configspec = config.parseconfigspec(configspec, encoding=None)
If configspec is not a full configspec (or ConfigObj raises an exception whilst parsing) then __configspec__ will be None. 'configspec' is the stripped version of the configspec you pass it. By default it uses config.encoding (if set) to decode the configspec you supply. You can override this by passing in an encoding. Setting encoding=False forces no decoding to be done. The configspec you give it must be a list of lines, filename, or StringIO instance. Anything else will raise a TypeError.
This method takes a full configspec and removes comments and validation information. It returns a stripped configspec. :
strippedspec = config.stripconfigspec(configspec)
When you create a configobj there are various options you can set, either as keywords or by passing in a dictionary of options.
Every configobj has the default values built in. You only need to specify any options that are different from the defaults. In the configobj module there is also a copy of this dictionary. This is called pref_dict - it is probably more useful to copy and modify it than import it.
The options are (shown with the default setting) :
So to read a config file, or cause an IOError if the filename specified doesn't exist, use :
config = ConfigObj(filename, fileerror=True)
To change the default divider when writing files, you could use :
config = ConfigObj(filename, divider=' : ')
Or you can put the options in a dictionary and pass that in :
config = ConfigObj(filename, pref_dict)
If you pass in a dictionary and keywords, the keywords will take precedence over the options set in the dictionary.
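The precedence rule described above is the usual "merge then override" pattern. A modern-Python sketch (option names taken from this document; the merge itself is illustrative, not ConfigObj's code):

```python
pref_dict = {'divider': ' = ', 'fileerror': False}   # dictionary of options
keywords = {'fileerror': True}                       # explicit keyword arguments

options = dict(pref_dict)    # start from the dictionary of options...
options.update(keywords)     # ...then let explicit keywords win

print(options['fileerror'])  # True - the keyword took precedence
print(options['divider'])    # ' = ' - unchanged from the dictionary
```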
Most of these options (except the ones that are only relevant to the initial reading of files) remain as attributes of the created configobj. For example you can set 'force_return'=True when you create the configobj. You can later amend this by setting config.force_return = False
This keyword is particularly useful when creating empty configobjs. It doesn't just apply to new ones though. This option specifies whether the config file is to be a flatfile or a sectionfile. If set to False it is a configobj with sections. If True it is a flatfile. If the filename you pass in doesn't exist then it defaults to True. If you leave it at the default value of None then ConfigObj will attempt to work out for itself whether a config file it reads is a flatfile or not. It first checks the configspec for section markers - if you haven't passed in a configspec then it checks the file itself. This value exists as an attribute of the configobj after creation - config.flatfile. However it is read only; changing it has no effect other than causing spectacular crashes the next time you use the configobj !! If you want to create a blank configobj that is a 'sectionfile', use :
``config = ConfigObj(flatfile=False)``
When ConfigObj writes out a config file using the write or writein methods it writes lines using the keyword divider value format. By default divider is set to ' = ', so lines it writes look like keyword = value. You can alter this divider to be anything you want - make sure it's a valid one though ! The valid dividers are :
' ', ',', ':', '\t', '=' ('\t' means TAB)
Any extra whitespace is ignored and can be used for formatting purposes.
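How a divider setting shapes output lines can be sketched in one function. This is an illustrative modern-Python sketch with an invented helper name, not how ConfigObj's write method is implemented.

```python
def write_lines(items, divider=' = '):
    # Render keyword/value pairs using the configured divider.
    return [key + divider + value for key, value in items]

print(write_lines([('keyword1', 'value 1')]))
# ['keyword1 = value 1']
print(write_lines([('keyword1', 'value 1')], divider=' : '))
# ['keyword1 : value 1']
```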
I've mentioned exception handling several times now. Whilst parsing the config file, or even reading the configspec, there are several errors that can occur. ConfigObj defines a general base class of exceptions - the ConfigObjError. All the errors mentioned here are subclasses of this exception. The exceptions option can be used to define what ConfigObj does when it encounters the various possible errors. This enables you to make some errors fatal (raise an exception), some issue warnings (just print the error) or even ignore some errors altogether.
Before I describe the different errors - note that it is possible to make ConfigObj raise an error if the config file doesn't exist, using the fileerror option. It is also possible to cause an exception if we attempt to create an empty file with an invalid pathname, because the createempty option is set. In both these cases it is probably an IOError that will be raised - and in neither case will it be handled by this exception system.
You can either set a general error handling value, or set individual values for each of the different error types.
False means ignore errors.
None means print the error message.
True means raise the exception.
If you choose to set individual error values, the default value is None and you only need to pass in values for errors you wish to change. Instead of setting the exceptions option to a single value - you give it a dictionary. Each keyword is the name of the error and the value is either False, None, or True. Again, the configobj module has a default one defined that you can modify - it's called exceptionlist. Below are the error names, the actual exceptions raised when these errors occur (if you have it set to raise exceptions !) and a brief explanation of the error :
'badline' : FormatError
    The line doesn't appear to be a keyword-value, section marker or comment line.
The two methods associated with a configobj object for writing out config files - are the write and writein methods. Both methods write out the comments associated with any keywords they write, but are slightly different in the way they work.
The write method just creates a new config file. If the config has a filename set - config.filename - then any existing file will be overwritten. The write method uses the configspec (or a new configspec explicitly given to it) as an orderlist.
The writein method will read an existing file, and will only modify lines that it needs to. This means you can use a configobj to parse and modify a few keywords from a larger config file. This could be useful if your application is divided into several programs, each of which only use part of the config file.
Both methods use several of the attributes to determine their behaviour. Ones common to both of them are :
config.filename
config.write_full
config.configspec_only
config.force_return
config.newline
config.stringify
config.encoding
config.backup_encoding
For the write method, if no config.filename is defined (i.e. it is set to None) it will return the config file as list of lines. Whether they are terminated with a newline or not depends on the newline attribute. If you pass it a configspec it will use that, otherwise it will use config.configspec. If this is set to None it means 'write everything'. If you pass write a configspec it will use it, but it won't remove any configspec that was already in place. If you want a new configspec to affect all further write operations then use :
config.configspec = newconfigspec
config.write()
rather than
config.write(newconfigspec)
Alternatively, if you want to write out a large configobj as several files, you can use several configspecs - where each of the configspecs only contains a few keywords..... Turn write_full off first :
config.write_full = False
config.filename = filename1
config.write(configspec1)
config.filename = filename2
config.write(configspec2)
.
.
The write method adds the contents of config.initial_comment to the start of the file and config.final_comment to the end.
The writein method doesn't use configspec, but instead expects to be given an existing file to update the values (and comments) for. It goes through the file line by line and only changes lines where the line has a valid keyword and that keyword is in the configobj. If write_full is on, then it will write any missing keywords in at the end of the section and also add any missing sections at the end of the file. If configspec_only is set, then extra or missing keywords/sections will cause an error to be raised.
You can pass the infile in as a filename or a list of lines. By default it uses config.filename. If you pass in another filename it will modify that file, without changing config.filename. (Neither the original file nor the configobj attribute will be changed - config.filename will still be the same after the writein operation !!).
One use for this, is for a configobj to only represent a small subset of large config file - perhaps one program out of several that all share a config file. The program can have a configspec containing a few keywords that it needs to read :
config = ConfigObj(filename, configspec=['keyword1', 'keyword2', 'keyword3'])
This just parses a few values from the config file. If any of these values are modified, then they can be written back into the config file without affecting keywords that we didn't read. config.writein() does the job......
For more details on this subject see the Character Encoding section. Both write and writein now take an optional 'encoding' keyword argument. By default they use the config.encoding attribute to do output encoding. (But only if the attribute is set). You can force no encoding to be done by setting encoding=False, or pass in an alternative encoding. In addition the writein method uses this value to decode the file it reads, the write method will use it to decode any configspec you pass it.
Both write and writein will recognise and preserve the UTF8 'BOM' [9].
ConfigObj has support for python character encodings, both when reading files and when saving. When encodings are enabled, keywords and values will be unicode internally. Character encoding is handled by passing in the 'encoding' keyword. It can subsequently be altered as the 'encoding' attribute of the ConfigObj. :
config = ConfigObj(filename, encoding='UTF8')
print config.encoding
UTF8
If an encoding is specified then any config file or configspec will be turned into unicode objects internally using that encoding.
If the config file starts with the UTF8 BOM, then this will be preserved as the 'BOM' attribute of the ConfigObj and used if the write method is called. In future updates the other BOM will be supported. The UTF8 BOM is also recognised in configspecs and by the writein method.
When you use the write and writein methods it is possible to specify an alternative encoding for the output config file. If you don't specify one then the encoding attribute will be used instead (if any). If you specify False then no encoding is done.
When reading in with an encoding, the config file keywords and values are stored as unicode strings. It is possible that keywords or values that have been set may be normal byte strings instead of unicode strings. In order to encode these strings for outputting, ConfigObj must first turn them into unicode strings. The 'backup_encoding' option specifies which encoding is used to decode those strings. It does not directly alter them in the ConfigObj, but just decodes them for outputting. If this is set to None it reverts to using 'encoding'. If it is set to False then no conversion is done. The default is 'latin1'.
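The decode-then-encode dance described above can be sketched in modern Python, where the str/bytes split plays the role of the old unicode/byte-string split. The helper name and the fallback behaviour are assumptions for illustration, not ConfigObj's real code :

```python
def encode_value(value, encoding='utf-8', backup_encoding='latin1'):
    """Encode a stored value for output.

    Values that are already text are encoded directly; raw byte
    strings are first decoded with backup_encoding (ConfigObj's
    default is 'latin1'), then re-encoded with the output encoding.
    """
    if isinstance(value, bytes):
        value = value.decode(backup_encoding)   # the 'backup' decode step
    return value.encode(encoding)

text_val = 'caf\u00e9'      # already text/unicode
byte_val = b'caf\xe9'       # a latin1 byte string set by the program
```

Both values end up identically encoded in the output, which is the point of the backup_encoding option.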
The normal way to access values/sections stored in a ConfigObj is the dictionary syntax. Through a little magic you can also access and set values as attributes. Keywords in sections can also be accessed as attributes of the sections. This means the following constructs are all valid and equivalent :
print config['keyword']
print config.keyword
config['keyword'] = value
config.keyword = value
print config['section']
print config.section
print config['section']['keyword']
print config['section'].keyword
print config.section['keyword']
print config.section.keyword
config['section']['keyword'] = value
config['section'].keyword = value
config.section['keyword'] = value
config.section.keyword = value
The attribute method of access doesn't work for names with spaces in them, names that contain characters that are invalid in attribute names, or names that clash with existing attributes. As the ConfigObj already has attributes like 'default', 'errors', etc, the dictionary syntax is preferred and attribute access is for convenience only.
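The 'little magic' involved is essentially a __getattr__/__setattr__ pair that falls back to the dictionary. A minimal sketch of how attribute access can mirror item access (this is not ConfigObj's real implementation, and it deliberately ignores the name-clash problem described above) :

```python
class AttrDict(dict):
    """A dict whose string keys can also be read and set as attributes."""

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # route every attribute assignment into the dictionary
        self[name] = value

config = AttrDict()
config['keyword'] = 'value1'   # dictionary syntax...
config.keyword2 = 'value2'     # ...and attribute syntax
```

Note that in a real implementation names like 'keys' or 'update' would shadow dict methods - which is exactly why the docs recommend the dictionary syntax.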
ConfigObj supports interpolation from the 'default' section into any other value. This can be turned off by setting the 'interpolation' option/attribute to False. Each ConfigObj also has an 'interpolate' method for manually performing interpolation on a value, even when interpolation is switched off. The syntax is similar to the interpolation done by the ConfigParser module. It only applies to config files with sections.
You can specify any keyword from the 'default' section of the ConfigObj to be inserted into a value. The syntax is '%(keyword)s'. If your ConfigObj doesn't have a 'default' section, or that keyword isn't present, then no interpolation is done. If 'keyword' is in the 'default' section, then the value for that keyword will be used to replace '%(keyword)s' in your value. If the value it is replaced with also has an interpolation string, then that will be replaced too. This kind of 'nested interpolation' can be done up to a maximum of ten levels deep, which prevents infinite recursion. Unlike ConfigParser we don't raise an error after reaching this depth - the value is just left as it is.
If you want to create your own default section then you can simply assign to it. To add a set of defaults to any that might be in the config file already, you can use the update method (which will overwrite any existing ones !). Below is an example of string interpolation, which also illustrates attribute access. It shows what happens when an interpolated value itself contains a value to be interpolated (a), a recursive value (c and d), and a value that isn't there (%(not present)s) :
config = ConfigObj(flatfile=False)
config.filename = 'config.ini'
defaults = {
    'a' : 'hello - %(b)s',
    'b' : 'goodbye',
    'datadir' : 'c:\\test',
    'c' : '%(d)s',
    'd' : '%(c)s',
}
config['default'] = None    # create a new, blank, section
config['default'].update(defaults)
config['section'] = {
    'a' : '%(datadir)s\\some path\\file.py',
    'b' : 'Yo %(a)s, %(datadir)s, %(not present)s, recursion %(c)s',
}
print config.section.a
c:\test\some path\file.py
print config.section.b
Yo hello - goodbye, c:\test, %(not present)s, recursion %(c)s
config.interpolation = False
print config.section.a
%(datadir)s\some path\file.py
print config.section.b
Yo %(a)s, %(datadir)s, %(not present)s, recursion %(c)s
Values fetched from the default section will have interpolation done, unless interpolation is switched off. You may need to switch interpolation off to work on the raw values. If you still want to do interpolation, but with it off by default, you can use the 'interpolate' method. This receives a raw value and returns the string after any interpolation has been done :
newvalue = config.interpolate(rawvalue)
config = ConfigObj(filename)
config.interpolation = False    # could have done ConfigObj(filename, interpolation=False)
rawvalue = config['section']['keyword']
value = config.interpolate(rawvalue)
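The interpolation rules above - substitute from a defaults mapping, leave missing keywords in place, and give up silently after ten levels - can be sketched as a standalone function. This is an illustration of the algorithm, not ConfigObj's actual code :

```python
import re

_KEYCRE = re.compile(r'%\(([^)]+)\)s')

def interpolate(value, defaults, max_depth=10):
    """Replace '%(keyword)s' markers from `defaults`, up to 10 levels deep.

    Missing keywords are left in place, and - unlike ConfigParser -
    hitting the depth limit leaves the value as it is rather than
    raising an error.
    """
    for _ in range(max_depth):
        replaced = _KEYCRE.sub(
            lambda m: defaults.get(m.group(1), m.group(0)), value)
        if replaced == value:      # nothing left to interpolate
            break
        value = replaced
    return value

defaults = {'a': 'hello - %(b)s', 'b': 'goodbye', 'c': '%(d)s', 'd': '%(c)s'}
```

The recursive c/d pair simply stops changing once the depth limit is hit, instead of looping forever.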
We've already covered the methods write, writein, validate, parseconfigspec, stripconfigspec, makeconfigspec, interpolate and sortcomments. There are currently four other methods defined in ConfigObj. Additionally, any of the dictionary and caselessDict methods can be used on a configobj, its sections and the __comments__ dictionary. If it isn't None, the __configspec__ attribute will be a full ConfigObj.
Passed an infile - as a filename or a list of lines, it tests if a config file is a flatfile or if it has sections. Returns True for a flatfile or False if it appears to have sections. Also returns True if the file doesn't exist.
A section is defined as any line that has only whitespace at the start followed by [section name] and possibly whitespace and a comment afterwards. Section name can be in quotes if required (inside the square brackets). It currently doesn't check that the section names are correctly built - just if a line looks like a section name.
If you pass in None as the value of the 'flatfile' option when you create a configobj, it will attempt to work out for itself whether the config file is a flatfile or not. (If you give it a configspec it actually checks the configspec.) A more sensible approach is usually to parse the file and then check the config.flatfile attribute - but this method is available if you want to use it.
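The section test described above boils down to a per-line regular expression check. A sketch of the sort of test isflatfile performs - the regex here is an approximation for illustration, not ConfigObj's exact pattern :

```python
import re

# optional whitespace, '[section name]' (name possibly quoted),
# then optional whitespace and an optional comment
SECTION_LINE = re.compile(r"""^\s*\[\s*['"]?[^\]'"]+['"]?\s*\]\s*(\#.*)?$""")

def looks_flat(lines):
    """Return True if no line looks like a section marker."""
    return not any(SECTION_LINE.match(line) for line in lines)
```

As the docs note, this only checks whether a line *looks like* a section marker - it doesn't validate that the section name itself is correctly built.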
This acts like object.reset() - but only for missing values (where the keyword entry is None). If a different value was passed in for 'default' this method will have no effect. The following two examples have the same effect :
config = ConfigObj(infile, configspec=configspec, default='')

config = ConfigObj(infile, configspec=configspec)
config.verify()
Reset and verify are two similar methods. If you use a configspec to parse a config file then any missing keywords will be set to None (so long as you don't change the 'default' option, of course). This can be convenient for telling whether there were any missing values - but inconvenient if you need to do anything with them. The verify method goes through every value and sets any that are None to ''. You could use this method after checking the values for missing ones and deciding whether any that are missing are fatal. The reset method does the same thing for all keywords and comments. Both are called without arguments.
istrue is a shorthand way of having a 'boolean' type of value. You point it at a keyword, or a section and a keyword, and it tells you if the value there is True or not. The value must be a string. It is True if it is in the following list : ['yes', 'on', '1', 'true']. The test is case insensitive. If the value pointed to doesn't exist you will get a KeyError. If the value pointed to isn't a string (e.g. a list) you will get an AttributeError.
istrue = config.istrue(keyword, section=None)
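As a sketch, the whole of istrue's logic fits in a couple of lines - and the documented error behaviour falls out of plain dictionary and string operations (a missing key raises KeyError; calling .lower() on a list would raise AttributeError). This is an illustration, not ConfigObj's source :

```python
TRUE_VALUES = ['yes', 'on', '1', 'true']

def istrue(config, keyword, section=None):
    """Case-insensitive 'boolean' test of a string value."""
    container = config[section] if section is not None else config
    return container[keyword].lower() in TRUE_VALUES

config = {'flag': 'Yes', 'section': {'debug': 'off'}}
```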
All dictionary methods are usable on a configobj, its sections and the __comments__ dictionary. They are all fully documented in the Python documentation and include :
has_key, get, setdefault, update, pop, keys, popitem, items, values, iteritems, iterkeys, itervalues
Plus any others that I've forgotten. Any methods that return or remove an item will return or remove a whole section if you use them on a sectionfile configobj.
The following extra methods are available because ConfigObj is a subclass of caselessDict :
For changing the casing of a key. If a key exists that is a caseless match for 'item', it will be changed to 'item'. This is useful when you initially set up default keys but later want to preserve an alternative casing (e.g. if keys are later read from a config file and you want to write back out with the user's casing preserved).
(e.g. config.changekey('ITEM') will change config['item'] to config['ITEM'] )
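A sketch of what a caseless changekey has to do : find the stored key that matches ignoring case, then re-insert the value under the new casing. This stands in for the caseless module for illustration - it is not its actual code :

```python
def changekey(d, item):
    """Re-case an existing key of `d` to exactly `item`."""
    for key in list(d):
        if key.lower() == item.lower():
            d[item] = d.pop(key)   # same value, new casing
            return
    raise KeyError(item)

config = {'item': 'value'}
changekey(config, 'ITEM')          # config['item'] becomes config['ITEM']
```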
See the caseless module for more details.
This section contains a summary of all options, attributes and methods associated with a ConfigObj object.
config = ConfigObj(infile=[], indict={}, **keywargs)
infile can either be a filename or a config file as a list of lines.
options can be passed in as keyword arguments and/or as a dictionary of options. Any keyword arguments take precedence over options set in the dictionary.
There is a default dictionary, called pref_dict, with all the options in it - defined in the configobj module. You can get it with :
from configobj import pref_dict

# A dictionary containing all the keywords for ConfigObj, and their default values.
# You can modify this and pass it in as a dictionary instead of using individual keywords -
# you only need include keywords which you are changing from their defaults.
# Additional individual keyword arguments take precedence over any dictionary passed in.
pref_dict = {
    'configspec' : None,        # the keywords or sections/keywords to parse for;
                                # None means parse everything in the file.
                                # This also specifies the order to write out the file in.
    'recursivelist' : True,     # can list keywords have nested lists in them ?
    'fileerror' : False,        # if set to True, raises IOError if the specified
                                # filename doesn't exist
    'createempty' : False,      # if set to True, creates an empty file if the specified
                                # filename doesn't exist (which also checks that the
                                # filename is a possible, valid path)
    'newline' : False,          # if set to True, writes a newline with every line
                                # when force_return is on or infile was a list of lines
    'force_return' : False,     # if True the write and writein methods return lists
                                # of lines rather than writing to file
    'default' : None,           # the default value to set missing keywords to from the
                                # configspec; may be helpful to set to ''
    'flatfile' : None,          # is the config file a 'flatfile' (True) or does it have
                                # sections ? (False). None means it will work it out for
                                # itself. If the file doesn't exist and flatfile is set to
                                # None, it will create a flatfile (same as flatfile True)
    'exceptions' : None,        # what to do with errors - see the section on exceptions
    'divider' : ' = ',          # the divider to use when writing out keyword-value pairs
                                # (whitespace is optional !)
    'configspec_only' : False,  # expect the configspec to contain all keywords ?
                                # (extra/missing ones will raise an error)
    'write_full' : True,        # write out all the keywords/sections in the configobj -
                                # add ones missing from the configspec ?
    'stringify' : True,         # convert all non-string values to strings before writing
                                # (otherwise non-string values will raise a TypeError)
    'encoding' : None,          # input encoding and output encoding
    'backup_encoding' : 'latin1',  # used for string conversion (non unicode values)
                                   # when doing conversions
    'lists' : True,             # do we interpret lists in values ? (if set to False
                                # they are left as strings - only when reading)
    'interpolation' : True,     # is interpolation of values switched on ?
}
pref_dict contains all the default values; you only need to pass in values that you change.
The 'exceptions' keyword allows you to specify the behaviour for all errors when parsing - or to set different behaviour for different types of error. The three different possible options on encountering an error are :
1) ignore it - False
2) print the error - None
3) raise an exception - True
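The three behaviours map naturally onto a small dispatcher. A sketch of how a per-error setting like this can be acted on (the handle_error name is illustrative, not a ConfigObj internal) :

```python
def handle_error(exc, action):
    """Act on a parsing error according to its configured action.

    False -> ignore it, None -> print the message, True -> raise.
    """
    if action is False:
        return              # ignore it
    if action is None:
        print(exc)          # report and carry on
        return
    raise exc               # action is True

err = ValueError("duplicate keyword 'a'")
handle_error(err, False)    # silently ignored
handle_error(err, None)     # printed
```

Note that, as the docs warn, an ignored error still means the offending line wasn't processed.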
The configobj module has a dictionary of all the errors. Set the value for an error to True or False to modify the behaviour of ConfigObj when it encounters that error :
from configobj import exceptionlist

# The 'exceptions' keyword (relating to errors in parsing) :
# You can *either* pass in a single value for the 'exceptions' keyword -
# False, None or True (None is the default).
# False means ignore errors.
# None means print the error message.
# True means raise the exception.
# *or* you can set a different value for each possible error, by modifying
# the following exceptionlist dictionary - and passing that in instead.
# None is the default for every error - you only need pass in a value for
# ones you wish to change.
# Even if an error is ignored - that line won't have been processed !
# Note - there are a few errors prior to parsing - like passing in a badly
# built configspec - that won't be trapped using this method.
# See the __init__ function for the list of exceptions that will be raised
# for each error, and the corresponding error message.

Below is a list of the actual exceptions raised when these errors occur (if you have it set to raise exceptions !). The exceptions are created inside the configobj module :
'duplicate' : DuplicateKey,
'badquote' : BadlyQuoted,
'badline' : FormatError,
'badlist' : FormatError,
'badconspec' : badconspec,
'conspecerror' : conspecerror,
'emptyerror' : EmptyError,
'badsection' : BadSection
They are all subclasses of ConfigObjError.
ConfigObj is a subclass of dict and caselessDict. This means that any methods associated with dictionaries and caselessDict will work on a configobj. Don't forget that methods like pop, get, etc. on a sectionfile configobj will remove or return a whole section - not just a single value. In addition, every section is itself a caselessDict - and you can use methods like has_key and update on these too.
Takes the members of this ConfigObj and writes them out as a config file.
If config.filename is set, it overwrites the existing file. The order can be set using a sequence (of keywords) passed in - the configspec acts as an orderlist. Comments are preserved in the created config file.
Any values in config that are None (weren't read in from the original file) will be output as ''.
If config.newline is set to True, terminates lines with a '\n'.
If no config.filename is set or config.force_return is set, the config file is returned as a list of lines.
A configspec acts as an orderlist. It tells ConfigObj what order to write out values. If you don't explicitly pass one in, but one is set for the configobj (config.configspec) - that will be used. If a configspec is passed in to this method then the old one is still preserved. The configspec can be a list of lines, filename, or StringIO instance.
If write_full is off (default is on) then only values in the configspec will be written - to allow you to write out a subset of a configobj, otherwise extra values and sections not in the configspec are written out at the end. If 'configspec_only' is set, it means you expect the configobj to contain all the values in the configspec - and no extra. Extra values and missing values will then raise errors, depending on your error settings.
If encoding is set then that is used for both decoding the configspec that was passed in (if any) and encoding the output. If encoding is left as None then self.encoding is used. If encoding is False then the system default will be used. If you have an encoding specified and any of the stored keywords/values are strings, ConfigObj uses 'config.backup_encoding' to turn the strings into unicode before encoding for the output.
Writes the values back into the file - only overwriting lines that it has keywords for.
Useful if you have only parsed a few keywords from a file and want to only update those values.
infile can be a list of lines, StringIO instance, or a filename. By default it will use self.filename, if there is one set. Even if a new filename is passed in, the old one is preserved.
If write_full is off (default is on) then only values in the infile will be written - no extra values will be added. By default extra values and sections (not in infile) are written out at the end.
Duplicate keywords or sections in the infile will raise errors according to the error settings.
If encoding is set then that is used for both decoding the infile (if any) and encoding the output. If encoding is left as None then self.encoding is used. If encoding is False then the system default will be used. If you have an encoding specified and any of the stored keywords/values are strings, ConfigObj uses self.backup_encoding to turn the strings into unicode before encoding for the output.
configspec_only is not checked with writein.
Passed an infile - as a filename or a list of lines, it tests if a config file is a flatfile or if it has sections. Returns True for a flatfile or False if it appears to have sections.
Returns True if the file doesn't exist.
A section is defined as any line that has only whitespace at the start followed by [section name] and possibly whitespace and a comment afterwards. It doesn't properly check that the section names are correctly built - just if a line looks like a section name.
It can check a StringIO instance. It can also check a dictionary - a dictionary must either have every entry as a dictionary (section file) or every entry a string or list (flatfile). If the dictionary has a mixture (or other types) a TypeError will be raised. If the dictionary is empty, this returns True.
This acts like object.reset() - but only for missing values (where the keyword entry is None).
If a different value was passed in for 'default' this will have no effect.
configobj = ConfigObj(infile, configspec=configspec, default='')

is equivalent to :

configobj = ConfigObj(infile, configspec=configspec)
configobj.verify()
Return a copy of the configobj as a configspec. This will contain every section and every key - and can be used for reading and writing. It is especially useful the first time you create a config file - it can be used to test later config files for completeness (using configspec_only).
The ordering is random. The configspec is returned as a list - you'll have to write it to file yourself.
configobj.newline is observed but the output of this function is not encoded.
It doesn't set the current configspec for the configobj. This can be done with :
configobj.configspec = configobj.makeconfigspec()
This takes a configspec and if necessary parses it as a config file and returns the stripped configspec. If the configspec is invalid, or just a straight list (with no validation information) it leaves it as it is.
Returns (newconfig, stripped_config), where newconfig is a ConfigObj representing the configspec.
If encoding is set then that is used for decoding the configspec. If encoding is left as None then self.encoding is used. If encoding is False then the system default will be used (i.e. no decoding takes place).
Given a validator object, walk self.__configspec__ and validate.
If self.__configspec__ is None then we can't validate, and this method returns None.
If we validate successfully we return True - otherwise we return a list of failed entries. For a sectionfile these will be (section, keyword) tuples.
If the test raises an exception, this is a fail. Any missing keywords (that raise a KeyError) will also be marked as fails. Only members with entries in self.__configspec__ are tested.
See the documentation and examples in validate.py to see how to write your tests.
Interpolate a given value with entries from the 'default' section. This can be used even if interpolation is switched off. If there is no interpolation to be done, or the specified key is missing, the value is returned unchanged. There is a maximum interpolation depth of 10, after which the value is returned with no more interpolation done.
Code borrowed from ConfigParser
A configobj has various attributes. Some of these relate to the internal state of the object - these are read only. The rest relate to the options passed in when the configobj is created. These can be changed and affect config files that are written out.
ConfigObj comes in a couple of different packages. In time that may expand to include a proper distutils distribution with a self-installing Windows package. It's more likely, though, that you will want to bundle ConfigObj with your own programs - in which case just including the modules in your distribution is most straightforward.
It is currently available in three forms :
ConfigObj is dependent on the caseless and listquote modules. These modules stand in their own right and may be useful individually. It may however be simpler to only have one file to include in your distributions, in which case you can use fullconfigobj.py
Both the configobj and fullconfigobj distributions include this file, as reST source (configobj.txt) and html (configobj.html). They should also have a test subdirectory. At the moment that contains 6 files - 3 test ini files and 3 test configspecs. configobj.zip should have a directory called htmldoc. This is an epydoc-generated set of docs for ConfigObj. I haven't yet made the docstrings fully 'epydoc friendly', but it's still helpful. You can see them online at ConfigObj API Docs.
The pythonutils package contains everything that the config.zip distribution does, plus a couple of extra small modules. They are packaged together for convenience. It comes with a distutils setup.py and is available as a Windows installer file. If you install the pythonutils package then ConfigObj will be placed in your site-packages folder and be on sys.path. If you want to distribute modules that use ConfigObj, without having to distribute ConfigObj itself, it is simplest to suggest that your users install the pythonutils package. See the pythonutils package.
In terms of 'installing' ConfigObj, the simplest thing is still to include configobj.py or fullconfigobj.py in the same directory as the script calling it. If you use ConfigObj regularly then it may be worthwhile putting it somewhere in your normal sys.path - e.g. the site-packages subdirectory of your Python installation for Windows. Once I've done a setup.py this will be more straightforward, of course. If you use the smaller configobj.py then you will also need caseless.py and listquote.py within 'reach' of the calling script. If you install the pythonutils package, then ConfigObj will automatically be available to you.
ConfigObj will attempt to import and use the psyco module. If available, this will speed everything up. To disable this, set the PSYCOON variable at the start of configobj.py to 0. Because fullconfigobj is built automagically from the three modules (using includer.py) it may have this chunk of code three times ! This won't do any harm. (includer can be useful for all sorts of things, by the way.)
This document doesn't attempt to be documentation for the caseless and listquote modules, but they are an integral part of the functioning of ConfigObj. Both modules contain pretty full documentation in the source code and are well worth a look if you think you can use them.
This provides two new classes and a sort function.
caseless can be got from the Downloading section or at
Has functions for quoting and unquoting strings and also for turning a text representation of a list back into a list. (And also for simple reading and writing of CSV files)
listquote can be found at
These are in no particular order by the way. If you have any more suggestions, or ideas as to which should be priorities, then please let me know. Some of these have question marks at the end. This means the idea is a suggestion, but may not actually be desirable - or worth the effort.
See also the TODO/ISSUES sections in caseless.py and listquote.py. If any of these issues particularly matter to you then let me know..... (or if you have any other request/suggestions).
Also in no particular order.
2005/03/01 Version 3.3.0
- Requires listquote 1.2.0 - which is improved/optimised.
- Requires caseless 2.2.0, which has support for unicode.
- Added support for validation using the configspec, done with an external validator. (Validator class created with the help of Mark Andrews.) This means added methods/attributes : the parseconfigspec method, the stripconfigspec method, the validate method, the __configspec__ attribute and the BOM attribute.
- Experimental unicode internally. 'encoding' and 'backup_encoding' keywords added.
- 'lists' keyword added - can turn off list handling (lists are left as strings).
- A ConfigObj can be created by passing in a dictionary.
- Added a __repr__ method for the ConfigObj.
- configspec can now be a filename (or StringIO instance) - including for the write method.
- Now raises a TypeError rather than a KeyError if you pass in invalid options.
- writein can now return a config file as a list of lines if no filename is set.
- Duplicate keys/sections in writein now raise 'duplicate' errors, rather than 'conspecerror'.
- String interpolation from the 'default' section - using the '%(keyword)s' format - similar to ConfigParser.
- Attribute access as well as dictionary syntax.
- Coerce sections created as dictionaries to caselessDict. (bug fix)
- Escaped '&mjf-lf;' and '&mjf-quot;' in unquoted values are converted. (bug fix)
- Bug fixed in configspec with section files. (bug fix)
- Bug fix in reporting of duplicate sections with configspec. (bug fix)
- Fixed bugs in sectionfiles with 'configspec_only' (errors in the empty last section would be missed). (bug fix)
- Bug fix in __buildconfigspec. (bug fix)
- Improved support for '/*... */' in the writein method. (bug fix)
- Fixed typo in verify and reset methods. (bug fix)
- configspec is no longer destroyed for flatfiles. (bug fix)
- Missing keys and extra keys errors were the wrong way round in the write method. (bug fix)
- Plus other minor bugfixes, simplifications and optimisations.

2004/12/03 Version 3.2.3
- Fixed bug in creating sectionfiles from scratch (__comments__ KeyError).
- Tuple entries are written out as lists rather than being converted to strings.
- When an exception is raised, it's no longer printed first.
- Added the istrue method.
- Changed the license to the BSD license.

09-09-04 Version 3.2.2
- Defined basestring for versions of Python prior to 2.3. (Thanks to Imre Andras Robert for reporting this.)

30-08-04 Version 3.2.1
- Fixed a bug causing the dictionary of options to be lost... oops.

16-08-04 Version 3.2.0
- Removed the charmap 'function' that eliminated unicode errors. Unicode problems will now raise an exception rather than just stamping out unicode characters from latin1.
- Spelling in docs corrected.

20-06-04 Version 3.1.0
- Now uses the listquote module instead of listparse (which now does all the line handling - not just lists). This allows lists without the '[..]' and also allows lists using '(..)' - by default lists are now written out without the enclosing '[..]'.
- Fixed bug in __repr__ - oops !

30-05-04 Version 3.0.2
- Added support for 'C-style' comments - whole sections commented out using '/*' and '*/' - see DOCS. This allows embedding other data in a config file.

27-05-04 Version 3.0.1
- Slight change to the parameters you pass in when you create a new configobj. Now uses infile=None, indict=None and creates the [] and {} inside __init__ - will make no difference to anyone !
- __setitem__ calls caselessDict.__setitem__ and so is simpler and less dependent on the details of the caselessDict implementation.
- Also added the clear method in caseless and __repr__ here. Does a new version of a dependent module warrant a new version number here ?

24-05-04 Version 3.0.0
- Several incompatible changes - another major overhaul and change. Lots of changes.
- Comments are also saved.
- stout was removed - use the new version of StandOut instead, which gives a greater degree of control - or the new exception handling stuff.
- A few new methods added.
- Charmap is now incorporated into ConfigObj.

30-04-04 Version 2.1.2
- No longer crashes with non-string keys. Turns all keys into strings first.

29-04-04 Version 2.1.1
- Changed to use the slightly updated version of listparse. I've switched 'escaping' of elements off. This avoids some confusion and allows you to use the '\' character in config files. '\n' in configobj values is now escaped as '&mjf-lf;' when written out.

07-04-04 Version 2.1.0
- Made stout and recursivelist into keyword arguments.
- Added a couple of extra keyword arguments to do with specifying filenames that don't exist.

15-03-04 Version 2.0.0 beta
- Re-written to subclass dict. My first forays into inheritance and operator overloading. The config object now behaves like a dictionary. I've completely broken the interface - but I don't think anyone was really using it anyway. This new version is much more 'classy' ;-)

13-03-04 Version 1.0.7
- Made the default value for infile an empty list - I seem to be using ConfigObj to create new (empty) configobjs more often.
- Added a new optional parameter to the write and update methods - newline - to decide if you want to terminate every line in your returned config file with a newline. This resets the behaviour changed in 1.0.6 - bad I know - but I realised I had different defaults for the write and update methods.
- Fixed an obscure bug causing empty lists to be wrongly written out sometimes. In the process vastly simplified the write and update methods - making debugging and changing much simpler.

29-02-04 Version 1.0.6
- Added a default standard output object - to save you passing one in when you don't really need one.
- The write method now terminates lines with a '\n'.

29-01-04 Version 1.0.5
- ConfigObj originated in a set of functions for reading config files in the atlantibots project. These were written by Rob McNeur. I'm not sure if any trace of them remains in the current code - but that's where it all started.
ConfigObj, and related files, are licensed under the BSD license. This is a very unrestrictive license - but it comes with the usual disclaimer. This is free software - test it, break it, just don't blame me if it eats your data ! (If it does though, let me know and I'll try to fix it so it doesn't happen to anyone else :-)
Copyright (c) 2004 & 2005, Michael Foord. Neither the name of Michael Foord nor the name of Voidspace may be used to endorse or promote products derived from this software without specific prior written permission. You should also be able to find a copy of this license at : BSD License
If you use this program, please help Sponsor Voidspace, to help defray the costs of hosting Voidspace. Even $1 or $2 is helpful !
11 Comments
ConfigObj will preserve any comments at the start and end of the config file. These will then be written back out when the write method is used. They are stored as lists of lines in the config.initial_comment and config.final_comment attributes.
Comments on the same line as a keyword are also preserved. It stores them in a dictionary of comments : config.__comments__
The __comments__ dictionary is also a caselessDict (case insensitive) and follows the structure of the configobj. When you parse a file, every keyword will have an entry in the __comments__ dictionary, even if it's blank. If the configobj is a sectionfile, then each section in __comments__ will also be a caselessDict - and each keyword will have an entry in that.
i.e. for config['keyword1'] the matching comment is stored at config.__comments__['keyword1']
for config['section1']['keyword1'] the matching comment is stored at config.__comments__['section1']['keyword1']
The comments that are preserved in this way are comments on the same line as the keyword :
'keyword1' = 'value1' # this comment is preserved
The reason for this is so that you can have a comment associated with each keyword explaining what it is for. When the write and writein methods are used, this comment will be preserved.
e.g. 'tablesize' = '32k' # the amount of memory to allocate per table. 32k is recommended
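A sketch of how a parser can keep a parallel comments dictionary while reading 'keyword = value # comment' lines. The splitting here is naive - it ignores quoting, so a '#' inside a quoted value would break it - and is only for illustration, not ConfigObj's real parser :

```python
def parse_with_comments(lines):
    """Return (values, comments); every keyword gets a comments entry."""
    values, comments = {}, {}
    for line in lines:
        body, _, comment = line.partition('#')
        if '=' not in body:
            continue                       # not a keyword line
        key, _, value = body.partition('=')
        key = key.strip()
        values[key] = value.strip()
        comments[key] = comment.strip()    # '' when there was no comment
    return values, comments

values, comments = parse_with_comments([
    'tablesize = 32k  # memory to allocate per table',
    'name = fred',
])
```

Note that every keyword gets an entry in the comments dictionary, even when it's blank - which is the invariant sortcomments maintains.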
One thing to be careful of : it is easy to create additional entries in a configobj without creating corresponding entries in the __comments__ dictionary. The write and writein methods use a method called sortcomments to ensure that every keyword at least has a blank entry. If you intend to do anything with the comments, it's worth either calling sortcomments or making careful use of the has_key method. sortcomments also turns any occurrence of '\n' in the comments into '&mjf-lf;' - if they were written out as '\n' they would cause an invalid line in the config file.
The sortcomments method is called without arguments : config.sortcomments()
The write method adds the contents of config.initial_comment to the start of the file and config.final_comment to the end.
See the note in Config Files about multi-line comments (/*...*/). Unless they are at the start or end of a file, they aren't saved by ConfigObj. They are, however, preserved in a file if you use the writein method.
#include <fcntl.h>
The integer expression formed from one or more of these constants determines the type of reading or writing operations permitted. It is formed by combining one or more constants with a translation-mode constant.
The file constants are as follows:

_O_APPEND: Repositions the file pointer to the end of the file before every write operation.

_O_CREAT: Creates and opens a new file for writing; this has no effect if the file specified by filename exists.

_O_EXCL: Returns an error value if the file specified by filename exists. Only applies when used with _O_CREAT.

_O_RDONLY: Opens file for reading only; if this flag is given, neither _O_RDWR nor _O_WRONLY can be given.

_O_RDWR: Opens file for both reading and writing; if this flag is given, neither _O_RDONLY nor _O_WRONLY can be given.

_O_TRUNC: Opens and truncates an existing file to zero length; the file must have write permission. The contents of the file are destroyed. If this flag is given, you cannot specify _O_RDONLY.

_O_WRONLY: Opens file for writing only; if this flag is given, neither _O_RDONLY nor _O_RDWR can be given.
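For illustration, here is a minimal C sketch of how such an oflag value is composed by OR-ing constants together (optionally combined with a translation-mode constant such as _O_TEXT or _O_BINARY). The numeric values mirror the Windows <fcntl.h> definitions but are restated locally so the sketch stands alone on any platform:

```c
#include <assert.h>

/* Illustrative restatements of the Windows <fcntl.h> file constants;
   on Windows these come from the header itself. */
#define _O_RDONLY 0x0000
#define _O_WRONLY 0x0001
#define _O_RDWR   0x0002
#define _O_APPEND 0x0008
#define _O_CREAT  0x0100
#define _O_TRUNC  0x0200
#define _O_EXCL   0x0400

/* "Create a brand-new file for writing, and fail if it already exists":
   combine _O_WRONLY with _O_CREAT and _O_EXCL. */
int create_new_oflag(void) {
    return _O_WRONLY | _O_CREAT | _O_EXCL;
}
```

The resulting integer (here 0x501) is what would be passed as the oflag argument of _open or _sopen_s.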
This small example shows how to define a mesh type, add vertices and faces to it, and write the result out.

For each program the first step is to define your type MyMesh. OpenMesh supports general polygonal meshes (faces are polygons with a varying number of vertices) as well as specialized triangle meshes (all faces are triangles). In this example we want to build a cube from six quadrangles; therefore we choose the polygonal mesh.
OpenMesh also supports different mesh kernels, specifying how all the vertices, edges, and faces are stored internally (see also Mesh Kernels). However, the storage must provide an array-like interface. For the tutorial we use the supplied ArrayKernel. The predefined combinations of TriMesh/PolyMesh and the kernel are contained in OpenMesh/src/OpenMesh/Core/Mesh; we use PolyMesh_ArrayKernelT.
Now since we have declared our type MyMesh, we only have to add 8 vertices and 6 quadrangles to build a cube. Adding a vertex is done using the add_vertex method. It gets a coordinate and returns a handle to the inserted vertex. We store all handles in an array, since we need them for specifying the faces.
In order to add a face to the mesh, we have to build a vector holding the handles to the face's vertices. This vector is passed to the add_face method; the first face of the cube is created from the first four vertices this way.
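The calling pattern (collect vertex handles in a vector, then pass the vector to add_face) can be sketched as follows. A minimal stand-in mesh type is used so the sketch compiles without OpenMesh installed; only the calling pattern mirrors the tutorial's API, and in real code MyMesh comes from PolyMesh_ArrayKernelT and add_vertex takes a Point coordinate:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal stand-in so the sketch is self-contained; NOT the OpenMesh API.
struct MyMesh {
  using VertexHandle = std::size_t;
  std::size_t n_vertices = 0;
  std::vector<std::vector<VertexHandle>> faces;
  VertexHandle add_vertex() { return n_vertices++; }
  void add_face(const std::vector<VertexHandle>& vhs) { faces.push_back(vhs); }
};

// Build a face from the first four of the eight cube vertices.
MyMesh make_cube_face() {
  MyMesh mesh;
  std::vector<MyMesh::VertexHandle> vhandle(8);
  for (std::size_t i = 0; i < 8; ++i)
    vhandle[i] = mesh.add_vertex();  // one handle per cube corner

  std::vector<MyMesh::VertexHandle> face_vhandles;
  face_vhandles.push_back(vhandle[0]);
  face_vhandles.push_back(vhandle[1]);
  face_vhandles.push_back(vhandle[2]);
  face_vhandles.push_back(vhandle[3]);
  mesh.add_face(face_vhandles);  // counter-clockwise as seen from outside
  return mesh;
}
```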
The orientation of the face is defined by the order in which the vertices are given: if you look at the front-facing side of the polygon, the vertices are in counter-clockwise order.
After creating all six faces, we want to write the resulting mesh to standard output. OpenMesh provides some basic input/output methods in the namespace OpenMesh::IO. To use the IO facility of OpenMesh, make sure that MeshIO.hh is included first.
FOSSASIA's Open Event Server project uses a certain set of functions to resize an image from its original size to, for example, a thumbnail, an icon, or a larger image. How do we test this resizing functionality in the Open Event Server project? To test the resizing of image dimensions, we need to verify that the resized image's dimensions are the same as the dimensions provided for the resize. For example, in this function, we provide the URL for the image that we received, and it creates and saves a resized version.
def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size,
                              upload_path, ext='jpg', remove_after_upload=False,
                              resize=True):
    """
    Create and save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))
        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) \
        + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url
In this function, we send the image URL, the width and height to resize to, and the aspect ratio as either True or False, along with the folder to save to. For this blog, we are going to assume the aspect ratio is False, which means that we don't maintain the aspect ratio while resizing. So, given the above as parameters, we get the URL for the resized image that is saved.
To test whether it has been resized to the correct dimensions, we use Pillow or, as it is popularly known, PIL. We write a separate function named getsizes(), which gets the image file as a parameter. Then, using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has an attribute size which returns (width, height). So from this function, we return the size attribute. Following is the code:
def getsizes(self, file):
    # get file size *and* image size (None if not known)
    im = Image.open(file)
    return im.size
As we have this function, it's time to look into the unit testing function. In the unit test we set a dummy width and height that we want to resize to, and set the aspect ratio to False as discussed above. This helps us test that both width and height are properly resized. We are using a Creative Commons licensed image for resizing.
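A minimal, self-contained version of that dimension check can be sketched with Pillow alone. All names and values here are hypothetical, and the project's Flask app, S3 upload helpers, and network fetch are deliberately left out so the round trip can run anywhere:

```python
import os
import tempfile

from PIL import Image

def create_save_resized_image(src_path, basewidth, maintain_aspect,
                              height_size, dst_path):
    """Stripped-down stand-in for the project function shown above."""
    img = Image.open(src_path).convert('RGB')
    if maintain_aspect:
        height_size = int(img.size[1] * basewidth / float(img.size[0]))
    img.resize((basewidth, height_size)).save(dst_path)
    return dst_path

def getsizes(path):
    return Image.open(path).size  # (width, height)

tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'src.jpg')
Image.new('RGB', (1024, 768), 'white').save(src)    # dummy source image

width, height = 500, 200                            # dummy target dimensions
resized = create_save_resized_image(src, width, False, height,
                                    os.path.join(tmpdir, 'resized.jpg'))
resized_width, resized_height = getsizes(resized)
assert (resized_width, resized_height) == (width, height)
```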
In the unit test, we receive the URL for the resized image from create_save_resized_image. Since we have written all the unit tests for local settings, we get a URL with localhost as the server. However, we don't have the server running, so we can't access the image through the URL. So we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We now make an assertion to check whether the width that we wanted to resize to is equal to the actual width of the resized image, and we make the same check with the height as well. If both match, then the resizing function has worked perfectly:

self.assertTrue(os.path.exists(resized_image_file))
self.assertEqual(resized_width, width)
self.assertEqual(resized_height, height)
In Open Event Orga Server, we use this resize function to create three resized images in various modules, such as events, users, etc. The three sizes are named Large, Thumbnail and Icon. Depending on which is more suitable, we use that one, avoiding the need to load a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and run a loop to check the sizes for all of them. Here is the code:
def test_create_save_image_sizes(self):
    with app.test_request_context():
        image_url_test = ''
        image_sizes_type = "event"
        width_large = 1300
        width_thumbnail = 500
        width_icon = 75
        image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)
        resized_image_url = image_sizes['original_image_url']
        resized_image_url_large = image_sizes['large_image_url']
        resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
        resized_image_url_icon = image_sizes['icon_image_url']
        resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
        resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
        resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
        resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]
        resized_width_large, _ = self.getsizes(resized_image_file_large)
        resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
        resized_width_icon, _ = self.getsizes(resized_image_file_icon)
        self.assertTrue(os.path.exists(resized_image_file))
        self.assertEqual(resized_width_large, width_large)
        self.assertEqual(resized_width_thumbnail, width_thumbnail)
        self.assertEqual(resized_width_icon, width_icon)
Issues
ZF-12173: Zend_Form and Zend_Form_Element prefix paths are not prefix agnostic (namespaces)
Description
I've migrated my personal library to namespaces and I've detected that namespaced prefix paths are not properly handled by the plugin loader, because both Zend_Form and Zend_Form_Element add "_$type" as a suffix to the plugin path, when they should detect whether the namespace separator is _ or \
Posted by Antonio J García Lagar (ajgarlag) on 2012-04-26T07:50:46.000+0000
I've submitted two diff files, one for tests and one for the fix itself. Note that this issue depends on ZF-11330, so the fix for ZF-11330 should be applied in order to make this one work.
Posted by Frank Brückner (frosch) on 2012-04-26T08:18:51.000+0000
Hi Antonio, your patch does not include:
Posted by Antonio J García Lagar (ajgarlag) on 2012-04-26T08:51:48.000+0000
I've fixed the Zend_Form_Element_File and Zend_Form_Element_Captcha addPrefixPath methods too.
Posted by Rob Allen (rob) on 2012-05-31T19:29:06.000+0000
Fixed in SVN r24848. | http://framework.zend.com/issues/browse/ZF-12173?focusedCommentId=50374&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-27 | refinedweb | 180 | 61.16 |
Jim Bosch <talljimbo <at> gmail.com> writes:

You're right, I got it wrong when I reported the error. The problem is that when I print mymodule.X.Y I am getting <class 'mymodule.Y'>, which is wrong IMO.

> I do get mymodule.X.__module__ == 'mymodule' and mymodule.X.Y.__module__ == 'mymodule', but that's also correct as far as mirroring the pure-python inner class result.

Well, I hoped to mimic some kind of subnamespace with scope. Thus the module should be mymodule.X or the name should be X.Y.

> Can you show an example?

I am currently tweaking this in Python as well (post loading) like this: mymodule.X.Y.__name__ = 'X.Y'. It's still not completely satisfactory, since help(mymodule.X) does not do the right job, but at least it does not show Y as 'mymodule.Y'.

Gennadiy
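The naming behaviour under discussion can be reproduced with plain Python classes (no Boost.Python needed). Note that on modern Python (3.3+) the __qualname__ attribute already records the dotted name the poster wanted:

```python
# A nested class's __name__ holds only the bare name; the outer scope is
# not included, which is what the post is complaining about.
class X:
    class Y:
        pass

assert X.Y.__name__ == 'Y'
assert X.Y.__qualname__ == 'X.Y'   # Python 3.3+: the dotted name lives here

# The "post loading" tweak from the post: patch __name__ by hand.
X.Y.__name__ = 'X.Y'
assert X.Y.__name__ == 'X.Y'
```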
#include <deal.II/matrix_free/tensor_product_kernels.h>
Internal evaluator for 1d-3d shape function using the tensor product form of the basis functions.
This class implements a different approach to the symmetric case for values, gradients, and Hessians also treated with the above functions: It is possible to reduce the cost per dimension from N^2 to N^2/2, where N is the number of 1D dofs (there are only N^2/2 different entries in the shape matrix, so this is plausible). The approach is based on the idea of applying the operator on the even and odd part of the input vectors separately, given that the shape functions evaluated on quadrature points are symmetric. This method is presented e.g. in the book "Implementing Spectral Methods for Partial Differential Equations" by David A. Kopriva, Springer, 2009, section 3.5.3 (Even-Odd-Decomposition). Even though the experiments in the book say that the method is not efficient for N<20, it is more efficient in the context where the loop bounds are compile-time constants (templates).
Definition at line 1455 of file tensor_product_kernels.h.
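The even-odd idea can be illustrated with a small standalone kernel: split the input into its even and odd parts, apply two half-sized matrices, and recombine. The sketch below only mirrors the algebra for a single 1D product; deal.II's implementation is templated on compile-time sizes and vectorized:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Standalone illustration of the even-odd decomposition for y = S * x,
// where S has the point symmetry S(i,j) = S(n-1-i, n-1-j) typical of 1D
// shape values at symmetric quadrature points. Each output entry then
// costs two half-sized dot products instead of one full-sized one.
std::vector<double> apply_even_odd(const std::vector<std::vector<double>> &S,
                                   const std::vector<double> &x) {
  const std::size_t n = x.size();
  const std::size_t nh = n / 2;  // n is assumed even for brevity
  std::vector<double> xe(nh), xo(nh), ye(nh, 0.), yo(nh, 0.), y(n);
  for (std::size_t i = 0; i < nh; ++i) {
    xe[i] = x[i] + x[n - 1 - i];  // even part of the input
    xo[i] = x[i] - x[n - 1 - i];  // odd part of the input
  }
  for (std::size_t i = 0; i < nh; ++i)
    for (std::size_t j = 0; j < nh; ++j) {
      ye[i] += 0.5 * (S[i][j] + S[i][n - 1 - j]) * xe[j];
      yo[i] += 0.5 * (S[i][j] - S[i][n - 1 - j]) * xo[j];
    }
  for (std::size_t i = 0; i < nh; ++i) {
    y[i] = ye[i] + yo[i];
    y[n - 1 - i] = ye[i] - yo[i];  // uses the symmetry of S
  }
  return y;
}
```

Recombining ye + yo and ye - yo reproduces the rows i and n-1-i of the full product exactly, which is why only N^2/2 distinct matrix entries are needed.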
Empty constructor. Does nothing. Be careful when using 'values' and related methods because they need to be filled with the other constructor passing in at least an array for the values.
Definition at line 1472 of file tensor_product_kernels.h.
Constructor, taking the data from ShapeInfo (using the even-odd variants stored there)
Definition at line 1482 of file tensor_product_kernels.h.
Constructor, taking the data from ShapeInfo (using the even-odd variants stored there)
Definition at line 1494 of file tensor_product_kernels.h.
This function applies the tensor product kernel, corresponding to a multiplication of 1D stripes, along the given direction of the tensor data in the input array. This function allows the in and out arrays to alias for the case n_rows == n_columns, i.e., it is safe to perform the contraction in place where in and out point to the same address. For the case n_rows != n_columns, the output is only correct if one_line is set to true.
Definition at line 1628 of file tensor_product_kernels.h. | https://www.dealii.org/developer/doxygen/deal.II/structinternal_1_1EvaluatorTensorProduct_3_01evaluate__evenodd_00_01dim_00_01n__rows_00_01n__col18a610eafe2a5b8d6bb220ce6d41b2a8.html | CC-MAIN-2019-47 | refinedweb | 352 | 53 |
I was trying to find an application for my need and found one Java application which dumps data (in a CSV file) to S3 on a daily basis. This application creates a folder in S3 based on the system date (in MM-DD-YYYY format) and then adds files to the folder created.
Now I want to read those files from S3 at a regular interval, like:
val fileFromS3= sc.textFile("s3a://digital/MM-DD-YYYY/abc.csv")
Now the script should replace 'MM-DD-YYYY' with the system date.
Please suggest possible solution or any other way to achieve this.
In order to get this done, you first need some imports:

import java.util.Calendar
import java.text.SimpleDateFormat
import java.util.Date

Get the current time and use a formatter to turn it into a date string:

val now = Calendar.getInstance().getTime()
val formatter = new SimpleDateFormat("MM-dd-yyyy")
val dateAsString = formatter.format(now)

Then you can load your resource with the dateAsString value using string interpolation:

val fileFromS3 = sc.textFile(s"s3a://digital/${dateAsString}/abc.csv")
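As a quick cross-check of the folder layout, the same MM-DD-YYYY string can be produced outside the JVM, e.g. from the shell (the bucket and file name below are the question's examples):

```shell
# Build today's folder name in the same MM-DD-YYYY layout and print the
# resulting S3 path.
folder=$(date +%m-%d-%Y)
s3_path="s3a://digital/${folder}/abc.csv"
echo "$s3_path"
```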
In January I talked about using Custom Prerequisites to add a little more specialization to the VSTO ClickOnce experience. I didn't go into much depth on this, and I've since seen a few comments crop up about them in some of the VSTO community spaces I watch. So I'm going to go through a basic walkthrough of how you might go about doing this.
So first you need to create your custom prerequisite installer. I believe any executable can be used, but for this post I'm going to cover a method where instead of using an "executable" I'm going to build an MSI (Windows Installer). I might come back to the executable option in another post, but with that, let's move on.
Creating a Custom Prerequisite MSI that does something:
For my first try I actually created a custom .NET DLL that did my prerequisite step (in which I just display a Message Box to ensure it was working). To do this I created a C# class library project and added a new "Installer Class" to the solution. Here's what the code for that class looks like:
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

namespace CustomAction1
{
    [RunInstaller(true)]
    public partial class Installer1 : Installer
    {
        public Installer1()
        {
            InitializeComponent();
        }

        public override void Install(IDictionary stateSaver)
        {
            base.Install(stateSaver);
            string messageString = "Message Strings: \n";
            messageString = messageString + "Message:";
            messageString = messageString + this.Context.Parameters["Message"];
            messageString = messageString + "\n" + "OtherMessage:";
            messageString = messageString + this.Context.Parameters["OtherMessage"];
            System.Windows.Forms.MessageBox.Show(messageString);
        }
    }
}
I built this action and then created a "standard" Windows Installer Project. I ripped out all of the steps from the MSI project and then added a Custom Action. To do this, I added the "Primary Output" of the DLL project to the project and then went to the Custom Actions pane via the View option in the Setup Project.
Once there I was able to add the "Primary output from CustomAction1" (the dll) as an install custom action. In the properties window for the custom action (you have to select it) I set the following values:
CustomActionData = /Message="Message1" /OtherMessage="This is the other Message"
InstallerClass = True
And with that I had my CustomAction MSI (I left it labelled "Setup1" in my example). This is about as simple as it gets and it doesn't nearly cover everything (I didn't implement Uninstall or Update steps). I believe the "power" in the MSI method is that you can hook into a standardized setup framework and it handles a lot of details you would otherwise need to implement. This goes a little outside my particular area of expertise and I haven't had time to do the full degree of experimentation to give definite reasons beyond that yet, though. If anyone is interested in educating me or clarifying anything on this stuff, please feel free to leave a comment.
In the next installment I'll cover how I authored the package manifest files so VS can include this at publish time.
My post frequency has been down a bit lately due to being busy with a lot of things, but I am responsive to feedback (I've gotten a little here and there; please feel free to post any comments or questions here so others may also get the benefit of it). And with that I'm going to wrap up this post.
Thank You for reading.
Kris | http://blogs.msdn.com/b/krimakey/archive/2008/07/18/in-the-before-of-the-beginning-custom-prerequisites-for-vsto-clickonce-part-i.aspx | CC-MAIN-2014-35 | refinedweb | 565 | 58.21 |
Crash when trying to bind Window.height to WebEngineView.contentsSize.height (short code)
- alwayslearning last edited by alwayslearning
Hi,
I use Qt 5.10.1 on up-to-date Windows 10 and the following simple program does not show any window:
import QtQuick 2.10
import QtQuick.Window 2.10
import QtWebEngine 1.5

Window {
    id: w
    visible: true
    title: "Test"

    // with this line, the program crashes before showing anything:
    height: v.contentsSize.height

    WebEngineView {
        id: v
        anchors.left: w.left
        anchors.right: w.right
        anchors.top: w.top

        onContentsSizeChanged: {
            // no output if not both width and height properties of the web view are specified
            console.log(contentsSize)
            w.height = contentsSize.height
        }

        // if either of the following 2 lines is omitted, the ":-)" string in the
        // web view does not show up and the window looks empty, although
        // anchors.left and anchors.right are set above and the height is set
        // width: 100
        // height: 100

        // The following line crashes the program before showing the window
        // height: v.contentsSize.height

        Component.onCompleted: {
            loadHtml(":-)");
        }
    }
}
I specifically want the window to be as high as the web view when its size is not constrained.
Thanks!
- Diracsbracket last edited by
@alwayslearning
This seems to be related to a bug when accessing contentsSize too early.
Apparently, this still doesn't work correctly in Qt 5.10.0, although the bug was considered solved?
As a check, I tried accessing the property in the onLoadProgressChanged handler:

onLoadProgressChanged: {
    if (loadProgress === 100)
        console.debug(contentsSize.height)
}
This works (I tried by setting the url property to a webpage). Maybe as a workaround, you could set your window height in that handler.
- SGaist Lifetime Qt Champion last edited by
Hi,
The bug report states that 5.10.0 is also affected. It's been solved for 5.9.5, so it will likely also be fixed in the 5.11 release.
The QScrollBar widget provides a vertical or horizontal scroll bar.
#include <qscrollbar.h>
Inherits QWidget and QRangeControl.
The QScrollBar widget provides a vertical or horizontal scroll bar.
A scroll bar allows the user to control a value within a program-definable range, and to give the user visible indication of the current value of a range control.
Scroll bars include four separate controls: the line-up and line-down buttons, the slider itself, and the page-up and page-down areas on either side of the slider.
QScrollBar has not much of an API of its own; it mostly relies on QRangeControl. The provided Windows and Motif styles also use the pageStep() value to calculate the size of the slider.
In addition to the access functions from QRangeControl, QScrollBar has a comprehensive set of signals:
QScrollBar only offers integer ranges. Note that while QScrollBar handles very large numbers, scroll bars on today's screens cannot usefully control ranges above about 100,000 pixels. Beyond that, it becomes difficult for the user to control the scroll bar using either keyboard or mouse.
A scroll bar can be controlled by the keyboard, but it has a default focusPolicy() of NoFocus. Use setFocusPolicy() to enable keyboard focus. See keyPressEvent() for a list of key bindings.
If you need to add scrollbars to an interface, consider using the QScrollView class which encapsulates the common uses for scrollbars.
See also QSlider, QSpinBox, QScrollView, and GUI Design Handbook: Scroll Bar.
The parent and name arguments are sent to the QWidget constructor.
The orientation must be QScrollBar::Vertical or QScrollBar::Horizontal.
The parent and name arguments are sent to the QWidget constructor.
minValue is the minimum scroll bar value.
maxValue is the maximum scroll bar value.
lineStep is the line step value.
pageStep is the page step value; it is also used to calculate the size of the slider.
value is the initial value.
orientation must be QScrollBar::Vertical or QScrollBar::Horizontal.
The parent and name arguments are sent to the QWidget constructor.
See also setOrientation().
Calls the virtual stepChange() function if the new line step is different from the previous setting.
See also lineStep(), QRangeControl::setSteps(), setPageStep(), and setRange().
See also setRange().
See also setRange().
See also orientation().
Calls the virtual stepChange() function if the new page step is different from the previous setting.
See also pageStep(), QRangeControl::setSteps(), setLineStep(), and setRange().
Sets the background color to the mid color for Motif style scroll bars.
Reimplemented from QWidget.
If tracking is enabled (the default), the scroll bar emits the valueChanged() signal while the slider is being dragged. If tracking is disabled, the scroll bar emits the valueChanged() signal only when the user releases the mouse button after moving the slider.
See also tracking().
This signal is emitted when the slider is moved by the user, with the new scroll bar value as an argument.
This signal is emitted even when tracking is turned off.
See also tracking(), valueChanged(), nextLine(), prevLine(), nextPage(), and prevPage().
See also sliderStart().
It is equivalent to sliderRect().y() for vertical scroll bars or sliderRect().x() for horizontal scroll bars.
Tracking is initially enabled.
See also setTracking().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/qtopia2.2/html/qscrollbar.html | crawl-001 | refinedweb | 521 | 60.72 |
The Fuchsia project follows the public Google C++ style guide, with a few exceptions.
Using clang-format is a good practice as it ensures your code is in compliance with the style guide. Tricium checks in Gerrit also use clang-format as a non-gating linter. However, you may still manually format code as long as it complies with these guidelines.
Fuchsia-specific style
TODO comments
The Google C++ style guide requires referencing a bug number in TODO comments.
On Fuchsia this is done in the format:
TODO(fxbug.dev/12345).
Compilation flags
Please do not add -Wall or -Wextra.
source_set("my_sources") {
  ...
  cflags = [
    # Don't do this!
    "-Wall",
  ]
}
This flag is already added in a centralized place in the build, followed by additional flags that suppress certain warnings globally.
Occasionally new compiler warnings are added upstream. In order to roll the latest compiler, global maintainers may opt to temporarily suppress a newly-introduced warning in a centralized place while we work through a backlog of fixing instances of this warning throughout the codebase. If your project uses -Wall then it may break with a Clang roll.
Do feel free to enable/disable specific warnings that are not set globally. If these warnings are set globally later in a manner that's congruent with your own preferences then it's a good idea to remove any local overrides.
source_set("my_sources") {
  ...
  cflags = [
    # We don't want any write-only params/vars in our code.
    # TODO(fxbug.dev/56202): delete the below when these warnings are
    # enabled globally.
    "-Wunused-but-set-parameter",
    "-Wunused-but-set-variable",
  ]
}
Exceptions
Line length
Fuchsia uses 100 columns instead of 80.
Braces
Always use braces { } when the contents of a block are more than one line. This is something you need to watch since clang-format doesn't know to add these.
// Don't do this.
while (!done)
  doSomethingWithAReallyLongLine(
      wrapped_arg);

// Correct.
while (!done) {
  doSomethingWithAReallyLongLine(
      wrapped_arg);
}
Conditionals and loops
Do not use spaces inside parentheses (the Google style guide discourages but permits this).
Do not use the single-line form for short conditionals and loops (the Google style guide permits both forms):
// Don't do this:
if (x == kFoo) return new Foo();

// Correct.
if (x == kFoo)
  return new Foo;
Namespace names
- Nested namespaces are forbidden, with the following exceptions:
  - internal (when required to hide implementation details of templated code)
  - code generated by the FIDL compiler
- The following top-level namespaces are forbidden:
  - internal
  - fuchsia (except in code generated by the FIDL compiler)
- Namespaces in IDK libraries must be kept to as short a list as possible. A later document will provide an explicit list of allowed namespaces; in the meantime, new namespaces should be introduced thoughtfully.
- Namespaces in non-IDK libraries should also be chosen so as to reduce the risk of clashes. Very generic nouns (e.g. media) are best avoided.
Rationale: Tip of the Week #130: Namespace Naming
Includes
If the header being included is a system, global, or library header (see Naming C/C++ objects for precise definitions), use <angle brackets> and the complete name of the header. These headers are considered "C library headers" for the purposes of the Google C++ Style Guide:

#include <zircon/syscalls.h>      // System header
#include <fuchsia/io/cpp/fidl.h>  // Global header
#include <lib/fdio/fd.h>          // Library header
If the header being included is an implementation header, use "quotes" and the full path to the header from the root of the source tree. These headers are considered "your project's headers" for the purposes of the Google C++ Style Guide:
#include "src/ui/scenic/bin/app.h" // Implementation header
Third-party headers can be included using the root-relative path (e.g. #include "third_party/skia/include/core/SkPaint.h") or using their canonical header names (e.g. #include <gtest/gtest.h>).
- Unanswered, 0 votes
  How to get ColumnIndex value in VBA when navigating in a Word Table
  Hi, I have a Word document with tables, and I ask via VBA to go through table 4 from top to bottom in column 4. What I want to know is the ColumnIndex of the cell in which is ...
  0 Replies | 11 Views | Created by Guido.Muylaert - 41 minutes ago

- Unanswered, 1 vote
  Delete all instances of a line which contains certain text
  Is there a way to delete all instances of a line containing specific word? ex: Lines that starts with "temp" Thank you
  5 Replies | 39 Views | Created by Raj Krish - Tuesday, April 15, 2014 2:48 PM | Last reply by Raj Krish - 2 hours 27 minutes ago

- Unanswered, 0 votes
  Issue While Extract the Embedded Object present inside word 2010 file.
  Dear Friends, I have developed application c#.Net 2008, Step 1. It will open the word file and find the embedded ...
  3 Replies | 44 Views | Created by T.Sathish Kumar - Tuesday, April 15, 2014 1:53 PM | Last reply by Cindy Meister MVP - 2 hours 58 minutes ago

- Unanswered, 0 votes
  How to capture when a value inside a Content Control changes without actually entering or exiting the Content Control?
  Here's my situation: Using Word 2010 and Sharepoint 2010. In the Word document, I have a Document Information Panel (DIP) that displays fields/values from a Sharepoint document ...
  9 Replies | 103 Views | Created by cpnc - Friday, April 11, 2014 6:33 PM | Last reply by Cindy Meister MVP - 3 hours 0 minutes ago

- Unanswered, 0 votes
  how can I operate Charts in word ?
  At first I created a Word document like that : Then I use the Microsoft.Office.Interop.Word API to change the data of the Charts dynamicaly like ...
  1 Replies | 29 Views | Created by 赵召 - 14 hours 49 minutes ago | Last reply by Cindy Meister MVP - 3 hours 4 minutes ago

- Answered, 0 votes
  Value does not fall within the expected range. Microsoft.Office.Interop.Word
  Hi, I have already asked this question on MSDN here, but didn't find any solution! starting new thread! I am having following ...
  15 Replies | 260 Views | Created by Zaffar Mahmood - Friday, March 14, 2014 5:27 AM | Last reply by macropod - 7 hours 29 minutes ago

- Unanswered, 0 votes
  Find Function Word 2013 Automation
  Hi, I am having an issue with using the .Find function using Automation from Visual C++ 6. I am getting a similar error to the one outlined in the blog thread below. Is there ...
  3 Replies | 64 Views | Created by Bob Moore SpeedAuthor - Monday, April 14, 2014 2:35 PM | Last reply by Fei Xue - MSFT - 9 hours 15 minutes ago

- Answered, 0 votes
  office 2010 template
  hello gurus, i downloaded a template from this link below: i just want to know how ...
  1 Replies | 28 Views | Created by cguan - 12 hours 30 minutes ago | Last reply by cguan - 10 hours 2 minutes ago

- Answered, 0 votes
  clear highlights in word document except Yellow color Highlight
  Hi, I wanted to have macro function, to clear highlight from entire word document, exclude those words which has already highlight of Yellow color (those are my place ...
  7 Replies | 146 Views | Created by Raj Dotnet - Thursday, April 10, 2014 5:53 PM | Last reply by Luna Zhang - MSFT - 13 hours 26 minutes ago

- Answered, 0 votes
  How can I get the searched words?
  ... use Microsoft.Office.Interop.Word; ... public class BG{ .. public void ...
  4 Replies | 44 Views | Created by 赵召 - Tuesday, April 15, 2014 10:15 AM | Last reply by 赵召 - 13 hours 26 minutes ago

- Unanswered, 0 votes
  Word document with embedded Excel objects
  Hi, I have a Word document opening in Word 2010 with embedded Excel worksheets. I have a VSTO addin that accesses the Excel objects and modifies them. I use the Activate method of ...
  1 Replies | 30 Views | Created by Alan Rutter - 17 hours 16 minutes ago | Last reply by macropod - 13 hours 29 minutes ago

- Unanswered, 0 votes
  Macro code help: Highlighting all the occurence of words that once occurs in quotes.
  Hello everyone, I'm running Microsoft Word 2010 off of a Dell laptop. I have a .docx (created on a Mac) with lots of information ...
  1 Replies | 33 Views | Created by namanksrivastava - Tuesday, April 15, 2014 1:54 PM | Last reply by macropod - 17 hours 44 minutes ago

- Answered, 0 votes
  Highlight selected word in MS Word document
  In my application I select word in document, then I click on button and I want to highlight this word. It works but when I have word followed by comma or dot, for example ' hello, ', in docRange.Text ...
  2 Replies | 35 Views | Created by Lenkita - Tuesday, April 15, 2014 2:12 PM | Last reply by Lenkita - 23 hours 38 minutes ago

- Unanswered, 0 votes
  Word 2013 Mailmerge with .dbf
  Does Word 2013 support mailmerge with the data source being dbf? Is there a link of what options are supported?
  2 Replies | 97 Views | Created by pld_32 - Monday, April 14, 2014 5:19 PM | Last reply by Cindy Meister MVP - 23 hours 57 minutes ago

- Unanswered, 0 votes
  Microsoft word crash
  Dear Sir/Madam I have been developing a code to dump a data from Excel file to word. For the word file I am using a template of the old 2003 version ie: .dot ...
  4 Replies | 81 Views | Created by SanjaySoni2014 - Monday, April 14, 2014 4:47 AM | Last reply by Cindy Meister MVP - Tuesday, April 15, 2014 4:31 PM
- Unanswered0Votes
How do I identify the section of a range of text, and then copy the footer of the next section to the current section?I am working on a large report (795 pages) with 652 sections. The report is split into chapters with each chapter having a separator page with a picture covering the footer. The original authors did ...1 Replies | 24 Views | Created by RiaanJbt - Tuesday, April 15, 2014 2:21 PM | Last reply by Cindy Meister MVP - Tuesday, April 15, 2014 4:18 PM
- Proposed0Votes
disable date function when reopening documentHi, I use a word document with the "date" function in a date field, giving the current date. When using the document again, I need to have the date stay ...1 Replies | 29 Views | Created by Black Santa - Tuesday, April 15, 2014 1:52 PM | Last reply by Hans Vogelaar MVP - Tuesday, April 15, 2014 2:09 PM
- Proposed1Votes
Word 2010 File Names from SharePoint 3.0Up to last week, I had been using an XP computer with Windows 2007 and SharePoint 2007. I have several VBA macros that perform various search tasks on documents in a SharePoint folder or ...1 Replies | 49 Views | Created by ScottSox - Monday, April 14, 2014 6:53 PM | Last reply by Eugene Astafiev - Tuesday, April 15, 2014 1:07 PM
- Unanswered0Votes
Selection.MoveDown wdLine, 4, wdExtend doesn't work for one customer with Word 2013Hello, I have a problem with one of our many clients. We have an "old" software written in VB6.0 that uses Word automation. Since the software runs on various ...13 Replies | 337 Views | Created by MonaLisa ML - Tuesday, March 18, 2014 3:23 PM | Last reply by MonaLisa ML - Tuesday, April 15, 2014 12:32 PM
- Unanswered0Votes
Invalid COMObject Exception occuring when word document is closed.Hi, I have a ribbon application for MSWord plugin. In one of my modules,i have included a multithreading scenario, where,one thread with apartment state set as STA has ...5 Replies | 138 Views | Created by Samjukta Paul - Thursday, April 10, 2014 11:49 AM | Last reply by Jun Zh - MSFT - Tuesday, April 15, 2014 6:29 AM
- | http://social.msdn.microsoft.com/Forums/office/en-us/home?forum=worddev&announcementId=afc8aaf8-03f8-44ba-bd36-08f89d10c426 | CC-MAIN-2014-15 | refinedweb | 1,260 | 67.28 |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
I thought I had a major breakthrough for my QA team when I discovered the Behaviours plugin has long been installed on our instance yet totally unused. Finally, I could populate the Description field of bugs with QA's Steps to Reproduce template!!
Alas, it appears that adding a Behaviour to a field disables inline editing of that field from the View Issue screen.
I would really really like to give Bug Descriptions a default value without having to sacrifice inline editing. How can I do that?
Jira 7.1.9 ScriptRunner 4.3.5
Hi Todd,
In your version of ScriptRunner, inline editing is automatically disabled when a behavior is added to a field and the user won't be able to use the field on the view screen. One workaround provided is to manually set the field to be editable by placing code like this in the Initialiser Function of the behaviour:
def descField = getFieldById("description") descField.setAllowInlineEdit(true)
Josh
Hi Josh,
Somehow Behaviour Initialiser script is NOT enabling inline edit required for "Summary and Description" fields in Jira.
Please refer the attached screen shot where my behaviour script is written using SR version 4.3.16 inside my Jira s/w version 7.2.7
-Manju
Even I tried with "current user is reporter" condition added into under "When" and Jira issue was reported by me and still inline edit is disabled...
I would like give permission of Inline Edit to change both Summary and Description fields ONLY to Reporter of the issue in my Jira project.
def desc = getFieldById("description")
Lower-case 'd' for 'description', perhaps?
Todd,
Even I tried with description and summary with first letter Lower case before.
it's not working...
- Manju
solution doesnt work for me. :(
Tried placing the setAllowInlineEdit into initializer as well as the actual code on the field.
Does anyone have a solution for this problem? It is a very big problem for my clients
Please help!
Hi guys,
In one of the later releases, we changed the implementation so when you try to edit inline a field with a behaviour then the edit dialog pop up.
Now if you still want the inline edit to work without behaviours, then try to disable the inline behaviours module
Manage Addons -> Adaptavist ScriptRunner for JIRA -> modules -> search for inline edit issue -> disable
Hi Thanos,
I don't understand you. The problem is that we can't inline edit a field whitch also is in a behaviour. For example the field "description". I want it in a behaviour and i want it to edit this inline and not with a pop up edit dialog.
Is there a solution for that problem?
Michiel
Manage Addons -> Adaptavist ScriptRunner for JIRA -> modules -> search for inline edit issue -> disable
that works and i can inline edit the description field. But when i create a new story the behaviour i made (on the Description field) doesn't work...
What is the name of the module that you disabled ?
Version of ScriptRunner ?
I disabled module --> Behaviour inline edit issue assets
ScriptRunner Version 5.4.11
Hi Thanos,
I have a update ;)
It works fine when i do a create of a new story in the same window. When i try to do a create in another window (new tab) then i does not work. This is when i use Chrome.
When i use IE both the methods work fine! ;)
@Thanos Batagiannis [Adaptavist] is it possible to have both? I must have the Behaviour on the field (part of the business logic), but the lack of inline edit is a huge annoyance.. | https://community.atlassian.com/t5/Adaptavist-questions/The-Behaviours-plugin-disables-inline-edit/qaq-p/703474 | CC-MAIN-2018-51 | refinedweb | 625 | 63.9 |
std::string destructor crashing in Release when using ::toStdString methods
I'm developing an app using QT 5 on Windows. What I am seeing is MSVC10 in RELEASE mode causing std::string to throw an exception. The application then crashes, hitting a breakpoint in the destructor's call to _tidy(). The call stack is useless in this case, as there is a stack/heap corruption. I turned on /DEBUG symbols for the project and walked the code from the debugger. Here is the distilled version of the code I am running:
Foo::doSomething(const QString& temp)
{
std::string temp_str = temp.toStdString();
<do something with temp_str>
}
What I am seeing is that the application crashes in the destructor for the std::string object "temp_str" when the method goes out of scope. This only happens in RELEASE mode. In DEBUG mode, everything works fine.
I am using the DLLs that where installed with Qt via the binary installer. I did not compile Qt for my platform as the installer package already contains what I need.
Any ideas what could be going wrong?
- Chris Kawa Moderators
If it's 100% reproducible and you're sure this isn't a random symptom of some other bug in your code then it might be a case of mixed debug/release dlls. Make sure your app doesn't use debug dllls in release mode by running it through profile mode in Dependency Walker or Process Explorer.
- alex_malyu
I would add to Chris's comment, if this under Window you have to make sure all dependencies use the same C runtime library.
It is a bit more then just release/debug , it also static and dynamic C runtime, multi-/single-threaded version ( last is mostly disappeared from modern compilers, but just in case) .
Thanks for the replies. I am trying to navigate DependencyWalker results now. I don't see anything obvious so far.
By the way, I misspoke in my original post. It is NOT Qt that is crashing explicitly. It is the destructor for the std::string that is crashing, but only when the std::string in question is generated from QString::toStdString method
- cybercatalyst
std::string temp_str = temp.toStdString();
Can you try what happens with:
std::string temp_str(temp.toStdString());
?
That was how the code originally was written, when I first observed the problem. I though perhaps I could solve it by creating a non-temporary copy, but it didn't work.
DependencyWalker did not show me anything obvious. At least I didn't see any reference to debug DLL's. I'm in the process of recompiling the Qt Libraries on my compiler. I am using Visual Studio 2010
Hi,
could you post the code inside your function?
The example I originally posted is already sufficient to reproduce the problem in my environment, but here is the exact code that is causing the problem:")))); TapMessagingLogger::instance()->logMessage(TapMessagingLogger::RECEIVED, to_log.toStdString()); }
When the call to logMessage goes out of scope, it crashes. If I modify the code as follows:")))); std::string temp_str = to_log.toStdString(); TapMessagingLogger::instance()->logMessage(TapMessagingLogger::RECEIVED, temp_str); }
The crash occurs when the if() clause goes out of scope. It is important to note that this code causes the crash only because its the first time this type of operation is performed. If I prevent it from executing, the problem occurs at a different point using a similar operation.
The bottom line is, in Release mode, whenever a std::string is produced using ::toStdString, the std::string destructor will cause a crash. I added a code snippet like the one in my original post in the constructor of my main application class and it crashed immediately
Hi,
if I understood correctly you're trying to log something that is in binary format; IIRC
toStdString()converts Unicode (QString) into a std::string using utf-8 encoding.
Could you try to use directly
QByteArray::toStdString()instead of passing through a QString? In that case you can avoid encoding step
I am not trying to log the binary portion of the payload. The QByte array is an array of chars from a socket read. The front portion of the array is string data representing the header of the message. The following line:
QString to_log(msg_in.mid(0, (msg_in.indexOf("aop\0"))));
Isolates the string portion, starting at position zero, continuing until the first instance of "aop\0", which denotes the start of the binary payload. The resulting QString contains just the header, which is string data. It is this portion that is being logged The binary payload is handled farther downstream.
Ok,
have you tried to Debug you application to understand what happens??
And, as I said before, you can use directly
QByteArray::toStdString()do avoid double conversion
ascii --> UNICODE --> ascii
In my original post, I pointed out that this has been impossible to debug thus far, and believe me I tried for quite a few days before my first post. The call stack is unusable as the crash occurs deep within Qt and and Windows libraries for which I have nothing but disassembly code to look at. The call to
_tidy()in the
std::stringis what's crashing indicating a stack/heap corruption, and the crash report supports this.
Here is another example of a crash that WAS occurring, which is much more straightforward:
CommunicationHandler::CommunicationHandler(QObject* owner_in) : QObject(owner_in) { QString cur_dir("../output/"); dna::FilenameTimestamp start_timestamp; std::string filename = start_timestamp.getTimestampedFilename( cur_dir.toStdString(), "TAPDisplayMessagingLog", ".xml"); TapMessagingLogger::instance()->openLogFile(filename); }
In the above case, the crash was occurring in the call to
::getTimestampedFilename()This was due to the destruction of the temporary
std::stringcreated from the call to
cur_dir.toStdString()in the argument list
I modified the code to eliminate the temporary variable thinking something might be wrong); }
In this case, the crash was occurring when the constructor went out of scope. Again, the crash was occurring in the
_tidy()method of
std::stringwhile trying to destroy
dir_str. I modified the code once again, eliminating the initial QString altogether:
CommunicationHandler::CommunicationHandler(QObject* owner_in) : QObject(owner_in) { std::string dir_str = "../output/"; dna::FilenameTimestamp start_timestamp; std::string filename = start_timestamp.getTimestampedFilename(dir_str , "TAPDisplayMessagingLog", ".xml"); TapMessagingLogger::instance()->openLogFile(filename); }
And the crash was resolved. This is not a code problem. There is some insidious problem with my compiled code and the precompiled binaries installed with the Qt Installer. I am in the process of trying to compile the Qt Libraries from source using my compiler to see if this resolves the problem
Compiling the Qt library did not resolve the problem.
Hi,
I created a sample program:
#include <QFileDialog> #include <string> Widget::Widget(QWidget *parent) : QWidget(parent), ui(new Ui::Widget) { ui->setupUi(this); } Widget::~Widget() { delete ui; } void Widget::on_pushButton_clicked() { std::string my_str = ui->lineEdit->text().toStdString(); qDebug("Hello %s", my_str.c_str()); } void Widget::on_lineEdit_textChanged(const QString &arg1) { ui->pushButton->setDisabled(arg1.isEmpty()); } void Widget::on_toolButton_clicked() { QString fileName = QFileDialog::getOpenFileName( this, tr("Choose a file"), "."); if(!fileName.isEmpty()) ui->lineEdit->setText(fileName); }
In a form I use a
QFileDialogto choose a file from the file-system and when a button is clicked I convert the fileName to a
std::stringand print it to the
stdout.
I run this program without crash.
I'm on OS X with Qt 5.4.1.
In your function use the the conversion with
toStdString()and comment the following lines; verify is the crash is still); }
I posted almost the exact same pseudocode in my original post, and yes I have already tried this in code, as I alluded to in a follow-up post. On my system it crashes, which is a Windows system. I don't have this problem on the Mac. This is a problem (for me) on the Windows platform using MSVC2010.
Qt and your code are probably not compiled with the same STL version. Something like this:
@sandy.martel23 said:
I recompiled the Qt Library from source using the same compiler and that did not solve the problem. I am trying different C++11 options now to see if that's implicated
I think I might be on the right track to figuring out what's going on. My project builds three executables. I recreated the same code that causes a crash in the problem executable in another of the remaining executables and the crash is not reproducible there. This leads me to suspect there is an alignment issue or something else that is affecting ONLY this one executable.
My previous assertion that only one executable had the problem was false. I quickly mocked up the following in the main method of all three executables and got similar results:
using std::string; int main(int argc, char *argv[]) { {
As you can see, the string is already junk when it's streamed to
std::coutbut looks fine in the watch list in the debugger. The string literal outputs fine. The copying of
temp_strto
temp_str2does not crash, nor does the append call, as evidenced by the last line. When the code goes out of scope, the crash occurs as normal. This suggests that even the original call to
temp.toStdString()resulted in what looked like a valid string, but the
std::coutresults and the copy output clearly shows that something is wrong even before we go out of scope.
- alex_malyu
Do you have MSVC10 service packs installed?
I remember MSVC10 in 64 bit had a nasty bug which took microsoft about a year to fix.
I'm running 10.0.40219.1 SP1Rel
I have many other large legacy projects which don't have any problems with STL in general. It's just the use of Qt's STL conversions that are presenting a problem.
- alex_malyu
SP1 should have solved the problem I mentioned, so it is not it.
I recompiled the Qt Library from source using the same compiler and that did not solve the problem.
But it seems that your still mixing debug and release STL object.
sizeof( debug std::string ) != sizeof( release std::string ).
DependencyWalker has confirmed I am not using Debug STL libraries. However, I will use the
sizeof()test as you suggest. At this point, I'll try anything!
I ran the test and the results support DependencyWalker:
21:47:49.858 INFO: SIZE OF STD::STRING (DEBUG): 32
21:51:34.968 INFO: SIZE OF STD::STRING (RELEASE): 32
- JKSH Moderators
- Did you run your RELEASE program inside the IDE, or by double-clicking the executable?
- Could you post the contents of your PATH?
There's something very wrong indeed. What happens when you:
- Print through qDebug() instead of std::cout?
- Print the QString itself?
- Do an indirect conversion that bypasses QString::toStdString()?
QString temp; QByteArray arr = temp.toUtf8(); std::string temp_str(arr.data()); qDebug() << temp; qDebug() << arr; qDebug() << temp_str.c_str();
At this point, I'll try anything!
One other thing I can think of is to install the MinGW version of Qt and use that to build you project. The binaries are very different, and might free you from the strange string issue.
I modified the code as suggested:
{ QString temp("blah"); qDebug() << "QString: " << temp; string temp_str = temp.toStdString(); qDebug() << "TEMP_STR: " << temp_str.c_str(); qDebug() << "blah2"; string temp_str2 = temp_str; temp_str2.append("a"); qDebug() << temp_str2.c_str(); }
In this case, the copy from
temp_strto
temp_str2caused the crash. The output from what did execute:
QString: "blah" TEMP_STR: blah2
In both tests, I ran from the IDE, since the behavior is easily reproducible. However, I also ran it from the command line with the same results. Here is the contents of my PATH variable:
C:\Users\roscoe.NASALAB>echo %PATH% C:\Python27\;C:\Python27\Scripts;C:\Perl64\site\bin;C:\Perl64\bin;C:\Program Fil es (x86)\IBM\RationalSDLC\common;C:\Program Files (x86)\Intel\iCLS Client\;C:\Pr ogram Files\Intel\iCLS Client\;C:\Windows\system32;C:\Windows;C:\Windows\System3 2\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\Intel( R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Manage ment Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Com ponents\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\I PT;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\ Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\D TS\Binn\;C:\Program Files (x86)\IBM\RationalSDLC\ClearCase\bin;C:\Program Files\ SourceGear\Common\DiffMerge\;C:\Program Files\Microsoft Windows Performance Tool kit\;C:\Program Files (x86)\Common Files\Plantronics\;C:\Program Files (x86)\Pla ntronics\MyHeadsetUpdater\;C:\Program Files (x86)\Plantronics\VoyagerEdge\;C:\Pr ogram Files\IBM\RationalPurifyPlus;C:\Ruby21\bin;%APPDATA%\Python\Scripts
Using an indirect conversion as you suggest would not be convenient, thus I've avoided it, but may be my only option. I implemented the following:
{ QString temp("blah"); QByteArray arr = temp.toUtf8(); std::string temp_str(arr.data()); qDebug() << temp; qDebug() << arr; qDebug() << temp_str.c_str(); }
The program did not crash and the output:
"blah" "blah" blah
I could try the minGW version, but I have concerns. Our Visual Studio projects are generated using CMake. The CMake scripts generate our project files for Windows, Linux/Unix and MacOS. I am fairly certain it can handle an output compatible with MinGW, but may introduce difficulties if changes need to be made, as the ripples could cause us to rework the scripts to maintain the ability to generate project files for the other platforms. Again, I may have no other option, so I won't discount it.
Another thing I have not considered, we use the Adaptive Communications Environment (ACE) which is a public-domain, platform-independent middleware that abstracts our threading, networking and many other tasks. This requires that our executables have ACE_main as its entry point. Since its only an entry point to the main execution thread, I doubt it's implicated, but I do plan on trying a sample app without it to remove this variable to be certain. Nothing is making sense, so it's worth a shot. Without a fix for this problem our project is dead in the water with a deadline looming.
In this case, the copy from
temp_strto
temp_str2caused the crash. The output from what did execute:
QString: "blah" TEMP_STR: blah2
There's no doubt then that the std::string is being constructed with invalid data (possibly with an invalid char* pointer).
Here is the contents of my PATH variable:
I was looking for common crash culprits in Qt-based projects, but didn't find any in your PATH. I don't recognize a number of the tools you're using though.
Using an indirect conversion as you suggest would not be convenient, thus I've avoided it, but may be my only option.
...
The program did not crash and the output:
"blah" "blah" blah
Ok, we now have one workaround, at least.
The implementation of QString::toStdString() is pretty straightforward:
-
-
Maybe try
temp.toUtf8().toStdString()and see if that avoids the crash. If so, perhaps you can do a find-and-replace.
I could try the minGW version, but I have concerns.
Agreed; that is a drastic change. I would use the multi-step conversion, personally.
Another thing I have not considered, we use the Adaptive Communications Environment (ACE).... I do plan on trying a sample app without it to remove this variable to be certain.
Yes, do test it.
I tested the shorthand conversion you suggested with the same result. I have also confirmed that the ACE_main entry is not causing a problem. What I have done is implement a static function to do the conversion you gave me, that I've already confirmed works. I hate not having a proper solution for this, but at least I can move forward.
I'm not going to mark this one as solved, as it technically is not. I am planning on trying this on MSVC 2013 to see if it makes a difference.
Thanks!
You're welcome :)
I tested the shorthand conversion you suggested with the same result. I have also confirmed that the ACE_main entry is not causing a problem.
Very very strange. It appears that an
QByteArray::toStdString()doesn't work on rvalues in your system. I've never seen this before.
(For academic curiosity, I'm tempted to try
qDebug() << QByteArray("blah").toStdString();)
I am planning on trying this on MSVC 2013 to see if it makes a difference.
If you have time and a fresh PC, try the same compiler version and Qt version on the fresh PC. I'm curious to know if the corruption is due to your environment, or a long-standing and undetected bug.
But, I know you have a deadline to meet; these experiments aren't crucial. (I did try
string temp_str = temp.toStdString(); qDebug() << "TEMP_STR: " << temp_str.c_str();on my machine (MSVC 2013 32-bit, Qt 5.4.1) but it was fine in both Debug and Release mode.)
Anyway, all the best!
" (For academic curiosity, I'm tempted to try
qDebug() << QByteArray("blah").toStdString();)"
I tried this with the same result. I had to add
.c_str()as QDebug does not support
std::string
"If you have time and a fresh PC, try the same compiler version and Qt version on the fresh PC. I'm curious to know if the corruption is due to your environment, or a long-standing and undetected bug."
I probably won't have time to try this fully, but I do know other developers in our group are experiencing the same problem
UPDATE: I created an empty MSVC2010 console application project that does nothing but bring in QT5Core and a few other Qt support libraries and tried it from there. Same behavior. It's something very basic and fundamental. I'd say it was a Qt bug, but that wouldn't explain why more people aren't having the same problem. With the simpler project, its even crashing in DEBUG
UPDATE2:
I have found two more workarounds for this problem:
QString temp("blah"); std::string temp_str(std::string(temp.toLatin1().data()));
and
QString temp("blah"); std::string temp_str = temp.toLatin1();
Both of these result in valid strings which do not crash. I have now tried my original problem on several different systems at two different facilities with the same result. Either there is something lacking in the documentation for the MSVC10 build of Qt or its an outright bug. Why more people haven't reported this still baffles me.
I am going to label this as SOLVED, though its a bit of a stretch. Thanks to all for your help!
- sandy.martel23
Why more people haven't reported this still baffles me
Personally, I haven't reported it because it has always worked perfectly, in Qt4, all Qt5 versions and with all MSVC versions I've used. I suspect it's the same for most people.
It is also tested in the Qt5 test suite, so I doubt Qt would be released with this failing.
Sorry, there is something with your configuration that you haven't told us (unintentionally, for sure) and we haven't guess.
Out of curiosity, what version of Qt and Windows ? (you left that out, we only know it's Qt5). And what configure command do you use to compile it.
Thanks for posting your updates, @DRoscoe.
Either there is something lacking in the documentation for the MSVC10 build of Qt or its an outright bug. Why more people haven't reported this still baffles me.
It definitely doesn't belong in the documentation, as the crash should not happen. Like @sandy.martel23, I believe it hasn't been reported much because very few people are affected by it -- I tried to reproduce it on my system but couldn't (not that I have MSVC 2010 anymore).
Anyway, I'm glad you can get on with your project.
@JKSH It can't just be a local problem either. Several other developers at my location are having the same problem. I created a very simple Win32 console app that reproduces the problem, eliminating all of the third-party libraries we use. I sent the project to another group down in Virginia and all who tried it had the same problem (our group is standardized on MSVC 2010, but we are moving to 2015 when available). What I failed to realized was that the basic console app also crashed in DEBUG, whereas our main project crashes only in RELEASE. I now believe that its a problem with MSVC 2010 only, as my MSVC 2013 project doesn't have the problem. I think the problem occurred when Qt went to UTF8 which would explain why toLatin1() and toLocal8Bit() work.
LAST UPDATE
The problem is solved! Prior to installing Qt 5.4, I had been running Qt 4.8.6. This was installed with a binary installer package (not web installer). When I installed Qt 5.4, I used the web installer. It ended up installing several earlier versions of Qt as well. There were also some VC redistributables in the Qt folder. I gutted the whole thing and re-installed only Qt 5.4 clean. I installed no other versions. I rebuilt my project and ran the test again, and voila! The problem disappeared.
The one thing in common in my case, our other internal developers and the group in Virginia is that we all started fro Qt 4.8.6 using the same installer package and upgraded via web-installer.
A configuration issue ? I can't say that I'm shocked...
I have several Qt versions installed on Windows, version 4, version 5, for 64 & 32 bits and for different VS (2010 and 2013). I would suspect those VC redistributables, if they were ot in sync with the compiler used.
On the other hand, I only use Qts that I compile myself, so I keep a close control on access paths and get a "clean" dev install for each, no upgrades, maybe the (web) installer is mixing up versions.
Glad you found it.
@sandy.martel23 When you build your Qt sources for VS2010 can you tell me how you do it? When I did it, I ended up having the same problem. Do you specify any special compiler settings?
OK, this time, I have nailed down the specific line in our projects that are causing the problem:
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /D \"_ITERATOR_DEBUG_LEVEL=1\"")
Turning on STL iterator debugging is not compatible with Qt libraries. I suspect that turning this feature on when compiling Qt would solve the problem, but for now it's easier to turn it off in my own projects. It is only marginally useful.
9 days ago, @sandy.martel23 said:
But it seems that your still mixing debug and release STL objects
told you :-)
As for building Qt, I do nothing special. | https://forum.qt.io/topic/53424/std-string-destructor-crashing-in-release-when-using-tostdstring-methods | CC-MAIN-2018-13 | refinedweb | 3,807 | 65.22 |
README
^^^^^^

  o Environments
    - Installing Cygwin
    - Ubuntu Bash under Windows 10
  o Installation
    - Download and Unpack
    - Semi-Optional apps/ Package
    - Installation Directories with Spaces in the Path
    - Downloading from Repositories
    - Related Repositories
    - Notes about Header Files
  o Configuring NuttX
    - Instantiating "Canned" Configurations
    - Refreshing Configurations
    - NuttX Configuration Tool
    - Finding Selections in the Configuration Menus
    - Reveal Hidden Configuration Options
    - Make Sure that You are on the Right Platform
    - Comparing Two Configurations
    - Making defconfig Files
    - Incompatibilities with Older Configurations
    - NuttX Configuration Tool under DOS
  o Toolchains
    - Cross-Development Toolchains
    - NuttX Buildroot Toolchain
  o Shells
  o Building NuttX
    - Building
    - Re-building
    - Build Targets and Options
    - Native Windows Build
    - Installing GNUWin32
  o Cygwin Build Problems
    - Strange Path Problems
    - Window Native Toolchain Issues
  o Documentation

ENVIRONMENTS
^^^^^^^^^^^^

NuttX requires a POSIX development environment such as you would find under
Linux or macOS.  NuttX may also be installed and built on a Windows system
if you also provide such a POSIX development environment.  Options for a
POSIX development environment under Windows include:

  - An installation of Linux on a virtual machine (VM) in Windows.  I have
    not been happy using a VM myself.  I have had stability problems with
    open source VMs, and commercial VMs cost more than I want to spend.
    Sharing files with Linux running in a VM is awkward; sharing devices
    connected to the Windows box with Linux in a VM is, at the very least,
    confusing; using Windows tools (such as Segger J-Link) with files
    built under the Linux VM is not a possibility.

  - The Cygwin environment.  Instructions for installation of Cygwin on a
    Windows system are provided in the following paragraph, "Installing
    Cygwin".  Cygwin is a mature, well-tested, and very convenient
    environment.
  It is especially convenient if you need to integrate with Windows tools and files. Downsides are that the installation time is very long and the compile times are slow.

- Ubuntu/Bash shell under Windows 10. This is a new option under Windows 10. See the section "Ubuntu Bash under Windows 10" below. This is an improvement over Cygwin if your concern is compile time; its build performance is comparable to native Linux, certainly better than the Cygwin build time. It also installs in a tiny fraction of the time of Cygwin, perhaps 20 minutes for the basic Ubuntu install (vs. more than a day for the complete Cygwin install). There have been even more recent ports of Linux environments to Windows. I need to update this section to include some mention of these alternatives.

- The MSYS environment. MSYS derives from an older version of Cygwin, simplified and adapted to work more naturally in the Windows environment. See the MSYS Wiki if you are interested in using MSYS. The advantages of the MSYS environment are that it is better integrated with the native Windows environment and lighter weight; it uses only a minimal number of add-on POSIX-land tools. The download link in that Wiki takes you to the SourceForge download site. The SourceForge MSYS project has been stagnant for some time, and the MSYS project has more recently moved. Downloads of current .zip files are available at the new location, but no instructions for the installation.

- MSYS2 appears to be a re-write of MSYS based on a newer version of Cygwin. A Windows installer is available at the MSYS2 site along with very good installation instructions. The download is relatively quick (at least compared to Cygwin) and the 'pacman' package management tool supports simple system updates. For example, 'pacman -S git' will install the GIT command line utilities.

- Other POSIX environments. Check out UnxUtils, MobaXterm, and Gow. Disclaimer: In principle, these should work.
  However, I have never used any of these environments and cannot guarantee that there are not some less-than-obvious issues.

NuttX can also be installed and built on a native Windows system, but with some potential tool-related issues (see the discussion "Native Windows Build" under "Building NuttX" below). GNUWin32 is used to provide compatible native Windows tools.

Installing Cygwin
-----------------

Installing Cygwin on your Windows PC is simple, but time consuming. See the Cygwin website for installation instructions. Basically you just need to download a tiny setup.exe program and it does the real, network installation for you.

Some Cygwin installation tips:

1. Install at C:\cygwin

2. Install EVERYTHING. If you use the "default" installation, you will be missing many of the Cygwin utilities that you will need to build NuttX. The build will fail in numerous places because of missing packages.

   NOTE: You don't really have to install EVERYTHING, but I cannot answer the question "Then what should I install?" I don't know the answer to that and so will continue to recommend installing EVERYTHING. You should certainly be able to omit "Science", "Math", and "Publishing". You can try omitting KDE, Gnome, GTK, and other graphics packages if you don't plan to use them. Perhaps a minimum set would be those packages listed below for the "Ubuntu Bash under Windows 10" installation?

After installing Cygwin, you will get lots of links for installed tools and shells. I use the RXVT native shell. It is fast and reliable and does not require you to run the Cygwin X server (which is neither fast nor reliable). Unless otherwise noted, the rest of these instructions assume that you are at a bash command line prompt in either Linux or in a Cygwin shell.

UPDATE: The last time I installed EVERYTHING, the download was about 5GiB. The server I selected was also very slow so it took over a day to do the whole install!

Using MSYS
----------

MSYS is an environment that derives from Cygwin.
Thus, most things said about Cygwin apply equally to MSYS. This section will, then, focus on the differences when using MSYS, specifically MSYS2. Here it is assumed that you have already downloaded and installed MSYS2 using the Windows installer available from the MSYS2 site. It is also assumed that you have brought in the necessary tools using the 'pacman' package management tool. Tools needed include:

  pacman -S git
  pacman -S make
  pacman -S gcc
  pacman -S gdb

And possibly others, depending upon your usage. Then you will need to build and install kconfig-frontends per the instructions of the top-level README.txt file in the tools repository. This requires the following additional tools:

  pacman -S bison
  pacman -S gperf
  pacman -S ncurses-devel
  pacman -S automake-wrapper
  pacman -S autoconf
  pacman -S pkg-config

Because of some versioning issues, I had to run 'aclocal' prior to running the kconfig-frontends configure script. See "Configuring NuttX" below for further information.

Unlike Cygwin, MSYS does not support symbolic links. The 'ln -s' command will, in fact, copy a directory! This means that your Make.defs file will have to include definitions like:

  ifeq ($(CONFIG_WINDOWS_MSYS),y)
  DIRLINK = $(TOPDIR)/tools/copydir.sh
  DIRUNLINK = $(TOPDIR)/tools/unlink.sh
  endif

This will force the directory copies to work in a way that can be handled by the NuttX build system. NOTE: The default link.sh script has been updated so that it should now be MSYS2 compatible. The above is preferred but no longer necessary in the Make.defs file.

To build the simulator under MSYS, you also need:

  pacman -S zlib-devel

It appears that you cannot use directory names with spaces in them like "/c/Program\ Files\ \(86\)" in the MSYS path variable. I worked around this by creating Windows junctions like this:

1. Open a Windows command terminal,
2. CD to c:\msys64, then
3. mklink /j programfiles "C:/Program\ Files", and
4.
   mklink /j programfiles86 "C:/Program\ Files\ \(x86\)"

They then show up as /programfiles and /programfiles86 within the MSYS2 sandbox. Those paths can then be used with the PATH variable. I had to do something similar for the path to the GNU Tools "ARM Embedded Toolchain", which also has spaces in the path name.

Ubuntu Bash under Windows 10
----------------------------

A better version of a command-line only Ubuntu under Windows 10 (beta) has recently been made available from Microsoft.

Installation
------------

Installation instructions abound on the Internet, complete with screen shots. I will not attempt to duplicate those instructions in full here. Here are the simplified installation steps:

- Open "Settings".
- Click on "Update & security".
- Click on "For Developers".
- Under "Use developer features", select the "Developer mode" option to set up the environment to install Bash.
- A message box should pop up. Click "Yes" to turn on developer mode.
- After the necessary components install, you'll need to restart your computer.

Once your computer reboots:

- Open "Control Panel".
- Click on "Programs".
- Click on "Turn Windows features on or off".
- A list of features will pop up; enable "Windows Subsystem for Linux" and click OK. This will take awhile.

Accessing Windows Files from Ubuntu
-----------------------------------

File systems will be mounted under "/mnt", so for example "C:\Program Files" appears at "/mnt/c/Program Files". This is as opposed to Cygwin where the same directory would appear at "/cygdrive/c/Program Files". With these differences (and perhaps a few other Windows quirks) the Ubuntu install works just like Ubuntu running natively on your PC.

A good tip for file sharing is to use symbolic links within your Ubuntu home directory. For example, suppose you have your "projects" directory at C:\Documents\projects.
Then you can set up a link to the projects/ directory in your Ubuntu home directory like:

  ln -s /mnt/c/Documents/projects projects

Accessing Ubuntu Files From Windows
-----------------------------------

In Ubuntu Userspace for Windows, the Ubuntu file system root directory is at:

  %localappdata%\lxss\rootfs

Or

  C:\Users\Username\AppData\Local\lxss\rootfs

However, I am unable to see my files under the rootfs\home directory. After some looking around, I found the home directory at %localappdata%\lxss\home. With that trick to access the /home directory, you should actually be able to use Windows tools outside of the Ubuntu sandbox with versions of NuttX built within the sandbox, using that path.

Executing Windows Tools from Ubuntu
-----------------------------------

You can also execute Windows tools from within the Ubuntu sandbox:

  /mnt/c/Program\ Files\ \(x86\)/Microchip/xc32/v1.43/bin/xc32-gcc.exe --version
  Unable to translate current working directory. Using C:\WINDOWS\System32
  xc32-gcc.exe (Microchip Technology) 4.8.3 MPLAB XC32 Compiler v1.43 Build date: Mar  1 2017
  ...

The error message indicates that there are more issues: You cannot mix Windows tools that use Windows style paths in an environment that uses POSIX paths. I think you would have to use Linux tools only from within the Ubuntu sandbox.

Install Ubuntu Software
-----------------------

Use "sudo apt-get install <package name>". As an example, this is how you would get GIT:

  sudo apt-get install git

This will get you a compiler for your host PC:

  sudo apt-get install gcc

This will get you an ARM compiler for your target:

  sudo apt-get install gcc-arm-none-eabi

NOTE: That is just an example. I am not sure if apt-get will give you a current or usable compiler. You should carefully select your toolchain for the needs of your project. You will also need to get the kconfig-frontends configuration tool as described below under "NuttX Configuration Tool".
In order to build the kconfig-frontends configuration tool you will also need: make, gperf, flex, bison, and libncurses-dev. That is enough to do a basic NuttX build.

Integrating with Windows Tools
------------------------------

If you want to integrate with Windows native tools, then you would need to deal with the same kind of craziness as with integrating Cygwin with native toolchains; see the section "Cygwin Build Problems" below. However, there is currently no build support for using Windows native tools with Ubuntu under Windows. This tool combination is made to work with Cygwin through the use of the 'cygpath -w' tool that converts paths from, say, '/cygdrive/c/Program Files' to 'C:\Program Files'. There is, however, no corresponding tool to convert '/mnt/c/Program Files' in the Ubuntu environment.

Graphics Support
----------------

The Ubuntu version supported by Microsoft is a command-line only version. There is no support for Linux graphics utilities. This limitation is not a limitation of Ubuntu, however, only of what Microsoft is willing to support. If you install an X server, then you can also use basic graphics utilities. Many Linux graphics programs would, however, also require a graphics framework like GTK or Qt. So this might be a trip down the rabbit hole.

INSTALLATION
^^^^^^^^^^^^

There are two ways to get NuttX: You may download released, stable tarballs from either the Bitbucket or SourceForge download locations, or you may get NuttX by cloning the Bitbucket GIT repositories. Let's consider the released tarballs first:

Download and Unpack
-------------------

Download and unpack the NuttX tarball. If you are reading this, then you have probably already done that. After unpacking, you will end up with a directory called nuttx-version (where version is the NuttX version number). You might want to rename that directory nuttx to match the various instructions in the documentation and some scripts in the source tree.
Download locations:

Semi-Optional apps/ Package
---------------------------

All NuttX libraries and example code used to be included within the NuttX source tree. As of NuttX-6.0, this application code was moved into a separate tarball, the apps tarball. If you are just beginning with NuttX, then you will want to download the versioned apps tarball along with the NuttX tarball. If you already have your own product application directory, then you may not need the apps tarball.

It is called "semi-optional" because if you don't have some apps/ directory, NuttX will *fail* to build! You do not necessarily need to use the NuttX apps tarball but may, instead, provide your own custom application directory. Such a custom directory would need to include a valid Makefile to support the build and a valid Kconfig file to support the configuration. More about these files later.

Download then unpack the apps tarball in the same directory where you unpacked the NuttX tarball. After you unpack the apps tarball, you will have a new directory called apps-version (where the version should exactly match the version of the NuttX tarball). Again, you might want to rename the directory to simply apps/ to match what you read in the documentation.

After unpacking (and renaming) the apps tarball, you will have two directories side by side like this:

         |
    +----+----+
    |         |
  nuttx/    apps/

This is important because the NuttX build will expect to find the apps directory in that (default) location. That default location can be changed by modifying your NuttX configuration file, but that is another story.

Installation Directories with Spaces in the Path
------------------------------------------------

The nuttx build directory should reside in a path that contains no spaces in any higher level directory name. For example, under Cygwin, your home directory might be formed from your first and last names like: "/home/First Last". That will cause strange errors when the make system tries to build.
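The unpack-and-rename steps above can be sketched as follows. This is only an illustration: the mkdir commands stand in for actually extracting the tarballs, and 7.25 is a made-up example version number.

```shell
# Sketch of the unpack-and-rename steps described above.
# The mkdir commands stand in for unpacking the tarballs
# (e.g. 'tar zxf nuttx-7.25.tar.gz'); 7.25 is an example version.
mkdir -p nuttx-7.25 apps-7.25

# Rename to the default names that the build system expects:
mv nuttx-7.25 nuttx
mv apps-7.25 apps

# The two directories now sit side by side:
ls -d nuttx apps
```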
[Actually, that problem is probably not too difficult to fix. Some Makefiles probably just need some paths within double quotes.]

I work around spaces in the home directory name by creating a new directory that does not contain any spaces, such as /home/nuttx. Then I install NuttX in /home/nuttx and always build from /home/nuttx/nuttx-code.

Downloading from Repositories
-----------------------------

Cloning the Repository

The current NuttX du jour is available from a GIT repository. Here are instructions for cloning the core NuttX RTOS (corresponding to the nuttx tarball discussed above):

  git clone nuttx

And the semi-optional apps/ application directory can be cloned like:

  git clone apps

That will give you the same directory structure like this:

         |
    +----+----+
    |         |
  nuttx/    apps/

Configuring the Clones

The following steps need to be performed for each of the repositories. After changing to the clone directory:

Set your identity:

  git config --global user.name "My Name"
  git config --global user.email my.name@example.com

Colorized diffs are much easier to read:

  git config --global color.branch auto
  git config --global color.diff auto
  git config --global color.interactive auto
  git config --global color.status auto

Check out other settings:

  git config --list

Cloning NuttX Inside Cygwin

If you are cloning the NuttX repository, it is recommended to avoid automatic end-of-line conversions by git. These conversions may break some scripts like configure.sh. Before cloning, do the following:

  git config --global core.autocrlf false

Related Repositories
--------------------

These are standalone repositories:

* This directory holds an optional package of applications and libraries that can be used with the NuttX RTOS. There is a README.txt file there that will provide more information about that package.

* This is the NuttX C++ graphics support. This includes NxWM, the tiny NuttX Window Manager.

* This repository contains a version of the uClibc++ C++ library.
  This code originates from the uClibc++ project and has been adapted for NuttX by the RGMP team.

* An environment that you can use to build a custom NuttX GNU toolchain.

* There are snapshots of some tools here that you will need to work with NuttX: kconfig-frontends, genromfs, and others.

* A few drivers that are not integrated into the main NuttX source tree due to licensing issues.

* Yes, this really is a Pascal compiler. The Pascal p-code run-time and p-code debugger can be built as a part of NuttX.

Notes about Header Files
------------------------

Other C-Library Header Files.

When a GCC toolchain is built, it must be built against a C library. The compiler together with the contents of the C library completes the C language definition and provides the complete C development environment. NuttX provides its own, built-in C library. So the complete, consistent C language definition for use with NuttX comes from the combination of the compiler and the header files provided by the NuttX C library.

When a GCC toolchain is built, it incorporates the C library header files into the compiler internal directories and, in this way, the C library really becomes a part of the toolchain. If you use the NuttX buildroot toolchain as described below under "NuttX Buildroot Toolchain", your GCC toolchain will build against the NuttX C library and will incorporate the NuttX C library header files as part of the toolchain. If you use some other, third-party toolchain, this will not be the case, however. Those toolchains were probably built against some other, incompatible C library distribution (such as newlib). Those tools will have incorporated the incompatible C library header files as part of the toolchain. These incompatible header files must *not* be used with NuttX because they will conflict with definitions in the NuttX built-in C-Library.
For such toolchains that include header files from a foreign C-Library, NuttX must be compiled without using the standard header files that are distributed with your toolchain. This prevents including conflicting, incompatible header files such as stdio.h. math.h and stdarg.h are probably the two most troublesome header files to deal with. These troublesome header files are discussed in more detail below.

Header Files Provided by Your Toolchain.

Certain header files, such as setjmp.h, stdarg.h, and math.h, may still be needed from your toolchain; however, your compiler may not be able to find these if you compile NuttX without using standard header files (i.e., with -nostdinc). If that is the case, one solution is to copy those header files from your toolchain into the NuttX include directory.

Duplicated Header Files.

There are also a few header files that can be found in the nuttx/include directory which are duplicated by the header files from your toolchain. stdint.h and stdbool.h are examples. If you prefer to use the stdint.h and stdbool.h header files from your toolchain, those could be copied into the nuttx/include/ directory. Using most other header files from your toolchain would probably cause errors.

math.h

Even though you should not use a foreign C-Library, you may still need to use other, external libraries with NuttX. In particular, you may need to use the math library, libm.a. NuttX supports a generic, built-in math library that can be enabled using CONFIG_LIBM=y. However, you may still want to use a higher performance external math library that has been tuned for your CPU. Sometimes such tuned math libraries are bundled with your toolchain.

The math library header file, math.h, is then a special case. If you do nothing, the standard math.h header file that is provided with your toolchain will be used. If you have a custom, architecture-specific math.h header file, then that header file should be placed at arch/<cpu>/include/math.h.
There is a stub math.h header file located at include/nuttx/lib/math.h. This stub header file can be used to "redirect" the inclusion to an architecture-specific math.h header file. If you add an architecture-specific math.h header file then you should also define CONFIG_ARCH_MATH_H=y in your NuttX configuration file. If CONFIG_ARCH_MATH_H is selected, then the top-level Makefile will copy the stub math.h header file from include/nuttx/lib/math.h to include/math.h where it will become the system math.h header file. The stub math.h header file does nothing other than include that architecture-specific math.h header file as the system math.h header file.

float.h

If you enable the generic, built-in math library, then that math library will expect your toolchain to provide the standard float.h header file. The float.h header file defines the properties of your floating point implementation. It would always be best to use your toolchain's float.h header file, but if none is available, a default float.h header file will be provided if this option is selected. However, there is no assurance that the settings in this float.h are actually correct for your platform!

stdarg.h

In most cases, the correct version of stdarg.h is the version provided with your toolchain. However, sometimes there are issues with using your toolchain's stdarg.h. For example, it may attempt to draw in header files that do not exist in NuttX or perhaps the header files that it uses are not compatible with the NuttX header files. In those cases, you can use an architecture-specific stdarg.h header file by defining CONFIG_ARCH_STDARG_H=y. See the discussion above for the math.h header. This setting works exactly the same for the stdarg.h header file.
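The CONFIG_ARCH_MATH_H (and CONFIG_ARCH_STDARG_H) redirection described above amounts to a simple file copy performed by the top-level Makefile. The sketch below mimics that mechanism using throwaway directories as stand-ins for the real source tree; it is an illustration, not the actual Makefile logic.

```shell
# Sketch of the CONFIG_ARCH_MATH_H mechanism described above.
# Throwaway directories stand in for the real NuttX source tree.
mkdir -p include/nuttx/lib

# The stub header only redirects to the architecture-specific header:
echo '#include <arch/math.h>' > include/nuttx/lib/math.h

# When CONFIG_ARCH_MATH_H=y, the top-level Makefile copies the stub
# into place as the system math.h:
cp include/nuttx/lib/math.h include/math.h

cat include/math.h
```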
CONFIGURING NUTTX
^^^^^^^^^^^^^^^^^

Instantiating "Canned" Configurations
-------------------------------------

configure.sh and configure.bat:

"Canned" NuttX configuration files are retained in:

  configs/<board-name>/<config-dir>

Where <board-name> is the name of your development board and <config-dir> is the name of the sub-directory containing a specific configuration for that board. Only a few steps are required to instantiate a NuttX configuration, but to make the configuration even easier there are scripts available in the tools/ sub-directory that combine those simple steps into one command.

There is one tool for use with any Bash-like shell that performs the configuration steps. It is used as follows:

  tools/configure.sh <board-name>/<config-dir>

There is an alternative Windows batch file that can be used in the Windows native environment like:

  tools\configure.bat <board-name>\<config-dir>

And, to make sure that other platforms are supported, there is also a C program at tools/configure.c that can be compiled to establish the board configuration. See tools/README.txt for more information about these scripts.

General information about configuring NuttX can be found in:

  {TOPDIR}/configs/README.txt
  {TOPDIR}/configs/<board-name>/README.txt

The Hidden Configuration Scripts:

As mentioned above, there are only a few simple steps to instantiating a NuttX configuration. Those steps are hidden by the configuration scripts but are summarized below:

1. Copy Files

   Configuring NuttX requires only copying two files from the <config-dir> to the directory where you installed NuttX (TOPDIR):

   Copy configs/<board-name>/<config-dir>/Make.defs to {TOPDIR}/Make.defs

   OR

   Copy configs/<board-name>/scripts/Make.defs to {TOPDIR}/Make.defs

   Make.defs describes the rules needed by your toolchain to compile and link code. You may need to modify this file to match the specific needs of your toolchain.
   NOTE that a configuration may have its own unique Make.defs file in its configuration directory or it may use a common Make.defs file for the board in the scripts/ directory. The first takes precedence.

   Copy configs/<board-name>/<config-dir>/defconfig to {TOPDIR}/.config

   The defconfig file holds the actual build configuration. This file is included by all other make files to determine what is included in the build and what is not. This file is also used to generate a C configuration header at include/nuttx/config.h.

   Copy other, environment-specific files to {TOPDIR}

   This might include files like .gdbinit or IDE configuration files like .project or .cproject.

2. Refresh the Configuration

   New configuration settings may be added or removed. Existing settings may also change their values or options. This must be handled by refreshing the configuration as described below.

   NOTE: NuttX uses only compressed defconfig files. For the NuttX defconfig files, this refreshing step is *NOT* optional; it is also necessary to uncompress and regenerate the full .config file. This is discussed further below.

Refreshing Configurations
-------------------------

Configurations can get out of date. As new configuration settings are added or removed, or as dependencies between configuration settings change, the contents of a default configuration can become out of sync with the build system. Hence, it is a good practice to "refresh" each configuration after configuring and before making. To refresh the configuration, use the NuttX Configuration Tool like this:

  make oldconfig

AFTER you have instantiated the NuttX configuration as described above. The configuration step copied the .config file into place in the top-level NuttX directory; the 'make oldconfig' step will then operate on that .config file to bring it up-to-date.
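The file-copy steps hidden by the configuration scripts can be sketched as below. The board name 'myboard' and configuration name 'nsh' are hypothetical, and the mkdir/echo lines create stand-ins for a real board configuration directory; in a real tree you would follow the copies with 'make oldconfig' or 'make olddefconfig' as described above.

```shell
# Manual equivalent of the copy steps that tools/configure.sh hides.
# 'myboard' and 'nsh' are hypothetical names; the mkdir/echo/touch
# lines create stand-ins for a real board configuration directory.
TOPDIR=.
mkdir -p configs/myboard/nsh
echo 'CONFIG_EXAMPLE=y' > configs/myboard/nsh/defconfig
touch configs/myboard/nsh/Make.defs

# Step 1: copy the two files into place:
cp configs/myboard/nsh/Make.defs "$TOPDIR/Make.defs"
cp configs/myboard/nsh/defconfig "$TOPDIR/.config"

ls -a "$TOPDIR"
```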
If your configuration is out of date, you will be prompted by 'make oldconfig' to resolve the issues detected by the configuration tool, that is, to provide values for the new configuration options in the build system. Doing this can save you a lot of problems down the road due to obsolete settings in the default board configuration file. The NuttX configuration tool is discussed in more detail in the following paragraph.

Confused about what the correct value for a new configuration item should be? Enter ? in response to the 'make oldconfig' prompt and it will show you the help text that goes with the option.

If you don't want to make any decisions and are willing to just accept the recommended default value for each new configuration item, an even easier way is:

  make olddefconfig

The olddefconfig target will simply bring your configuration up to date with the current Kconfig files, setting any new options to the default value. No questions asked.

NuttX Configuration Tool
------------------------

An automated tool has been incorporated to support re-configuration of NuttX. This automated tool is based on the kconfig-frontends application. (A snapshot of this tool is also available from the tools repository.) This application provides a tool called 'kconfig-mconf' that is used by the NuttX top-level Makefile. The following make target is provided:

  make menuconfig

This make target will bring up the NuttX configuration menus.

WARNING: Never do 'make menuconfig' on a configuration that has not been converted to use the kconfig-frontends tools! This will damage your configuration. How do we tell a new configuration from an old one? See "Incompatibilities with Older Configurations" below.

The 'menuconfig' make target depends on two things:

1. The Kconfig configuration data files that appear in almost all NuttX directories. These data files are the part that is still under development (patches are welcome!).
   The Kconfig files contain configuration information for the configuration settings relevant to the directory in which the Kconfig file resides. NOTE: For a description of the syntax of this configuration file, see kconfig-language.txt in the tools repository.

2. The 'kconfig-mconf' tool. 'kconfig-mconf' is part of the kconfig-frontends package. You can download that package from the website or you can use the snapshot in the tools repository.

   Building kconfig-frontends under Linux may be as simple as 'configure; make; make install', but there may be some build complexities, especially if you are building under Cygwin. See the more detailed build instructions in the top-level README.txt file of the tools repository.

   The 'make install' step will, by default, install the 'kconfig-mconf' tool at /usr/local/bin/mconf. Wherever you choose to install 'kconfig-mconf', make certain that your PATH variable includes a path to that installation directory.

   The kconfig-frontends tools will not build in a native Windows environment directly "out-of-the-box". For the Windows native case, you should use the modified version of kconfig-frontends.

The basic configuration order is "bottom-up":

- Select the build environment,
- Select the processor,
- Select the board,
- Select the supported peripherals,
- Configure the device drivers,
- Configure the application options on top of this.

This is pretty straightforward for creating new configurations but may be less intuitive for modifying existing configurations.

Another ncurses-based tool that is an alternative to kconfig-mconf is kconfig-nconf. The differences are primarily in the aesthetics of the UI.
If you have kconfig-nconf built, then you can invoke that front end with:

  make nconfig

If you have an environment that supports the Qt or GTK graphical systems (probably KDE or Gnome, respectively, or Cygwin under Windows with Qt or GTK installed), then you can also build the graphical kconfig-frontends, kconfig-qconf and kconfig-gconf. In these cases, you can start the graphical configurator with either:

  make qconfig

or

  make gconfig

Some keyboard shortcuts supported by kconfig-mconf, the tool that runs when you do 'make menuconfig':

- '?' will bring up the mconf help display.
- '/' can be used to find configuration selections.
- 'Z' can be used to reveal hidden configuration options.

These last two shortcuts are described further in the following paragraphs.

Finding Selections in the Configuration Menus
---------------------------------------------

The NuttX configuration options have gotten complex and it can be very difficult to find options in the menu trees if you are not sure where to look. The "basic configuration order" described above can help to narrow things down.

But if you know exactly what configuration setting you want to select, say CONFIG_XYZ, but not where to find it, then the 'make menuconfig' version of the tool offers some help: By pressing the '/' key, the tool will bring up a menu that will allow you to search for a configuration item. Just enter the string CONFIG_XYZ and press 'ENTER'. It will show you not only where to find the configuration item, but also all of the dependencies related to the configuration item.

Reveal Hidden Configuration Options
-----------------------------------

If you type 'Z', then kconfig-mconf will change what is displayed. Normally, only enabled features that have all of their dependencies met are displayed.
That is, of course, not very useful if you would like to discover new options or if you are looking for an option and do not realize that the dependencies have not yet been selected and, hence, it is not displayed.

But if you enter 'Z', then every option will be shown, whether or not its dependencies have been met. You can then see everything that could be selected with the right dependency selections. These additional options will be shown with '-' for the selection and for the value (since they cannot be selected and have no value). About all you can do is select the <Help> option to see what the dependencies are.

Make Sure that You are on the Right Platform
--------------------------------------------

Saved configurations may run on Linux, Cygwin (32- or 64-bit), or other platforms. The platform characteristics can be changed using 'make menuconfig'. Sometimes this can be confusing due to the differences between the platforms. Enter sethost.sh:

sethost.sh is a simple script that changes a configuration to your host platform. This can greatly simplify life if you use many different configurations. For example, if you are running on Linux and you configure like this:

  tools/configure.sh board/configuration

Then you can use the following command to both (1) make sure that the configuration is up to date, AND (2) set the configuration up correctly for Linux:

  tools/sethost.sh -l

Or, if you are on a Windows/Cygwin 64-bit platform:

  tools/sethost.sh -c

Or, for MSYS/MSYS2:

  tools/sethost.sh -g

Other options are available from the help option built into the script.
You can see all options with:

  tools/sethost.sh -h

Recently, the options to the configure.sh (and configure.bat) scripts have been extended so that you can set up the configuration, select the host platform that you use, and uncompress and refresh the defconfig file all in one command like:

  tools/configure.sh -l board/configuration

for a Linux host, or for a Windows/Cygwin host:

  tools/configure.sh -c board/configuration

Other options are available from the help option built into the script. You can see all options with:

  tools/configure.sh -h

Comparing Two Configurations
----------------------------

If you try to compare two configurations using 'diff', you will probably not be happy with the result. There are superfluous things added to the configuration files that make comparisons with the human eye difficult. There is a tool at nuttx/tools/cmpconfig.c that can be built to simplify these comparisons. The output from this difference tool will show only the meaningful differences between two configuration files. This tool is built as follows:

  cd nuttx/tools
  make -f Makefile.host

This will create a program called 'cmpconfig' or 'cmpconfig.exe' on Windows.

Why would you want to compare two configuration files? Here are a few of the reasons why I do this:

1. When I create a new configuration I usually base it on an older configuration and I want to know, "What are the options that I need to change to add the new feature to the older configurations?" For example, suppose that I have a boardA/nsh configuration and I want to create a boardA/nxwm configuration. Suppose I already have boardB/nsh and boardB/nxwm configurations. Then by comparing the boardB/nsh with the boardB/nxwm I can see the modifications that I would need to make to my boardA/nsh to create a new boardA/nxwm.

2. But the most common reason that I use the 'cmpconfig' program is to check the results of "refreshing" a configuration with 'make oldconfig' (see the paragraph "Refreshing Configurations" above).
   The 'make oldconfig' command will make changes to my configuration
   and, using 'cmpconfig', I can see precisely what those changes were
   and whether any should be of concern to me.

3. The 'cmpconfig' tool can also be useful when converting older,
   legacy manual configurations to the current configurations based on
   the kconfig-frontends tools.  See the following paragraph.

Making defconfig Files
----------------------

.config Files as defconfig Files:

  The minimum defconfig file is simply the generated .config file with
  the CONFIG_APPS_DIR setting removed or commented out.  That setting
  provides the name and location of the apps/ directory relative to the
  nuttx build directory.  The default is ../apps/, however, the apps
  directory may be in any other location and may have a different name.
  For example, the apps directory in versioned NuttX releases is always
  named apps-xx.yy, where xx.yy is the version number.

Finding the apps/ Directory Path:

  When the default configuration is installed using one of the scripts
  or programs in the NuttX tools directory, there will be an option to
  provide the path to the apps/ directory.  If it is not provided, then
  the configure tool will look around and try to make a reasonable
  decision about where the apps/ directory is located.

Compressed defconfig Files:

  The Makefile also supports an option to generate very small defconfig
  files.  The .config files are quite large and complex, but most of the
  settings in the .config file simply have the default values from the
  Kconfig files.  These .config files can be converted into a small
  defconfig file with:

    make savedefconfig

  That make target will generate a defconfig file in the top-level
  directory.  The size reduction is really quite remarkable:

    wc -l .config defconfig
      1085 .config
        82 defconfig
      1167 total

  In order to be usable, the .config file installed from the compressed
  defconfig file must be reconstituted using:

    make olddefconfig

  NOTE 1: Only compressed defconfig files are retained in the NuttX
  repository.
  All patches and PRs that attempt to add or modify a defconfig file
  MUST use the compressed defconfig format as created by
  'make savedefconfig'.

  NOTE 2: When 'make savedefconfig' runs, it will try several things,
  some of which are expected to fail.  In these cases you will see an
  error message from make followed by "(ignored)".  You should ignore
  these messages.

  CAUTION: This size reduction was accomplished by removing all settings
  from the .config file that were at their default values.  'make
  olddefconfig' can regenerate the original .config file by simply
  restoring those default settings.  The underlying assumption here is,
  of course, that the default settings do not change.  If the default
  settings change, and they often do, then the original .config may not
  be reproducible.  So if your project requires 100% reproducibility
  over a long period of time, you may want to save the complete .config
  files instead of the standard, compressed defconfig files.

Configuring with "Compressed" defconfig Files:

  As described above, all NuttX defconfig files are compressed using
  'make savedefconfig'.  These compressed defconfig files are generally
  not fully usable as they are and may not build the target binaries
  that you want because the compression process removed all of the
  default settings from the defconfig file.  To restore the default
  settings, you should run the following after configuring:

    make olddefconfig

  That will restore the missing defaulted values.

  Using this command after configuring is generally a good practice
  anyway:  Even if the defconfig files are not "compressed" in this
  fashion, the defconfig file may be old and the only way to assure that
  the installed .config is up to date is via 'make oldconfig' or 'make
  olddefconfig'.  See the paragraph above entitled "Refreshing
  Configurations" for additional information.
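The effect of this compression can be pictured with ordinary shell
tools.  The following is only a sketch of the idea -- the file names and
option values are invented, and the real 'make savedefconfig' works
against the Kconfig defaults rather than a flat list of them:

```shell
# Work in a scratch directory; these files are illustrative only.
tmp=$(mktemp -d)
cd "$tmp"

# A pretend full .config and the pretend Kconfig default values.
printf 'CONFIG_DEBUG=n\nCONFIG_NSH=y\nCONFIG_HEAP_SIZE=4096\n' > full.config
printf 'CONFIG_DEBUG=n\nCONFIG_HEAP_SIZE=8192\n' > kconfig.defaults

# "Compress": drop every line that exactly matches a default value,
# keeping only the settings that actually differ from the defaults.
grep -vxF -f kconfig.defaults full.config > mini.defconfig
cat mini.defconfig
# prints: CONFIG_NSH=y
#         CONFIG_HEAP_SIZE=4096
```

Reconstitution ('make olddefconfig') is the inverse: the build system
re-applies the Kconfig default value for every setting that the
compressed defconfig file does not mention.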
Incompatibilities with Older Configurations
-------------------------------------------

***** WARNING *****

The current NuttX build system supports *only* the new compressed
defconfig configuration files generated using the kconfig-frontends
tools as described in the preceding section.  Support for the older,
legacy, manual configurations was eliminated in NuttX 7.0; support for
uncompressed .config-files-as-defconfig files was eliminated after
NuttX-7.21.  All configurations must now be done using the
kconfig-frontends tool.  The older manual configurations and the new
kconfig-frontends configurations are not compatible.  Old legacy
configurations can *not* be used with the kconfig-frontends tool and,
hence, cannot be used with releases of NuttX 7.0 and beyond:  If you run
'make menuconfig' with a legacy configuration, the resulting
configuration will probably not be functional.

Q: How can I tell if a configuration is a new kconfig-frontends
   configuration or an older, manual configuration?
A: Only old, manual configurations will have an appconfig file.

Q: How can I convert an older, manual configuration into a new,
   kconfig-frontends configuration?
A: Refer to

***** WARNING *****

As described above, whenever you use a configuration, you really should
always refresh the configuration with the following command *before* you
make NuttX:

  make oldconfig

OR

  make olddefconfig

This will make sure that the configuration is up-to-date in the event
that it has lapsed behind the current NuttX development (see the
paragraph "Refreshing Configurations" above).  But this only works with
*new* configuration files created with the kconfig-frontends tools.

Further, this step is *NOT* optional with the new, compressed defconfig
files.  It is a necessary step that will also uncompress the defconfig
file, regenerating the .config and making it usable for NuttX builds.

Never do 'make oldconfig' (OR 'make menuconfig') on a configuration that
has not been converted to use the kconfig-frontends tools!
This will damage your configuration.

NuttX Configuration Tool under DOS
----------------------------------

Recent versions of NuttX require the kconfig-tweak tool for some
configuration steps; that tool is not available in the packages above.
However, there has been an update to these Kconfig Windows tools that
does include kconfig-tweak.  Source code is available here: and

It is also possible to use the version of kconfig-frontends built under
Cygwin outside of the Cygwin "sandbox" in a native Windows environment:

1. You can run the configuration tool using Cygwin.  However, the
   Cygwin Makefile.win will complain, so to do this you will have to
   manually edit the .config file:

   a. Delete the line: CONFIG_WINDOWS_NATIVE=y

   b. Change the apps/ directory path, CONFIG_APPS_DIR, to use
      Unix-style delimiters.  For example, change "..\apps" to "../apps"

   And of course, after you use the configuration tool you need to
   restore CONFIG_WINDOWS_NATIVE=y and the correct CONFIG_APPS_DIR.

2. You can, with some effort, run the Cygwin kconfig-mconf tool
   directly in the Windows console window.  In this case, you do not
   have to modify the .config file, but there are other complexities:

   a. You need to temporarily set the Cygwin directories in the PATH
      variable and then run kconfig-mconf manually like:

        kconfig-mconf Kconfig

      There is a Windows batch file at tools/kconfig.bat that automates
      these steps:

        tools/kconfig menuconfig

   b. There is an issue with accessing DOS environment variables from
      the Cygwin kconfig-mconf running in the Windows console.  The
      following change to the top-level Kconfig file seems to work
      around these problems:

        config APPSDIR
            string
        -   option env="APPSDIR"
        +   default "../apps"

TOOLCHAINS
^^^^^^^^^^

Cross-Development Toolchains
----------------------------

In order to build NuttX for your board, you will have to obtain a cross-
compiler to generate code for your target CPU.  For each board
configuration, there is a README.txt file (at
configs/<board-name>/README.txt).
That README file contains suggestions and information about appropriate
tools and development environments for use with your board.

In any case, the PATH environment variable will need to be updated to
include the location where the build can find the toolchain binaries.

NuttX Buildroot Toolchain
-------------------------

For many configurations, a DIY set of tools is available for NuttX.
These tools can be downloaded from the NuttX Bitbucket.org file
repository.  After unpacking the buildroot tarball, you can find
instructions for building the tools in the buildroot/configs/README.txt
file.

Check the README.txt file in the configuration directory for your board
to see if you can use the buildroot toolchain with your board (this
README.txt file is located in configs/<board-name>/README.txt).

This toolchain is available for both the Linux and Cygwin development
environments.

Advantages:  (1) NuttX header files are built into the toolchain, and
(2) related support tools like the NXFLAT tools, the ROMFS genromfs
tool, and the kconfig-frontends tools can be built into your toolchain.

Disadvantages:  This toolchain is not as well supported as some other
toolchains.  GNU tools are not my priority and so the buildroot tools
often get behind.  For example, until recently there was no EABI support
in the NuttX buildroot toolchain for ARM.

NOTE:  For Cortex-M3/4, there are OABI and EABI versions of the
buildroot toolchains.  If you are using the older OABI toolchain, the
prefix for the tools will be arm-nuttx-elf-; for the EABI toolchain the
prefix will be arm-nuttx-eabi-.  If you are using the older OABI
toolchain with an ARM Cortex-M3/4, you will need to set
CONFIG_ARMV7M_OABI_TOOLCHAIN in the .config file in order to pick the
right tool prefix.

If the make system ever picks the wrong prefix for your toolchain, you
can always specify the prefix on the command line to override the
default, like:

  make CROSSDEV=arm-nuttx-elf

SHELLS
^^^^^^

The NuttX build relies on some shell scripts.
Some are inline in the Makefiles and many are executable scripts in the
tools/ directory.  The scripts were all developed using bash and many
contain bash shell dependencies.

Most of the scripts begin with #!/bin/bash to specifically select the
bash shell.  Some still have #!/bin/sh, but I haven't heard any
complaints, so these must not have bash dependencies.

There are two shell issues that I have heard of:

1. Linux where /bin/sh refers to an incompatible shell (like ksh or
   csh).

   In this case, bash is probably available and the #!/bin/bash at the
   beginning of the file should do the job.  If any scripts with
   #!/bin/sh fail, try changing that to #!/bin/bash and let me know
   about the change.

2. FreeBSD with the Bourne Shell and no bash shell.

   The other, reverse case has also been reported on FreeBSD setups
   that have the Bourne shell, but not bash.  In this case, #!/bin/bash
   fails but #!/bin/sh works okay.  My recommendation in this case is
   to create a symbolic link at /bin/bash that refers to the Bourne
   shell.  There may still be issues, however, with certain
   bash-centric scripts that will require modifications.

BUILDING NUTTX
^^^^^^^^^^^^^^

Building
--------

NuttX builds in-place in the source tree.  You do not need to create
any special build directories.  Assuming that your Make.defs is set up
properly for your toolchain and that your PATH environment variable
contains the path to where your cross-development tools are installed,
the following steps are all that are required to build NuttX:

  cd ${TOPDIR}
  make

At least one configuration (eagle100) requires additional command line
arguments on the make command.  Read
${TOPDIR}/configs/<board-name>/README.txt to see if that applies to
your target.

Re-building
-----------

Re-building is normally simple -- just type make again.

But there are some things that can "get you" when you use the Cygwin
development environment with Windows native tools.
The native Windows tools do not understand Cygwin's symbolic links, so
the NuttX make system does something weird:  It copies the configuration
directories instead of linking to them (it could, perhaps, use the NTFS
'mklink' command, but it doesn't).

A consequence of this is that you can easily get confused when you edit
a file in one of the linked (i.e., copied) directories, re-build NuttX,
and then not see your changes when you run the program.  That is because
the build is still using the version of the file in the copied
directory, not your modified file!

Older versions of NuttX did not support dependencies in this
configuration.  So a simple workaround for this annoying behavior was
the following when you re-build:

  make clean_context all

This 'make' command will remove the copied directories, re-copy them,
then make NuttX.

However, more recent versions of NuttX do support dependencies for the
Cygwin build.  As a result, the above command will cause everything to
be rebuilt (because it removes, and will cause the re-creation of, the
include/nuttx/config.h header file).  A much less graceful but still
effective command in this case is the following for the ARM
configuration:

  rm -rf arch/arm/src/chip arch/arm/src/board

This "kludge" simply removes the copied directories.  These directories
will be re-created when you do a normal 'make' and your edits will then
be effective.

Build Targets and Options
-------------------------

Build Targets:

Below is a summary of the build targets available in the top-level
NuttX Makefile:

all
  The default target builds the NuttX executable in the selected output
  formats.

clean
  Removes derived object files, archives, executables, and temporary
  files, but retains the configuration and context files and
  directories.

distclean
  Does 'clean' then also removes all configuration and context files.
  This essentially restores the directory structure to its original,
  unconfigured state.

Application housekeeping targets.
The APPDIR variable refers to the user application directory.  A sample
apps/ directory is included with NuttX, however, this is not treated as
part of NuttX and may be replaced with a different application
directory.  For the most part, the application directory is treated like
any other build directory in the Makefile script.  However, as a
convenience, the following targets are included to support housekeeping
functions in the user application directory from the NuttX build
directory.

apps_clean
  Perform the clean operation only in the user application directory.

apps_distclean
  Perform the distclean operation only in the user application
  directory.  The apps/.config file is preserved so that this is not a
  "full" distclean but more of a configuration "reset" for the
  application directory.

export
  The export target will package the NuttX libraries and header files
  into an exportable package.  Caveats:  (1) This needs some extension
  for the KERNEL build.  (2) The logic in tools/mkexport.sh only
  supports GCC and, for example, explicitly assumes that the archiver
  is 'ar'.

download
  This is a helper target that will rebuild NuttX and download it to
  the target system in one step.  The operation of this target depends
  completely upon the implementation of the DOWNLOAD command in the
  user Make.defs file.  It will generate an error if the DOWNLOAD
  command is not defined.

The following targets are used internally by the make logic but can be
invoked from the command line under certain conditions if necessary.

depend
  Create build dependencies.  (NOTE:  There is currently no support for
  build dependencies under Cygwin using Windows-native toolchains.)

context
  The context target is invoked on each target build to assure that
  NuttX is properly configured.  The basic configuration steps include
  creation of the config.h and version.h header files in the
  include/nuttx directory and the establishment of symbolic links to
  configured directories.
clean_context
  This is part of the distclean target.  It removes all of the header
  files and symbolic links created by the context target.

Build Options:

Of course, the value of any make variable can be overridden from the
make command line.  However, there is one particular variable
assignment option that may be useful to you:

V=1
  This is the build "verbosity flag."  If you specify V=1 on the make
  command line, you will see the exact commands used in the build.
  This can be very useful when adding new boards or tracking down
  compile time errors and warnings (contributed by Richard Cochran).

Native Windows Build
--------------------

The beginnings of a Windows native build are in place but still not
often used as of this writing.  The build was functional but, because
of lack of use, you may find some issues to be resolved with this build
configuration.

The Windows native build logic is initiated if CONFIG_WINDOWS_NATIVE=y
is defined in the NuttX configuration file.  This build:

- Uses all Windows style paths
- Uses primarily Windows batch commands from cmd.exe, with
- A few extensions from GNUWin32

In this build, you cannot use a Cygwin or MSYS shell.  Rather, the
build must be performed in a Windows console window.  ConEmu is a
better terminal than the standard-issue CMD.exe terminal; it can be
downloaded from: or .

Build Tools.  The build still relies on some Unix-like commands.  I use
the GNUWin32 tools that can be downloaded from using the 'Download all'
selection.  Individual packages can be downloaded instead if you know
what you are doing and want a faster download (no, I can't tell you
which packages you should or should not download).

Host Compiler:  I use the MinGW GCC compiler which can be downloaded
from.  If you are using GNUWin32, then it is recommended that you not
install the optional MSYS components as there may be conflicts.
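Since the native build borrows a handful of Unix-like commands, it can
save a failed half-build to check up front that they are actually
reachable on the PATH.  The sketch below is generic shell and the tool
list at the bottom is purely illustrative -- consult the GNUWin32 notes
in this README for what your build really needs:

```shell
# check_tools: report which of the given commands are on the PATH.
# The list passed at the bottom is only an example of the kind of
# Unix-like helpers (from GNUWin32 or elsewhere) a build may expect.
check_tools() {
  local status=0 t
  for t in "$@"; do
    if command -v "$t" >/dev/null 2>&1; then
      echo "ok: $t"
    else
      echo "MISSING: $t"
      status=1
    fi
  done
  return $status
}

check_tools make sed cp rm || echo "install the missing tools first"
```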
This capability should still be considered a work in progress because:
(1) it has not been verified on all targets and tools, and (2) it still
lacks some of the creature comforts of the more mature environments.

Installing GNUWin32
-------------------

The Windows native build will depend upon a few Unix-like tools that
can be provided either by MSYS or GNUWin32.  The GNUWin32 tools are
available from.

GNUWin32 provides ports of tools with a GPL or similar open source
license to modern MS-Windows (Microsoft Windows 2000 / XP / 2003 /
Vista / 2008 / 7).  See for a list of all of the tools available in the
GNUWin32 package.

The SourceForge project is located here:  The project is still being
actively supported (although some of the Windows ports have gotten very
old).

Some commercial toolchains include a subset of the GNUWin32 tools in
the installation.  My recommendation is that you download the GNUWin32
tools directly from the sourceforge.net website so that you will know
what you are using and can reproduce your build environment.

GNUWin32 Installation Steps:

The following steps will download and execute the GNUWin32 installer.

1. Download GetGNUWin32-x.x.x.exe from.  This is the installer.  The
   current version as of this writing is 0.6.3.

2. Run the installer.

3. Accept the license.

4. Select the installation directory.  My recommendation is the
   directory that contains this README file (<this-directory>).

5. After running GetGNUWin32-0.x.x.exe, you will have a new directory
   <this-directory>/GetGNUWin32

   Note that the GNUWin32 installer didn't install GNUWin32.  Instead,
   it installed another, smarter downloader.  That downloader is the
   GNUWin32 package management tool developed by the Open SSL project.

The following steps probably should be performed from inside a DOS
shell.

6. Change to the directory created by GetGNUWin32-x.x.x.exe:

     cd GetGNUWin32

7. Execute the download.bat script.  The download.bat script will
   download about 446 packages!
   Enough to have a very complete Linux-like environment under the DOS
   shell.  This will take a while.  This step only downloads the
   packages; the next step will install them.

     download

8. This step will install the downloaded packages.  The argument of the
   install.bat script is the installation location.  C:\gnuwin32 is the
   standard install location:

     install C:\gnuwin32

NOTE:  This installation step will install *all* GNUWin32 packages...
far more than you will ever need.  If disk space is a problem for you,
you might need to perform a manual installation of the individual ZIP
files that you will find in the <this-directory>/GetGNUWin32/packages
directory.

CYGWIN BUILD PROBLEMS
^^^^^^^^^^^^^^^^^^^^^

Performance
-----------

Build performance under Cygwin is really not so bad, although certainly
not as good as a Linux build.  However, often you will find that the
performance is not just bad but terrible.  If you are seeing awful
performance -- like two or three compilations per second -- the culprit
is usually your Windows anti-virus protection interfering with the
build tool program execution.

I use Cygwin quite often and I use Windows Defender.  In order to get
good build performance, I routinely keep the Windows Defender "Virus &
Threat Protection Settings" screen up:  I disable "Real-Time
Protection" just before entering 'make', then turn "Real-Time
Protection" back on when the build completes.  With this additional
nuisance step, I find that build performance under Cygwin is completely
acceptable.

Strange Path Problems
---------------------

If you see strange behavior when building under Cygwin, then you may
have a problem with your PATH variable.  For example, if you see
failures to locate files that are clearly present, that may mean that
you are using the wrong version of a tool.  For example, you may not be
using Cygwin's 'make' program at /usr/bin/make.
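One quick way to see every copy of a tool that the shell could find,
in PATH order, is a small loop like the following (a generic sketch;
'make' is just the example tool name):

```shell
# list_on_path: print every executable named $1 found along the PATH,
# in search order.  The first line printed is the one the shell (and
# therefore the build) will actually use.
list_on_path() {
  local IFS=':' d
  for d in $PATH; do
    [ -x "$d/$1" ] && printf '%s/%s\n' "$d" "$1"
  done
  return 0
}

list_on_path make
```

If the first line printed is not Cygwin's /usr/bin/make, your PATH
needs attention, as described below.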
Try:

  which make
  /usr/bin/make

When you install some toolchains (such as Yagarto or CodeSourcery
tools), they may modify your PATH variable to include a path to their
binaries.  At that location, they may have GNUWin32 versions of the
tools.  So you might actually be using a version of make that does not
understand Cygwin paths.

The solution is either:

1. Edit your PATH to remove the path to the GNUWin32 tools, or

2. Put /usr/local/bin, /usr/bin, and /bin at the front of your path:

     export PATH=/usr/local/bin:/usr/bin:/bin:$PATH

Windows Native Toolchain Issues
-------------------------------

There are many popular Windows native toolchains that may be used with
NuttX.  Examples include CodeSourcery (for Windows), devkitARM, and
several vendor-provided toolchains.  There are several limitations with
using a Windows-based toolchain in a Cygwin environment.  The three
biggest are:

1. The Windows toolchain cannot follow Cygwin paths.  Path conversions
   are performed automatically in the Cygwin makefiles using the
   'cygpath' utility, but you might easily find some new path problems.
   If so, check out 'cygpath -w'.

2. Windows toolchains cannot follow Cygwin symbolic links.  Many
   symbolic links are used in NuttX (e.g., include/arch).  The make
   system works around these problems for the Windows tools by copying
   directories instead of linking them.  But this can also cause some
   confusion for you:  For example, you may edit a file in a "linked"
   directory and find that your changes had no effect.  That is because
   you are building the copy of the file in the "fake" symbolic
   directory.  If you use a Windows toolchain, you should get in the
   habit of re-building like this:

     make clean_context all

   An alias in your .bashrc file might make that less painful.  The
   rebuild is not as long as you might think because there is no
   dependency checking if you are using a native Windows toolchain.
   That brings us to #3:

General Pre-built Toolchain Issues
----------------------------------

To continue with the list of "Windows Native Toolchain Issues" we can
add the following.  These, however, are really just issues that you
will have if you use any pre-built toolchain (vs. building the NuttX
toolchain from the NuttX buildroot package):

There may be incompatibilities with header files, libraries, and
compiler built-in functions as detailed below.  For the most part,
these issues are handled in the existing make logic.  But if you are
breaking new ground, then you may encounter these:

4. Header Files.  Most pre-built toolchains will build with a foreign C
   library (usually newlib, but maybe uClibc or glibc if you are using
   a Linux toolchain).  This means that the header files from the
   foreign C library will be built into the toolchain.  So if you
   "include <stdio.h>", you will get the stdio.h from the incompatible,
   foreign C library and not the NuttX stdio.h (at
   nuttx/include/stdio.h) that you wanted.  This can cause confusion in
   the builds, and you must always be sure that -nostdinc is included
   in the CFLAGS.  That will assure that you take the include files
   only from the NuttX include directories.

5. Libraries.  What was said above about header files applies to
   libraries.  You do not want to include code from the libraries of
   any foreign C libraries built into your toolchain.  If this happens
   you will get perplexing errors about undefined symbols.  To avoid
   these errors, you will need to add -nostdlib to your CFLAGS flags to
   assure that you only take code from the NuttX libraries.

   This, however, may cause other issues for libraries in the toolchain
   that you do want (like libgcc.a or libm.a).  These are special-cased
   in most Makefiles, but you could still run into issues of missing
   libraries.

6. Built-Ins.  Some compilers target a particular operating system.
   Many people would, for example, like to use the same toolchain to
   develop Linux and NuttX software.
   Compilers built for other operating systems may generate
   incompatible built-in logic and, for this reason, -fno-builtin
   should also be included in your C flags.

   And finally, you may not be able to use NXFLAT.

7. NXFLAT.  If you use a pre-built toolchain, you will lose all support
   for NXFLAT.  NXFLAT is a binary format described in
   Documentation/NuttXNxFlat.html.  It may be possible to build
   standalone versions of the NXFLAT tools; there are a few examples of
   this in the buildroot repository at.  However, it is possible that
   there could be interoperability issues with your toolchain since
   they will be using different versions of binutils and possibly
   different ABIs.

Building Original Linux Boards in Cygwin
----------------------------------------

Some default board configurations are set to build under Linux and
others to build under Windows with Cygwin.  Various default toolchains
may also be used in each configuration.  It is possible to change the
default setup.  Here, for example, is what you must do in order to
compile a default Linux configuration in the Cygwin environment using
the CodeSourcery for Windows toolchain.  After instantiating a "canned"
NuttX configuration, run the target 'menuconfig' and set the following
items:

  Build Setup->Build Host Platform->Windows
  Build Setup->Windows Build Environment->Cygwin
  System Type->Toolchain Selection->CodeSourcery GNU Toolchain under Windows

In Windows 7 it may be required to open the Cygwin shell as
Administrator ("Run As" option, right button) if you find errors like
"Permission denied".

Recovering from Bad Configurations
----------------------------------

Many people make the mistake of configuring NuttX with the "canned"
configuration and then just typing 'make' with disastrous consequences;
the build may fail with mysterious, uninterpretable, and irrecoverable
build errors.  If, for example, you do this with an unmodified Linux
configuration in a Windows/Cygwin environment, you will corrupt the
build environment.
The environment will be corrupted because of POSIX vs Windows path
issues and issues related to symbolic links.  If you make the mistake
of doing this, the easiest way to recover is to just start over:  Do
'make distclean' to remove every trace of the corrupted configuration,
reconfigure from scratch, and make certain that you set the
configuration correctly for your platform before attempting to make
again.  Just fixing the configuration file after you have instantiated
the bad configuration with 'make' is not enough.

DOCUMENTATION
^^^^^^^^^^^^^

Additional information can be found in the Documentation/ directory and
also in README files that are scattered throughout the source tree.
The documentation is in HTML and can be accessed by loading the
following file into your Web browser:

  Documentation/index.html

NuttX documentation is also available online at.

Below is a guide to the available README files in the NuttX source
tree:

nuttx/ | |- arch/ | | | |- arm/ | | `- src | | |- lpc214x/README.txt | | `- stm32l4/README.txt | |- renesas/ | | |- include/ | | | `-README.txt | | |- src/ | | | `-README.txt | |- x86/ | | |- include/ | | | `-README.txt | | `- src/ | | `-README.txt | `- z80/ | | `- src/ | | |- z80/README.txt | | `- z180/README.txt, z180_mmu.txt | `- README.txt |- audio/ | `-README.txt |- binfmt/ | `-libpcode/ | `-README.txt |- configs/ | |- amber/ | | `- README.txt | |- arduino-mega2560/ | | `- README.txt | |- arduino-due/ | | `- README.txt | |- avr32dev1/ | | `- README.txt | |- b-l475e-iot01a/ | | `- README.txt | |- bambino-200e/ | | `- README.txt | |- c5471evm/ | | `- README.txt | |- clicker2-stm32 | | `- README.txt | |- cloudctrl | | `- README.txt | |- demo9s12ne64/ | | `- README.txt | |- dk-tm4c129x/ | | `- README.txt | |- ea3131/ | | `- README.txt | |- ea3152/ | | `- README.txt | |- eagle100/ | | `- README.txt | |- efm32-g8xx-stk/ | | `- README.txt | |- efm32gg-stk3700/ | | `- README.txt | |- ekk-lm3s9b96/ | | `- README.txt | |- ez80f910200kitg/ | | |-
ostest/README.txt | | `- README.txt | |- ez80f910200zco/ | | |- dhcpd/README.txt | | |- httpd/README.txt | | |- nettest/README.txt | | |- nsh/README.txt | | |- ostest/README.txt | | |- poll/README.txt | | `- README.txt | |- fire-stm32v2/ | | `- README.txt | |- flipnclick-pic32mz/ | | `- README.txt | |- flipnclick-sam3x/ | | `- README.txt | |- freedom-k28f/ | | `- README.txt | |- freedom-k64f/ | | `- README.txt | |- freedom-k66f/ | | `- README.txt | |- freedom-kl25z/ | | `- README.txt | |- freedom-kl26z/ | | `- README.txt | |- gapuino/ | | `- README.txt | |- hymini-stm32v/ | | `- README.txt | |- imxrt1050-evk | | `- README.txt | |- kwikstik-k40/ | | `- README.txt | |- launchxl-cc1312r1/ | | `- README.txt | |- launchxl-tms57004/ | | `- README.txt | |- lincoln60/ | | `- README.txt | |- lm3s6432-s2e/ | | `- README.txt | |- lm3s6965-ek/ | | `- README.txt | |- lm3s8962-ek/ | | `- README.txt | |- lpc4330-xplorer/ | | `- README.txt | |- lpc4337-ws/ | | `- README.txt | |- lpc4357-evb/ | | `- README.txt | |- lpc4370-link2/ | | `- README.txt | |- lpcxpresso-lpc1115/ | | `- README.txt | |- lpcxpresso-lpc1768/ | | `- README.txt | |- lpcxpresso-lpc54628/ | | `- README.txt | |- maple/ | | `- README.txt | |- max32660-evsys/ | | `- README.txt | |- mbed/ | | `- README.txt | |- mcb1700/ | | `- README.txt | |- mcu123-lpc214x/ | | `- README.txt | |- metro-m4/ | | `- README.txt | |- micropendous3/ | | `- README.txt | |- mikroe-stm32f/ | | `- README.txt | |- mirtoo/ | | `- README.txt | |- misoc/ | | `- README.txt | |- moteino-mega/ | | `- README.txt | |- ne64badge/ | | `- README.txt | |- nrf52-generic/ | | `- README.txt | |- ntosd-dm320/ | | |- doc/README.txt | | `- README.txt | |- nucleo-144/ | | `- README.txt | |- nucleo-f072rb/ | | `- README.txt | |- nucleo-f091rc/ | | `- README.txt | |- nucleo-f303re/ | | `- README.txt | |- nucleo-f334r8/ | | `- README.txt | |- nucleo-f4x1re/ | | `- README.txt | |- nucleo-f410rb | | `- README.txt | |- nucleo-l432kc/ | | `- README.txt | |-
nucleo-l452re/ | | `- README.txt | |- nucleo-l476rg/ | | `- README.txt | |- nucleo-l496zg/ | | `- README.txt | |- nutiny-nuc120/ | | `- README.txt | |- olimex-efm32g880f129-stk/ | | `- README.txt | |- olimex-lpc1766stk/ | | `- README.txt | |- olimex-lpc2378/ | | `- README.txt | |- olimex-lpc-h3131/ | | `- README.txt | |- olimex-stm32-h405/ | | `- README.txt | |- olimex-stm32-h407/ | | `- README.txt | |- olimex-stm32-p107/ | | `- README.txt | |- olimex-stm32-p207/ | | `- README.txt | |- olimex-stm32-p407/ | | `- README.txt | |- olimex-strp711/ | | `- README.txt | |- open1788/ | | `- README.txt | |- p112/ | | `- README.txt | |- pcduino-a10/ | | `- README.txt | |- photon/ | | `- README.txt | |- pic32mx-starterkit/ | | `- README.txt | |- pic32mx7mmb/ | | `- README.txt | |- pic32mz-starterkit/ | | `- README.txt | |- pizero/ | | `- README.txt | |- qemu-i486/ | | `- README.txt | |- sabre-6quad/ | | `- README.txt | |- sama5d2-xult/ | | `- README.txt | |- sama5d3x-ek/ | | `- README.txt | |- sama5d3-xplained/ | | `- README.txt | |- sama5d4-ek/ | | `- README.txt | |- samd20-xplained/ | | `- README.txt | |- samd21-xplained/ | | `- README.txt | |- saml21-xplained/ | | `- README.txt | |- sam3u-ek/ | | `- README.txt | |- sam4cmp-db | | `- README.txt | |- sam4e-ek/ | | `- README.txt | |- sam4l-xplained/ | | `- README.txt | |- sam4s-xplained/ | | `- README.txt | |- sam4s-xplained-pro/ | | `- README.txt | |- same70-xplained/ | | `- README.txt | |- samv71-xult/ | | `- README.txt | |- sim/ | | |- include/README.txt | | `- README.txt | |- shenzhou/ | | `- README.txt | |- skp16c26/ | | `- README.txt | |- stm3210e-eval/ | | |- RIDE/README.txt | | `- README.txt | |- stm3220g-eval/ | | `- README.txt | |- stm3240g-eval/ | | `- README.txt | |- stm32_tiny/ | | `- README.txt | |- stm32f103-minimum/ | | `- README.txt | |- stm32f3discovery/ | | `- README.txt | |- stm32f4discovery/ | | `- README.txt | |- stm32f411e-disco/ | | `- README.txt | |- stm32f429i-disco/ | | |- fb/README.txt | | `-
README.txt | |- stm32f746g-disco/ | | |- fb/README.txt | | |- nxdemo/README.txt | | |- nxterm/README.txt | | `- README.txt | |- stm32f769i-disco/ | | `- README.txt | |- stm32l476-mdk/ | | `- README.txt | |- stm32l476vg-disco/ | | `- README.txt | |- stm32l4r9ai-disco/ | | `- README.txt | |- stm32ldiscovery/ | | `- README.txt | |- stm32vldiscovery/ | | `- README.txt | |- sure-pic32mx/ | | `- README.txt | |- teensy-2.0/ | | `- README.txt | |- teensy-3.x/ | | `- README.txt | |- teensy-lc/ | | `- README.txt | |- tm4c123g-launchpad/ | | `- README.txt | |- tm4c1294-launchpad/ | | `- README.txt | |- twr-k60n512/ | | `- README.txt | |- tms570ls31x-usb-kit/ | | `- README.txt | |- twr-k64f120m/ | | `- README.txt | |- u-blox-co27/ | | `- README.txt | |- ubw32/ | | `- README.txt | |- us7032evb1/ | | `- README.txt | |- viewtool-stm32f107/ | | `- README.txt | |- xmc5400-relax/ | | `- README.txt | |- z16f2800100zcog/ | | |- ostest/README.txt | | |- pashello/README.txt | | `- README.txt | |- z80sim/ | | `- README.txt | |- z8encore000zco/ | | |- ostest/README.txt | | `- README.txt | |- z8f64200100kit/ | | |- ostest/README.txt | | `- README.txt | |- zkit-arm-1769/ | | `- README.txt | |- zp214xpa/ | | `- README.txt | `- README.txt |- drivers/ | |- eeprom/ | | `- README.txt | |- lcd/ | | |- README.txt | | `- pcf8574_lcd_backpack_readme.txt | |- mtd/ | | `- README.txt | |- sensors/ | | `- README.txt | |- syslog/ | | `- README.txt | `- README.txt |- fs/ | |- binfs/ | | `- README.txt | |- cromfs/ | | `- README.txt | |- mmap/ | | `- README.txt | |- nxffs/ | | `- README.txt | |- smartfs/ | | `- README.txt | |- procfs/ | | `- README.txt | |- spiffs/ | | `- README.md | `- unionfs/ | `- README.txt |- graphics/ | `- README.txt |- libs/ | |- README.txt | |- libc/ | | |- zoneinfo | | | `- README.txt | | `- README.txt | |- libdsp/ | | `- README.txt | |- libnx/ | | |- nxfonts | | | `- README.txt | | `- README.txt | |- libxx/ | `- README.txt |- mm/ | |- shm/ | | `- README.txt | `- README.txt |- net/ |
|- sixlowpan | | `- README.txt | `- README.txt |- pass1/ | `- README.txt |- syscall/ | `- README.txt `- tools/ `- README.txt Below is a guide to the available README files in the semi-optional apps/ source tree: apps/ |- examples/ | |- bastest/README.txt | |- json/README.txt | |- pashello/README.txt | `- README.txt |- gpsutils/ | `- minmea/README.txt |- graphics/ | |- tiff/README.txt | `- traveler/tools/tcledit/README.txt |- interpreters/ | |- bas/ | | `- README.txt | |- ficl/ | | `- README.txt | `- README.txt |- modbus/ | `- README.txt |- netutils/ | |- discover/ | | `- README.txt | |- ftpc/ | | `- README.txt | |- json/ | | `- README.txt | |- telnetd/ | | `- README.txt | `- README.txt |- nshlib/ | `- README.txt |- NxWidgets/ | `- README.txt |- system/ | |- cdcacm/ | | `- README.txt | |- i2c/ | | `- README.txt | |- inifile/ | | `- README.txt | |- install/ | | `- README.txt | |- nsh/ | | `- README.txt | |- nxplayer/ | | `- README.txt | |- symtab/ | | `- README.txt | |- usbmsc/ | | `- README.txt | `- zmodem/ | `- README.txt `- wireless |- bluetooth/ | `- btsak/ | `- README.txt `- ieee802154 `- i8sak/ `- README.txt Additional README.txt files in the other, related repositories: NxWidgets/ |- Doxygen | `- README.txt |- tools | `- README.txt |- UnitTests | `- README.txt `- README.txt buildroot/ `- README.txt tools/ `- README.txt uClibc++/ `- README.txt pascal/ `- README.txt
jGuru Forums
Posted By:
Anonymous
Posted On:
Wednesday, February 12, 2003 03:21 PM
I have a problem compiling a JUnit test class. I use Gentoo Linux. I have a relatively up-to-date JDK: 1.4.1_01. My classpath is set correctly to use the junit package.
When I try to compile the test, it always says:
SimpleMathTest.java:11: TestCase() is not public in
junit.framework.TestCase; cannot be accessed from outside
package
public class SimpleMathTest extends TestCase
My class begins with:
import junit.framework.*;
public class SimpleMathTest extends TestCase
So, I don't understand; I have followed all the how-tos that I found and tried what the cookbook says, but nothing has worked.
Thanks in advance for the answer.
Re: Can't compile a Test
Posted By:
Jeanne_Boyarsky
Posted On:
Wednesday, February 12, 2003 05:35 PM
public SimpleMathTest(String name) { super(name);} | http://www.jguru.com/forums/view.jsp?EID=1056308 | CC-MAIN-2014-15 | refinedweb | 150 | 57.87 |
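For what it's worth, the reason that constructor fixes the error can be reproduced without JUnit at all. The `TestCase` below is a hypothetical stand-in for `junit.framework.TestCase` (whose no-arg constructor in that JUnit 3.x release was not accessible from other packages): when a subclass declares no constructor, javac generates an implicit no-arg constructor that calls `super()`, and compilation fails if `super()` is not accessible. Declaring a constructor that calls the accessible `super(String)` avoids the implicit one entirely.

```java
// Hypothetical stand-in for junit.framework.TestCase (JUnit 3.x era),
// with a no-arg constructor that subclasses cannot reach.
class TestCase {
    protected String fName;
    private TestCase() { }                         // inaccessible no-arg constructor
    public TestCase(String name) { fName = name; } // the accessible one
}

public class SimpleMathTest extends TestCase {
    // The fix: declare a constructor explicitly so javac does not generate
    // an implicit "SimpleMathTest() { super(); }", which would try to call
    // the inaccessible TestCase() and fail with the error quoted above.
    public SimpleMathTest(String name) { super(name); }

    public static void main(String[] args) {
        System.out.println(new SimpleMathTest("testAdd").fName); // prints testAdd
    }
}
```

Remove the `SimpleMathTest(String)` constructor and the same "cannot be accessed" class of error comes back, which is exactly what the question ran into.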
LinearGradient QML Type
Draws a linear gradient.
Properties
Detailed Description
A gradient is defined by two or more colors, which are blended seamlessly. The colors run from the given start point to the given end point.
Example
The following example shows how to apply the effect.
import QtQuick 2.0
import QtGraphicalEffects 1.0

Item {
    width: 300
    height: 300

    LinearGradient {
        anchors.fill: parent
        start: Qt.point(0, 0)
        end: Qt.point(0, 300)
        gradient: Gradient {
            GradientStop { position: 0.0; color: "white" }
            GradientStop { position: 1.0; color: "black" }
        }
    }
}

end

This property defines the ending point where the color at gradient position of 1.0 is rendered. Colors at smaller position values are rendered linearly towards the start point. The point is given in pixels and the default value is Qt.point(0, height). Setting the default values for the start and end results in a full height linear gradient on the y-axis.

start

This property defines the starting point where the color at gradient position of 0.0 is rendered. Colors at larger position values are rendered linearly towards the end point. The point is given in pixels and the default value is Qt.point(0, 0). Setting the default values for the start and end results in a full height linear gradient on the y-axis.
styled-components
Visual primitives for the component age. Use the best bits of ES6 and CSS to style your apps without stress!
BEM + SASS vs Styled Components
February 23, 2018 at 11:08am (Edited 3 years ago)
Do we have anyone who has switched from one to the other, OR who thinks either one better suits his / her needs?
We'd love to hear your thoughts and reasons!
February 23, 2018 at 1:34pm
CSS-in-JS scales much much better than any naming methodology could ever scale.
Imagine working in a team with fifty frontend developers, all of whom work on the same site(s). You might split your CSS into multiple files with Sass, but even then you end up with (what some people lovingly refer to as) append-only stylesheets: you never delete anything from them, because you never know where it might be used, so you can't. That's why they're append-only, all the developers only add to them but never take any CSS away.
It takes a lot of focused effort to scale a naming methodology. It can surely be done, and any naming methodology is better than none, but nothing scales better than not even worrying about a global namespace.
The best naming methodology is no naming methodology.
YMMV and this all depends on your team and your teams experience, but I don't think it makes sense to choose BEM over any CSS-in-JS solution because "BEM scales better"
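(For a concrete sense of why the global namespace stops mattering: below is a toy sketch, not the actual styled-components internals, of the core CSS-in-JS idea. Every style block gets a generated, collision-free class name, so no human ever has to name anything.)

```javascript
// Toy version of the CSS-in-JS trick: each style block gets a generated,
// unique class name, so there is no global namespace to manage by convention.
let counter = 0;
const sheet = [];

function css(rules) {
  const className = "sc-" + (counter++).toString(36); // e.g. "sc-0", "sc-1"
  sheet.push("." + className + " { " + rules + " }");
  return className;
}

const button = css("background: palevioletred; color: white;");
const title = css("font-size: 1.5em; color: palevioletred;");

console.log(button, title);   // two distinct class names, unique by construction
console.log(sheet.join("\n")); // the generated stylesheet
```

Delete the `button` component and its rule goes with it; there is nothing to hunt down in a shared stylesheet.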
Are you ever afraid to delete styles when using styled-components? No, why should you, they're only used in that component _and you know it_!
On top of that, you don't have to constantly be on the lookout for duplicate classes or anything like that. You can focus on the code, rather than having to worry about managing global namespace.
BEM is not only a naming convention for blocks, btw. You can use my bemto-components with styled-components (and it's the best way to use them), and get easy ways to do batch modification based on a prop (especially if it's only visual ones), plus a really easy way to declare and use "elements", which would be coupled with the initial styled component, would have classnames inherited from its name, etc.
It's totally possible to use styled-components AND BEM together, to mutual benefit.
Thanks a ton for putting it together; it's something I couldn't say clearly myself. Mostly because I did BEM 2 years ago on a relatively big React project, and after that project I found styled-components and chose it without hesitation.
So I clearly love SC, and not just emotionally but practically: I'm so productive when I can write CSS with the same concepts I'm using throughout the app (components, modules, JS interpolations, etc.).
As a reference, I want to share Max's answer on structuring styled-components:
A team went from BEM to Styled Components:
And why they use it:
Ohh I totally forgot about those two posts by ! I'm sure he can answer any question about the switch y'all have
Hey! 👋 Thanks, Max. I'm just getting started this morning ☕, but I'll read a little more and respond in a bit.
I've been on teams where BEM is a successful pattern, and on teams where styled-components is successful. The difference being that the team using BEM was smaller with fairly strong CSS skills and working on smaller applications. BEM is still a useful pattern for providing structure and preventing unintentional style collisions. But the main problem is that at some point (what some people call "at scale") maintaining consistency becomes really time consuming.
And to clarify those articles ^^ aren't about switching from BEM to SC, they're about how we were using SC, but still wanted some structure to help us organize and reuse components. That's something that BEM does really well, so we applied the pattern to our structure and it's been really helpful for our team.
Yeah, basically: you were using BEM -> switched to SC -> implemented some BEM concepts in an SC way, amirite?
We’ve structured our components in folders and subfolders (i.e. /Forms/Checkbox, /Layout/Flag, etc) to help with keeping everything organized. S-C meshes really well with React because they’re both centered around the idea of the encapsulated component.
BEM is a really great system when working with systems that don’t enforce component-based structures. Also I feel that the React-way of separating concerns with styled-components works much better than what you can achieve with separate files.
Naming is both good and bad. A lot of times thinking about how to name something will lead you to define its purpose and structure. That usually leads to writing more thoughtful code. But at the same time, naming is hard and error-prone. One of the superpowers SC (and CSS in JS) gives us is that we can let the library handle the cognitive load (which I think is what you're referring to, ). But naming is still a valuable practice, IMO. | https://spectrum.chat/styled-components/general/bem-sass-vs-styled-components~34b75b86-273d-434f-bc2b-dbb847678e89?m=MTUxOTM5Mjg3MTE3Mg== | CC-MAIN-2021-31 | refinedweb | 873 | 60.04 |
VFS_STATFS — return file system status
#include <sys/param.h>
#include <sys/mount.h>
#include <sys/vnode.h>
int
VFS_STATFS(struct mount *mp, struct statfs *sbp);
The VFS_STATFS() macro returns various pieces of information about the
file system, including recommended I/O sizes, free space, free inodes,
etc.
The arguments it expects are:
mp The file system.
sbp A statfs structure, as defined by <sys/mount.h>, into which
information about the file system is placed.
VFS(9), vnode(9)
This manual page was written by Doug Rabson.
Microcontroller Programming » How to create a loop timer while checking 2 pin inputs
1st time posting. I have done a lot of searching and playing with code and can't figure out how to get this to do what I want. I've got the if && to check for high on both pins and count using a delay, but it is not what I need.
What I need:
Loop and monitor, no problem. Monitor inputs and turn on outputs, no problem.
Figuring out what statement to use to monitor and time the input pin so it will go to the next 'if' if the time is held for only 2 seconds--problem.
I don't want a 3 second delay, I need a 3 second counter.
It is probably a no brainer for someone on here and I just can't seem to put it together.
Thanks for looking. Love the kit. I have burned many hours. Too much on this last part though.
It is easiest to use interrupts to do what you want.
In your case, it seems that you have only 'a single event at a time' to measure, so it will be pretty straightforward. (If you have more than 'a single event to time' then this approach would still work, but you would have to use this basic skeleton to configure multiple 'software timers'.)
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

// Global Variables
#define TIMER_TIMER0MAX 200

volatile uint16_t system_timertick;

void timer0_init() {
    // CTC mode with a /8 prescaler (the setup the ISR comment below assumes)
    TCCR0A = (1<<WGM01);
    TCCR0B = (1<<CS01);
    OCR0A = TIMER_TIMER0MAX;
    TIMSK0 |= (1<<OCIE0A);
    system_timertick = 0;   // clear the tick count
}

ISR(TIMER0_COMPA_vect)
{
    // Timer 0 Compare Match Interrupt Service Routine -- one tick.
    // Timer0's prescaler is set to /8 and the comparator value is 200,
    // so there will be a timeout/interrupt every 1,600 clock cycles.
    // For my 16 MHz clock that is 10,000 interrupts/sec.
    // system_timertick is an unsigned 16-bit integer so its max is 65,535.
    system_timertick++;
    if (system_timertick > 59999) {
        system_timertick = 0;
    }
}
So basically this should allow system_timertick to count up to 6 seconds. (In 1/10,000 of a second intervals).
NOTE: As mentioned in the code I am using a 16mhz crystal NOT the standard Nerdkits Crystal.
This should help you to 'get started'.
NOTE: The 'volatile' keyword must be used on system_timertick. What 'volatile' does is tell the compiler that the program will ALWAYS need to load the system_timertick value from memory, avoiding optimizations on the value. The reason for this is that the interrupt routine can change this value, and the compiler cannot predict when this will occur; if the compiler decided to optimize that value by keeping it in an AVR register then the program would never see the change in the value.
So to use this you could simply clear system_timertick, and then do what you need to do. After you have the event that you wanted to time, check the value.
NOTES:
1 - When you check the value you will need to either 'disable interrupts' or 'write the value to a temporary variable'. The reason for this is that the interrupts will continue to fire, and the value could change if you reference it more than once in your mainline code. It is MUCH BETTER to just write it to a temporary variable in your mainline code; that way the interrupt can continue along its merry way. WHICH YOU WOULD NEED TO ALLOW TO HAPPEN if you needed to have multiple/software timers.
2 - If there is a possibility that your timed value can exceed the 6 seconds, or timeout at a certain point, you will need to check the system_timertick in your main loop to determine if a timeout has occurred.
Blackbird -
If you don't need 1/10,000 sec resolution, you could use the NK code from the "realtime_clock" project (in tutorial section). This will only give you 1/100 sec resolution but will not "reset" until more than 200 days has elapsed.
You'll also benefit by using Pin Change Interrupts. This will trigger each time PC1 or PB1 changes state. Fortunately for you, by using PC1 and PB1, you can actually use two separate interrupt service routines (ISR) to monitor the pins. You can initialize the interrupts using:
void pin_change_init(){
//Enable interrupts 1 and 0
PCICR |= (1<<PCIE1) | (1<<PCIE0);
//Set the mask on Pin change interrupt 1 so that only PCINT9 (PC1) triggers
PCMSK1 |= (1<<PCINT9);
//Set the mask on Pin change interrupt 0 so that only PCINT1 (PB1) triggers
PCMSK0 |= (1<<PCINT1);
return;
}
As Jim stated above, the ISR's work best by simply updating global (and volatile) variables. So the following "pseudocode" can be used:
ISR(PCINT1_vect){
//Check if PC1 is high or low
//If high - store current time as "start_time"
//If low - set a variable so step 6 can be detected
}
ISR(PCINT0_vect){
//Check if PB1 is high or low
//If high - store current time as "start_time"
//If low - set a variable so step 5 can be detected
}
Then in your monitoring loop you just test all the possibilities:
monitoring_loop(){
//For step 2 - check if PB1 and PC1 are high
//If so, get current time, subtract "start_time" set from ISR
//Check if that value is greater than 3 seconds
//More code here...
}
You'll most likely need more flag variables and tests but it's a start. Sounds like you're making an interesting project.
Wow. Thanks for the quick responses. I see now that I have bit off a bit more than I can chew at the moment since some of the coding and terms are over my head, but I will dig into the examples and learn about each term. I read a bit on the interrupts and it seemed like what I needed, but could not figure out how to integrate the timer, so thanks again for getting me going.
A little more background for the curious.
I am on my second version of a scoreboard for my friend's horseshoe pits. The first version was with a simple counter and BCD 4511 chips and IR diodes. I have since opened my mouth further, saying it should be improved, and here we are.
I have already made the 2 LED displays, 4" digits (both scores pointing towards each pit), and decided to make the reset cooler and add reverse (the source of my current issues), oh, and add speech and sound.
The scoring is done via laser diodes and IR receivers (already working) at each pit that you just wave your hand/finger through (had to use a debounce chip for that; I love that chip, works great).
To keep things simple (for me, and to keep pins open on the 168) I decided to use chips to drive the displays and use the 168 as the brains.
That is where the current code comes into play. Since it is so easy to increment a score, people love to blast past the mark (it will score as fast as you can wave your fingers through, even 4 at a time), so I thought reverse would be cool, by holding your hand in one beam for 3 seconds it would start slowly counting backwards and by blocking both beams for 3 seconds it would reset.
The speech/sound is (hopefully) going to come from an RC Systems Doubletalk RC8660 voice chip. I already have that and can program it via the development board. It says it communicates via standard serial (UART) and 8-bit bus interfaces, so it will be my next learning phase once the scoring is working.
Thanks again for the help and I am more than willing to accept any helpful advice or suggestions.
I have found it much easier for me to learn new stuff when I want a result, rather than being force fed, and since there are so many distractions in life, it is good to have a goal, even if it is just a hobby.
Great community. Glad I decided on the NerdKit.
I have a large portion of the scoreboard working, including the sound interface (UART code, not all the variables) but I am having a problem with the score not responding after some time.
I changed the code a bit to spit the output back to the computer and it appears that at some point the time starts counting backwards back to 0 then back up. After this occurs, the scoring wont work. It will still recognize the input (blocking the laser) but does not change score or count blue or green 'hold' time. It happens right around the 5 minute mark. The timer code is from the 'real time clock'.
Time=system time, Green score, Green input y/n, green 'Y' time, Blue score, blue input, blue 'y' time
Here is the screen shot when it changes.
Time:32766 Gnscr: 4 Gnhg: 0 Gntm: 0 bluscr: 9 bluhg: 0 blutm: 0
Time:32767 Gnscr: 4 Gnhg: 0 Gntm: 0 bluscr: 9 bluhg: 0 blutm: 0
Time:-32768 Gnscr: 4 Gnhg: 0 Gntm: 0 bluscr: 9 bluhg: 0 blutm: 0
Time:-32767 Gnscr: 4 Gnhg: 0 Gntm: 0 bluscr: 9 bluhg: 0 blutm: 0
Because I am still testing and modifying the code, I have a lot of lines that are just commented out with // so I don't have to retype them if I use them later.
When I use the LCD display, the same freeze happens. The time continues to count (down) so I know the chip is not locking up, but the negative numbers are messing with my If/Then statements.
Any thoughts would be appreciated.
// Scoreboard.c
// for LW Pitts with ATmega168 & RCSystems RC8660 voice chip
// sbuelow@comcast.net
#define FOSC 14745600 // clock speed
// #define BAUD 2400 // 2400 Baud
#include <stdio.h>
#include <stdlib.h>
#include <avr/io.h>
#include <inttypes.h>
#include "../libnerdkits/uart.h"
#include "../libnerdkits/lcd.h"
#include "../libnerdkits/delay.h"
#include <avr/pgmspace.h>
#include <avr/interrupt.h>
// PIN DEFINITIONS:
//
// PC0-- Score reset - output-
// PC1-- Green score trigger -input-
// PC2-- Green score count -output-
//
// PB1-- Blue score trigger -input-
// PB2-- Blue score count -output-
// PB3-- Reverse -output-
void realtimeclock_setup() {
  // Timer0 in CTC mode with a /1024 prescaler: 14.7456 MHz / 1024 / (143+1)
  // gives exactly 100 interrupts per second (1/100 sec per tick).
  TCCR0A |= (1<<WGM01);
  TCCR0B |= (1<<CS02) | (1<<CS00);
  OCR0A = 143;
  TIMSK0 |= (1<<OCIE0A);
}
volatile int32_t the_time;
SIGNAL(SIG_OUTPUT_COMPARE0A) {
// when Timer0 gets to its Output Compare value,
// one one-hundredth of a second has elapsed (0.01 seconds).
the_time++;
}
int main (){
realtimeclock_setup();
// Define variables
int bluetime;
int greentime;
int bluehigh;
int greenhigh;
int bluestart;
int greenstart;
int bluescore;
int greenscore;
int lowtime; // Time of no trigger
int nullhigh; // Status of no trigger
// Outputs
DDRB |= (1<<PB2); // Blue count
DDRB |= (1<<PB3); // Reverse
DDRB |= (1<<PB4); // Loop Led
DDRC |= (1<<PC2); // Green count
DDRC |= (1<<PC0); // Score reset
// Inputs
DDRB &= ~(1<<PB1); // Blue trigger
DDRC &= ~(1<<PC1); // Green trigger
// turn on interrupt handler
sei();
// start up the LCD
// lcd_init();
// FILE lcd_stream = FDEV_SETUP_STREAM(lcd_putchar, 0, _FDEV_SETUP_WRITE);
// lcd_home();
// Start up the serial port
uart_init();
FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
stdin = stdout = &uart_stream;
// UBRR0H = 1;
// UBRR0L = 127; // for 2400bps with 14.7456MHz clock
// enable uart RX and TX
UCSR0B = (1<<RXEN0)|(1<<TXEN0);
// set 8N1 frame format
UCSR0C = (1<<UCSZ01)|(1<<UCSZ00);
// reset routine
reset:
PORTC |=(1<<PC0);
delay_ms(200);
PORTC &= ~(1<<PC0);
bluehigh=0; bluestart=0; bluetime=0; bluescore=0;
greenhigh=0; greenstart=0; greentime=0; greenscore=0;
nullhigh=0;lowtime=0;
delay_ms(4000);
// ??? if the time > XX special reset sound file?
while (1) {
whileloop:
// monitor loop
if (PINB & (1<<PB1)) {bluehigh=1; nullhigh=0;}
else {bluehigh=0;}
if (PINC & (1<<PC1)) {greenhigh=1; nullhigh=0;}
else {greenhigh=0;}
if (bluehigh==1 && bluestart==0) {bluestart=the_time;}
if (greenhigh==1 && greenstart==0) {greenstart=the_time;}
if (bluehigh==1 && bluestart > 0) {bluetime=the_time - bluestart;}
if (greenhigh==1 && greenstart > 0) {greentime=the_time - greenstart;}
if (bluehigh==0 && bluetime !=0 && bluetime < 300) {
PORTB |=(1<<PB2); bluescore++;
PORTB &= ~(1<<PB2);
bluehigh=0; bluestart=0; bluetime=0;
nullhigh=1; lowtime=the_time;}
if (greenhigh==0 && greentime !=0 && greentime < 300) {
PORTC |=(1<<PC2); greenscore++;
PORTC &= ~(1<<PC2);
greenhigh=0; greenstart=0; greentime=0;
nullhigh=1; lowtime=the_time;}
// Reverse checks
if (bluehigh==1 && greenhigh==0 && bluetime > 300) {goto bluereverse;}
if (greenhigh==1 && bluehigh==0 && greentime > 300) {goto greenreverse;}
// Reset check
if (bluehigh==1 && greenhigh==1 && bluetime > 300 && greentime > 300) {goto reset;}
// PORTB |= (1<<PB4);
// delay_ms(100);
// PORTB &= ~(1<<PB4);
// write message to LCD
// lcd_home();
// lcd_write_string(PSTR("ADC: %"),greenscore);
// fprintf_P(&lcd_stream, PSTR("Green: % "),greenscore);
// lcd_line_two();
// lcd_write_string(PSTR("The time=%"), the_time);
// Write message to serial
printf_P(PSTR("Time:%.d "), the_time);
printf_P(PSTR("Gnscr: %.d "), greenscore);
printf_P(PSTR("Gnhg: %.d "), greenhigh);
printf_P(PSTR("Gntm: %.d "), greentime);
printf_P(PSTR("bluscr: %.d "), bluescore);
printf_P(PSTR("bluhg: %.d "), bluehigh);
printf_P(PSTR("blutm: %.d \r\n"), bluetime);
goto whileloop;
bluereverse:
// Blue reverse loop
if (PINB & (1<<PB1)) { // if blue trigger high
PORTB |= (1<<PB3); // turn on reverse
delay_ms(100);
PORTB |=(1<<PB2); // turn on blue score
bluescore--; // subtract 1 from blue
PORTB &= ~(1<<PB2); // turn off blue score
delay_ms(400);
goto bluereverse;}
else {
bluehigh=0; bluestart=0; bluetime=0; // zero out blue inputs
PORTB &= ~(1<<PB3); // turn off reverse
nullhigh=1; lowtime=the_time;
goto whileloop;}
greenreverse:
// Green reverse loop
if (PINC & (1<<PC1)) { // if green trigger high
PORTB |=(1<<PB3); // turn on reverse
delay_ms(100);
PORTC |=(1<<PC2); // turn on green score
greenscore--; // subtract 1 from green
PORTC &= ~(1<<PC2); // turn off green score
delay_ms(400);
goto greenreverse;}
else {
greenhigh=0; greenstart=0; greentime=0; // zero out green inputs
PORTB &= ~(1<<PB3); // turn off reverse
nullhigh=1; lowtime=the_time;
goto whileloop;}
goto whileloop;
// soundloop:
delay_ms(4000);
uart_write(1); printf_P(PSTR("0O\r")); //voice change to perfect paul
printf_P(PSTR("Welcome to the Wagners perfect paul\r")); // text string speak
//printf_P(PSTR("\r")); //uart_write(13);
printf_P(PSTR("\r\n")); // carrige return & line return
//delay_ms(2000);
//uart_write(72); uart_write(73); uart_write(00); // hi
delay_ms(2000);
uart_write(1);
uart_write(57);
uart_write(79);
uart_write(13);
printf_P(PSTR("Welcome to the Wagners, Alvin\r"));
//printf_P(57);
//printf_P(2&);
//printf_P(01h);
printf_P(PSTR("\r\n"));
// uart_write(34);
//uart_write(1);
//uart_write(94);
//uart_write(13); //uart_write(13);
uart_write(1); printf_P(PSTR("100&\r")); // Play sound file-recording
delay_ms(2000);
uart_write(1); printf_P(PSTR("1&\r")); // Play sound file-recording
delay_ms(2000);
}
return 0;
}
Great project. Glad to see you've progressed so far. I think what is happening is when you declared your time variables (bluetime, greentime etc) you declared them as "int". In AVR programming, an "int" is a signed 16-bit variable. This means it can only hold values from -32,768 to 32,767. In hex representation -32768 is 0x8000 and 32767 is 0x7FFF, so it "kind of" makes sense that after 0x7FFF the timer will go to 0x8000. The easiest way to change this is to make any time variable 32-bits (signed or unsigned...won't really matter here), by declaring them int32_t (just like "the_time"). I'm surprised the compiler didn't holler at you. BTW, did you start programming in assembly? I only ask because the way you use the "goto" routines reminds me of assembly. Some "C" programmers don't like to use "goto" but I say if it works, more power to you.
Thanks. I will make those changes tonight and try it again. As far as programming, this is my first time. BASIC was as close as I got back in the 80's. I am learning C as I go, on an 'as-needed' basis. I seem to get more from reading in the forums and reverse engineering then from the numerous 'quick starts' I have found online, although I pick up a tidbit each time I read one. I am more into elecrtonics, led's, and gadgets, but this microcontroller has so many possibilities.
I am trying to get a better grasp of C, but with it being a hobby it is hard to dedicate the time to learn it the way I should. If you think of a book or guide that may be helpful to me, let me know.
Thanks again for your help.
One more question as I am sitting here thinking, this all occurs with me not providing any input to the controller. The 'time' value that is counting up to 32767 then going to -32768 and counting backwards is 'the_time' which is already set at 32-bits. I am missing how the variables I created are causing 'the_time' to change and count backwards. I could see it locking up or something, but I thought the timer was part of the chip hardware. Thanks for your insight.
You're right, the timer is part of the chip hardware and actually operates independently of the main code. The timer you're using (Timer0) is actually only 8-bits, it only counts from 0 to 255 (max), then either counts down or restarts at zero. The way it is set up now, it stops at 143 then restarts at 0. Whenever it hits 143 (once every 1/100th of a second) the interrupt is triggered and "the_time" is incremented. When you assign one of your 16-bit time variables to the 32-bit "the_time" value, the upper 16-bits are lost. If a variable is "signed" the processor looks at the left-most bit to see if the number is positive or negative (0 is positive, 1 is negative). So when you count up "the_time" goes from 0x00007FFF to 0x00008000 and the left-most bit is 0 in both cases. When you copy those numbers into a 16-bit variable, you lose the left-most 16-bits and get 0x7FFF and 0x8000 (remember each hex digit is 4-bits). 0x7FFF in binary is 0111 1111 1111 1111 and positive. 0x8000 is 1000 0000 0000 0000 and therefore negative. I know it's odd but it actually saves time in the processor.
That makes sense to me for the variables that I assign to equal 'the_time' but not for 'the_time' itself.
I thought this statment,
volatile int32_t the_time;
made 'the_time' a 32 bit variable and would therefore not run out of space for a couple hundred days?
Are you saying that my assigning a variable like 'bluetime' to equal the variable 'the_time' is causing an issue with 'the_time'?
The explanation may be beyond me at the moment, so feel free to use a bigger hammer. :)
I will try the modifications to my assigned variables and try it out. I just hope I am not misunderstanding your direction.
Just out of curiosity, how would you redirect multiple subroutines to the same point without 'goto'? Seems to make the logic flow much easier for me, but I also did not know another way.
Thanks again.
Ah...I see now where the confusion is. "the_time" should NOT change when another variable is assigned to it, it will have enough space to last a couple of hundred days. When you use the 16-bit variables in the "if" tests, you will run into trouble.
So the question is "why does 'the_time', a 32-bit variable, start to count backwards?" The answer is "it doesn't". It's just getting displayed wrong by the "printf_P" statement on line 161 above. The "%.d" assumes a 16-bit variable. To get it to display correctly, try using "%ld" (that's a lower "L" not 1). You won't need the decimal point in it.
As for the "multiple subroutines to one point" question, just format it a little differently. Take your "bluereverse" block, and format it like this:
void bluereverse()
while (1) {
if (PINB & (1<<PB1)) {
PORTB |= (1<<PB3);
delay_ms(100);
PORTB |=(1<<PB2);
bluescore--;
PORTB &= ~(1<<PB2);
delay_ms(400);
}
else {
bluehigh=0;
bluestart=0;
bluetime=0;
PORTB &= ~(1<<PB3);
nullhigh=1;
lowtime=the_time;
return;
}
}
}
Now anytime you want to execute it place this in your main loop:
bluereverse();
When "bluereverse()" is finished, it jumps to the line after the code that called the subroutine.
But like I said before....if it is easier for you just stick to what's comfortable.
I made the changes to only the time variables as you suggested.
// Define variables
volatile int32_t bluetime;
volatile int32_t greentime;
int bluehigh;
int greenhigh;
volatile int32_t bluestart;
volatile int32_t greenstart;
int bluescore;
int greenscore;
volatile int32_t lowtime; // Time of no trigger
int nullhigh; // Status of no trigger
It appears to work still @ 14 minutes, so I am feeling better thanks to your help.
The PC output still goes up to 32k and then starts counting down to 0, then goes back up.
I am thinking that it is my print string causing the display issue,
printf_P(PSTR("Time:%.d "), the_time);
I am still curious as to what was happening with the assigned variables if you feel like trying to explain it again.
Thanks again for getting me past this road block.
By the way, I also make decals as a hobby and would be HAPPY to send you some if you would like. I know you do this stuff to help others, but I really appreciate your efforts and sharing your knowledge.
Still working at 22 min. :)
Correction - Line 1 should read:
void bluereverse() {
Can't forget the braces :-)
I posted while you were responding.
Thanks.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2194/ | CC-MAIN-2020-40 | refinedweb | 3,507 | 67.99 |
I…
Slap a TreeView widget on your form, then in the form’s constructor, you can simply do this:
SimpleTreeView tree = new SimpleTreeView(controlOnForm);
tree.AddColumn("Col1");
tree.AddColumn("Col2"); // adding columns
tree.AddData(string, ...); // add data, one string per column
tree.Finish(); // finalise the process with this
All cells are string data only, there’s no sorting or anything. Just a grid of data. The code for the class is below. Yes, it’s rough around the edges, yes, it could do with some tidying, and I’m sure much improvement, but that’s left as an exercise for the reader :) …
using System;
using System.Collections.Generic;

namespace tim {
    // MCD Marengo 2012
    // This class assumes that you have a GTK tree view on a form - just
    // instantiate this class with that control and this does all the
    // hard stuff. Note all columns are treated as text.
    // To use:
    //   SimpleTreeView tree=new SimpleTreeView(controlOnForm);
    //   tree.AddColumn("Col1"); tree.AddColumn("Col2"); // adding columns
    //   tree.AddData(string, ...); // add data, one string per column
    //   tree.Finish(); // finalise the process with this
    public class SimpleTreeView {
        List<Gtk.TreeViewColumn> _cols;
        List<Gtk.CellRendererText> _cells;
        Gtk.TreeView _tree;
        Gtk.ListStore _list = null;
        bool _colsAdded = false;

        public SimpleTreeView(Gtk.TreeView tree) {
            _tree = tree;
            _cols = new List<Gtk.TreeViewColumn>();
            _cells = new List<Gtk.CellRendererText>();
        }

        public void AddColumn(string colName) {
            if (_colsAdded == true) {
                // can't add columns after you've started adding data.
                throw new Exception("Cannot add columns after data has been added");
            }
            _cols.Add(new Gtk.TreeViewColumn());
            _cols[_cols.Count - 1].Title = colName;
        }

        public void AddData(params string[] fieldData) {
            // it is assumed when this is called that the user has finished adding columns.
            _colsAdded = true;
            if (_cols.Count != fieldData.Length) {
                throw new Exception("Mismatch on number of columns defined and items of data passed");
            }
            if (_list == null) {
                Type[] t = new Type[_cols.Count];
                for (int n = 0; n < _cols.Count; ++n) {
                    t[n] = typeof(string);
                }
                _list = new Gtk.ListStore(t);
            }
            _list.AppendValues(fieldData);
        }

        public void Finish() {
            for (int n = 0; n < _cols.Count; ++n) {
                _cells.Add(new Gtk.CellRendererText());
                _cols[n].PackStart(_cells[_cells.Count - 1], true);
            }
            for (int n = 0; n < _cols.Count; ++n) {
                _cols[n].AddAttribute(_cells[n], "text", n);
            }
            for (int n = 0; n < _cols.Count; ++n) {
                _tree.AppendColumn(_cols[n]);
            }
            _tree.Model = _list;
        }

        public List<string> GetSelected() {
            // returns the cols of the current line
            Gtk.TreeSelection selection = _tree.Selection;
            Gtk.TreeModel model;
            Gtk.TreeIter iter;
            List<string> rc = new List<string>();
            if (selection.GetSelected(out model, out iter)) {
                for (int n = 0; n < _cols.Count; n++) {
                    rc.Add(model.GetValue(iter, n).ToString());
                }
            }
            return rc;
        }
    }
}
11 Replies to “Using a GTK# TreeView Widget in C# – Very Simply”
My compiler (I use monodev 4.0) gives me a CS0201 from this part of the code.
if(_list == null){
Type[] t = new Type[_cols.Count];
for(int n=0; n<_cols.Count; ++n){
t[n] = typeof(string);
}
The compiler halts at the for statement. But I believe it's the row above it that it complains about.
*CS0201: Only assignment, call, increment, decrement, and new object expressions can be used as a statement
Interesting, does this tiny test program fail as well? It works for me in MonoDevelop 3.0.3.2 and Visual Studio 2012:
Type[] t = new Type[3];
t[0] = typeof(string);
t[1] = typeof(int);
t[2] = typeof(decimal);
foreach (var x in t)
{
Console.WriteLine(x);
}
No, it works well.
Just to simplify the problem with the code I copied from you, I added the class to the same file as the console application, without trying to access the class, so no errors could be caused by that.
Still the same problem.
If you wish I can send the project and the build log.
I found that, after all, the problem is in the for-statement.
I commented out _cols.Count;
for(int n=0; n< /* _cols.Count;*/ ++n).
AFAIK, that does no harm?
But now it halts on the List declaration at the very beginning.
List _cols;
***CS0305: Using the generic type `Stack’ requires `1′ type argument(s)***
Maybe I am not implementing the code right. I am very new to MonoDevelop, and to GUI programming in general. So any advice would be awesome.
Is it too much to ask for a working project?
Other than that, your code seems to be very good, and I am sure that I could learn a lot from it if I can only get it to work.
Of course the for-statement is like this:
for(int n=0; n</*_cols.Count;*/ ++n){
the n< thing was a typo. If you want, edit away the typo from the last post and remove this one.
Looking at the code in Pastebin which you sent privately, it looks like you're hitting problems in the code you're copying from this site where the "less than" symbol (<) is being copied as "&lt;" (which is an HTML code for it). Instead of commenting out parts of that for loop, search your code for the string "&lt;" and replace it with "<". The confusing thing is when you post your code here any HTML entities such as &lt; get displayed as the character they represent :)
That was very funny. In my browser I see "&lt;" in your for-loops.
After reading you last post here – I installed Opera – same result. Chromium – Same result. And finally Google Chrome, and now I see your code as it should be :).
But OK, I copy-pasted the code from Chrome. And now no errors within the for-loops.
But the compiler still complains about the
List _cols;
List _cells;
part
giving the error:
“Error CS0305: Using the generic type `System.Collections.Generic.List’ requires `1′ type argument(s) (CS0305) (A6)”
Sorry Peter, I’m starting to see the problem here – not sure how but some of the code is lost because of HTML interpretation.
As we are using System.Collections.Generic at the top that’s a clue that we’re not using non-generic collections, and those lines you quote have no type parameter.
They should actually read:
List<Gtk.TreeViewColumn> _cols;
List<Gtk.CellRendererText> _cells;
As angle brackets are used in HTML to specify tags I can see why they disappeared! I’m going to replace the whole code section as I see other bits like this aren’t displaying.
Edit: The code in the article has now been replaced.
That sound great.
If you have further patience with me :), so please can you tell
where is the ‘Forms Constructor ‘ , you are referring to.?
Also I would like to know, where am I supposed to add your class.
Finally, this
‘SimpleTreeView tree=new SimpleTreeView(controlOnForm);’
the controlOnForm, is that the same as the name I put in the properties pane in the Stetic designer?
If so, how am I supposed to access it, as it is buried within the autogenerated gtk-gui/MainWindow file?
Assuming you’ve got a form called MainWindow then in MainWindow.cs in your project explorer you’ll see some code like this:
The function with the Build() in it is the class’s constructor (i.e. the form’s constructor). You can put your code in there after the Build() line.
Any controls you’ve added to the form (in this case the TreeView) will be in scope here (note it’s a partial class, the rest of the class is built by the designer) so, assuming you’ve given the name “tvTest” to it in the stetic designer then you can use
Thank you sir.! Your code works well and it seems that I can use it in my project. It makes it much easier to work with the treeview.
In case that other newcomers like me are in doubt how to use your class, maybe it’s a good idea you tell the newcomer that you have written about it in the blog.
I guess one side-effect of that could be that more people also write in the blog :-)
Write a program to find a pair sum in array equal to given input number.
Pair sum equal to a given number
Method 1
Time Complexity: O(NlogN)
Space Complexity: O(1)
Concept:
Let's say we have the numbers below and we need to find 2 numbers from the array whose sum is equal to 17:

-1 5 7 -8 12 16 -30 -4
The video below explains the algorithm for finding a pair of numbers in an array whose sum is equal to the given input number 17.
Then the efficient way is shown below
1. Arrange the numbers in ascending order (sort the array).
2. Run a loop this way:

- Let's say one person stands at the start of the array and a second person stands at the end.
- They add up the numbers on which they are standing => -30 + 16 = -14.
- If their sum is equal to the given number X then they enjoy.
- Else, if the sum is less than the given number, the first person, standing at the start of the array, moves one step towards the end, because moving right increases the number and hence the sum.
- Similarly, if the sum is more, the person at the end moves one step towards the start.
- We repeat until the two persons collide, as after that the same pairs would be obtained.
- If no such pair is found then both become sad.
Input: -3 ,-4 , 10 , 0 ,3 ,-2 ,15 , 3
Sum: 7
Answer: -3 and 10
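The walk described above is short enough to sketch in a few lines; here it is in Python, purely as a cross-check of the steps (the article's own implementation, in C++, follows below):

```python
def find_pair(nums, target):
    nums = sorted(nums)              # step 1: sort
    lo, hi = 0, len(nums) - 1        # two "persons" at the ends
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return nums[lo], nums[hi]
        if s < target:
            lo += 1                  # sum too small: left person steps right
        else:
            hi -= 1                  # sum too large: right person steps left
    return None                      # the persons collided: no such pair

print(find_pair([-3, -4, 10, 0, 3, -2, 15, 3], 7))  # (-3, 10)
```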
Code for above algorithm in C++
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int arr[] = {-3, -4, 10, 0, 3, -2, 15, 3};
    int size_of_array = sizeof(arr) / sizeof(arr[0]);

    // RequiredSum is the number to which the pair should sum
    int RequiredSum = 7;

    sort(arr, arr + size_of_array); // sort the array

    // variables pointing at their respective indices; sum stores the sum of the pair
    int startIndex = 0, endIndex = size_of_array - 1, sum = 0;

    // We require a pair, so the two elements must be at different indices
    while (startIndex < endIndex)
    {
        sum = arr[startIndex] + arr[endIndex];
        if (sum == RequiredSum)
        {
            cout << "The numbers are " << arr[startIndex] << " and " << arr[endIndex] << endl;
            return 0;
        }
        else if (sum < RequiredSum) // if sum is less, we need to increase the smaller one
            startIndex++;
        else // if the sum is more, we need to decrease the larger number
            endIndex--;
    }

    cout << "No such pair exists.";
    return 0;
}
Method 2
Time Complexity – O(NlogN)
Space Complexity – O(N)
Concept:
Let's say there are numbers in an array and we have to find the two elements whose sum is equal to the input number 23:

-1 5 7 -8 12 16 -30 -4
This concept is based on the mapping approach to finding a pair of elements in an array whose sum equals a given number:

1. Create a map and add each element of the array to it.
2. If there exists a pair that sums to X, then both of its elements are present in the map.
3. So we loop through the array and do this:

- Check whether the (X - current element) value is present in the map.
- If present, then print the pair.

You can use the STL map data structure for this purpose. Alternatively, you can create an array for indexing, where the indices are the values of the array elements themselves. (But using an array has the disadvantage that its size must be known in advance.)
Expected:
Input: -3 ,-4 , 10 , 0 ,3 ,-2 ,15 , 3
Sum: 7
Answer: -3 and 10
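For comparison, the same lookup idea fits in a few lines of Python (an illustrative sketch; the article's C++ version follows). Doing the membership check before inserting the current element also avoids pairing an element with itself:

```python
def find_pair_hashed(nums, target):
    seen = set()
    for v in nums:
        if target - v in seen:       # complement already scanned?
            return target - v, v
        seen.add(v)                  # only add after the check
    return None

print(find_pair_hashed([-3, -4, 10, 0, 3, -2, 15, 3], 7))  # (-3, 10)
```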
Code for above algorithm in C++
#include <bits/stdc++.h>
using namespace std;

map<int, int> m;
map<int, int>::iterator it;

int main()
{
    int arr[] = {-3, -4, 10, 0, 3, -2, 15, 3};
    int size_of_array = sizeof(arr) / sizeof(arr[0]);

    // cout << "Enter the number to which the pair should sum to" << endl;
    // Let Sum = 7
    int x = 7; // cin >> x; // the number to which the sum of the pair of elements must be equal

    for (int i = 0; i < size_of_array; ++i)
    {
        // Scan and add elements into the map
        it = m.find(arr[i]);
        if (it == m.end())
            m.insert(make_pair(arr[i], 1)); // add the element and set its count to 1, marking it present
    }

    for (int i = 0; i < size_of_array; i++)
    {
        it = m.find(x - arr[i]);
        // If we have two numbers, say m and n, that sum to x, then
        // when we hold m and find n in the map we have found the pair.
        if (it != m.end()) // if it exists then we got the pair
        {
            pair<int, int> p = *it; // obtain the map entry so as to reference the 2nd number
            cout << "The numbers are " << arr[i] << " and " << p.first << endl;
            return 0;
        }
    }

    cout << "No such pair exists.";
    return 0;
}
Here we discussed two algorithms for finding a pair of elements in an array whose sum equals a given number. This is a commonly asked question in technical interviews.
Conclusion
If the interviewer is OK with using extra space, then Method 2 is the best option for solving this question. But if he/she does not allow you to use extra space for the map, then Method 1 is the best way to solve it.
In my last post, we looked at keeping code clean using linters to keep a consistent coding style for our project. A different topic that we will look at today is tests and code coverage.
Something to keep in mind
Tests are not only useful to ensure our code meets our functional expectations; they are also useful for any new developers introduced to our project. Having a good test foundation can help new developers get started quicker, and well-written tests allow them to ensure they are not breaking existing functionality when introducing changes to the code base. In some cases tests can also be a useful source of information about how the application behaves and how data flows through it.
With that said, no test suite is perfect, and one should not fully rely on the tests to verify that everything works as expected. Tests can be written incorrectly or miss use cases that will still make the functionality fail.
Test suites and code coverage can give a false sense of security, and even though your project has 100% code coverage it does not mean that all edge cases, inputs, and outcomes are tested.
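As a toy illustration of that last point (in Python here, though the project in this post uses TypeScript): a function can have 100% line coverage and still be broken for inputs the tests never exercise.

```python
def average(values):
    # Every line of this function is executed by the assertion below,
    # so a coverage tool reports 100% line coverage...
    return sum(values) / len(values)

assert average([2, 4, 6]) == 4

# ...yet average([]) raises ZeroDivisionError: the empty-list case was
# never tested, so "fully covered" did not mean "correct".
```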
Getting started
This time we are going to continue the work on the module I created in the previous post. We will add some simple tests and produce a code coverage report. In addition to this, we will look at how we can run these tests in our Azure Pipeline and publish the test results and code coverage for display and verification.
I am going to make a few assumptions before we move any further into this post:
- You have basic knowledge about writing tests.
- You have a project in Azure DevOps that the pipeline will run in.
The code for this post will be available on my GitHub. Sources from my last post are also available here.
Setting up our tests and coverage
Installing dependencies
For our tests, we will be using Testing Library, Jest, and ts-node. We will also install some helper packages.
Descriptions are from the packages' npm pages
Let us get started with installing the needed dependencies by running the following command:
npm i -D @testing-library/react jest @types/jest @testing-library/jest-dom ts-jest jest-junit-reporter
Configuring our setup
Next, we need to configure Jest. Create a new file in the root of the project called jest.config.js with the following content:
/* eslint-disable */
module.exports = {
  roots: ["<rootDir>/src"],
  testMatch: [
    "**/__tests__/**/*.+(ts|tsx|js)",
    "**/?(*.)+(spec|test).+(ts|tsx|js)",
  ],
  transform: {
    "^.+\\.(ts|tsx)$": "ts-jest",
  },
  moduleNameMapper: {
    "\\.(css)$": "<rootDir>/src/__mocks__/styleMock.js",
  },
  coverageReporters: ["text", "cobertura"],
  testResultsProcessor: "jest-junit-reporter",
};
A few things are happening here. First, we set the root directory of our sources and tell Jest where it can find our tests. We specify that all ts and tsx files should be preprocessed by ts-jest during the process. We import a CSS file in the RandomNumberCard component; since we have not added a preprocessor that can handle CSS files, we need to mock out these files.

For coverage reporters, we have added text, which enables console output, and cobertura, which is one of the formats supported by Azure Pipelines. For test results we have added jest-junit-reporter as a processor so we can see which tests have been run.
We also need to add the mock file for our styles, so create the file src/__mocks__/styleMock.js with the following content:
/* eslint-disable */
module.exports = {};
Writing some simple tests
We now have the test setup configured, and we can start to write our tests. Create a new file src/__tests__/RandomNumberCard.test.tsx with the following content:
import "@testing-library/jest-dom/extend-expect";
import { fireEvent, render } from "@testing-library/react";
import React from "react";
import RandomNumberCard from "../RandomNumberCard";

describe("<RandomNumberCard />", () => {
  it("should start on 0", () => {
    const { container } = render(<RandomNumberCard />);
    const tag = container.getElementsByClassName("RandomNumberCard__number")[0];
    expect(tag.innerHTML).toEqual("0");
  });

  it("should change when clicked", () => {
    const { container } = render(<RandomNumberCard />);
    const card = container.getElementsByClassName("RandomNumberCard")[0];
    fireEvent.click(card);
    const tag = card.getElementsByClassName("RandomNumberCard__number")[0];
    expect(tag.innerHTML).not.toEqual("0");
  });
});
Since this post focuses on getting the tests and coverage up and running in our pipeline, I will not go deep into the technical details about how to write tests.
We will however take a quick look at what is going on so we are all on the same page. For the first test, we want to ensure that our random value starts at 0. First, we render our component, then we query the DOM to fetch all the elements with the class RandomNumberCard__number. We then pick the first element and check that the value of this element is equal to "0".

Our next test starts the same way, but with some additional steps. After retrieving the element, we fire off the click event for our component to update the random number. We then repeat the process from our previous test, but this time we check that the value is not equal to "0".
To make our test runs a bit easier to start, we will add some new scripts to our package.json file:
"test": "jest --coverage", "test:ci": "jest --coverage --ci"
The first script we use to run our tests locally, while the second script is intended for our pipeline. For our tests, the --ci option does not directly do anything, but you should add it if your tests contain snapshots. Additional information about this option can be found in the Jest documentation.

Let us try to run our tests locally using the command npm run test. When the test run completes, we should see an output similar to this:
> jest --coverage

 PASS  src/__tests__/RandomNumberCard.test.tsx
  <RandomNumberCard />
    ✓ should start on 0 (25 ms)
    ✓ should change when clicked (10 ms)

----------------------|---------|----------|---------|---------|-------------------
File                  | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------------------|---------|----------|---------|---------|-------------------
All files             |     100 |      100 |     100 |     100 |
 RandomNumberCard.tsx |     100 |      100 |     100 |     100 |
----------------------|---------|----------|---------|---------|-------------------
Test Suites: 1 passed, 1 total
Tests:       2 passed, 2 total
Snapshots:   0 total
Time:        1.51 s, estimated 7 s
Ran all test suites.
All of our tests passed and we can also see the total code coverage.
Updating our pipeline
We have our tests with code coverage running locally. It is time to update our pipeline to run the same tests. For this we need to implement a few new steps:
At the end of our current pipeline we will add the following step to trigger our test run:
- task: Npm@1
  displayName: "Run tests"
  inputs:
    command: "custom"
    customCommand: "run test:ci"
I like to move all the files being published to the artifact staging directory, but if you want you can publish the files directly from their created paths by just changing the paths in our tasks. Let us add the following to move the files:
- task: Bash@3
  displayName: "Move cover files"
  inputs:
    targetType: "inline"
    script: |
      cp -r $(Build.SourcesDirectory)/coverage $(Build.ArtifactStagingDirectory)

- task: Bash@3
  displayName: "Move report"
  inputs:
    targetType: "inline"
    script: |
      mkdir $(Build.ArtifactStagingDirectory)/reports
      cp $(Build.SourcesDirectory)/test-report.xml $(Build.ArtifactStagingDirectory)/reports/test-report.xml
To be able to view our test results and code coverage, we need to publish the files. The first task publishes the test run report (this is the report created by the jest-junit-reporter test results processor) while the second step publishes our code coverage.
- task: PublishTestResults@2
  displayName: "Publish test results"
  inputs:
    testResultsFormat: "JUnit"
    testResultsFiles: "$(Build.ArtifactStagingDirectory)/reports/test-report.xml"

- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: "Cobertura"
    summaryFileLocation: "$(Build.ArtifactStagingDirectory)/coverage/cobertura-coverage.xml"
    pathToSources: "$(Build.SourcesDirectory)/src"
At this point we have everything we need in our pipeline, if you want to see the full file after the changes you can view it here
Push your changes and run the pipeline, when it completes we should see that our test results and code coverage have been published.
Checking our tests we can see all the test files and individual tests
We can also view the coverage report of our code
Sidenote
When running the PublishCodeCoverageResults step, you might see a warning like this in the log:
The Cobertura report is not well formed. The … element should contain … elements and not … elements.
This is a bug in Istanbul that is used by jest to create coverage and is caused when all test files are located in the same directory. Depending on how your project is set up you might see this. Our coverage is published and still visible, so we’ll ignore this for now. More info can be found in the GitHub issue if you are interested.
Reflection
That was it for this post! This time we continued to expand the quality control of our project by adding tests and code coverage.
Take a moment to think about this: what do you think is most important in a project to keep it clean, under control, and of high quality? If you have any great answers, techniques, or stories to share, I'd like to hear about them!
Night Of The Living Thread
What should this Python code print?:
t = threading.Thread() t.start() if os.fork() == 0: # We're in the child process. print t.isAlive()
In Unix, only the thread that calls
fork() is copied to the child process; all other threads are dead. So
t.isAlive() in the child process should always return False. But sometimes, it returns True! It's the....
How did I discover this horrifying zombie thread? A project I work on, PyMongo, uses a background thread to monitor the state of the database server. If a user initializes PyMongo and then forks, the monitor is absent in the child. PyMongo should notice that the monitor thread's isAlive is False, and raise an error:
# Starts monitor:
client = pymongo.MongoReplicaSetClient()

os.fork()

# Should raise error, "monitor is dead":
client.db.collection.find_one()
But intermittently, the monitor is still alive after the fork! It keeps coming back in a bloodthirsty lust for HUMAN FLESH!
I put on my Sixties scientist outfit (lab coat, thick-framed glasses) and sought the cause of this unnatural reanimation. To begin with, what does Thread.isAlive() do?:
class Thread(object):
    def isAlive(self):
        return self.__started.is_set() and not self.__stopped
After a fork, __stopped should be True on all threads but one. Whose job is it to set __stopped on all the threads that didn't call fork()? In threading.py I discovered the _after_fork() function, which I've simplified here:
# Globals.
_active = {}
_limbo = {}

def _after_fork():
    # This function is called by PyEval_ReInitThreads
    # which is called from PyOS_AfterFork. Here we
    # clean up threading module state that should not
    # exist after a fork.

    # fork() only copied current thread; clear others.
    new_active = {}
    current = current_thread()
    for thread in _active.itervalues():
        if thread is current:
            # There is only one active thread.
            ident = _get_ident()
            new_active[ident] = thread
        else:
            # All the others are already stopped.
            thread._Thread__stop()

    _limbo.clear()
    _active.clear()
    _active.update(new_active)
    assert len(_active) == 1
This function iterates over all the Thread objects in a global dict called _active; each is removed and marked as "stopped", except for the current thread. How could this go wrong?
Well, consider how a thread starts:
class Thread(object):
    def start(self):
        _limbo[self] = self
        _start_new_thread(self.__bootstrap)

    def __bootstrap(self):
        self.__started.set()
        _active[self.__ident] = self
        del _limbo[self]
        self.run()
(Again, I've simplified this.) The Thread object's start method adds the object to the _limbo list, then creates a new OS-level thread. The new thread, before it gets to work, marks itself as "started" and moves itself from _limbo to _active.
Do you see the bug now? Perhaps the thread was reanimated by space rays from Venus and craves the flesh of the living!
Or perhaps there's a race condition:
- Main thread calls worker's start().
- Worker calls self.__started.set(), but is interrupted before it adds itself to _active.
- Main thread calls fork().
- In the child process, the main thread calls _after_fork, which doesn't find the worker in _active and doesn't mark it "stopped".
- isAlive() now returns True because the worker is started and not stopped.
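The race is easy to re-enact deterministically, without a real fork(). The toy model below is a simplification of mine, not the real threading module: it registers threads the way Python 2.7 did — started flag first, registry second — and then runs the fork cleanup inside that window:

```python
import threading

class ToyThread(object):
    def __init__(self):
        self.started = threading.Event()
        self.stopped = False
    def is_alive(self):
        return self.started.is_set() and not self.stopped

active = {}

def after_fork(current):
    # Like _after_fork: stop every *registered* thread except the current one.
    for t in list(active.values()):
        if t is not current:
            t.stopped = True
    active.clear()
    active[id(current)] = current

main = ToyThread()
main.started.set()
active[id(main)] = main

worker = ToyThread()
worker.started.set()        # step 2: "started" is set...
                            # ...but we are interrupted before
                            # active[id(worker)] = worker happens.
after_fork(main)            # steps 3-4: the fork lands in that window.

print(worker.is_alive())    # True -- a zombie: started, never stopped
```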
Now we know the cause of the grotesque revenant. What's the cure? Headshot?
I submitted a patch to Python that simply swapped the order of operations: first the thread adds itself to _active, then it marks itself started:
def __bootstrap(self):
    _active[self.__ident] = self
    self.__started.set()
    self.run()
If the thread is interrupted by a fork after adding itself to _active, then _after_fork() finds it there and marks it stopped. The thread ends up stopped but not started, rather than the reverse. In this case isAlive() correctly returns False.
The Python core team looked at my patch, and Charles-François Natali suggested a cleaner fix. If the zombie thread is not yet in _active, it is in the global _limbo list. So _after_fork should iterate over both _limbo and _active, instead of just _active. Then it will mark the zombie thread as "stopped" along with the other threads.
def _enumerate():
    return _active.values() + _limbo.values()

def _after_fork():
    new_active = {}
    current = current_thread()
    for thread in _enumerate():
        if thread is current:
            # There is only one active thread.
            ident = _get_ident()
            new_active[ident] = thread
        else:
            # All the others are already stopped.
            thread._Thread__stop()
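Extending the same toy model (again, my simplified sketch, not the real module) shows why enumerating _limbo closes the hole: even if the fork lands before the worker reaches _active, the cleanup still finds it in _limbo and stops it:

```python
import threading

class ToyThread(object):
    def __init__(self):
        self.started = threading.Event()
        self.stopped = False
    def is_alive(self):
        return self.started.is_set() and not self.stopped

active = {}
limbo = {}

def after_fork_fixed(current):
    # The fix: sweep _limbo as well as _active.
    for t in list(active.values()) + list(limbo.values()):
        if t is not current:
            t.stopped = True
    limbo.clear()
    active.clear()
    active[id(current)] = current

main = ToyThread()
main.started.set()
active[id(main)] = main

worker = ToyThread()
limbo[id(worker)] = worker   # start() put it in limbo first
worker.started.set()         # interrupted before it moved to active
after_fork_fixed(main)       # fork in the same window as before

print(worker.is_alive())     # False -- properly marked stopped
```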
This fix will be included in the next Python 2.7 and 3.3 releases. The zombie threads will stay good and dead...for now!
(Now read the sequels: Dawn of the Thread, in which I battle zombie threads in the abandoned tunnels of Python 2.6; and Day of the Thread, a post-apocalyptic thriller in which a lone human survivor tries to get a patch accepted via bugs.python.org.)
| https://emptysqua.re/blog/night-of-the-living-thread/ | CC-MAIN-2018-17 | refinedweb | 780 | 68.16 |
Some Arduino boards allow you to permanently store data in an EEPROM without having to keep the board plugged in. This article will teach you how to write to the onboard EEPROM (if your Arduino has one) and introduce you to an alternative method which allows you to use external memory.
You can use this guide for any microcontroller that supports communicating over the I2C bus.
Supported Arduino Boards
The following table lists how much data each MCU can store:
Interfacing the Built-in EEPROM
EEPROM stands for Electronically Erasable Programmable Read-Only Memory. While you can overwrite the data on the chip, you can only do so a limited number of times before it might start malfunctioning. However, you can read from it as many times as you want.
The Write() Method
The following example illustrates how you can store a byte.
#include <EEPROM.h>

void setup() {
  int word_address = 0;
  EEPROM.write(word_address, 0x7F);
}

void loop() {
}
Use the write() method together with a word address and the value you want to store. The address has to be a value between zero and EEPROM.length()-1 and it tells the MCU where to store the value.
The read() Method
The following example reads a byte from the EEPROM:
#include <EEPROM.h>

void setup() {
  Serial.begin(9600);

  int word_address = 0;
  byte value;

  value = EEPROM.read(word_address);
  Serial.println(value, HEX);
}

void loop() {
}
The read() method also takes an address as a parameter and returns the value as a byte.
Clearing the Memory
To clear the memory, store a zero at each position of the EEPROM:
void erase(void) {
  for (int i = 0; i < EEPROM.length(); i++)
    EEPROM.write(i, 0);
}
A "Hello World" Example
The following code will clear the EEPROM and then store “Hello World!” in it before writing the string to the console:
#include <EEPROM.h>

void erase(void) {
  for (int i = 0; i < EEPROM.length(); i++)
    EEPROM.write(i, 0);
}

void printMessage(byte* first, size_t len) {
  for (int i = 0; i < len; i++) {
    Serial.print((char)first[i]);
  }
}

void writeMsg(byte* first, size_t len) {
  for (int i = 0; i < len; i++) {
    EEPROM.write(i, first[i]);
  }
}

void readMsg(size_t len) {
  byte res;
  Serial.print("Message: ");
  for (int i = 0; i < len; i++) {
    res = EEPROM.read(i);
    Serial.print((char)res);
  }
  Serial.println("");
}

void setup() {
  char* string = "Hello World!";

  Serial.begin(9600);
  Serial.print("Serial connection opened!\n");
  Serial.print("EEPROM length: ");
  Serial.println(EEPROM.length());

  Serial.print("Attempting to erase EEPROM... ");
  erase();
  Serial.print("Done!\n");

  Serial.print("Message: ");
  printMessage((byte*)string, 12); // cast: the helpers take byte*
  Serial.print("\n");

  Serial.print("Attempting to write to EEPROM...\n");
  writeMsg((byte*)string, 12);
  Serial.print("Done!\n");

  Serial.print("Attempting to read from EEPROM...\n");
  readMsg(12);
  Serial.print("Done!\n");
}

void loop() {
}
Using an External EEPROM
If you don’t use an Arduino or you want extra storage space, you can use an external EEPROM IC to store bytes. In this example, we’ll use the 4LC16B, a 16 kbit (2 KB) I2C EEPROM.
The circuit is simple and you only need to add a 100K pull-up resistor and connect the IC to the Arduino (or any other MCU you want to use):
Circuit diagram for adding the 4LC16B external EEPROM to an Arduino.
The seventh pin of this IC is the write-protect pin. Connect this pin to GND if you want to write to the memory. If it’s high, the chip won’t save any data. Reading is possible regardless of the pin’s state.
Communicating With the External Memory
Setting up communication between the Arduino and the external memory is where things get more complicated compared to the built-in memory.
The datasheet of the 4LC16B IC precisely describes how to communicate with it to store data. I wrote this sketch to let you read and write single bytes (words) on the external EEPROM. I tested it with the 16 kbit variant, but it should work with every other size (from this manufacturer) as long as the communication works the same way:
#include <Wire.h>

static const byte DEVICE_BASE_ADDRESS = 0x50;

void setup() {
  Wire.begin();
  Serial.begin(9600);
}

byte readByteFromEEPROM(byte block, byte word_offset) {
  Wire.beginTransmission(block);
  Wire.write(int(word_offset));
  Wire.endTransmission(true);

  Wire.requestFrom(int(block), 1);
  if (Wire.available())
    return Wire.read();
}

void readBlockFromEEPROM(byte block, byte outArray[256]) {
  for (int i = 0; i < 256; i++) {
    outArray[i] = readByteFromEEPROM(block, i);
  }
}

void readPageFromEEPROM(byte block, byte word_offset, byte outArray[16]) {
  for (int i = 0; i < 16; i++) {
    outArray[i] = readByteFromEEPROM(block, word_offset + i);
  }
}

void writeByteToEEPROM(byte block, byte word_offset, byte data) {
  writePageToEEPROM(block, word_offset, &data, 1);
}

/**
 * block:
 *   0x50 = first block = DEVICE_BASE_ADDRESS
 *   0x51 = second block
 *   ...
 *   0x57 = eighth block
 */
void writePageToEEPROM(byte block, byte word_offset, byte *data, size_t len) {
  Wire.beginTransmission(block);
  Wire.write(word_offset);
  for (int i = 0; i < len; i++) {
    Wire.write(data[i]);
  }
  Wire.endTransmission(true);
  delay(10);
}
The memory is organized in eight blocks of 256 Bytes and each block can be directly addressed. The DEVICE_BASE_ADDRESS (0x50) represents the first one and 0x57 represents the last block.
Different chips have different base addresses too. Refer to the datasheet of your EEPROM and update the code if necessary.
Another "Hello World" Example
This program will store “Hello World” in the first page of the first block on the external EEPROM and then read the entire first block and print it to the console:
void printBlock(byte blockContent[256]) { for(int i = 0; i < 16; i++) { Serial.print("Page "); if(i+1 < 10) Serial.print("0"); Serial.print(i+1); Serial.print(": "); for(int u = 0; u < 16; u++) { Serial.print((char)blockContent[i*16+u]); if(u==7) Serial.print(" "); } Serial.println(""); } } void loop() { byte result[256]; writePageToEEPROM(DEVICE_BASE_ADDRESS, 0, "Hello World!", 12); readBlockFromEEPROM(DEVICE_BASE_ADDRESS, result); printBlock(result); delay(20000); exit(0); }
Executing this example will give you the following result (Or something similar depending on the contents of your EEPROM):
Understanding EEPROM is Important for Beginners
This super simple project is perfect for beginners who want to learn about memory and I2C communications (and anyone who wants to boost their making prowess) because it allows you to permanently store information on your Arduino (or other MCU) even without power.
Knowing how to store information makes your projects mobile and also give you peace of mind that the most important parts of your project are stored whether your board is hooked up to power or not. | https://maker.pro/arduino/tutorial/how-to-permanently-store-data-on-your-arduino | CC-MAIN-2021-10 | refinedweb | 1,076 | 56.66 |
colorize : uint(colorizing a DisplayObject makes it look as though you're seeing it through a colored piece of glass whereas tinting it makes every pixel exactly that color. You can control the amount of colorization using the "amount" value where 1 is full strength, 0.5 is half-strength, and 0 has no colorization effect.)
amount : Number [1](only used in conjunction with "colorize")
contrast : Number(1 is normal contrast, 0 has no contrast, and 2 is double the normal contrast, etc.)
saturation : Number(1 is normal saturation, 0 makes the DisplayObject look black and white, and 2 would be double the normal saturation)
hue : Number(changes the hue of every pixel. Think of it as degrees, so 180 would be rotating the hue to be exactly opposite as normal, 360 would be the same as 0, etc.)
brightness : Number(1 is normal brightness, 0 is much darker than normal, and 2 is twice the normal brightness, etc.)
threshold : Number(number from 0 to 255 that controls the threshold of where the pixels turn white or black)
matrix : Array(If you already have a matrix from a ColorMatrixFilter that you want to tween to, pass it in with the "matrix" property. This makes it possible to match effects created in the Flash IDE.)
index : Number(only necessary if you already have a filter applied and you want to target it with the tween.)
addFilter : Boolean [false]
remove : Boolean [false](Set remove to true if you want the filter to be removed when the tween completes.)}});
USAGE:
import com.greensock.TweenLite; import com.greensock.plugins.TweenPlugin; import com.greensock.plugins.ColorMatrixFilterPlugin; TweenPlugin.activate([ColorMatrixFilterPlugin]); //activation is permanent in the SWF, so this line only needs to be run once. TweenLite.to(mc, 1, {colorMatrixFilter:{colorize:0xFF0000}});
Copyright 2008-2013, GreenSock. All rights reserved. This work is subject to the terms in or for Club GreenSock members, the software agreement that was issued with the membership. | http://www.greensock.com/asdocs/com/greensock/plugins/ColorMatrixFilterPlugin.html | CC-MAIN-2022-40 | refinedweb | 325 | 54.73 |
My name is Dave Winer.
RSS is a popular syndication format, produced by a wide of publications, blogs and apps. It's used to distribute items of content, which can have a title, link and/or description. It supports categories, enclosures, comments and allows for namespaces to extend the format. It's the basis for podcasting.
RSS is an XML-based format. XML is widely deployed and debugged, and will be with us for a very long time. For example all the HTML content on the web is in XML.
JSON is gaining popularity as a parallel to XML.
At this stage it's a question.
Should we distribute RSS data in JSON as well as XML?
If so, what would JSONified RSS look like?
What applications would arise from it?
Here's the Scripting News RSS 2.0 feed expressed in JSON.
It's a simple mapping. If an element in the XML has attributes, we make each attribute a sub-element, and put the value of the element in a sub-element named #value.
Can you put together a demo JavaScript app that runs off this data?
What changes, if any, do you feel need to be made to this format?
It's nice that the browsers don't mess with rss.js data as they do with the XML version. Can we hope that they leave this format alone? :-)
What icon would we use? The same orange radio signal icon that Mozilla and Microsoft came up with?
Update: I wrote a blog post about this. | https://web.archive.org/web/20130116053736/http:/rssjs.org/ | CC-MAIN-2017-22 | refinedweb | 258 | 86.91 |
/* Protoize program - Original version by Ron Guilmette (rfg@segfault.us.com).
   Copyright (C) Free Software Foundation, Inc.  */

#include "intl.h"
#include "cppdefault.h"
#include <setjmp.h>
#include <signal.h>
#if ! defined( SIGCHLD ) && defined( SIGCLD )
# define SIGCHLD SIGCLD
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include "version.h"

/* Include getopt.h for the sake of getopt_long.  */
#include "getopt.h"

/* Macro to see if the path elements match.  */
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
#define IS_SAME_PATH_CHAR(a,b) (TOUPPER (a) == TOUPPER (b))
#else
#define IS_SAME_PATH_CHAR(a,b) ((a) == (b))
#endif

/* Macro to see if the paths match.  */
#define IS_SAME_PATH(a,b) (FILENAME_CMP (a, b) == 0)

/* Suffix for aux-info files.  */
#ifdef __MSDOS__
#define AUX_INFO_SUFFIX "X"
#else
#define AUX_INFO_SUFFIX ".X"
#endif

/* Suffix for saved files.  */
#ifdef __MSDOS__
#define SAVE_SUFFIX "sav"
#else
#define SAVE_SUFFIX ".save"
#endif

/* Suffix for renamed C++ files.  */
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
#define CPLUS_FILE_SUFFIX "cc"
#else
#define CPLUS_FILE_SUFFIX "C"
#endif

static void usage (void) ATTRIBUTE_NORETURN;
static void aux_info_corrupted (void) ATTRIBUTE_NORETURN;
static void declare_source_confusing (const char *) ATTRIBUTE_NORETURN;
static const char *shortpath (const char *, const char *);
static void notice (const char *, ...)
  ATTRIBUTE_PRINTF_1;
static char *savestring (const char *, unsigned int);
static char *dupnstr (const char *, size_t);
static int safe_read (int, void *, int);
static void safe_write (int, void *, int, const char *);
static void save_pointers (void);
static void restore_pointers (void);
static int is_id_char (int);
static int in_system_include_dir (const char *);
static int directory_specified_p (const char *);
static int file_excluded_p (const char *);
static char *unexpand_if_needed (const char *);
static char *abspath (const char *, const char *);
static void check_aux_info (int);
static const char *find_corresponding_lparen (const char *);
static int referenced_file_is_newer (const char *, time_t);
static void save_def_or_dec (const char *, int);
static void munge_compile_params (const char *);
static int gen_aux_info_file (const char *);
static void process_aux_info_file (const char *, int, int);
static int identify_lineno (const char *);
static void check_source (int, const char *);
static const char *seek_to_line (int);
static const char *forward_to_next_token_char (const char *);
static void output_bytes (const char *, size_t);
static void output_string (const char *);
static void output_up_to (const char *);
static int other_variable_style_function (const char *);
static const char *find_rightmost_formals_list (const char *);
static void do_cleaning (char *, const char *);
static const char *careful_find_l_paren (const char *);
static void do_processing (void);

/* Look for these where the `const' qualifier is intentionally cast aside.  */
#define NONCONST

/* Define a default place to find the SYSCALLS.X file.
   */

#ifndef UNPROTOIZE
#ifndef STANDARD_EXEC_PREFIX
#define STANDARD_EXEC_PREFIX "/usr/local/lib/gcc-lib/"
#endif /* !defined STANDARD_EXEC_PREFIX */

static const char * const standard_exec_prefix = STANDARD_EXEC_PREFIX;
static const char * const target_machine = DEFAULT_TARGET_MACHINE;
static const char * const target_version = DEFAULT_TARGET_VERSION;

#endif /* !defined (UNPROTOIZE) */

/* Suffix of aux_info files.  */
static const char * const aux_info_suffix = AUX_INFO_SUFFIX;

/* String to attach to filenames for saved versions of original files.  */
static const char * const save_suffix = SAVE_SUFFIX;

#ifndef UNPROTOIZE

/* String to attach to C filenames renamed to C++.  */
static const char * const cplus_suffix = CPLUS_FILE_SUFFIX;

/* File name of the file which contains descriptions of standard system
   routines.  Note that we never actually do anything with this file per se,
   but we do read in its corresponding aux_info file.  */
static const char syscalls_filename[] = "SYSCALLS.c";

/* Default place to find the above file.  */
static const char * default_syscalls_dir;

/* Variable to hold the complete absolutized filename of the SYSCALLS.c.X
   file.  */
static char * syscalls_absolute_filename;

#endif /* !defined (UNPROTOIZE) */

/* Type of the structure that holds information about macro unexpansions.  */
struct unexpansion_struct
{
  const char *const expanded;
  const char *const contracted;
};
typedef struct unexpansion_struct unexpansion;

/* A table of conversions that may need to be made for some (stupid) older
   operating systems where these types are preprocessor macros rather than
   typedefs (as they really ought to be).

   WARNING: The contracted forms must be as small (or smaller) as the
   expanded forms, or else havoc will ensue.  */
static const unexpansion unexpansions[] =
{
  { "struct _iobuf", "FILE" },
  { 0, 0 }
};

/* The number of "primary" slots in the hash tables for filenames and for
   function names.
   This can be as big or as small as you like, except that
   it must be a power of two.  */
#define HASH_TABLE_SIZE (1 << 9)

/* Bit mask to use when computing hash values.  */
static const int hash_mask = (HASH_TABLE_SIZE - 1);

/* Datatype for lists of directories or filenames.  */
struct string_list
{
  const char *name;
  struct string_list *next;
};

static struct string_list *string_list_cons (const char *,
                                             struct string_list *);

/* List of directories in which files should be converted.  */
struct string_list *directory_list;

/* List of file names which should not be converted.  A file is excluded
   if the end of its name, following a /, matches one of the names in this
   list.  */
struct string_list *exclude_list;

/* The name of the other style of variable-number-of-parameters functions
   (i.e. the style that we want to leave unconverted because we don't yet
   know how to convert them to this style).  This string is used in warning
   messages.  */

/* Also define here the string that we can search for in the parameter lists
   taken from the .X files which will unambiguously indicate that we have
   found a varargs style function.  */

#ifdef UNPROTOIZE
static const char * const other_var_style = "stdarg";
#else /* !defined (UNPROTOIZE) */
static const char * const other_var_style = "varargs";
static const char *varargs_style_indicator = "va_alist";
#endif /* !defined (UNPROTOIZE) */

/* The following two types are used to create hash tables.  In this program,
   there are two hash tables which are used to store and quickly lookup two
   different classes of strings.  The first type of strings stored in the
   first hash table are absolute filenames of files which protoize needs to
   know about.  The second type of strings (stored in the second hash table)
   are function names.  It is this second class of strings which really
   inspired the use of the hash tables, because there may be a lot of them.
   */

typedef struct hash_table_entry_struct hash_table_entry;

/* Do some typedefs so that we don't have to write "struct" so often.  */
typedef struct def_dec_info_struct def_dec_info;
typedef struct file_info_struct file_info;
typedef struct f_list_chain_item_struct f_list_chain_item;

#ifndef UNPROTOIZE
static int is_syscalls_file (const file_info *);
static void rename_c_file (const hash_table_entry *);
static const def_dec_info *find_extern_def (const def_dec_info *,
                                            const def_dec_info *);
static const def_dec_info *find_static_definition (const def_dec_info *);
static void connect_defs_and_decs (const hash_table_entry *);
static void add_local_decl (const def_dec_info *, const char *);
static void add_global_decls (const file_info *, const char *);
#endif /* ! UNPROTOIZE */
static int needs_to_be_converted (const file_info *);
static void visit_each_hash_node (const hash_table_entry *,
                                  void (*)(const hash_table_entry *));
static hash_table_entry *add_symbol (hash_table_entry *, const char *);
static hash_table_entry *lookup (hash_table_entry *, const char *);
static void free_def_dec (def_dec_info *);
static file_info *find_file (const char *, int);
static void reverse_def_dec_list (const hash_table_entry *);
static void edit_fn_declaration (const def_dec_info *, const char *);
static int edit_formals_lists (const char *, unsigned int,
                               const def_dec_info *);
static void edit_fn_definition (const def_dec_info *, const char *);
static void scan_for_missed_items (const file_info *);
static void edit_file (const hash_table_entry *);

/* In the struct below, note that the "_info" field has two different uses
   depending on the type of hash table we are in (i.e. either the filenames
   hash table or the function names hash table).  In the filenames hash table
   the info fields of the entries point to the file_info struct which is
   associated with each filename (1 per filename).
   In the function names
   hash table, the info field points to the head of a singly linked list of
   def_dec_info entries which are all defs or decs of the function whose
   name is pointed to by the "symbol" field.  Keeping all of the defs/decs
   for a given function name on a special list specifically for that function
   name makes it quick and easy to find out all of the important information
   about a given (named) function.  */

struct hash_table_entry_struct
{
  hash_table_entry *    hash_next;      /* -> to secondary entries */
  const char *          symbol;         /* -> to the hashed string */
  union
    {
      const def_dec_info *  _ddip;
      file_info *           _fip;
    } _info;
};
#define ddip _info._ddip
#define fip _info._fip

/* Define a type specifically for our two hash tables.  */
typedef hash_table_entry hash_table[HASH_TABLE_SIZE];

/* The following struct holds all of the important information about any
   single filename (e.g. file) which we need to know about.  */

struct file_info_struct
{
  const hash_table_entry *  hash_entry; /* -> to associated hash entry */
  const def_dec_info *      defs_decs;  /* -> to chain of defs/decs */
  time_t                    mtime;      /* Time of last modification.  */
};

/* Due to the possibility that functions may return pointers to functions,
   (which may themselves have their own parameter lists) and due to the
   fact that returned pointers-to-functions may be of type "pointer-to-
   function-returning-pointer-to-function" (ad nauseam) we have to keep
   an entire chain of ANSI style formal parameter lists for each function.

   Normally, for any given function, there will only be one formals list
   on the chain, but you never know.

   Note that the head of each chain of formals lists is pointed to by the
   `f_list_chain' field of the corresponding def_dec_info record.

   For any given chain, the item at the head of the chain is the *leftmost*
   parameter list seen in the actual C language function declaration.  If
   there are other members of the chain, then these are linked in
   left-to-right order from the head of the chain.
   */

struct f_list_chain_item_struct
{
  const f_list_chain_item *  chain_next;    /* -> to next item on chain */
  const char *               formals_list;  /* -> to formals list string */
};

/* The following struct holds all of the important information about any
   single function definition or declaration which we need to know about.
   Note that for unprotoize we don't need to know very much because we
   never even create records for stuff that we don't intend to convert
   (like for instance defs and decs which are already in old K&R format
   and "implicit" function declarations).  */

struct def_dec_info_struct
{
  const def_dec_info *  next_in_file;  /* -> to rest of chain for file */
  file_info *           file;          /* -> file_info for containing file */
  int                   line;          /* source line number of def/dec */
  const char *          ansi_decl;     /* -> left end of ansi decl */
  hash_table_entry *    hash_entry;    /* -> hash entry for function name */
  unsigned int          is_func_def;   /* = 0 means this is a declaration */
  const def_dec_info *  next_for_func; /* -> to rest of chain for func name */
  unsigned int          f_list_count;  /* count of formals lists we expect */
  char                  prototyped;    /* = 0 means already prototyped */
#ifndef UNPROTOIZE
  const f_list_chain_item * f_list_chain;  /* -> chain of formals lists */
  const def_dec_info *  definition;    /* -> def/dec containing related def */
  char                  is_static;     /* = 0 means visibility is "extern" */
  char                  is_implicit;   /* != 0 for implicit func decl's */
  char                  written;       /* != 0 means written for implicit */
#else /* !defined (UNPROTOIZE) */
  const char *          formal_names;  /* -> to list of names of formals */
  const char *          formal_decls;  /* -> to string of formal declarations */
#endif /* !defined (UNPROTOIZE) */
};

/* Pointer to the tail component of the filename by which this program was
   invoked.  Used everywhere in error and warning messages.  */
static const char *pname;

/* Error counter.  Will be nonzero if we should give up at the next convenient
   stopping point.  */
static int errors = 0;

/* Option flags.  */
/* ???
   The variables are not marked static because some of them have the
   same names as gcc variables declared in options.h.  */
/* ??? These comments should say what the flags mean as well as the options
   that set them.  */

/* File name to use for running gcc.  Allows GCC 2 to be named something other
   than gcc.  */
static const char *compiler_file_name = "gcc";

int version_flag = 0;           /* Print our version number.  */
int quiet_flag = 0;             /* Don't print messages normally.  */
int nochange_flag = 0;          /* Don't convert, just say what files
                                   we would have converted.  */
int nosave_flag = 0;            /* Don't save the old version.  */
int keep_flag = 0;              /* Don't delete the .X files.  */
static const char ** compile_params = 0;  /* Option string for gcc.  */
#ifdef UNPROTOIZE
static const char *indent_string = "     ";  /* Indentation for newly
                                                inserted parm decls.  */
#else /* !defined (UNPROTOIZE) */
int local_flag = 0;             /* Insert new local decls (when?).  */
int global_flag = 0;            /* set by -g option */
int cplusplus_flag = 0;         /* Rename converted files to *.C.  */
static const char *nondefault_syscalls_dir = 0;  /* Dir to look for
                                                    SYSCALLS.c.X in.  */
#endif /* !defined (UNPROTOIZE) */

/* An index into the compile_params array where we should insert the source
   file name when we are ready to exec the C compiler.  A zero value indicates
   that we have not yet called munge_compile_params.  */
static int input_file_name_index = 0;

/* An index into the compile_params array where we should insert the filename
   for the aux info file, when we run the C compiler.  */
static int aux_info_file_name_index = 0;

/* Count of command line arguments which were "filename" arguments.  */
static int n_base_source_files = 0;

/* Points to a malloc'ed list of pointers to all of the filenames of base
   source files which were specified on the command line.  */
static const char **base_source_filenames;

/* Line number of the line within the current aux_info file that we
   are currently processing.
   Used for error messages in case the prototypes
   info file is corrupted somehow.  */
static int current_aux_info_lineno;

/* Pointer to the name of the source file currently being converted.  */
static const char *convert_filename;

/* Pointer to relative root string (taken from aux_info file) which indicates
   which directory the user was in when he did the compilation step that
   produced the containing aux_info file.  */
static const char *invocation_filename;

/* Pointer to the base of the input buffer that holds the original text for
   the source file currently being converted.  */
static const char *orig_text_base;

/* Pointer to the byte just beyond the end of the input buffer that holds the
   original text for the source file currently being converted.  */
static const char *orig_text_limit;

/* Pointer to the base of the input buffer that holds the cleaned text for the
   source file currently being converted.  */
static const char *clean_text_base;

/* Pointer to the byte just beyond the end of the input buffer that holds the
   cleaned text for the source file currently being converted.  */
static const char *clean_text_limit;

/* Pointer to the last byte in the cleaned text buffer that we have already
   (virtually) copied to the output buffer (or decided to ignore).  */
static const char * clean_read_ptr;

/* Pointer to the base of the output buffer that holds the replacement text
   for the source file currently being converted.  */
static char *repl_text_base;

/* Pointer to the byte just beyond the end of the output buffer that holds the
   replacement text for the source file currently being converted.  */
static char *repl_text_limit;

/* Pointer to the last byte which has been stored into the output buffer.
   The next byte to be stored should be stored just past where this points
   to.  */
static char * repl_write_ptr;

/* Pointer into the cleaned text buffer for the source file we are currently
   converting.
   This points to the first character of the line that we last
   did a "seek_to_line" to (see below).  */
static const char *last_known_line_start;

/* Number of the line (in the cleaned text buffer) that we last did a
   "seek_to_line" to.  Will be one if we just read a new source file
   into the cleaned text buffer.  */
static int last_known_line_number;

/* The filenames hash table.  */
static hash_table filename_primary;

/* The function names hash table.  */
static hash_table function_name_primary;

/* The place to keep the recovery address which is used only in cases where
   we get hopelessly confused by something in the cleaned original text.  */
static jmp_buf source_confusion_recovery;

/* A pointer to the current directory filename (used by abspath).  */
static char *cwd_buffer;

/* A place to save the read pointer until we are sure that an individual
   attempt at editing will succeed.  */
static const char * saved_clean_read_ptr;

/* A place to save the write pointer until we are sure that an individual
   attempt at editing will succeed.  */
static char * saved_repl_write_ptr;

/* Translate and output an error message.  */

static void
notice (const char *cmsgid, ...)
{
  va_list ap;

  va_start (ap, cmsgid);
  vfprintf (stderr, _(cmsgid), ap);
  va_end (ap);
}

/* Make a copy of a string INPUT with size SIZE.  */

static char *
savestring (const char *input, unsigned int size)
{
  char *output = xmalloc (size + 1);

  strcpy (output, input);
  return output;
}

/* Make a duplicate of the first N bytes of a given string in a newly
   allocated area.  */

static char *
dupnstr (const char *s, size_t n)
{
  char *ret_val = xmalloc (n + 1);

  strncpy (ret_val, s, n);
  ret_val[n] = '\0';
  return ret_val;
}

/* Read LEN bytes at PTR from descriptor DESC, for file FILENAME,
   retrying if necessary.  Return the actual number of bytes read.
   */

static int
safe_read (int desc, void *ptr, int len)
{
  int left = len;

  while (left > 0)
    {
      int nchars = read (desc, ptr, left);

      if (nchars < 0)
        {
#ifdef EINTR
          if (errno == EINTR)
            continue;
#endif
          return nchars;
        }
      if (nchars == 0)
        break;
      /* Arithmetic on void pointers is a gcc extension.  */
      ptr = (char *) ptr + nchars;
      left -= nchars;
    }
  return len - left;
}

/* Write LEN bytes at PTR to descriptor DESC,
   retrying if necessary, and treating any real error as fatal.  */

static void
safe_write (int desc, void *ptr, int len, const char *out_fname)
{
  while (len > 0)
    {
      int written = write (desc, ptr, len);

      if (written < 0)
        {
          int errno_val = errno;
#ifdef EINTR
          if (errno_val == EINTR)
            continue;
#endif
          notice ("%s: error writing file '%s': %s\n",
                  pname, shortpath (NULL, out_fname), xstrerror (errno_val));
          return;
        }
      /* Arithmetic on void pointers is a gcc extension.  */
      ptr = (char *) ptr + written;
      len -= written;
    }
}

/* Get setup to recover in case the edit we are about to do goes awry.  */

static void
save_pointers (void)
{
  saved_clean_read_ptr = clean_read_ptr;
  saved_repl_write_ptr = repl_write_ptr;
}

/* Call this routine to recover our previous state whenever something
   looks too confusing in the source code we are trying to edit.  */

static void
restore_pointers (void)
{
  clean_read_ptr = saved_clean_read_ptr;
  repl_write_ptr = saved_repl_write_ptr;
}

/* Return true if the given character is a valid identifier character.  */

static int
is_id_char (int ch)
{
  return (ISIDNUM (ch) || (ch == '$'));
}

/* Give a message indicating the proper way to invoke this program and then
   exit with nonzero status.  */

static void
usage (void)
{
#ifdef UNPROTOIZE
  notice ("%s: usage '%s [ -VqfnkN ] [ -i <istring> ] [ filename ... ]'\n",
          pname, pname);
#else /* !defined (UNPROTOIZE) */
  notice ("%s: usage '%s [ -VqfnkNlgC ] [ -B <dirname> ] [ filename ... ]'\n",
          pname, pname);
#endif /* !defined (UNPROTOIZE) */
  exit (FATAL_EXIT_CODE);
}

/* Return true if the given filename (assumed to be an absolute filename)
   designates a file residing anywhere beneath any one of the "system"
   include directories.  */

static int
in_system_include_dir (const char *path)
{
  const struct default_include *p;

  gcc_assert (IS_ABSOLUTE_PATH (path));

  for (p = cpp_include_defaults; p->fname; p++)
    if (!strncmp (path, p->fname, strlen (p->fname))
        && IS_DIR_SEPARATOR (path[strlen (p->fname)]))
      return 1;
  return 0;
}

#if 0
/* Return true if the given filename designates a file that the user has
   read access to and for which the user has write access to the containing
   directory.  */

static int
file_could_be_converted (const char *path)
{
  char *const dir_name = alloca (strlen (path) + 1);

  if (access (path, R_OK))
    return 0;

  /* Truncate a copy of the path at its last directory separator to get
     the name of the containing directory.  */
  {
    char *dir_last_slash;

    strcpy (dir_name, path);
    dir_last_slash = strrchr (dir_name, DIR_SEPARATOR);
    if (dir_last_slash)
      *dir_last_slash = '\0';
  }

  if (access (dir_name, W_OK))
    return 0;

  return 1;
}

/* Return true if the given filename designates a file that we are allowed
   to modify.  Files which we should not attempt to modify are (a) "system"
   include files, and (b) files which the user doesn't have write access to,
   and (c) files which reside in directories which the user doesn't have
   write access to.

   Unless requested to be quiet, give warnings about files that we will
   not try to convert for one reason or another.  An exception is made for
   "system" include files, which we never try to convert and for which we
   don't issue the usual warnings.
   */

static int
file_normally_convertible (const char *path)
{
  char *const dir_name = alloca (strlen (path) + 1);

  if (in_system_include_dir (path))
    return 0;

  /* Truncate a copy of the path at its last directory separator to get
     the name of the containing directory.  */
  {
    char *dir_last_slash;

    strcpy (dir_name, path);
    dir_last_slash = strrchr (dir_name, DIR_SEPARATOR);
    if (dir_last_slash)
      *dir_last_slash = '\0';
  }

  if (access (path, R_OK))
    {
      if (!quiet_flag)
        notice ("%s: warning: no read access for file '%s'\n",
                pname, shortpath (NULL, path));
      return 0;
    }

  if (access (path, W_OK))
    {
      if (!quiet_flag)
        notice ("%s: warning: no write access for file '%s'\n",
                pname, shortpath (NULL, path));
      return 0;
    }

  if (access (dir_name, W_OK))
    {
      if (!quiet_flag)
        notice ("%s: warning: no write access for dir containing '%s'\n",
                pname, shortpath (NULL, path));
      return 0;
    }

  return 1;
}
#endif /* 0 */

#ifndef UNPROTOIZE

/* Return true if the given file_info struct refers to the special
   SYSCALLS.c.X file.  Return false otherwise.  */

static int
is_syscalls_file (const file_info *fi_p)
{
  char const *f = fi_p->hash_entry->symbol;
  size_t fl = strlen (f), sysl = sizeof (syscalls_filename) - 1;
  return sysl <= fl && strcmp (f + fl - sysl, syscalls_filename) == 0;
}

#endif /* !defined (UNPROTOIZE) */

/* Check to see if this file will need to have anything done to it on this
   run.  If there is nothing in the given file which both needs conversion
   and for which we have the necessary stuff to do the conversion, return
   false.  Otherwise, return true.

   Note that (for protoize) it is only valid to call this function *after*
   the connections between declarations and definitions have all been made
   by connect_defs_and_decs.  */

static int
needs_to_be_converted (const file_info *file_p)
{
  const def_dec_info *ddp;

#ifndef UNPROTOIZE

  if (is_syscalls_file (file_p))
    return 0;

#endif /* !defined (UNPROTOIZE) */

  for (ddp = file_p->defs_decs; ddp; ddp = ddp->next_in_file)

    if (

#ifndef UNPROTOIZE

      /* ... and if we are protoizing and this function is in old style ... */
      !ddp->prototyped

      /* ... and if this is a definition or is a decl with an associated
         def ... */
      && (ddp->is_func_def || (!ddp->is_func_def && ddp->definition))

#else /* defined (UNPROTOIZE) */

      /* ...
         and if we are unprotoizing and this function is in new style ... */
      ddp->prototyped

#endif /* defined (UNPROTOIZE) */
      )
      /* ... then the containing file needs converting.  */
      return -1;
  return 0;
}

/* Return 1 if the file name NAME is in a directory
   that should be converted.  */

static int
directory_specified_p (const char *name)
{
  struct string_list *p;

  for (p = directory_list; p; p = p->next)
    if (!strncmp (name, p->name, strlen (p->name))
        && IS_DIR_SEPARATOR (name[strlen (p->name)]))
      {
        const char *q = name + strlen (p->name) + 1;

        /* If there are more slashes, it's in a subdir, so
           this match doesn't count.  */
        while (*q++)
          if (IS_DIR_SEPARATOR (*(q-1)))
            goto lose;
        return 1;

      lose: ;
      }

  return 0;
}

/* Return 1 if the file named NAME should be excluded from conversion.  */

static int
file_excluded_p (const char *name)
{
  struct string_list *p;
  int len = strlen (name);

  for (p = exclude_list; p; p = p->next)
    if (!strcmp (name + len - strlen (p->name), p->name)
        && IS_DIR_SEPARATOR (name[len - strlen (p->name) - 1]))
      return 1;

  return 0;
}

/* Construct a new element of a string_list.
   STRING is the new element value, and REST holds the remaining elements.  */

static struct string_list *
string_list_cons (const char *string, struct string_list *rest)
{
  struct string_list *temp = xmalloc (sizeof (struct string_list));

  temp->next = rest;
  temp->name = string;
  return temp;
}

/* ??? The GNU convention for mentioning function args in its comments
   is to capitalize them.  So change "hash_tab_p" to HASH_TAB_P below.
   Likewise for all the other functions.  */

/* Given a hash table, apply some function to each node in the table.  The
   table to traverse is given as the "hash_tab_p" argument, and the
   function to be applied to each node in the table is given as "func"
   argument.
   */

static void
visit_each_hash_node (const hash_table_entry *hash_tab_p,
                      void (*func) (const hash_table_entry *))
{
  const hash_table_entry *primary;

  for (primary = hash_tab_p; primary < &hash_tab_p[HASH_TABLE_SIZE]; primary++)
    if (primary->symbol)
      {
        hash_table_entry *second;

        (*func)(primary);
        for (second = primary->hash_next; second; second = second->hash_next)
          (*func) (second);
      }
}

/* Initialize all of the fields of a new hash table entry, pointed
   to by the "p" parameter.  Note that the space to hold the entry
   is assumed to have already been allocated before this routine is
   called.  */

static hash_table_entry *
add_symbol (hash_table_entry *p, const char *s)
{
  p->hash_next = NULL;
  p->symbol = xstrdup (s);
  p->ddip = NULL;
  p->fip = NULL;
  return p;
}

/* Look for a particular function name or filename in the particular
   hash table indicated by "hash_tab_p".  If the name is not in the
   given hash table, add it.  Either way, return a pointer to the
   hash table entry for the given name.  */

static hash_table_entry *
lookup (hash_table_entry *hash_tab_p, const char *search_symbol)
{
  int hash_value = 0;
  const char *search_symbol_char_p = search_symbol;
  hash_table_entry *p;

  while (*search_symbol_char_p)
    hash_value += *search_symbol_char_p++;
  hash_value &= hash_mask;
  p = &hash_tab_p[hash_value];
  if (! p->symbol)
    return add_symbol (p, search_symbol);
  if (!strcmp (p->symbol, search_symbol))
    return p;
  while (p->hash_next)
    {
      p = p->hash_next;
      if (!strcmp (p->symbol, search_symbol))
        return p;
    }
  p->hash_next = xmalloc (sizeof (hash_table_entry));
  p = p->hash_next;
  return add_symbol (p, search_symbol);
}

/* Throw a def/dec record on the junk heap.

   Also, since we are not using this record anymore, free up all of the
   stuff it pointed to.
*/ static void free_def_dec (def_dec_info *p) { free ((NONCONST void *) p->ansi_decl); #ifndef UNPROTOIZE { const f_list_chain_item * curr; const f_list_chain_item * next; for (curr = p->f_list_chain; curr; curr = next) { next = curr->chain_next; free ((NONCONST void *) curr); } } #endif /* !defined (UNPROTOIZE) */ free (p); } /* Unexpand as many macro symbols as we can find. If the given line must be unexpanded, make a copy of it in the heap and return a pointer to the unexpanded copy. Otherwise return NULL. */ static char * unexpand_if_needed (const char *aux_info_line) { static char *line_buf = 0; static int line_buf_size = 0; const unexpansion *unexp_p; int got_unexpanded = 0; const char *s; char *copy_p = line_buf; if (line_buf == 0) { line_buf_size = 1024; line_buf = xmalloc (line_buf_size); } copy_p = line_buf; /* Make a copy of the input string in line_buf, expanding as necessary. */ for (s = aux_info_line; *s != '\n'; ) { for (unexp_p = unexpansions; unexp_p->expanded; unexp_p++) { const char *in_p = unexp_p->expanded; size_t len = strlen (in_p); if (*s == *in_p && !strncmp (s, in_p, len) && !is_id_char (s[len])) { int size = strlen (unexp_p->contracted); got_unexpanded = 1; if (copy_p + size - line_buf >= line_buf_size) { int offset = copy_p - line_buf; line_buf_size *= 2; line_buf_size += size; line_buf = xrealloc (line_buf, line_buf_size); copy_p = line_buf + offset; } strcpy (copy_p, unexp_p->contracted); copy_p += size; /* Assume that there will not be another replacement required within the text just replaced. 
*/ s += len; goto continue_outer; } } if (copy_p - line_buf == line_buf_size) { int offset = copy_p - line_buf; line_buf_size *= 2; line_buf = xrealloc (line_buf, line_buf_size); copy_p = line_buf + offset; } *copy_p++ = *s++; continue_outer: ; } if (copy_p + 2 - line_buf >= line_buf_size) { int offset = copy_p - line_buf; line_buf_size *= 2; line_buf = xrealloc (line_buf, line_buf_size); copy_p = line_buf + offset; } *copy_p++ = '\n'; *copy_p = '\0'; return (got_unexpanded ? savestring (line_buf, copy_p - line_buf) : 0); } /* Return the absolutized filename for the given relative filename. Note that if that filename is already absolute, it may still be returned in a modified form because this routine also eliminates redundant slashes and single dots and eliminates double dots to get a shortest possible filename from the given input filename. The absolutization of relative filenames is made by assuming that the given filename is to be taken as relative to the first argument (cwd) or to the current directory if cwd is NULL. */ static char * abspath (const char *cwd, const char *rel_filename) { /* Setup the current working directory as needed. */ const char *const cwd2 = (cwd) ? cwd : cwd_buffer; char *const abs_buffer = alloca (strlen (cwd2) + strlen (rel_filename) + 2); char *endp = abs_buffer; char *outp, *inp; /* Copy the filename (possibly preceded by the current working directory name) into the absolutization buffer. */ { const char *src_p; if (! IS_ABSOLUTE_PATH (rel_filename)) { src_p = cwd2; while ((*endp++ = *src_p++)) continue; *(endp-1) = DIR_SEPARATOR; /* overwrite null */ } #ifdef HAVE_DOS_BASED_FILE_SYSTEM else if (IS_DIR_SEPARATOR (rel_filename[0])) { /* A path starting with a directory separator is considered absolute for dos based filesystems, but it's really not -- it's just the convention used throughout GCC and it works. However, in this case, we still need to prepend the drive spec from cwd_buffer. 
*/ *endp++ = cwd2[0]; *endp++ = cwd2[1]; } #endif src_p = rel_filename; while ((*endp++ = *src_p++)) continue; } /* Now make a copy of abs_buffer into abs_buffer, shortening the filename (by taking out slashes and dots) as we go. */ outp = inp = abs_buffer; *outp++ = *inp++; /* copy first slash */ #if defined (apollo) || defined (_WIN32) || defined (__INTERIX) if (IS_DIR_SEPARATOR (inp[0])) *outp++ = *inp++; /* copy second slash */ #endif for (;;) { if (!inp[0]) break; else if (IS_DIR_SEPARATOR (inp[0]) && IS_DIR_SEPARATOR (outp[-1])) { inp++; continue; } else if (inp[0] == '.' && IS_DIR_SEPARATOR (outp[-1])) { if (!inp[1]) break; else if (IS_DIR_SEPARATOR (inp[1])) { inp += 2; continue; } else if ((inp[1] == '.') && (inp[2] == 0 || IS_DIR_SEPARATOR (inp[2]))) { inp += (IS_DIR_SEPARATOR (inp[2])) ? 3 : 2; outp -= 2; while (outp >= abs_buffer && ! IS_DIR_SEPARATOR (*outp)) outp--; if (outp < abs_buffer) { /* Catch cases like /.. where we try to backup to a point above the absolute root of the logical file system. */ notice ("%s: invalid file name: %s\n", pname, rel_filename); exit (FATAL_EXIT_CODE); } *++outp = '\0'; continue; } } *outp++ = *inp++; } /* On exit, make sure that there is a trailing null, and make sure that the last character of the returned string is *not* a slash. */ *outp = '\0'; if (IS_DIR_SEPARATOR (outp[-1])) *--outp = '\0'; /* Make a copy (in the heap) of the stuff left in the absolutization buffer and return a pointer to the copy. */ return savestring (abs_buffer, outp - abs_buffer); } /* Given a filename (and possibly a directory name from which the filename is relative) return a string which is the shortest possible equivalent for the corresponding full (absolutized) filename. The shortest possible equivalent may be constructed by converting the absolutized filename to be a relative filename (i.e. relative to the actual current working directory). However if a relative filename is longer, then the full absolute filename is returned. 
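*/

Before moving on to shortpath: the in-place cleanup loop in abspath above collapses runs of separators, drops "." components, and resolves ".." by backing up one component in the output. A minimal standalone sketch of that loop (illustration only, not part of protoize.c), simplified to plain '/' separators — the real code uses IS_DIR_SEPARATOR, handles DOS drive specs, and reports a fatal error for paths like "/.." that try to back up past the root, where this sketch merely clamps:

```c
#include <string.h>

/* Normalize an absolute path in place, mirroring abspath's second pass:
   collapse "//", drop "./", resolve "x/../", strip a trailing slash.  */
static void
toy_normalize (char *path)
{
  char *outp = path, *inp = path;

  *outp++ = *inp++;		/* copy the leading '/' */
  for (;;)
    {
      if (!inp[0])
	break;
      else if (inp[0] == '/' && outp[-1] == '/')
	{
	  inp++;		/* collapse runs of slashes */
	  continue;
	}
      else if (inp[0] == '.' && outp[-1] == '/')
	{
	  if (!inp[1])
	    break;		/* trailing "/." */
	  else if (inp[1] == '/')
	    {
	      inp += 2;		/* skip a "./" component */
	      continue;
	    }
	  else if (inp[1] == '.' && (inp[2] == 0 || inp[2] == '/'))
	    {
	      inp += (inp[2] == '/') ? 3 : 2;
	      outp -= 2;	/* back over the previous component */
	      while (outp >= path && *outp != '/')
		outp--;
	      if (outp < path)
		outp = path;	/* the real code aborts on "/.."-style backup */
	      *++outp = '\0';
	      continue;
	    }
	}
      *outp++ = *inp++;
    }
  *outp = '\0';
  if (outp > path + 1 && outp[-1] == '/')
    *--outp = '\0';		/* no trailing slash in the result */
}
```

/*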
KNOWN BUG: Note that "simple-minded" conversion of any given type of filename (either relative or absolute) may not result in a valid equivalent filename if any subpart of the original filename is actually a symbolic link. */ static const char * shortpath (const char *cwd, const char *filename) { char *rel_buffer; char *rel_buf_p; char *cwd_p = cwd_buffer; char *path_p; int unmatched_slash_count = 0; size_t filename_len = strlen (filename); path_p = abspath (cwd, filename); rel_buf_p = rel_buffer = xmalloc (filename_len); while (*cwd_p && IS_SAME_PATH_CHAR (*cwd_p, *path_p)) { cwd_p++; path_p++; } if (!*cwd_p && (!*path_p || IS_DIR_SEPARATOR (*path_p))) { /* whole pwd matched */ if (!*path_p) /* input *is* the current path! */ return "."; else return ++path_p; } else { if (*path_p) { --cwd_p; --path_p; while (! IS_DIR_SEPARATOR (*cwd_p)) /* backup to last slash */ { --cwd_p; --path_p; } cwd_p++; path_p++; unmatched_slash_count++; } /* Find out how many directory levels in cwd were *not* matched. */ while (*cwd_p++) if (IS_DIR_SEPARATOR (*(cwd_p-1))) unmatched_slash_count++; /* Now we know how long the "short name" will be. Reject it if longer than the input. */ if (unmatched_slash_count * 3 + strlen (path_p) >= filename_len) return filename; /* For each of them, put a `../' at the beginning of the short name. */ while (unmatched_slash_count--) { /* Give up if the result gets to be longer than the absolute path name. */ if (rel_buffer + filename_len <= rel_buf_p + 3) return filename; *rel_buf_p++ = '.'; *rel_buf_p++ = '.'; *rel_buf_p++ = DIR_SEPARATOR; } /* Then tack on the unmatched part of the desired file's name. */ do { if (rel_buffer + filename_len <= rel_buf_p) return filename; } while ((*rel_buf_p++ = *path_p++)); --rel_buf_p; if (IS_DIR_SEPARATOR (*(rel_buf_p-1))) *--rel_buf_p = '\0'; return rel_buffer; } } /* Lookup the given filename in the hash table for filenames. If it is a new one, then the hash table info pointer will be null. 
In this case, we create a new file_info record to go with the filename, and we initialize that record with some reasonable values. */ /* FILENAME was const, but that causes a warning on AIX when calling stat. That is probably a bug in AIX, but might as well avoid the warning. */ static file_info * find_file (const char *filename, int do_not_stat) { hash_table_entry *hash_entry_p; hash_entry_p = lookup (filename_primary, filename); if (hash_entry_p->fip) return hash_entry_p->fip; else { struct stat stat_buf; file_info *file_p = xmalloc (sizeof (file_info)); /* If we cannot get status on any given source file, give a warning and then just set its time of last modification to infinity. */ if (do_not_stat) stat_buf.st_mtime = (time_t) 0; else { if (stat (filename, &stat_buf) == -1) { int errno_val = errno; notice ("%s: %s: can't get status: %s\n", pname, shortpath (NULL, filename), xstrerror (errno_val)); stat_buf.st_mtime = (time_t) -1; } } hash_entry_p->fip = file_p; file_p->hash_entry = hash_entry_p; file_p->defs_decs = NULL; file_p->mtime = stat_buf.st_mtime; return file_p; } } /* Generate a fatal error because some part of the aux_info file is messed up. */ static void aux_info_corrupted (void) { notice ("\n%s: fatal error: aux info file corrupted at line %d\n", pname, current_aux_info_lineno); exit (FATAL_EXIT_CODE); } /* ??? This comment is vague. Say what the condition is for. */ /* Check to see that a condition is true. This is kind of like an assert. */ static void check_aux_info (int cond) { if (! cond) aux_info_corrupted (); } /* Given a pointer to the closing right parenthesis for a particular formals list (in an aux_info file) find the corresponding left parenthesis and return a pointer to it. 
*/
static const char *
find_corresponding_lparen (const char *p)
{
  const char *q;
  int paren_depth;

  for (paren_depth = 1, q = p-1; paren_depth; q--)
    {
      switch (*q)
        {
        case ')':
          paren_depth++;
          break;
        case '(':
          paren_depth--;
          break;
        }
    }
  return ++q;
}

/* Given a line from an aux info file, and a time at which the aux info file
   it came from was created, check to see if the item described in the line
   comes from a file which has been modified since the aux info file was
   created.  If so, return nonzero, else return zero.  */

static int
referenced_file_is_newer (const char *l, time_t aux_info_mtime)
{
  const char *p;
  file_info *fi_p;
  char *filename;

  check_aux_info (l[0] == '/');
  check_aux_info (l[1] == '*');
  check_aux_info (l[2] == ' ');

  {
    const char *filename_start = p = l + 3;

    while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
           || (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
           )
      p++;
    filename = alloca ((size_t) (p - filename_start) + 1);
    strncpy (filename, filename_start, (size_t) (p - filename_start));
    filename[p - filename_start] = '\0';
  }

  /* Call find_file to find the file_info record associated with the file
     which contained this particular def or dec item.  */

  fi_p = find_file (abspath (invocation_filename, filename), 0);

  return (fi_p->mtime > aux_info_mtime);
}

/* Given a line of info from the aux_info file, create a new def_dec_info
   record to remember all of the important information about a function
   definition or declaration.

   Link this record onto the list of such records for the particular file in
   which it occurred in proper (descending) line number order (for now).

   If there is an identical record already on the list for the file, throw
   this one away.  Doing so takes care of the (useless and troublesome)
   duplicates which are bound to crop up due to multiple inclusions of any
   given individual header file.

   Finally, link the new def_dec record onto the list of such records
   pertaining to this particular function name.  */

static void
save_def_or_dec (const char *l, int is_syscalls)
{
  const char *p;
  const char *semicolon_p;
  def_dec_info *def_dec_p = xmalloc (sizeof (def_dec_info));

#ifndef UNPROTOIZE
  def_dec_p->written = 0;
#endif /* !defined (UNPROTOIZE) */

  /* Start processing the line by picking off 5 pieces of information from
     the left hand end of the line.
These are filename, line number, new/old/implicit flag (new = ANSI
     prototype format), definition or declaration flag, and extern/static
     flag.  */

  check_aux_info (l[0] == '/');
  check_aux_info (l[1] == '*');
  check_aux_info (l[2] == ' ');

  {
    const char *filename_start = p = l + 3;
    char *filename;

    while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
           || (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
           )
      p++;
    filename = alloca ((size_t) (p - filename_start) + 1);
    strncpy (filename, filename_start, (size_t) (p - filename_start));
    filename[p - filename_start] = '\0';

    /* Call find_file to find the file_info record associated with the file
       which contained this particular def or dec item.  Note that we
       started out by forcing all of the base source file names (i.e. the
       names of the aux_info files with the .X stripped off) into the
       filenames hash table, and we simultaneously setup file_info records
       for all of these base file names (even if they may be useless later).
       The file_info records for all of these "base" file names (properly)
       act as file_info records for the "original" (i.e. un-included) files
       which were submitted to gcc for compilation (when the -aux-info
       option was used).  */

    def_dec_p->file = find_file (abspath (invocation_filename, filename),
                                 is_syscalls);
  }

  {
    const char *line_number_start = ++p;
    char line_number[10];

    while (*p != ':'
#ifdef HAVE_DOS_BASED_FILE_SYSTEM
           || (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1)))
#endif
           )
      p++;

    strncpy (line_number, line_number_start,
             (size_t) (p - line_number_start));
    line_number[p - line_number_start] = '\0';
    def_dec_p->line = atoi (line_number);
  }

  /* Check that this record describes a new-style, old-style, or implicit
     definition or declaration.  */

  p++;	/* Skip over the `:'.  */
  check_aux_info ((*p == 'N') || (*p == 'O') || (*p == 'I'));

  /* Is this a new style (ANSI prototyped) definition or declaration?  */

  def_dec_p->prototyped = (*p == 'N');

#ifndef UNPROTOIZE

  /* Is this an implicit declaration?  */

  def_dec_p->is_implicit = (*p == 'I');

#endif /* !defined (UNPROTOIZE) */

  p++;

  check_aux_info ((*p == 'C') || (*p == 'F'));

  /* Is this item a function definition (F) or a declaration (C)?  Note
     that we treat items taken from the syscalls file as though they were
     function definitions regardless of what the stuff in the file says.
*/ def_dec_p->is_func_def = ((*p++ == 'F') || is_syscalls); #ifndef UNPROTOIZE def_dec_p->definition = 0; /* Fill this in later if protoizing. */ #endif /* !defined (UNPROTOIZE) */ check_aux_info (*p++ == ' '); check_aux_info (*p++ == '*'); check_aux_info (*p++ == '/'); check_aux_info (*p++ == ' '); #ifdef UNPROTOIZE check_aux_info ((!strncmp (p, "static", 6)) || (!strncmp (p, "extern", 6))); #else /* !defined (UNPROTOIZE) */ if (!strncmp (p, "static", 6)) def_dec_p->is_static = -1; else if (!strncmp (p, "extern", 6)) def_dec_p->is_static = 0; else check_aux_info (0); /* Didn't find either `extern' or `static'. */ #endif /* !defined (UNPROTOIZE) */ { const char *ansi_start = p; p += 6; /* Pass over the "static" or "extern". */ /* We are now past the initial stuff. Search forward from here to find the terminating semicolon that should immediately follow the entire ANSI format function declaration. */ while (*++p != ';') continue; semicolon_p = p; /* Make a copy of the ansi declaration part of the line from the aux_info file. */ def_dec_p->ansi_decl = dupnstr (ansi_start, (size_t) ((semicolon_p+1) - ansi_start)); /* Backup and point at the final right paren of the final argument list. */ p--; #ifndef UNPROTOIZE def_dec_p->f_list_chain = NULL; #endif /* !defined (UNPROTOIZE) */ while (p != ansi_start && (p[-1] == ' ' || p[-1] == '\t')) p--; if (*p != ')') { free_def_dec (def_dec_p); return; } } /* Now isolate a whole set of formal argument lists, one-by-one. Normally, there will only be one list to isolate, but there could be more. */ def_dec_p->f_list_count = 0; for (;;) { const char *left_paren_p = find_corresponding_lparen (p); #ifndef UNPROTOIZE { f_list_chain_item *cip = xmalloc (sizeof (f_list_chain_item)); cip->formals_list = dupnstr (left_paren_p + 1, (size_t) (p - (left_paren_p+1))); /* Add the new chain item at the head of the current list. 
*/ cip->chain_next = def_dec_p->f_list_chain; def_dec_p->f_list_chain = cip; } #endif /* !defined (UNPROTOIZE) */ def_dec_p->f_list_count++; p = left_paren_p - 2; /* p must now point either to another right paren, or to the last character of the name of the function that was declared/defined. If p points to another right paren, then this indicates that we are dealing with multiple formals lists. In that case, there really should be another right paren preceding this right paren. */ if (*p != ')') break; else check_aux_info (*--p == ')'); } { const char *past_fn = p + 1; check_aux_info (*past_fn == ' '); /* Scan leftwards over the identifier that names the function. */ while (is_id_char (*p)) p--; p++; /* p now points to the leftmost character of the function name. */ { char *fn_string = alloca (past_fn - p + 1); strncpy (fn_string, p, (size_t) (past_fn - p)); fn_string[past_fn-p] = '\0'; def_dec_p->hash_entry = lookup (function_name_primary, fn_string); } } /* Look at all of the defs and decs for this function name that we have collected so far. If there is already one which is at the same line number in the same file, then we can discard this new def_dec_info record. As an extra assurance that any such pair of (nominally) identical function declarations are in fact identical, we also compare the ansi_decl parts of the lines from the aux_info files just to be on the safe side. This comparison will fail if (for instance) the user was playing messy games with the preprocessor which ultimately causes one function declaration in one header file to look differently when that file is included by two (or more) other files. 
*/ { const def_dec_info *other; for (other = def_dec_p->hash_entry->ddip; other; other = other->next_for_func) { if (def_dec_p->line == other->line && def_dec_p->file == other->file) { if (strcmp (def_dec_p->ansi_decl, other->ansi_decl)) { notice ("%s:%d: declaration of function '%s' takes different forms\n", def_dec_p->file->hash_entry->symbol, def_dec_p->line, def_dec_p->hash_entry->symbol); exit (FATAL_EXIT_CODE); } free_def_dec (def_dec_p); return; } } } #ifdef UNPROTOIZE /* If we are doing unprotoizing, we must now setup the pointers that will point to the K&R name list and to the K&R argument declarations list. Note that if this is only a function declaration, then we should not expect to find any K&R style formals list following the ANSI-style formals list. This is because GCC knows that such information is useless in the case of function declarations (function definitions are a different story however). Since we are unprotoizing, we don't need any such lists anyway. All we plan to do is to delete all characters between ()'s in any case. */ def_dec_p->formal_names = NULL; def_dec_p->formal_decls = NULL; if (def_dec_p->is_func_def) { p = semicolon_p; check_aux_info (*++p == ' '); check_aux_info (*++p == '/'); check_aux_info (*++p == '*'); check_aux_info (*++p == ' '); check_aux_info (*++p == '('); { const char *kr_names_start = ++p; /* Point just inside '('. */ while (*p++ != ')') continue; p--; /* point to closing right paren */ /* Make a copy of the K&R parameter names list. */ def_dec_p->formal_names = dupnstr (kr_names_start, (size_t) (p - kr_names_start)); } check_aux_info (*++p == ' '); p++; /* p now points to the first character of the K&R style declarations list (if there is one) or to the star-slash combination that ends the comment in which such lists get embedded. */ /* Make a copy of the K&R formal decls list and set the def_dec record to point to it. */ if (*p == '*') /* Are there no K&R declarations? 
*/
	{
	  check_aux_info (*++p == '/');
	  def_dec_p->formal_decls = "";
	}
      else
	{
	  const char *kr_decls_start = p;

	  while (p[0] != '*' || p[1] != '/')
	    p++;
	  p--;

	  check_aux_info (*p == ' ');

	  def_dec_p->formal_decls
	    = dupnstr (kr_decls_start, (size_t) (p - kr_decls_start));
	}

      /* Handle a special case.  If we have a function definition marked as
	 being in "old" style, and if its formal names list is empty, then
	 it may actually have the string "void" in its real formals list
	 in the original source code.  Just to make sure, we will get set up
	 to convert such things anyway.

	 This kludge only needs to be here because of an insurmountable
	 problem with generating .X files.  */

      if (!def_dec_p->prototyped && !*def_dec_p->formal_names)
	def_dec_p->prototyped = 1;
    }

  /* Since we are unprotoizing, if this item is already in old (K&R) style,
     we can just ignore it.  If so, throw away the item now.  */

  if (!def_dec_p->prototyped)
    {
      free_def_dec (def_dec_p);
      return;
    }

#endif /* defined (UNPROTOIZE) */

  /* Add this record to the head of the list of records pertaining to this
     particular function name.  */

  def_dec_p->next_for_func = def_dec_p->hash_entry->ddip;
  def_dec_p->hash_entry->ddip = def_dec_p;

  /* Add this new def_dec_info record to the sorted list of def_dec_info
     records for this file.  Note that we don't have to worry about
     duplicates (caused by multiple inclusions of header files) here because
     we have already eliminated duplicates above.
*/
  if (!def_dec_p->file->defs_decs)
    {
      def_dec_p->file->defs_decs = def_dec_p;
      def_dec_p->next_in_file = NULL;
    }
  else
    {
      int line = def_dec_p->line;
      const def_dec_info *prev = NULL;
      const def_dec_info *curr = def_dec_p->file->defs_decs;
      const def_dec_info *next = curr->next_in_file;

      while (next && (line < curr->line))
	{
	  prev = curr;
	  curr = next;
	  next = next->next_in_file;
	}
      if (line >= curr->line)
	{
	  def_dec_p->next_in_file = curr;
	  if (prev)
	    ((NONCONST def_dec_info *) prev)->next_in_file = def_dec_p;
	  else
	    def_dec_p->file->defs_decs = def_dec_p;
	}
      else /* assert (next == NULL); */
	{
	  ((NONCONST def_dec_info *) curr)->next_in_file = def_dec_p;
	  /* assert (next == NULL); */
	  def_dec_p->next_in_file = next;
	}
    }
}

/* Set up the vector COMPILE_PARAMS which is the argument list for running
   GCC.  Also set input_file_name_index and aux_info_file_name_index to the
   indices of the slots where the file names should go.  */

/* We initialize the vector by removing -g, -O, -S, -c, and -o options,
   and adding '-aux-info AUXFILE -S -o /dev/null INFILE' at the end.  */

static void
munge_compile_params (const char *params_list)
{
  /* Build up the contents in a temporary vector that is so big that it
     has to be big enough.  */
  const char **temp_params
    = alloca ((strlen (params_list) + 8) * sizeof (char *));
  int param_count = 0;
  const char *param;
  struct stat st;

  temp_params[param_count++] = compiler_file_name;
  for (;;)
    {
      while (ISSPACE ((const unsigned char)*params_list))
	params_list++;
      if (!*params_list)
	break;
      param = params_list;
      while (*params_list && !ISSPACE ((const unsigned char)*params_list))
	params_list++;
      if (param[0] != '-')
	temp_params[param_count++]
	  = dupnstr (param, (size_t) (params_list - param));
      else
	{
	  switch (param[1])
	    {
	    case 'g':
	    case 'O':
	    case 'S':
	    case 'c':
	      break;		/* Don't copy these.
*/ case 'o': while (ISSPACE ((const unsigned char)*params_list)) params_list++; while (*params_list && !ISSPACE ((const unsigned char)*params_list)) params_list++; break; default: temp_params[param_count++] = dupnstr (param, (size_t) (params_list - param)); } } if (!*params_list) break; } temp_params[param_count++] = "-aux-info"; /* Leave room for the aux-info file name argument. */ aux_info_file_name_index = param_count; temp_params[param_count++] = NULL; temp_params[param_count++] = "-S"; temp_params[param_count++] = "-o"; if ((stat (HOST_BIT_BUCKET, &st) == 0) && (!S_ISDIR (st.st_mode)) && (access (HOST_BIT_BUCKET, W_OK) == 0)) temp_params[param_count++] = HOST_BIT_BUCKET; else /* FIXME: This is hardly likely to be right, if HOST_BIT_BUCKET is not writable. But until this is rejigged to use make_temp_file(), this is the best we can do. */ temp_params[param_count++] = "/dev/null"; /* Leave room for the input file name argument. */ input_file_name_index = param_count; temp_params[param_count++] = NULL; /* Terminate the list. */ temp_params[param_count++] = NULL; /* Make a copy of the compile_params in heap space. */ compile_params = xmalloc (sizeof (char *) * (param_count+1)); memcpy (compile_params, temp_params, sizeof (char *) * param_count); } /* Do a recompilation for the express purpose of generating a new aux_info file to go with a specific base source file. The result is a boolean indicating success. */ static int gen_aux_info_file (const char *base_filename) { if (!input_file_name_index) munge_compile_params (""); /* Store the full source file name in the argument vector. */ compile_params[input_file_name_index] = shortpath (NULL, base_filename); /* Add .X to source file name to get aux-info file name. 
*/ compile_params[aux_info_file_name_index] = concat (compile_params[input_file_name_index], aux_info_suffix, NULL); if (!quiet_flag) notice ("%s: compiling '%s'\n", pname, compile_params[input_file_name_index]); { char *errmsg_fmt, *errmsg_arg; int wait_status, pid; pid = pexecute (compile_params[0], (char * const *) compile_params, pname, NULL, &errmsg_fmt, &errmsg_arg, PEXECUTE_FIRST | PEXECUTE_LAST | PEXECUTE_SEARCH); if (pid == -1) { int errno_val = errno; fprintf (stderr, "%s: ", pname); fprintf (stderr, errmsg_fmt, errmsg_arg); fprintf (stderr, ": %s\n", xstrerror (errno_val)); return 0; } pid = pwait (pid, &wait_status, 0); if (pid == -1) { notice ("%s: wait: %s\n", pname, xstrerror (errno)); return 0; } if (WIFSIGNALED (wait_status)) { notice ("%s: subprocess got fatal signal %d\n", pname, WTERMSIG (wait_status)); return 0; } if (WIFEXITED (wait_status)) { if (WEXITSTATUS (wait_status) != 0) { notice ("%s: %s exited with status %d\n", pname, compile_params[0], WEXITSTATUS (wait_status)); return 0; } return 1; } gcc_unreachable (); } } /* Read in all of the information contained in a single aux_info file. Save all of the important stuff for later. */ static void process_aux_info_file (const char *base_source_filename, int keep_it, int is_syscalls) { size_t base_len = strlen (base_source_filename); char * aux_info_filename = alloca (base_len + strlen (aux_info_suffix) + 1); char *aux_info_base; char *aux_info_limit; char *aux_info_relocated_name; const char *aux_info_second_line; time_t aux_info_mtime; size_t aux_info_size; int must_create; /* Construct the aux_info filename from the base source filename. */ strcpy (aux_info_filename, base_source_filename); strcat (aux_info_filename, aux_info_suffix); /* Check that the aux_info file exists and is readable. If it does not exist, try to create it (once only). */ /* If file doesn't exist, set must_create. Likewise if it exists and we can read it but it is obsolete. Otherwise, report an error. 
*/ must_create = 0; /* Come here with must_create set to 1 if file is out of date. */ start_over: ; if (access (aux_info_filename, R_OK) == -1) { if (errno == ENOENT) { if (is_syscalls) { notice ("%s: warning: missing SYSCALLS file '%s'\n", pname, aux_info_filename); return; } must_create = 1; } else { int errno_val = errno; notice ("%s: can't read aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); errors++; return; } } #if 0 /* There is code farther down to take care of this. */ else { struct stat s1, s2; stat (aux_info_file_name, &s1); stat (base_source_file_name, &s2); if (s2.st_mtime > s1.st_mtime) must_create = 1; } #endif /* 0 */ /* If we need a .X file, create it, and verify we can read it. */ if (must_create) { if (!gen_aux_info_file (base_source_filename)) { errors++; return; } if (access (aux_info_filename, R_OK) == -1) { int errno_val = errno; notice ("%s: can't read aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); errors++; return; } } { struct stat stat_buf; /* Get some status information about this aux_info file. */ if (stat (aux_info_filename, &stat_buf) == -1) { int errno_val = errno; notice ("%s: can't get status of aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); errors++; return; } /* Check on whether or not this aux_info file is zero length. If it is, then just ignore it and return. */ if ((aux_info_size = stat_buf.st_size) == 0) return; /* Get the date/time of last modification for this aux_info file and remember it. We will have to check that any source files that it contains information about are at least this old or older. */ aux_info_mtime = stat_buf.st_mtime; if (!is_syscalls) { /* Compare mod time with the .c file; update .X file if obsolete. The code later on can fail to check the .c file if it did not directly define any functions. 
*/ if (stat (base_source_filename, &stat_buf) == -1) { int errno_val = errno; notice ("%s: can't get status of aux info file '%s': %s\n", pname, shortpath (NULL, base_source_filename), xstrerror (errno_val)); errors++; return; } if (stat_buf.st_mtime > aux_info_mtime) { must_create = 1; goto start_over; } } } { int aux_info_file; int fd_flags; /* Open the aux_info file. */ fd_flags = O_RDONLY; #ifdef O_BINARY /* Use binary mode to avoid having to deal with different EOL characters. */ fd_flags |= O_BINARY; #endif if ((aux_info_file = open (aux_info_filename, fd_flags, 0444 )) == -1) { int errno_val = errno; notice ("%s: can't open aux info file '%s' for reading: %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); return; } /* Allocate space to hold the aux_info file in memory. */ aux_info_base = xmalloc (aux_info_size + 1); aux_info_limit = aux_info_base + aux_info_size; *aux_info_limit = '\0'; /* Read the aux_info file into memory. */ if (safe_read (aux_info_file, aux_info_base, aux_info_size) != (int) aux_info_size) { int errno_val = errno; notice ("%s: error reading aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); free (aux_info_base); close (aux_info_file); return; } /* Close the aux info file. */ if (close (aux_info_file)) { int errno_val = errno; notice ("%s: error closing aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); free (aux_info_base); close (aux_info_file); return; } } /* Delete the aux_info file (unless requested not to). If the deletion fails for some reason, don't even worry about it. 
*/ if (must_create && !keep_it) if (unlink (aux_info_filename) == -1) { int errno_val = errno; notice ("%s: can't delete aux info file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); } /* Save a pointer into the first line of the aux_info file which contains the filename of the directory from which the compiler was invoked when the associated source file was compiled. This information is used later to help create complete filenames out of the (potentially) relative filenames in the aux_info file. */ { char *p = aux_info_base; while (*p != ':' #ifdef HAVE_DOS_BASED_FILE_SYSTEM || (*p == ':' && *p && *(p+1) && IS_DIR_SEPARATOR (*(p+1))) #endif ) p++; p++; while (*p == ' ') p++; invocation_filename = p; /* Save a pointer to first byte of path. */ while (*p != ' ') p++; *p++ = DIR_SEPARATOR; *p++ = '\0'; while (*p++ != '\n') continue; aux_info_second_line = p; aux_info_relocated_name = 0; if (! IS_ABSOLUTE_PATH (invocation_filename)) { /* INVOCATION_FILENAME is relative; append it to BASE_SOURCE_FILENAME's dir. */ char *dir_end; aux_info_relocated_name = xmalloc (base_len + (p-invocation_filename)); strcpy (aux_info_relocated_name, base_source_filename); dir_end = strrchr (aux_info_relocated_name, DIR_SEPARATOR); #ifdef DIR_SEPARATOR_2 { char *slash; slash = strrchr (dir_end ? dir_end : aux_info_relocated_name, DIR_SEPARATOR_2); if (slash) dir_end = slash; } #endif if (dir_end) dir_end++; else dir_end = aux_info_relocated_name; strcpy (dir_end, invocation_filename); invocation_filename = aux_info_relocated_name; } } { const char *aux_info_p; /* Do a pre-pass on the lines in the aux_info file, making sure that all of the source files referenced in there are at least as old as this aux_info file itself. If not, go back and regenerate the aux_info file anew. Don't do any of this for the syscalls file. 
*/ if (!is_syscalls) { current_aux_info_lineno = 2; for (aux_info_p = aux_info_second_line; *aux_info_p; ) { if (referenced_file_is_newer (aux_info_p, aux_info_mtime)) { free (aux_info_base); free (aux_info_relocated_name); if (keep_it && unlink (aux_info_filename) == -1) { int errno_val = errno; notice ("%s: can't delete file '%s': %s\n", pname, shortpath (NULL, aux_info_filename), xstrerror (errno_val)); return; } must_create = 1; goto start_over; } /* Skip over the rest of this line to start of next line. */ while (*aux_info_p != '\n') aux_info_p++; aux_info_p++; current_aux_info_lineno++; } } /* Now do the real pass on the aux_info lines. Save their information in the in-core data base. */ current_aux_info_lineno = 2; for (aux_info_p = aux_info_second_line; *aux_info_p;) { char *unexpanded_line = unexpand_if_needed (aux_info_p); if (unexpanded_line) { save_def_or_dec (unexpanded_line, is_syscalls); free (unexpanded_line); } else save_def_or_dec (aux_info_p, is_syscalls); /* Skip over the rest of this line and get to start of next line. */ while (*aux_info_p != '\n') aux_info_p++; aux_info_p++; current_aux_info_lineno++; } } free (aux_info_base); free (aux_info_relocated_name); } #ifndef UNPROTOIZE /* Check an individual filename for a .c suffix. If the filename has this suffix, rename the file such that its suffix is changed to .C. This function implements the -C option. */ static void rename_c_file (const hash_table_entry *hp) { const char *filename = hp->symbol; int last_char_index = strlen (filename) - 1; char *const new_filename = alloca (strlen (filename) + strlen (cplus_suffix) + 1); /* Note that we don't care here if the given file was converted or not. It is possible that the given file was *not* converted, simply because there was nothing in it which actually required conversion. Even in this case, we want to do the renaming. Note that we only rename files with the .c suffix (except for the syscalls file, which is left alone). 
   */

  if (filename[last_char_index] != 'c'
      || filename[last_char_index - 1] != '.'
      || IS_SAME_PATH (syscalls_absolute_filename, filename))
    return;

  strcpy (new_filename, filename);
  strcpy (&new_filename[last_char_index], cplus_suffix);

  if (rename (filename, new_filename) == -1)
    {
      int errno_val = errno;
      notice ("%s: warning: can't rename file '%s' to '%s': %s\n",
              pname, shortpath (NULL, filename),
              shortpath (NULL, new_filename), xstrerror (errno_val));
      errors++;
      return;
    }
}
#endif /* !defined (UNPROTOIZE) */

/* Take the list of definitions and declarations attached to a particular
   file_info node and reverse the order of the list.  This should get the
   list into an order such that the item with the lowest associated line
   number is nearest the head of the list.  When these lists are originally
   built, they are in the opposite order.  We want to traverse them in
   normal line number order later (i.e. lowest to highest) so reverse the
   order here.  */

static void
reverse_def_dec_list (const hash_table_entry *hp)
{
  file_info *file_p = hp->fip;
  def_dec_info *prev = NULL;
  def_dec_info *current = (def_dec_info *) file_p->defs_decs;

  if (!current)
    return;                     /* no list to reverse */

  prev = current;
  if (! (current = (def_dec_info *) current->next_in_file))
    return;                     /* can't reverse a single list element */

  prev->next_in_file = NULL;

  while (current)
    {
      def_dec_info *next = (def_dec_info *) current->next_in_file;

      current->next_in_file = prev;
      prev = current;
      current = next;
    }

  file_p->defs_decs = prev;
}

#ifndef UNPROTOIZE

/* Find the (only?) extern definition for a particular function name, starting
   from the head of the linked list of entries for the given name.  If we
   cannot find an extern definition for the given function name, issue a
   warning and scrounge around for the next best thing, i.e. an extern
   function declaration with a prototype attached to it.  Note that we only
   allow such substitutions for extern declarations and never for static
   declarations.
   That's because the only reason we allow them at all is to let un-prototyped
   function declarations for system-supplied library functions get their
   prototypes from our own extra SYSCALLS.c.X file which contains all of the
   correct prototypes for system functions.  */

static const def_dec_info *
find_extern_def (const def_dec_info *head, const def_dec_info *user)
{
  const def_dec_info *dd_p;
  const def_dec_info *extern_def_p = NULL;
  int conflict_noted = 0;

  /* Don't act too stupid here.  Somebody may try to convert an entire system
     in one fell swoop (rather than one program at a time, as should be done)
     and in that case, we may find that there are multiple extern definitions
     of a given function name in the entire set of source files that we are
     converting.  If however one of these definitions resides in exactly the
     same source file as the reference we are trying to satisfy then in that
     case it would be stupid for us to fail to realize that this one
     definition *must* be the precise one we are looking for.

     To make sure that we don't miss an opportunity to make this "same file"
     leap of faith, we do a prescan of the list of records relating to the
     given function name, and we look (on this first scan) *only* for a
     definition of the function which is in the same file as the reference
     we are currently trying to satisfy.  */

  for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
    if (dd_p->is_func_def && !dd_p->is_static && dd_p->file == user->file)
      return dd_p;

  /* Now, since we have not found a definition in the same file as the
     reference, we scan the list again and consider all possibilities from
     all files.  Here we may get conflicts with the things listed in the
     SYSCALLS.c.X file, but if that happens it only means that the source
     code being converted contains its own definition of a function which
     could have been supplied by libc.a.  In such cases, we should avoid
     issuing the normal warning, and defer to the definition given in the
     user's own code.
     */

  for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
    if (dd_p->is_func_def && !dd_p->is_static)
      {
        if (!extern_def_p)      /* Previous definition?  */
          extern_def_p = dd_p;  /* Remember the first definition found.  */
        else
          {
            /* Ignore definition just found if it came from SYSCALLS.c.X.  */

            if (is_syscalls_file (dd_p->file))
              continue;

            /* Quietly replace the definition previously found with the one
               just found if the previous one was from SYSCALLS.c.X.  */

            if (is_syscalls_file (extern_def_p->file))
              {
                extern_def_p = dd_p;
                continue;
              }

            /* If we get here, then there is a conflict between two function
               declarations for the same function, both of which came from
               the user's own code.  */

            if (!conflict_noted)        /* first time we noticed?  */
              {
                conflict_noted = 1;
                notice ("%s: conflicting extern definitions of '%s'\n",
                        pname, head->hash_entry->symbol);
                if (!quiet_flag)
                  {
                    notice ("%s: declarations of '%s' will not be converted\n",
                            pname, head->hash_entry->symbol);
                    notice ("%s: conflict list for '%s' follows:\n",
                            pname, head->hash_entry->symbol);
                    fprintf (stderr, "%s: %s(%d): %s\n",
                             pname,
                             shortpath (NULL, extern_def_p->file->hash_entry->symbol),
                             extern_def_p->line, extern_def_p->ansi_decl);
                  }
              }
            if (!quiet_flag)
              fprintf (stderr, "%s: %s(%d): %s\n",
                       pname, shortpath (NULL, dd_p->file->hash_entry->symbol),
                       dd_p->line, dd_p->ansi_decl);
          }
      }

  /* We want to err on the side of caution, so if we found multiple
     conflicting definitions for the same function, treat this the same as
     if we had found no definitions (i.e. return NULL).  */

  if (conflict_noted)
    return NULL;

  if (!extern_def_p)
    {
      /* We have no definitions for this function so do the next best thing.
         Search for an extern declaration already in prototype form.
       */

      for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
        if (!dd_p->is_func_def && !dd_p->is_static && dd_p->prototyped)
          {
            extern_def_p = dd_p;        /* save a pointer to the definition */
            if (!quiet_flag)
              notice ("%s: warning: using formals list from %s(%d) for function '%s'\n",
                      pname, shortpath (NULL, dd_p->file->hash_entry->symbol),
                      dd_p->line, dd_p->hash_entry->symbol);
            break;
          }

      /* Gripe about unprototyped function declarations that we found no
         corresponding definition (or other source of prototype information)
         for.

         Gripe even if the unprototyped declaration we are worried about
         exists in a file in one of the "system" include directories.  We
         can gripe about these because we should have at least found a
         corresponding (pseudo) definition in the SYSCALLS.c.X file.  If we
         didn't, then that means that the SYSCALLS.c.X file is missing some
         needed prototypes for this particular system.  That is worth telling
         the user about!  */

      if (!extern_def_p)
        {
          const char *file = user->file->hash_entry->symbol;

          if (!quiet_flag)
            if (in_system_include_dir (file))
              {
                /* Why copy this string into `needed' at all?
                   Why not just use user->ansi_decl without copying?  */
                char *needed = alloca (strlen (user->ansi_decl) + 1);
                char *p;

                strcpy (needed, user->ansi_decl);
                p = strstr (needed, user->hash_entry->symbol)
                    + strlen (user->hash_entry->symbol) + 2;
                /* Avoid having ??? in the string.  */
                *p++ = '?';
                *p++ = '?';
                *p++ = '?';
                strcpy (p, ");");

                notice ("%s: %d: '%s' used but missing from SYSCALLS\n",
                        shortpath (NULL, file), user->line,
                        needed + 7);    /* Don't print "extern " */
              }
#if 0
            else
              notice ("%s: %d: warning: no extern definition for '%s'\n",
                      shortpath (NULL, file), user->line,
                      user->hash_entry->symbol);
#endif
        }
    }
  return extern_def_p;
}

/* Find the (only?) static definition for a particular function name in a
   given file.  Here we get the function-name and the file info indirectly
   from the def_dec_info record pointer which is passed in.
   */

static const def_dec_info *
find_static_definition (const def_dec_info *user)
{
  const def_dec_info *head = user->hash_entry->ddip;
  const def_dec_info *dd_p;
  int num_static_defs = 0;
  const def_dec_info *static_def_p = NULL;

  for (dd_p = head; dd_p; dd_p = dd_p->next_for_func)
    if (dd_p->is_func_def && dd_p->is_static && (dd_p->file == user->file))
      {
        static_def_p = dd_p;    /* save a pointer to the definition */
        num_static_defs++;
      }
  if (num_static_defs == 0)
    {
      if (!quiet_flag)
        notice ("%s: warning: no static definition for '%s' in file '%s'\n",
                pname, head->hash_entry->symbol,
                shortpath (NULL, user->file->hash_entry->symbol));
    }
  else if (num_static_defs > 1)
    {
      notice ("%s: multiple static defs of '%s' in file '%s'\n",
              pname, head->hash_entry->symbol,
              shortpath (NULL, user->file->hash_entry->symbol));
      return NULL;
    }
  return static_def_p;
}

/* Find good prototype style formal argument lists for all of the function
   declarations which didn't have them before now.

   To do this we consider each function name one at a time.  For each function
   name, we look at the items on the linked list of def_dec_info records for
   that particular name.  Somewhere on this list we should find one (and only
   one) def_dec_info record which represents the actual function definition,
   and this record should have a nice formal argument list already associated
   with it.  Thus, all we have to do is to connect up all of the other
   def_dec_info records for this particular function name to the special one
   which has the full-blown formals list.

   Of course it is a little more complicated than just that.  See below for
   more details.  */

static void
connect_defs_and_decs (const hash_table_entry *hp)
{
  const def_dec_info *dd_p;
  const def_dec_info *extern_def_p = NULL;
  int first_extern_reference = 1;

  /* Traverse the list of definitions and declarations for this particular
     function name.
     For each item on the list, if it is a function definition (either old
     style or new style) then GCC has already been kind enough to produce a
     prototype for us, and it is associated with the item already, so declare
     the item as its own associated "definition".

     Also, for each item which is only a function declaration, but which
     nonetheless has its own prototype already (obviously supplied by the
     user) declare the item as its own definition.

     Note that when/if there are multiple user-supplied prototypes already
     present for multiple declarations of any given function, these multiple
     prototypes *should* all match exactly with one another and with the
     prototype for the actual function definition.  We don't check for this
     here however, since we assume that the compiler must have already done
     this consistency checking when it was creating the .X files.  */

  for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
    if (dd_p->prototyped)
      ((NONCONST def_dec_info *) dd_p)->definition = dd_p;

  /* Traverse the list of definitions and declarations for this particular
     function name.

     For each item on the list, if it is an extern function declaration and
     if it has no associated definition yet, go try to find the matching
     extern definition for the declaration.

     When looking for the matching function definition, warn the user if we
     fail to find one.

     If we find more than one function definition also issue a warning.

     Do the search for the matching definition only once per unique function
     name (and only when absolutely needed) so that we can avoid putting out
     redundant warning messages, and so that we will only put out warning
     messages when there is actually a reference (i.e. a declaration) for
     which we need to find a matching definition.
     */

  for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
    if (!dd_p->is_func_def && !dd_p->is_static && !dd_p->definition)
      {
        if (first_extern_reference)
          {
            extern_def_p = find_extern_def (hp->ddip, dd_p);
            first_extern_reference = 0;
          }
        ((NONCONST def_dec_info *) dd_p)->definition = extern_def_p;
      }

  /* Traverse the list of definitions and declarations for this particular
     function name.

     For each item on the list, if it is a static function declaration and
     if it has no associated definition yet, go try to find the matching
     static definition for the declaration within the same file.

     When looking for the matching function definition, warn the user if we
     fail to find one in the same file with the declaration, and refuse to
     convert this kind of cross-file static function declaration.  After
     all, this is stupid practice and should be discouraged.

     We don't have to worry about the possibility that there is more than one
     matching function definition in the given file because that would have
     been flagged as an error by the compiler.

     Do the search for the matching definition only once per unique
     function-name/source-file pair (and only when absolutely needed) so that
     we can avoid putting out redundant warning messages, and so that we will
     only put out warning messages when there is actually a reference (i.e. a
     declaration) for which we actually need to find a matching definition.  */

  for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
    if (!dd_p->is_func_def && dd_p->is_static && !dd_p->definition)
      {
        const def_dec_info *dd_p2;
        const def_dec_info *static_def;

        /* We have now found a single static declaration for which we need to
           find a matching definition.  We want to minimize the work (and the
           number of warnings), so we will find an appropriate (matching)
           static definition for this declaration, and then distribute it (as
           the definition for) any and all other static declarations for this
           function name which occur within the same file, and which do not
           already have definitions.
           Note that a trick is used here to prevent subsequent attempts to
           call find_static_definition for a given function-name & file if the
           first such call returns NULL.  Essentially, we convert these NULL
           return values to -1, and put the -1 into the definition field for
           each other static declaration from the same file which does not
           already have an associated definition.  This makes these other
           static declarations look like they are actually defined already
           when the outer loop here revisits them later on.  Thus, the outer
           loop will skip over them.  Later, we turn the -1's back to
           NULL's.  */

        ((NONCONST def_dec_info *) dd_p)->definition =
          (static_def = find_static_definition (dd_p))
          ? static_def
          : (const def_dec_info *) -1;

        for (dd_p2 = dd_p->next_for_func; dd_p2; dd_p2 = dd_p2->next_for_func)
          if (!dd_p2->is_func_def && dd_p2->is_static
              && !dd_p2->definition && (dd_p2->file == dd_p->file))
            ((NONCONST def_dec_info *) dd_p2)->definition = dd_p->definition;
      }

  /* Convert any dummy (-1) definitions we created in the step above back to
     NULL's (as they should be).  */

  for (dd_p = hp->ddip; dd_p; dd_p = dd_p->next_for_func)
    if (dd_p->definition == (def_dec_info *) -1)
      ((NONCONST def_dec_info *) dd_p)->definition = NULL;
}

#endif /* !defined (UNPROTOIZE) */

/* Given a pointer into the clean text buffer, return a number which is the
   original source line number that the given pointer points into.  */

static int
identify_lineno (const char *clean_p)
{
  int line_num = 1;
  const char *scan_p;

  for (scan_p = clean_text_base; scan_p <= clean_p; scan_p++)
    if (*scan_p == '\n')
      line_num++;
  return line_num;
}

/* Issue an error message and give up on doing this particular edit.
   */

static void
declare_source_confusing (const char *clean_p)
{
  if (!quiet_flag)
    {
      if (clean_p == 0)
        notice ("%s: %d: warning: source too confusing\n",
                shortpath (NULL, convert_filename),
                last_known_line_number);
      else
        notice ("%s: %d: warning: source too confusing\n",
                shortpath (NULL, convert_filename),
                identify_lineno (clean_p));
    }
  longjmp (source_confusion_recovery, 1);
}

/* Check that a condition which is expected to be true in the original source
   code is in fact true.  If not, issue an error message and give up on
   converting this particular source file.  */

static void
check_source (int cond, const char *clean_p)
{
  if (!cond)
    declare_source_confusing (clean_p);
}

/* If we think of the in-core cleaned text buffer as a memory mapped file
   (with the variable last_known_line_start acting as sort of a file
   pointer) then we can imagine doing "seeks" on the buffer.  The following
   routine implements a kind of "seek" operation for the in-core (cleaned)
   copy of the source file.  When finished, it returns a pointer to the
   start of a given (numbered) line in the cleaned text buffer.

   Note that protoize only has to "seek" in the forward direction on the
   in-core cleaned text file buffers, and it never needs to back up.

   This routine is made a little bit faster by remembering the line number
   (and pointer value) supplied (and returned) from the previous "seek".
   This prevents us from always having to start all over back at the top
   of the in-core cleaned buffer again.  */

static const char *
seek_to_line (int n)
{
  gcc_assert (n >= last_known_line_number);

  while (n > last_known_line_number)
    {
      while (*last_known_line_start != '\n')
        check_source (++last_known_line_start < clean_text_limit, 0);
      last_known_line_start++;
      last_known_line_number++;
    }
  return last_known_line_start;
}

/* Given a pointer to a character in the cleaned text buffer, return a pointer
   to the next non-whitespace character which follows it.
   */

static const char *
forward_to_next_token_char (const char *ptr)
{
  for (++ptr; ISSPACE ((const unsigned char) *ptr);
       check_source (++ptr < clean_text_limit, 0))
    continue;
  return ptr;
}

/* Copy a chunk of text of length `len' and starting at `str' to the current
   output buffer.  Note that all attempts to add stuff to the current output
   buffer ultimately go through here.  */

static void
output_bytes (const char *str, size_t len)
{
  if ((repl_write_ptr + 1) + len >= repl_text_limit)
    {
      size_t new_size = (repl_text_limit - repl_text_base) << 1;
      char *new_buf = xrealloc (repl_text_base, new_size);

      repl_write_ptr = new_buf + (repl_write_ptr - repl_text_base);
      repl_text_base = new_buf;
      repl_text_limit = new_buf + new_size;
    }
  memcpy (repl_write_ptr + 1, str, len);
  repl_write_ptr += len;
}

/* Copy all bytes (except the trailing null) of a null terminated string to
   the current output buffer.  */

static void
output_string (const char *str)
{
  output_bytes (str, strlen (str));
}

/* Copy some characters from the original text buffer to the current output
   buffer.

   This routine takes a pointer argument `p' which is assumed to be a pointer
   into the cleaned text buffer.  The bytes which are copied are the
   `original' equivalents for the set of bytes between the last value of
   `clean_read_ptr' and the argument value `p'.

   The set of bytes copied however, comes *not* from the cleaned text buffer,
   but rather from the direct counterparts of these bytes within the original
   text buffer.

   Thus, when this function is called, some bytes from the original text
   buffer (which may include original comments and preprocessing directives)
   will be copied into the output buffer.

   Note that the request implied when this routine is called includes the
   byte pointed to by the argument pointer `p'.
   */

static void
output_up_to (const char *p)
{
  size_t copy_length = (size_t) (p - clean_read_ptr);
  const char *copy_start = orig_text_base
                           + (clean_read_ptr - clean_text_base) + 1;

  if (copy_length == 0)
    return;

  output_bytes (copy_start, copy_length);
  clean_read_ptr = p;
}

/* Given a pointer to a def_dec_info record which represents some form of
   definition of a function (perhaps a real definition, or in lieu of that
   perhaps just a declaration with a full prototype) return true if this
   function is one which we should avoid converting.  Return false
   otherwise.  */

static int
other_variable_style_function (const char *ansi_header)
{
#ifdef UNPROTOIZE

  /* See if we have a stdarg function, or a function which has stdarg style
     parameters or a stdarg style return type.  */

  return strstr (ansi_header, "...") != 0;

#else /* !defined (UNPROTOIZE) */

  /* See if we have a varargs function, or a function which has varargs style
     parameters or a varargs style return type.  */

  const char *p;
  int len = strlen (varargs_style_indicator);

  for (p = ansi_header; p; )
    {
      const char *candidate;

      if ((candidate = strstr (p, varargs_style_indicator)) == 0)
        return 0;
      else if (!is_id_char (candidate[-1]) && !is_id_char (candidate[len]))
        return 1;
      else
        p = candidate + 1;
    }
  return 0;
#endif /* !defined (UNPROTOIZE) */
}

/* Do the editing operation specifically for a function "declaration".  Note
   that editing for function "definitions" is handled in a separate routine
   below.
   */

static void
edit_fn_declaration (const def_dec_info *def_dec_p,
                     const char *volatile clean_text_p)
{
  const char *start_formals;
  const char *end_formals;
  const char *function_to_edit = def_dec_p->hash_entry->symbol;
  size_t func_name_len = strlen (function_to_edit);
  const char *end_of_fn_name;

#ifndef UNPROTOIZE
  const f_list_chain_item *this_f_list_chain_item;
  const def_dec_info *definition = def_dec_p->definition;

  /* If we are protoizing, and if we found no corresponding definition for
     this particular function declaration, then just leave this declaration
     exactly as it is.  */

  if (!definition)
    return;

  /* If we are protoizing, and if the corresponding definition that we found
     for this particular function declaration defined an old style varargs
     function, then we want to issue a warning and just leave this function
     declaration unconverted.  */

  if (other_variable_style_function (definition->ansi_decl))
    {
      if (!quiet_flag)
        notice ("%s: %d: warning: varargs function declaration not converted\n",
                shortpath (NULL, def_dec_p->file->hash_entry->symbol),
                def_dec_p->line);
      return;
    }

#endif /* !defined (UNPROTOIZE) */

  /* Setup here to recover from confusing source code detected during this
     particular "edit".  */

  save_pointers ();
  if (setjmp (source_confusion_recovery))
    {
      restore_pointers ();
      notice ("%s: declaration of function '%s' not converted\n",
              pname, function_to_edit);
      return;
    }

  /* We are editing a function declaration.  The line number we did a seek to
     contains the comma or semicolon which follows the declaration.  Our job
     now is to scan backwards looking for the function name.  This name *must*
     be followed by open paren (ignoring whitespace, of course).  We need to
     replace everything between that open paren and the corresponding closing
     paren.  If we are protoizing, we need to insert the prototype-style
     formals lists.  If we are unprotoizing, we need to just delete everything
     between the pairs of opening and closing parens.  */

  /* First move up to the end of the line.
     */

  while (*clean_text_p != '\n')
    check_source (++clean_text_p < clean_text_limit, 0);
  clean_text_p--;               /* Point to just before the newline character.  */

  /* Now we can scan backwards for the function name.  */

  do
    {
      for (;;)
        {
          /* Scan leftwards until we find some character which can be part of
             an identifier.  */

          while (!is_id_char (*clean_text_p))
            check_source (--clean_text_p > clean_read_ptr, 0);

          /* Scan backwards until we find a char that cannot be part of an
             identifier.  */

          while (is_id_char (*clean_text_p))
            check_source (--clean_text_p > clean_read_ptr, 0);

          /* Having found an "id break", see if the following id is the one
             that we are looking for.  If so, then exit from this loop.  */

          if (!strncmp (clean_text_p + 1, function_to_edit, func_name_len))
            {
              char ch = *(clean_text_p + 1 + func_name_len);

              /* Must also check to see that the name in the source text ends
                 where it should (in order to prevent bogus matches on similar
                 but longer identifiers).  */

              if (! is_id_char (ch))
                break;                  /* exit from loop */
            }
        }

      /* We have now found the first perfect match for the function name in
         our backward search.  This may or may not be the actual function
         name at the start of the actual function declaration (i.e. we could
         have easily been misled).  We will try to avoid getting fooled too
         often by looking forward for the open paren which should follow the
         identifier we just found.  We ignore whitespace while hunting.  If
         the next non-whitespace byte we see is *not* an open left paren,
         then we must assume that we have been fooled and we start over again
         accordingly.  Note that there is no guarantee, that even if we do
         see the open paren, that we are in the right place.  Programmers do
         the strangest things sometimes!  */

      end_of_fn_name = clean_text_p + strlen (def_dec_p->hash_entry->symbol);
      start_formals = forward_to_next_token_char (end_of_fn_name);
    }
  while (*start_formals != '(');

  /* start_of_formals now points to the opening left paren which immediately
     follows the name of the function.
     */

  /* Note that there may be several formals lists which need to be modified
     due to the possibility that the return type of this function is a
     pointer-to-function type.  If there are several formals lists, we
     convert them in left-to-right order here.  */

#ifndef UNPROTOIZE
  this_f_list_chain_item = definition->f_list_chain;
#endif /* !defined (UNPROTOIZE) */

  for (;;)
    {
      {
        int depth;

        end_formals = start_formals + 1;
        depth = 1;
        for (; depth; check_source (++end_formals < clean_text_limit, 0))
          {
            switch (*end_formals)
              {
              case '(':
                depth++;
                break;
              case ')':
                depth--;
                break;
              }
          }
        end_formals--;
      }

      /* end_formals now points to the closing right paren of the formals
         list whose left paren is pointed to by start_formals.  */

      /* Now, if we are protoizing, we insert the new ANSI-style formals list
         attached to the associated definition of this function.  If however
         we are unprotoizing, then we simply delete any formals list which
         may be present.  */

      output_up_to (start_formals);
#ifndef UNPROTOIZE
      if (this_f_list_chain_item)
        {
          output_string (this_f_list_chain_item->formals_list);
          this_f_list_chain_item = this_f_list_chain_item->chain_next;
        }
      else
        {
          if (!quiet_flag)
            notice ("%s: warning: too many parameter lists in declaration of '%s'\n",
                    pname, def_dec_p->hash_entry->symbol);
          check_source (0, end_formals);  /* leave the declaration intact */
        }
#endif /* !defined (UNPROTOIZE) */
      clean_read_ptr = end_formals - 1;

      /* Now see if it looks like there may be another formals list associated
         with the function declaration that we are converting (following the
         formals list that we just converted).
         */

      {
        const char *another_r_paren = forward_to_next_token_char (end_formals);

        if ((*another_r_paren != ')')
            || (*(start_formals = forward_to_next_token_char (another_r_paren)) != '('))
          {
#ifndef UNPROTOIZE
            if (this_f_list_chain_item)
              {
                if (!quiet_flag)
                  notice ("\n%s: warning: too few parameter lists in declaration of '%s'\n",
                          pname, def_dec_p->hash_entry->symbol);
                check_source (0, start_formals);  /* leave the decl intact */
              }
#endif /* !defined (UNPROTOIZE) */
            break;
          }
      }

      /* There does appear to be yet another formals list, so loop around
         again, and convert it also.  */
    }
}

/* Edit a whole group of formals lists, starting with the rightmost one from
   some set of formals lists.  This routine is called once (from the outside)
   for each function declaration which is converted.  It is recursive however,
   and it calls itself once for each remaining formal list that lies to the
   left of the one it was originally called to work on.  Thus, a whole set
   gets done in right-to-left order.

   This routine returns nonzero if it thinks that it should not be trying to
   convert this particular function definition (because the name of the
   function doesn't match the one expected).  */

static int
edit_formals_lists (const char *end_formals, unsigned int f_list_count,
                    const def_dec_info *def_dec_p)
{
  const char *start_formals;
  int depth;

  start_formals = end_formals - 1;
  depth = 1;
  for (; depth; check_source (--start_formals > clean_read_ptr, 0))
    {
      switch (*start_formals)
        {
        case '(':
          depth--;
          break;
        case ')':
          depth++;
          break;
        }
    }
  start_formals++;

  /* start_formals now points to the opening left paren of the formals
     list.  */

  f_list_count--;

  if (f_list_count)
    {
      const char *next_end;

      /* There should be more formal lists to the left of here.
         */

      next_end = start_formals - 1;
      check_source (next_end > clean_read_ptr, 0);
      while (ISSPACE ((const unsigned char) *next_end))
        check_source (--next_end > clean_read_ptr, 0);
      check_source (*next_end == ')', next_end);
      check_source (--next_end > clean_read_ptr, 0);
      check_source (*next_end == ')', next_end);
      if (edit_formals_lists (next_end, f_list_count, def_dec_p))
        return 1;
    }

  /* Check that the function name in the header we are working on is the same
     as the one we would expect to find.  If not, issue a warning and return
     nonzero.  */

  if (f_list_count == 0)
    {
      const char *expected = def_dec_p->hash_entry->symbol;
      const char *func_name_start;
      const char *func_name_limit;
      size_t func_name_len;

      for (func_name_limit = start_formals - 1;
           ISSPACE ((const unsigned char) *func_name_limit); )
        check_source (--func_name_limit > clean_read_ptr, 0);

      for (func_name_start = func_name_limit++;
           is_id_char (*func_name_start);
           func_name_start--)
        check_source (func_name_start > clean_read_ptr, 0);
      func_name_start++;
      func_name_len = func_name_limit - func_name_start;
      if (func_name_len == 0)
        check_source (0, func_name_start);
      if (func_name_len != strlen (expected)
          || strncmp (func_name_start, expected, func_name_len))
        {
          notice ("%s: %d: warning: found '%s' but expected '%s'\n",
                  shortpath (NULL, def_dec_p->file->hash_entry->symbol),
                  identify_lineno (func_name_start),
                  dupnstr (func_name_start, func_name_len),
                  expected);
          return 1;
        }
    }

  output_up_to (start_formals);

#ifdef UNPROTOIZE
  if (f_list_count == 0)
    output_string (def_dec_p->formal_names);
#else /* !defined (UNPROTOIZE) */
  {
    unsigned f_list_depth;
    const f_list_chain_item *flci_p = def_dec_p->f_list_chain;

    /* At this point, the current value of f_list count says how many links
       we have to follow through the f_list_chain to get to the particular
       formals list that we need to output next.
       */

    for (f_list_depth = 0; f_list_depth < f_list_count; f_list_depth++)
      flci_p = flci_p->chain_next;

    output_string (flci_p->formals_list);
  }
#endif /* !defined (UNPROTOIZE) */

  clean_read_ptr = end_formals - 1;
  return 0;
}

/* Given a pointer to a byte in the clean text buffer which points to the
   beginning of a line that contains a "follower" token for a function
   definition header, do whatever is necessary to find the right closing
   paren for the rightmost formals list of the function definition header.  */

static const char *
find_rightmost_formals_list (const char *clean_text_p)
{
  const char *end_formals;

  /* We are editing a function definition.  The line number we did a seek
     to contains the first token which immediately follows the entire set of
     formals lists which are part of this particular function definition
     header.

     Our job now is to scan leftwards in the clean text looking for the
     right-paren which is at the end of the function header's rightmost
     formals list.

     If we ignore whitespace, this right paren should be the first one we
     see which is (ignoring whitespace) immediately followed either by the
     open curly-brace beginning the function body or by an alphabetic
     character (in the case where the function definition is in old (K&R)
     style and there are some declarations of formal parameters).  */

  /* It is possible that the right paren we are looking for is on the
     current line (together with its following token).  Just in case that
     might be true, we start out here by skipping down to the right end of
     the current line before starting our scan.  */

  for (end_formals = clean_text_p; *end_formals != '\n'; end_formals++)
    continue;
  end_formals--;

#ifdef UNPROTOIZE

  /* Now scan backwards while looking for the right end of the rightmost
     formals list associated with this function definition.  */

  {
    char ch;
    const char *l_brace_p;

    /* Look leftward and try to find a right-paren.
       */

    while (*end_formals != ')')
      {
        if (ISSPACE ((unsigned char) *end_formals))
          while (ISSPACE ((unsigned char) *end_formals))
            check_source (--end_formals > clean_read_ptr, 0);
        else
          check_source (--end_formals > clean_read_ptr, 0);
      }

    ch = *(l_brace_p = forward_to_next_token_char (end_formals));

    /* Since we are unprotoizing an ANSI-style (prototyped) function
       definition, there had better not be anything (except whitespace)
       between the end of the ANSI formals list and the beginning of the
       function body (i.e. the '{').  */

    check_source (ch == '{', l_brace_p);
  }

#else /* !defined (UNPROTOIZE) */

  /* Now scan backwards while looking for the right end of the rightmost
     formals list associated with this function definition.  */

  while (1)
    {
      char ch;
      const char *l_brace_p;

      /* Look leftward and try to find a right-paren.  */

      while (*end_formals != ')')
        {
          if (ISSPACE ((const unsigned char) *end_formals))
            while (ISSPACE ((const unsigned char) *end_formals))
              check_source (--end_formals > clean_read_ptr, 0);
          else
            check_source (--end_formals > clean_read_ptr, 0);
        }

      ch = *(l_brace_p = forward_to_next_token_char (end_formals));

      /* Since it is possible that we found a right paren before the starting
         '{' of the body which IS NOT the one at the end of the real K&R
         formals list (say for instance, we found one embedded inside one of
         the old K&R formal parameter declarations) we have to check to be
         sure that this is in fact the right paren that we were looking for.

         The one we were looking for *must* be followed by either a '{' or
         by an alphabetic character, while others *cannot* validly be
         followed by such characters.  */

      if ((ch == '{') || ISALPHA ((unsigned char) ch))
        break;

      /* At this point, we have found a right paren, but we know that it is
         not the one we were looking for, so backup one character and keep
         looking.
*/ check_source (--end_formals > clean_read_ptr, 0); } #endif /* !defined (UNPROTOIZE) */ return end_formals; } #ifndef UNPROTOIZE /* Insert into the output file a totally new declaration for a function which (up until now) was being called from within the current block without having been declared at any point such that the declaration was visible (i.e. in scope) at the point of the call. We need to add in explicit declarations for all such function calls in order to get the full benefit of prototype-based function call parameter type checking. */ static void add_local_decl (const def_dec_info *def_dec_p, const char *clean_text_p) { const char *start_of_block; const char *function_to_edit = def_dec_p->hash_entry->symbol; /* Don't insert new local explicit declarations unless explicitly requested to do so. */ if (!local_flag) return; /* Setup here to recover from confusing source code detected during this particular "edit". */ save_pointers (); if (setjmp (source_confusion_recovery)) { restore_pointers (); notice ("%s: local declaration for function '%s' not inserted\n", pname, function_to_edit); return; } /* We have already done a seek to the start of the line which should contain *the* open curly brace which begins the block in which we need to insert an explicit function declaration (to replace the implicit one). Now we scan that line, starting from the left, until we find the open curly brace we are looking for. Note that there may actually be multiple open curly braces on the given line, but we will be happy with the leftmost one no matter what. */ start_of_block = clean_text_p; while (*start_of_block != '{' && *start_of_block != '\n') check_source (++start_of_block < clean_text_limit, 0); /* Note that the line from the original source could possibly contain *no* open curly braces! This happens if the line contains a macro call which expands into a chunk of text which includes a block (and that block's associated open and close curly braces). 
In cases like this, we give up, issue a warning, and do nothing. */ if (*start_of_block != '{') { if (!quiet_flag) notice ("\n%s: %d: warning: can't add declaration of '%s' into macro call\n", def_dec_p->file->hash_entry->symbol, def_dec_p->line, def_dec_p->hash_entry->symbol); return; } /* Figure out what a nice (pretty) indentation would be for the new declaration we are adding. In order to do this, we must scan forward from the '{' until we find the first line which starts with some non-whitespace characters (i.e. real "token" material). */ { const char *ep = forward_to_next_token_char (start_of_block) - 1; const char *sp; /* Now we have ep pointing at the rightmost byte of some existing indent stuff. At least that is the hope. We can now just scan backwards and find the left end of the existing indentation string, and then copy it to the output buffer. */ for (sp = ep; ISSPACE ((const unsigned char)*sp) && *sp != '\n'; sp--) continue; /* Now write out the open { which began this block, and any following trash up to and including the last byte of the existing indent that we just found. */ output_up_to (ep); /* Now we go ahead and insert the new declaration at this point. If the definition of the given function is in the same file that we are currently editing, and if its full ANSI declaration normally would start with the keyword `extern', suppress the `extern'. */ { const char *decl = def_dec_p->definition->ansi_decl; if ((*decl == 'e') && (def_dec_p->file == def_dec_p->definition->file)) decl += 7; output_string (decl); } /* Finally, write out a new indent string, just like the preceding one that we found. This will typically include a newline as the first character of the indent string. 
*/ output_bytes (sp, (size_t) (ep - sp) + 1); } } /* Given a pointer to a file_info record, and a pointer to the beginning of a line (in the clean text buffer) which is assumed to contain the first "follower" token for the first function definition header in the given file, find a good place to insert some new global function declarations (which will replace scattered and imprecise implicit ones) and then insert the new explicit declaration at that point in the file. */ static void add_global_decls (const file_info *file_p, const char *clean_text_p) { const def_dec_info *dd_p; const char *scan_p; /* Setup here to recover from confusing source code detected during this particular "edit". */ save_pointers (); if (setjmp (source_confusion_recovery)) { restore_pointers (); notice ("%s: global declarations for file '%s' not inserted\n", pname, shortpath (NULL, file_p->hash_entry->symbol)); return; } /* Start by finding a good location for adding the new explicit function declarations. To do this, we scan backwards, ignoring whitespace and comments and other junk until we find either a semicolon, or until we hit the beginning of the file. */ scan_p = find_rightmost_formals_list (clean_text_p); for (;; --scan_p) { if (scan_p < clean_text_base) break; check_source (scan_p > clean_read_ptr, 0); if (*scan_p == ';') break; } /* scan_p now points either to a semicolon, or to just before the start of the whole file. */ /* Now scan forward for the first non-whitespace character. In theory, this should be the first character of the following function definition header. We will put in the added declarations just prior to that. */ scan_p++; while (ISSPACE ((const unsigned char)*scan_p)) scan_p++; scan_p--; output_up_to (scan_p); /* Now write out full prototypes for all of the things that had been implicitly declared in this file (but only those for which we were actually able to find unique matching definitions). Avoid duplicates by marking things that we write out as we go. 
*/ { int some_decls_added = 0; for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file) if (dd_p->is_implicit && dd_p->definition && !dd_p->definition->written) { const char *decl = dd_p->definition->ansi_decl; /* If the function for which we are inserting a declaration is actually defined later in the same file, then suppress the leading `extern' keyword (if there is one). */ if (*decl == 'e' && (dd_p->file == dd_p->definition->file)) decl += 7; output_string ("\n"); output_string (decl); some_decls_added = 1; ((NONCONST def_dec_info *) dd_p->definition)->written = 1; } if (some_decls_added) output_string ("\n\n"); } /* Unmark all of the definitions that we just marked. */ for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file) if (dd_p->definition) ((NONCONST def_dec_info *) dd_p->definition)->written = 0; } #endif /* !defined (UNPROTOIZE) */ /* Do the editing operation specifically for a function "definition". Note that editing operations for function "declarations" are handled by a separate routine above. */ static void edit_fn_definition (const def_dec_info *def_dec_p, const char *volatile clean_text_p) { const char *end_formals; const char *function_to_edit = def_dec_p->hash_entry->symbol; /* Setup here to recover from confusing source code detected during this particular "edit". */ save_pointers (); if (setjmp (source_confusion_recovery)) { restore_pointers (); notice ("%s: definition of function '%s' not converted\n", pname, function_to_edit); return; } end_formals = find_rightmost_formals_list (clean_text_p); /* end_of_formals now points to the closing right paren of the rightmost formals list which is actually part of the `header' of the function definition that we are converting. */ /* If the header of this function definition looks like it declares a function with a variable number of arguments, and if the way it does that is different from that way we would like it (i.e. varargs vs. 
stdarg) then issue a warning and leave the header unconverted. */ if (other_variable_style_function (def_dec_p->ansi_decl)) { if (!quiet_flag) notice ("%s: %d: warning: definition of %s not converted\n", shortpath (NULL, def_dec_p->file->hash_entry->symbol), identify_lineno (end_formals), other_var_style); output_up_to (end_formals); return; } if (edit_formals_lists (end_formals, def_dec_p->f_list_count, def_dec_p)) { restore_pointers (); notice ("%s: definition of function '%s' not converted\n", pname, function_to_edit); return; } /* Have to output the last right paren because this never gets flushed by edit_formals_list. */ output_up_to (end_formals); #ifdef UNPROTOIZE { const char *decl_p; const char *semicolon_p; const char *limit_p; const char *scan_p; int had_newlines = 0; /* Now write out the K&R style formal declarations, one per line. */ decl_p = def_dec_p->formal_decls; limit_p = decl_p + strlen (decl_p); for (;decl_p < limit_p; decl_p = semicolon_p + 2) { for (semicolon_p = decl_p; *semicolon_p != ';'; semicolon_p++) continue; output_string ("\n"); output_string (indent_string); output_bytes (decl_p, (size_t) ((semicolon_p + 1) - decl_p)); } /* If there are no newlines between the end of the formals list and the start of the body, we should insert one now. */ for (scan_p = end_formals+1; *scan_p != '{'; ) { if (*scan_p == '\n') { had_newlines = 1; break; } check_source (++scan_p < clean_text_limit, 0); } if (!had_newlines) output_string ("\n"); } #else /* !defined (UNPROTOIZE) */ /* If we are protoizing, there may be some flotsam & jetsam (like comments and preprocessing directives) after the old formals list but before the following { and we would like to preserve that stuff while effectively deleting the existing K&R formal parameter declarations. We do so here in a rather tricky way. 
Basically, we white out any stuff *except* the comments/pp-directives in the original text buffer, then, if there is anything in this area *other* than whitespace, we output it. */ { const char *end_formals_orig; const char *start_body; const char *start_body_orig; const char *scan; const char *scan_orig; int have_flotsam = 0; int have_newlines = 0; for (start_body = end_formals + 1; *start_body != '{';) check_source (++start_body < clean_text_limit, 0); end_formals_orig = orig_text_base + (end_formals - clean_text_base); start_body_orig = orig_text_base + (start_body - clean_text_base); scan = end_formals + 1; scan_orig = end_formals_orig + 1; for (; scan < start_body; scan++, scan_orig++) { if (*scan == *scan_orig) { have_newlines |= (*scan_orig == '\n'); /* Leave identical whitespace alone. */ if (!ISSPACE ((const unsigned char)*scan_orig)) *((NONCONST char *) scan_orig) = ' '; /* identical - so whiteout */ } else have_flotsam = 1; } if (have_flotsam) output_bytes (end_formals_orig + 1, (size_t) (start_body_orig - end_formals_orig) - 1); else if (have_newlines) output_string ("\n"); else output_string (" "); clean_read_ptr = start_body - 1; } #endif /* !defined (UNPROTOIZE) */ } /* Clean up the clean text buffer. Do this by converting comments and preprocessing directives into spaces. Also convert line continuations into whitespace. Also, whiteout string and character literals. */ static void do_cleaning (char *new_clean_text_base, const char *new_clean_text_limit) { char *scan_p; int non_whitespace_since_newline = 0; for (scan_p = new_clean_text_base; scan_p < new_clean_text_limit; scan_p++) { switch (*scan_p) { case '/': /* Handle comments. 
*/ if (scan_p[1] != '*') goto regular; non_whitespace_since_newline = 1; scan_p[0] = ' '; scan_p[1] = ' '; scan_p += 2; while (scan_p[1] != '/' || scan_p[0] != '*') { if (!ISSPACE ((const unsigned char)*scan_p)) *scan_p = ' '; ++scan_p; gcc_assert (scan_p < new_clean_text_limit); } *scan_p++ = ' '; *scan_p = ' '; break; case '#': /* Handle pp directives. */ if (non_whitespace_since_newline) goto regular; *scan_p = ' '; while (scan_p[1] != '\n' || scan_p[0] == '\\') { if (!ISSPACE ((const unsigned char)*scan_p)) *scan_p = ' '; ++scan_p; gcc_assert (scan_p < new_clean_text_limit); } *scan_p++ = ' '; break; case '\'': /* Handle character literals. */ non_whitespace_since_newline = 1; *scan_p = ' '; while (scan_p[1] != '\'' || scan_p[0] == '\\') { if (!ISSPACE ((const unsigned char)*scan_p)) *scan_p = ' '; ++scan_p; gcc_assert (scan_p < new_clean_text_limit); } *scan_p++ = ' '; break; case '"': /* Handle string literals. */ non_whitespace_since_newline = 1; *scan_p = ' '; while (scan_p[1] != '"' || scan_p[0] == '\\') { if (!ISSPACE ((const unsigned char)*scan_p)) *scan_p = ' '; ++scan_p; gcc_assert (scan_p < new_clean_text_limit); } if (!ISSPACE ((const unsigned char)*scan_p)) *scan_p = ' '; scan_p++; break; case '\\': /* Handle line continuations. */ if (scan_p[1] != '\n') goto regular; *scan_p = ' '; break; case '\n': non_whitespace_since_newline = 0; /* Reset. */ break; case ' ': case '\v': case '\t': case '\r': case '\f': case '\b': break; /* Whitespace characters. */ default: regular: non_whitespace_since_newline = 1; break; } } } /* Given a pointer to the closing right parenthesis for a particular formals list (in the clean text buffer) find the corresponding left parenthesis and return a pointer to it. */ static const char * careful_find_l_paren (const char *p) { const char *q; int paren_depth; for (paren_depth = 1, q = p-1; paren_depth; check_source (--q >= clean_text_base, 0)) { switch (*q) { case ')': paren_depth++; break; case '(': paren_depth--; break; } } return ++q; } /* Scan the clean text buffer for cases of function definitions that we don't really know about because they were preprocessed out when the aux info files were created. In this version of protoize/unprotoize we just give a warning for each one found. A later version may be able to at least unprotoize such missed items.
Note that we may easily find all function definitions simply by looking for places where there is a left paren which is (ignoring whitespace) immediately followed by either a left-brace or by an upper or lower case letter. Whenever we find this combination, we have also found a function definition header. Finding function *declarations* using syntactic clues is much harder. I will probably try to do this in a later version though. */ static void scan_for_missed_items (const file_info *file_p) { static const char *scan_p; const char *limit = clean_text_limit - 3; static const char *backup_limit; backup_limit = clean_text_base - 1; for (scan_p = clean_text_base; scan_p < limit; scan_p++) { if (*scan_p == ')') { static const char *last_r_paren; const char *ahead_p; last_r_paren = scan_p; for (ahead_p = scan_p + 1; ISSPACE ((const unsigned char)*ahead_p); ) check_source (++ahead_p < limit, limit); scan_p = ahead_p - 1; if (ISALPHA ((const unsigned char)*ahead_p) || *ahead_p == '{') { const char *last_l_paren; const int lineno = identify_lineno (ahead_p); if (setjmp (source_confusion_recovery)) continue; /* We know we have a function definition header. Now skip leftwards over all of its associated formals lists. 
*/ do { last_l_paren = careful_find_l_paren (last_r_paren); for (last_r_paren = last_l_paren-1; ISSPACE ((const unsigned char)*last_r_paren); ) check_source (--last_r_paren >= backup_limit, backup_limit); } while (*last_r_paren == ')'); if (is_id_char (*last_r_paren)) { const char *id_limit = last_r_paren + 1; const char *id_start; size_t id_length; const def_dec_info *dd_p; for (id_start = id_limit-1; is_id_char (*id_start); ) check_source (--id_start >= backup_limit, backup_limit); id_start++; backup_limit = id_start; if ((id_length = (size_t) (id_limit - id_start)) == 0) goto not_missed; { char *func_name = alloca (id_length + 1); static const char * const stmt_keywords[] = { "if", "else", "do", "while", "for", "switch", "case", "return", 0 }; const char * const *stmt_keyword; strncpy (func_name, id_start, id_length); func_name[id_length] = '\0'; /* We must check here to see if we are actually looking at a statement rather than an actual function call. */ for (stmt_keyword = stmt_keywords; *stmt_keyword; stmt_keyword++) if (!strcmp (func_name, *stmt_keyword)) goto not_missed; #if 0 notice ("%s: found definition of '%s' at %s(%d)\n", pname, func_name, shortpath (NULL, file_p->hash_entry->symbol), identify_lineno (id_start)); #endif /* 0 */ /* We really should check for a match of the function name here also, but why bother. */ for (dd_p = file_p->defs_decs; dd_p; dd_p = dd_p->next_in_file) if (dd_p->is_func_def && dd_p->line == lineno) goto not_missed; /* If we make it here, then we did not know about this function definition. */ notice ("%s: %d: warning: '%s' excluded by preprocessing\n", shortpath (NULL, file_p->hash_entry->symbol), identify_lineno (id_start), func_name); notice ("%s: function definition not converted\n", pname); } not_missed: ; } } } } } /* Do all editing operations for a single source file (either a "base" file or an "include" file). 
To do this we read the file into memory, keep a virgin copy there, make another cleaned in-core copy of the original file (i.e. one in which all of the comments and preprocessing directives have been replaced with whitespace), then use these two in-core copies of the file to make a new edited in-core copy of the file. Finally, rename the original file (as a way of saving it), and then write the edited version of the file from core to a disk file of the same name as the original. Note that the trick of making a copy of the original sans comments & preprocessing directives make the editing a whole lot easier. */ static void edit_file (const hash_table_entry *hp) { struct stat stat_buf; const file_info *file_p = hp->fip; char *new_orig_text_base; char *new_orig_text_limit; char *new_clean_text_base; char *new_clean_text_limit; size_t orig_size; size_t repl_size; int first_definition_in_file; /* If we are not supposed to be converting this file, or if there is nothing in there which needs converting, just skip this file. */ if (!needs_to_be_converted (file_p)) return; convert_filename = file_p->hash_entry->symbol; /* Convert a file if it is in a directory where we want conversion and the file is not excluded. */ if (!directory_specified_p (convert_filename) || file_excluded_p (convert_filename)) { if (!quiet_flag #ifdef UNPROTOIZE /* Don't even mention "system" include files unless we are protoizing. If we are protoizing, we mention these as a gentle way of prodding the user to convert his "system" include files to prototype format. */ && !in_system_include_dir (convert_filename) #endif /* defined (UNPROTOIZE) */ ) notice ("%s: '%s' not converted\n", pname, shortpath (NULL, convert_filename)); return; } /* Let the user know what we are up to. 
*/ if (nochange_flag) notice ("%s: would convert file '%s'\n", pname, shortpath (NULL, convert_filename)); else notice ("%s: converting file '%s'\n", pname, shortpath (NULL, convert_filename)); fflush (stderr); /* Find out the size (in bytes) of the original file. */ /* The cast avoids an erroneous warning on AIX. */ if (stat (convert_filename, &stat_buf) == -1) { int errno_val = errno; notice ("%s: can't get status for file '%s': %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); return; } orig_size = stat_buf.st_size; /* Allocate a buffer to hold the original text. */ orig_text_base = new_orig_text_base = xmalloc (orig_size + 2); orig_text_limit = new_orig_text_limit = new_orig_text_base + orig_size; /* Allocate a buffer to hold the cleaned-up version of the original text. */ clean_text_base = new_clean_text_base = xmalloc (orig_size + 2); clean_text_limit = new_clean_text_limit = new_clean_text_base + orig_size; clean_read_ptr = clean_text_base - 1; /* Allocate a buffer that will hopefully be large enough to hold the entire converted output text. As an initial guess for the maximum size of the output buffer, use 125% of the size of the original + some extra. This buffer can be expanded later as needed. */ repl_size = orig_size + (orig_size >> 2) + 4096; repl_text_base = xmalloc (repl_size + 2); repl_text_limit = repl_text_base + repl_size - 1; repl_write_ptr = repl_text_base - 1; { int input_file; int fd_flags; /* Open the file to be converted in READ ONLY mode. */ fd_flags = O_RDONLY; #ifdef O_BINARY /* Use binary mode to avoid having to deal with different EOL characters. */ fd_flags |= O_BINARY; #endif if ((input_file = open (convert_filename, fd_flags, 0444)) == -1) { int errno_val = errno; notice ("%s: can't open file '%s' for reading: %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); return; } /* Read the entire original source text file into the original text buffer in one swell fwoop. 
Then figure out where the end of the text is and make sure that it ends with a newline followed by a null. */ if (safe_read (input_file, new_orig_text_base, orig_size) != (int) orig_size) { int errno_val = errno; close (input_file); notice ("\n%s: error reading input file '%s': %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); return; } close (input_file); } if (orig_size == 0 || orig_text_limit[-1] != '\n') { *new_orig_text_limit++ = '\n'; orig_text_limit++; } /* Create the cleaned up copy of the original text. */ memcpy (new_clean_text_base, orig_text_base, (size_t) (orig_text_limit - orig_text_base)); do_cleaning (new_clean_text_base, new_clean_text_limit); #if 0 { int clean_file; size_t clean_size = orig_text_limit - orig_text_base; char *const clean_filename = alloca (strlen (convert_filename) + 6 + 1); /* Open (and create) the clean file. */ strcpy (clean_filename, convert_filename); strcat (clean_filename, ".clean"); if ((clean_file = creat (clean_filename, 0666)) == -1) { int errno_val = errno; notice ("%s: can't create/open clean file '%s': %s\n", pname, shortpath (NULL, clean_filename), xstrerror (errno_val)); return; } /* Write the clean file. */ safe_write (clean_file, new_clean_text_base, clean_size, clean_filename); close (clean_file); } #endif /* 0 */ /* Do a simplified scan of the input looking for things that were not mentioned in the aux info files because of the fact that they were in a region of the source which was preprocessed-out (via #if or via #ifdef). */ scan_for_missed_items (file_p); /* Setup to do line-oriented forward seeking in the clean text buffer. */ last_known_line_number = 1; last_known_line_start = clean_text_base; /* Now get down to business and make all of the necessary edits. 
*/ { const def_dec_info *def_dec_p; first_definition_in_file = 1; def_dec_p = file_p->defs_decs; for (; def_dec_p; def_dec_p = def_dec_p->next_in_file) { const char *clean_text_p = seek_to_line (def_dec_p->line); /* clean_text_p now points to the first character of the line which contains the `terminator' for the declaration or definition that we are about to process. */ #ifndef UNPROTOIZE if (global_flag && def_dec_p->is_func_def && first_definition_in_file) { add_global_decls (def_dec_p->file, clean_text_p); first_definition_in_file = 0; } /* Don't edit this item if it is already in prototype format or if it is a function declaration and we have found no corresponding definition. */ if (def_dec_p->prototyped || (!def_dec_p->is_func_def && !def_dec_p->definition)) continue; #endif /* !defined (UNPROTOIZE) */ if (def_dec_p->is_func_def) edit_fn_definition (def_dec_p, clean_text_p); else #ifndef UNPROTOIZE if (def_dec_p->is_implicit) add_local_decl (def_dec_p, clean_text_p); else #endif /* !defined (UNPROTOIZE) */ edit_fn_declaration (def_dec_p, clean_text_p); } } /* Finalize things. Output the last trailing part of the original text. */ output_up_to (clean_text_limit - 1); /* If this is just a test run, stop now and just deallocate the buffers. */ if (nochange_flag) { free (new_orig_text_base); free (new_clean_text_base); free (repl_text_base); return; } /* Change the name of the original input file. This is just a quick way of saving the original file. */ if (!nosave_flag) { char *new_filename = xmalloc (strlen (convert_filename) + strlen (save_suffix) + 2); strcpy (new_filename, convert_filename); #ifdef __MSDOS__ /* MSDOS filenames are restricted to 8.3 format, so we save `foo.c' as `foo.<save_suffix>'. */ new_filename[strlen (convert_filename) - 1] = '\0'; #endif strcat (new_filename, save_suffix); /* Don't overwrite existing file.
*/ if (access (new_filename, F_OK) == 0) { if (!quiet_flag) notice ("%s: warning: file '%s' already saved in '%s'\n", pname, shortpath (NULL, convert_filename), shortpath (NULL, new_filename)); } else if (rename (convert_filename, new_filename) == -1) { int errno_val = errno; notice ("%s: can't link file '%s' to '%s': %s\n", pname, shortpath (NULL, convert_filename), shortpath (NULL, new_filename), xstrerror (errno_val)); return; } } if (unlink (convert_filename) == -1) { int errno_val = errno; /* The file may have already been renamed. */ if (errno_val != ENOENT) { notice ("%s: can't delete file '%s': %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); return; } } { int output_file; /* Open (and create) the output file. */ if ((output_file = creat (convert_filename, 0666)) == -1) { int errno_val = errno; notice ("%s: can't create/open output file '%s': %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); return; } #ifdef O_BINARY /* Use binary mode to avoid changing the existing EOL character. */ setmode (output_file, O_BINARY); #endif /* Write the output file. */ { unsigned int out_size = (repl_write_ptr + 1) - repl_text_base; safe_write (output_file, repl_text_base, out_size, convert_filename); } close (output_file); } /* Deallocate the conversion buffers. */ free (new_orig_text_base); free (new_clean_text_base); free (repl_text_base); /* Change the mode of the output file to match the original file. */ /* The cast avoids an erroneous warning on AIX. */ if (chmod (convert_filename, stat_buf.st_mode) == -1) { int errno_val = errno; notice ("%s: can't change mode of file '%s': %s\n", pname, shortpath (NULL, convert_filename), xstrerror (errno_val)); } /* Note: We would try to change the owner and group of the output file to match those of the input file here, except that may not be a good thing to do because it might be misleading. Also, it might not even be possible to do that (on BSD systems with quotas for instance). 
*/ } /* Do all of the individual steps needed to do the protoization (or unprotoization) of the files referenced in the aux_info files given in the command line. */ static void do_processing (void) { const char * const *base_pp; const char * const * const end_pps = &base_source_filenames[n_base_source_files]; #ifndef UNPROTOIZE int syscalls_len; #endif /* !defined (UNPROTOIZE) */ /* One-by-one, check (and create if necessary), open, and read all of the stuff in each aux_info file. After reading each aux_info file, the aux_info_file just read will be automatically deleted unless the keep_flag is set. */ for (base_pp = base_source_filenames; base_pp < end_pps; base_pp++) process_aux_info_file (*base_pp, keep_flag, 0); #ifndef UNPROTOIZE /* Also open and read the special SYSCALLS.c aux_info file which gives us the prototypes for all of the standard system-supplied functions. */ if (nondefault_syscalls_dir) { syscalls_absolute_filename = xmalloc (strlen (nondefault_syscalls_dir) + 1 + sizeof (syscalls_filename)); strcpy (syscalls_absolute_filename, nondefault_syscalls_dir); } else { GET_ENVIRONMENT (default_syscalls_dir, "GCC_EXEC_PREFIX"); if (!default_syscalls_dir) { default_syscalls_dir = standard_exec_prefix; } syscalls_absolute_filename = xmalloc (strlen (default_syscalls_dir) + 0 + strlen (target_machine) + 1 + strlen (target_version) + 1 + sizeof (syscalls_filename)); strcpy (syscalls_absolute_filename, default_syscalls_dir); strcat (syscalls_absolute_filename, target_machine); strcat (syscalls_absolute_filename, "/"); strcat (syscalls_absolute_filename, target_version); strcat (syscalls_absolute_filename, "/"); } syscalls_len = strlen (syscalls_absolute_filename); if (! 
IS_DIR_SEPARATOR (*(syscalls_absolute_filename + syscalls_len - 1))) { *(syscalls_absolute_filename + syscalls_len++) = DIR_SEPARATOR; *(syscalls_absolute_filename + syscalls_len) = '\0'; } strcat (syscalls_absolute_filename, syscalls_filename); /* Call process_aux_info_file in such a way that it does not try to delete the SYSCALLS aux_info file. */ process_aux_info_file (syscalls_absolute_filename, 1, 1); #endif /* !defined (UNPROTOIZE) */ /* When we first read in all of the information from the aux_info files we saved in it descending line number order, because that was likely to be faster. Now however, we want the chains of def & dec records to appear in ascending line number order as we get further away from the file_info record that they hang from. The following line causes all of these lists to be rearranged into ascending line number order. */ visit_each_hash_node (filename_primary, reverse_def_dec_list); #ifndef UNPROTOIZE /* Now do the "real" work. The following line causes each declaration record to be "visited". For each of these nodes, an attempt is made to match up the function declaration with a corresponding function definition, which should have a full prototype-format formals list with it. Once these match-ups are made, the conversion of the function declarations to prototype format can be made. */ visit_each_hash_node (function_name_primary, connect_defs_and_decs); #endif /* !defined (UNPROTOIZE) */ /* Now convert each file that can be converted (and needs to be). */ visit_each_hash_node (filename_primary, edit_file); #ifndef UNPROTOIZE /* If we are working in cplusplus mode, try to rename all .c files to .C files. Don't panic if some of the renames don't work. 
     */
  if (cplusplus_flag && !nochange_flag)
    visit_each_hash_node (filename_primary, rename_c_file);
#endif /* !defined (UNPROTOIZE) */
}

static const struct option longopts[] =
{
  {"version", 0, 0, 'V'},
  {"file_name", 0, 0, 'p'},
  {"quiet", 0, 0, 'q'},
  {"silent", 0, 0, 'q'},
  {"force", 0, 0, 'f'},
  {"keep", 0, 0, 'k'},
  {"nosave", 0, 0, 'N'},
  {"nochange", 0, 0, 'n'},
  {"compiler-options", 1, 0, 'c'},
  {"exclude", 1, 0, 'x'},
  {"directory", 1, 0, 'd'},
#ifdef UNPROTOIZE
  {"indent", 1, 0, 'i'},
#else
  {"local", 0, 0, 'l'},
  {"global", 0, 0, 'g'},
  {"c++", 0, 0, 'C'},
  {"syscalls-dir", 1, 0, 'B'},
#endif
  {0, 0, 0, 0}
};

extern int main (int, char **const);

int
main (int argc, char **const argv)
{
  int longind;
  int c;
  const char *params = "";

  pname = strrchr (argv[0], DIR_SEPARATOR);
#ifdef DIR_SEPARATOR_2
  {
    char *slash;

    slash = strrchr (pname ? pname : argv[0], DIR_SEPARATOR_2);
    if (slash)
      pname = slash;
  }
#endif
  pname = pname ? pname+1 : argv[0];

#ifdef SIGCHLD
  /* We *MUST* set SIGCHLD to SIG_DFL so that the wait4() call will
     receive the signal.  A different setting is inheritable */
  signal (SIGCHLD, SIG_DFL);
#endif

  /* Unlock the stdio streams.  */
  unlock_std_streams ();

  gcc_init_libintl ();

  cwd_buffer = getpwd ();
  if (!cwd_buffer)
    {
      notice ("%s: cannot get working directory: %s\n",
	      pname, xstrerror(errno));
      return (FATAL_EXIT_CODE);
    }

  /* By default, convert the files in the current directory.  */
  directory_list = string_list_cons (cwd_buffer, NULL);

  while ((c = getopt_long (argc, argv,
#ifdef UNPROTOIZE
			   "c:d:i:knNp:qvVx:",
#else
			   "B:c:Cd:gklnNp:qvVx:",
#endif
			   longopts, &longind)) != EOF)
    {
      if (c == 0)		/* Long option. */
	c = longopts[longind].val;
      switch (c)
	{
	case 'p':
	  compiler_file_name = optarg;
	  break;
	case 'd':
	  directory_list
	    = string_list_cons (abspath (NULL, optarg), directory_list);
	  break;
	case 'x':
	  exclude_list = string_list_cons (optarg, exclude_list);
	  break;
	case 'v':
	case 'V':
	  version_flag = 1;
	  break;
	case 'q':
	  quiet_flag = 1;
	  break;
#if 0
	case 'f':
	  force_flag = 1;
	  break;
#endif
	case 'n':
	  nochange_flag = 1;
	  keep_flag = 1;
	  break;
	case 'N':
	  nosave_flag = 1;
	  break;
	case 'k':
	  keep_flag = 1;
	  break;
	case 'c':
	  params = optarg;
	  break;
#ifdef UNPROTOIZE
	case 'i':
	  indent_string = optarg;
	  break;
#else				/* !defined (UNPROTOIZE) */
	case 'l':
	  local_flag = 1;
	  break;
	case 'g':
	  global_flag = 1;
	  break;
	case 'C':
	  cplusplus_flag = 1;
	  break;
	case 'B':
	  nondefault_syscalls_dir = optarg;
	  break;
#endif				/* !defined (UNPROTOIZE) */
	default:
	  usage ();
	}
    }

  /* Set up compile_params based on -p and -c options.  */
  munge_compile_params (params);

  n_base_source_files = argc - optind;

  /* Now actually make a list of the base source filenames.  */
  base_source_filenames
    = xmalloc ((n_base_source_files + 1) * sizeof (char *));
  n_base_source_files = 0;
  for (; optind < argc; optind++)
    {
      const char *path = abspath (NULL, argv[optind]);
      int len = strlen (path);

      if (path[len-1] == 'c' && path[len-2] == '.')
	base_source_filenames[n_base_source_files++] = path;
      else
	{
	  notice ("%s: input file names must have .c suffixes: %s\n",
		  pname, shortpath (NULL, path));
	  errors++;
	}
    }

#ifndef UNPROTOIZE
  /* We are only interested in the very first identifier token in the
     definition of `va_list', so if there is more junk after that first
     identifier token, delete it from the `varargs_style_indicator'.  */
  {
    const char *cp;

    for (cp = varargs_style_indicator; ISIDNUM (*cp); cp++)
      continue;
    if (*cp != 0)
      varargs_style_indicator = savestring (varargs_style_indicator,
					    cp - varargs_style_indicator);
  }
#endif /* !defined (UNPROTOIZE) */

  if (errors)
    usage ();
  else
    {
      if (version_flag)
	fprintf (stderr, "%s: %s\n", pname, version_string);
      do_processing ();
    }

  return (errors ? FATAL_EXIT_CODE : SUCCESS_EXIT_CODE);
}
I've been running into a problem while trying to delete uploaded images.
The error is along these lines:
SuspiciousOperation: Attempted access to '/media/artists/12-stones/154339.jpg' denied.
After reading around, it looks like the error is due to the fact that it's looking for the image in the wrong place (notice the leading slash; /media/ doesn't exist on the filesystem).
My MEDIA_ROOT and MEDIA_URL are:
MEDIA_ROOT = '/home/tsoporan/site/media/'
MEDIA_URL = "/media/"
My model's upload_to parameter is passed this function:

def get_artist_path(instance, filename):
    return os.path.join('artists', slugify(instance.name), filename)
My questions are:
1) How can I fix this problem for future uploads?
2) Is it possible to fix my current images' paths without having to reupload?
Regards, Titus
Well, a little grepping around in the code shows that there may be a deeper error message that got homogenized along the way.
In django/core/files/storage.py, line 210 (this is in 1.1.1), we have:
def path(self, name):
    try:
        path = safe_join(self.location, name)
    except ValueError:
        raise SuspiciousOperation("Attempted access to '%s' denied." % name)
    return smart_str(os.path.normpath(path))
So the error has to be coming out of safe_join().
In django/utils/_os.py, we have the following. Note the ValueError it throws on the third to last line:
===========================
def safe_join(base, *paths):
    """
    Joins one or more path components to the base path component
    intelligently. Returns a normalized, absolute version of the final
    path. The final path must be located inside of the base path
    component (otherwise a ValueError is raised).
    """
    # We need to use normcase to ensure we don't false-negative on case
    # insensitive operating systems (like Windows).
    base = force_unicode(base)
    paths = [force_unicode(p) for p in paths]
    final_path = normcase(abspathu(join(base, *paths)))
    base_path = normcase(abspathu(base))
    base_path_len = len(base_path)
    # Ensure final_path starts with base_path and that the next character
    # after the final path is os.sep (or nothing, in which case final_path
    # must be equal to base_path).
    if not final_path.startswith(base_path) \
            or final_path[base_path_len:base_path_len+1] not in ('', sep):
        raise ValueError('the joined path is located outside of the base path'
                         ' component')
    return final_path
==================
Hmmm, "the joined path is located outside of the base path component". Now there are a couple of calls to abspathu() in there (which is defined just above this routine and is different for NT than for other OSes). abspathu() converts all non-absolute paths to absolute by tacking on os.getcwdu(), the current working directory.
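The failure mode can be reproduced with the standard library alone. When the name handed to safe_join() is itself absolute (as the stored '/media/...' path is), the underlying join() discards the base entirely, so the result no longer starts under MEDIA_ROOT and the ValueError fires. A minimal sketch, using posixpath for platform independence and the paths quoted in the question:

```python
import posixpath

MEDIA_ROOT = '/home/tsoporan/site/media/'

# A relative stored name joins under MEDIA_ROOT, as intended.
good = posixpath.join(MEDIA_ROOT, 'artists/12-stones/154339.jpg')
assert good == '/home/tsoporan/site/media/artists/12-stones/154339.jpg'

# An absolute stored name makes join() throw the base away, so the
# result falls outside MEDIA_ROOT and safe_join() raises ValueError.
bad = posixpath.join(MEDIA_ROOT, '/media/artists/12-stones/154339.jpg')
assert bad == '/media/artists/12-stones/154339.jpg'
assert not bad.startswith(MEDIA_ROOT)
```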
Question: By any chance do you have a symlink (symbolic link) to your media directory? In other words, it's not a direct child of the project directory? I don't know if this is a valid question, it just popped out of my head.
Question: What are the values of self.location and name that are being passed to safe_join()?

Wild-ass-guess: is self.location empty?

Another wild-ass-guess: did MEDIA_ROOT somehow get changed to /media/?
If you have your own copy of Django installed (it's not hard to do), try putting some print statements in these routines and then run it under the development server. The print output will go to the console.
Update: Hmmm. You said "2) The values for self.location and name are: /home/tsoporan/site/media and /media/albums/anthem-for-the-underdog/30103635.jpg"
Does the following path make any sense?
"/home/tsoporan/site/media/media/albums/anthem-for-the-underdog"
Note the .../media/media/... in there.
Also, what OS is this? Django rev? | https://pythonpedia.com/en/knowledge-base/1950069/suspicious-operation-django | CC-MAIN-2020-45 | refinedweb | 591 | 57.57 |
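If the stored names really do carry a spurious /media/ prefix (question 2 above), one hypothetical repair, sketched here with an assumed prefix and a helper name of my own invention, is to strip that prefix so each stored name is relative to MEDIA_ROOT again:

```python
def strip_media_prefix(name, prefix='/media/'):
    """Strip a spurious leading prefix (assumed, based on the paths
    quoted in this thread) so the stored name is relative to
    MEDIA_ROOT again; already-relative names pass through unchanged."""
    if name.startswith(prefix):
        return name[len(prefix):]
    return name

assert strip_media_prefix('/media/artists/12-stones/154339.jpg') == \
       'artists/12-stones/154339.jpg'
assert strip_media_prefix('artists/12-stones/154339.jpg') == \
       'artists/12-stones/154339.jpg'
```

Applied from a Django shell, one would loop over the affected model rows and reassign the corrected value to the file field before saving; the exact model and field names aren't shown in the thread, so that part is left out.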
alibaba.com raw material of fiber textile products rock wool for sale lowes
US $30-50 / Cubic Meter
20 Cubic Meters (Min. Order)
pvc granules, pvc compounds , PVC raw material of products.
US $1000-1200 / Ton
1 Metric Ton (Min. Order)
Good Quality Nonwoven Spunlace Raw Material,Nonwoven Product Of Mask
US $7.59-8.35 / Kilogram
1000 Kilograms (Min. Order)
hot products Silway 740 waterproofing primers raw material products for cement board of Higih Quality
US $1-10 / Kilogram
200 Kilograms (Min. Order)
High quality raw materials of cleaning products
US $14-20 / Set
1000 Sets (Min. Order)
Raw material of PVC products CPE 135A
US $1100-1200 / Metric Ton
1 Metric Ton (Min. Order)
List of electronic products plastic raw materials prices dop korea
US $1000-1400 / Ton
1 Ton (Min. Order)
High viscosity, high quality, low price of oil field fracturing products raw materials of guar gum splits
US $2150-2350 / Tonne
1 Tonne (Min. Order)
Production of the raw material of camphor napthalene flakes manufacturer ZH150601
US $1000-1500 / Ton
1 Ton (Min. Order)
Raw Material New Products Manufacture Supplier Of Carbon Black/Ceramic Pigment Carbon Black
US $0.55-25 / Kilogram
50 Kilograms (Min. Order)
Acrylamide raw material of production of polyacrylamide
1 Metric Ton (Min. Order)
China hot sale new products raw material of wet wipes
US $2.7-3.8 / Ton
1 Ton (Min. Order)
Acrylamide raw material of production of polyacrylamide
US $1600-2200 / Ton
1 Ton (Min. Order)
silica granule product of gel beads raw material for making mattress
US $3-5 / Kilogram
50 Kilograms (Min. Order)
injection grade cpvc compound raw material of plastic products/polymer chemicals
US $1500-1600 / Metric Ton
1 Metric Ton (Min. Order)
chemical cleaning raw material of cleaning products
US $1228-1355 / Metric Ton
1 Metric Ton (Min. Order)
price of desiccant products small pack desiccant raw materials
US $0.01-0.5 / Pack
100 Packs (Min. Order)
Raw Materials of Cleaning Products
US $2.3-3.5 / Kilogram
1 Kilogram (Min. Order)
best selling products chemical auxiliary raw materials zeolite of indonesia
US $1.7-2.0 / Kilograms
1000 Kilograms (Min. Order)
import products of singapore raw material price pc polymer plasticizer
US $5)
water absorbing Polypropylene Spunbond Nonwoven for Raw material of Incontinence products
US $1.9-3.66 / Kilogram
4 Tons (Min. Order)
The lastest products a medical apparel of raw material polypropylene spunbond non woven fabric for hospital
US $1.5-1.9 / Kilogram
500 Kilograms (Min. Order)
Whole Sale SS Waterproof Breathable Non-toxic PP Non woven Fabric for Face Masks of Medical Raw Material
US $800-1300 / Ton
1 Ton (Min. Order)
Hot-sale chemical product attapulgite clay raw material 4a molecular sieve for oxygen concentrator with high quality
US $1000-2000 / Metric Ton
1 Metric Ton (Min. Order)
Raw material 4-Chloro-3-Methylphenol for antibacterial hand soap, soap, shampoo and healthy products
US $10.0-11.0 / Kilogram
1 Kilogram (Min. Order)
Plasticizer for PVC resin Chlorinated paraffin 52 plastic and rubber raw material
US $660.0-690.0 / Ton
1 Ton (Min. Order)
make-up product 3 years shelf life DC1501 Silicone oil forcosmetic raw material
US $2.65-2.9 / Kilogram
1 Kilogram (Min. Order)
factory supply non woven fabric raw material for non-woven cooler bag
US $1.85-2.39 / Kilogram
1 Ton (Min. Order)
All Application for Natural Zeolite Clinoptilolite Type Application and Raw Material, Rock Shape Clinoptilolite
US $90-290 / Ton
1 Ton (Min. Order)
Dry Nitrile Virgin High Quality Cheap Granule Raw Material
US $1.0-1.0 / Twenty-Foot Container
1 Twenty-Foot Container (Min. Order)
non-woven baby diapers materials raw materials for baby diaper
US $1.5-2.5 / Kilogram
2 Kilograms (Min. Order)
HOT SELL polyurethane raw material
US $100-999 / Ton
10 Tons (Min. Order)
Glossy plain Aluminium foil laminated matte Non woven cloth fabric Insulation cool bag raw material
US $0.01-0.01 / Meter
1 Meter (Min. Order)
Attractive Price Epoxidized Soybean Oil ESO ESBO DINP Free Plasticizer PVC Additives Raw Material for diaper
US $900-1030 / Metric Ton
1 Metric Ton (Min. Order)
Waterproof SMS Non woven Fabric PP+PE medical material / smms nonwoven fabric / 22g pp spunbond sms
US $0.15-0.3 / Kilogram
1000 Kilograms (Min. Order)
China organic silicon polymers hydroxy polysiloxane 80000cst as rtv silicone rubber raw materials
US $1.9-3.07 / Kilogram
1 Kilogram (Min. Order)
Raw material for baby diapers making SSS SMMS Hydrophilic Non woven Fabric
US $1-1.5 / Kilogram
1000 Kilograms (Min. Order)
good quality of virgin Chemical rew material Chlorinated Polyethylene white powder CPE chemical raw material
US $1100-1300 / Metric Ton
17 Metric Tons (Min. Order)
raw materials of Sanitary pads/diapers absorbent paper with Japan SAP
US $2.5-4.5 / Kilogram
2000 Kilograms (Min. Order)
Manufacturer Customized SMS Nonwoven Fabric Medical PP Non-woven For Medical Products
US $1.5-2 / Kilogram
5000 Kilograms (Min. Order)
Hot Sale White Non-Woven Fabric PP Spunbond Nonwoven Fabric Raw Material For Baby Diaper
US $1.8-2.5 / Kilogram
1000 Kilograms (Min. Order)
Embossed wet/dry wipes Nonwoven Spunlace raw material
US $1-3 / Bag
1000 Bags (Min. Order)
pp nonwoven fabric non flammable raw material
US $1450-1830 / Ton
1 Ton (Min. Order)
silicon dioxide powder as raw material in food additives in soy sauce
US $500-800 / Ton
9 Tons (Min. Order)
Superplasticizer concrete admixture manufacturing company looking for agents to distribution our products
US $3000-3100 / Ton
1 Ton (Min. Order)
- About product and suppliers:
Alibaba.com offers 120,469 products of raw material products. About 2% of these are detergent raw materials, 1% are rubber auxiliary agents, and 1% are antibiotic and antimicrobial agents. A wide variety of products of raw material options are available to you, such as plastic, viscose / polyester, and 100% polyester. You can also choose from industry, hospital, and hygiene. As well as from anti-bacteria, eco-friendly, and recyclable. And whether products of raw material is ce, fda, or sgs. There are 119,708 products of raw material suppliers, mainly located in Asia. The top supplying countries are China (Mainland), India, and Vietnam, which supply 98%, 1%, and 1% of products of raw material respectively. Products of raw material products are most popular in North America, Domestic Market, and Southeast Asia. You can ensure product safety by selecting from certified suppliers, including 29,662 with ISO9001, 15,180 with Other, and 7,218 with ISO14001 certification.
Created attachment 36061 [details]
Attached the exception for the above code mentioned in the description.
A NullPointerException is thrown when I am trying to read data from an external Excel source.

Here is my code to read data from Excel (.xlsx):
import java.io.FileInputStream;

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelReader {

    public static void main(String[] args) throws Exception {
        String path = "C:\\Users\\hudgipr\\Desktop\\ConfigData.xlsx";
        FileInputStream fis = new FileInputStream(path);
        Workbook wb = WorkbookFactory.create(fis);
        Sheet sh = wb.getSheet("Flipkart");
        Row rw = sh.getRow(0);
        Cell cl = rw.getCell(0);
        System.out.println(cl.getStringCellValue());
    }
}
The problem is that when I run it the normal way, it works fine and gives me the correct output without throwing any exception, but when I debug the same code I get an exception on this line:
Workbook wb = WorkbookFactory.create(fis);
Please find attached the exception raised while debugging.

Please help me; I have been blocked on this for 3 days and have not found any solution. I am not sure what the issue is.
The exception seems to happen deep inside JVM code during class loading and is not related to POI itself. It also seems to happen only when debugging in a local IDE, so it is not easy to reproduce.

Therefore I am not sure how we can help or investigate here. If it works when run without IDE debugging, the problem seems to be caused by some glitch in your IDE and not by code in POI.
Thus I am closing this with WORKSFORME unless there is some more indication of a problem directly in Apache POI code here. | https://bz.apache.org/bugzilla/show_bug.cgi?id=62588 | CC-MAIN-2020-16 | refinedweb | 250 | 57.16 |
Subject: Re: Please help with testing & improving a StringValue class
From: "Alf P. Steinbach" <alfps@start.no>
Newsgroups: comp.lang.c++
Date: Sat, 08 Sep 2007 16:45:19 +0200
Message-ID: <13e5dc5lsm46pb2@corp.supernews.com>
* Roland Pibinger:
On Sat, 08 Sep 2007 10:21:59 +0200, "Alf P. Steinbach" wrote:
I once suggested in [comp.std.c++] that SomeOne Else(TM) should propose
a string value class that accepted literals and char pointers and so on,
with possible custom deleter, and in case of literal strings just
carrying the original pointer.
In other words, for the simplest usage code:
* no overhead (just carrying a pointer or two), and
* no possibility of exceptions (for that case).
IOW, you created an assignable but otherwise immutable string class
that provides an optimization for string literals.
And also for the case of passing around a string with a custom delete
operation, e.g. as provided by many API functions such as Windows'
command line parsing.
When using std::string or std::wstring for this, the API function's
string must first be copied, where dynamic allocation is used, and then
freed (using its own delete operation). This is costly. Then, when
that string should be passed to an API function again, the std::string
must sometimes be copied to dynamically allocated memory using the API's
allocator. Which might happen many times for the same string. This is
costly. A string value class with custom deleter, such as StringValue,
solves that problem. No costly dynamic allocations, and no O(n) copy
operations, for the cases where such operations can be dispensed with by
keeping a delete function along with the string value.
Of course the last can also be accomplished using e.g.
boost::shared_ptr. But then different kinds of strings have to be
treated differently, with conversion among them. And it's awkward
anyway, so awkward that I don't think anybody's done exactly that.
[snip]
The code uses boost::intrusive_ptr from the Boost library, which
therefore is required to compile.
If you want your code to be widely used you should get rid of the
Boost dependency (which seems to be no problem in your case).
It think most C++ programmers have the Boost library installed.
But since that's a huge library, it would perhaps be an idea to bundle
the one or few Boost files that's actually used?
intrusive_ptr is just header file code, not separate compilation.
Strings with embedded zero characters
are not supported in the current code. I don't think the need is great.
Example usage code
StringValue foo()
{
return "No dynamic allocation, no possible exception, fast";
}
StringValue bar()
{
return std::string( "A dynamic" ) + " copy";
}
In general, the string literal optimization is a good idea. The design
of such a class (template) poses the real challenge. For various
reasons it should hold that sizeof StringValue == sizeof void*. You
need to find a way to distinguish a dynamically allocated array from a
string literal without additional information in the object (not even
an additional flag). One of the reasons for the above is the
requirement of thread safety for string assignment and copying.
Unfortunately there seems to be no way to implement a 'lightweight'
thread-safe assignment operator and/or copy constructor because
incrementing/decrementing the reference-counter and assignment of the
pointer are always two distinct operations. I experimented with my own
string class but could not reach a satisfactory result WRT thread
safety (i.e. when the object is accessed by multiple threads).
Uhm, that's a different problem. Essentially, if I understand you
correctly, the problem is what trade-off can you do so that in the case
of multi-threaded access to the same string, the total cost of safe
copying is less than with a mutex or whatever? And I think the best
answer is to /not/ accept the premise that multi-threaded access to the
same string without some external thread synchronization such as a
mutex, is something one should support: instead, avoid it!
I think it's in the same league as designing a language to support
arbitrary gotos. That would restrict the language severely (e.g., gotos
past object construction renders all construction guarantees void, so to
support arbitrary gotos, no object construction guarantees). And
instead of designing the language with the goal of supporting
unrestricted gotos, the sensible course is IMHO to restrict gotos.
Example exercising all currently defined constructors, where malloc and
free is used just to demonstrate that also that is possible:
<code>
#include <alfs/StringValueClass.hpp>
#include <iostream>
#include <cstdlib> // std::malloc, std::free
#include <cstring> // std::strcpy, std::strlen
char const* mallocStr( char const s[] )
{
using namespace std;
return strcpy( static_cast<char*>( malloc( strlen( s ) + 1 ) ), s );
}
void myDeleter( void const* p ) { std::free( const_cast<void*>( p ) ); }
int main()
{
    // A StringValue can be freely copied and assigned, but the value
    // can not be modified.
    using namespace alfs;

    char const* const dynValue    = "dynamic copy";
    char const* const ptrValue    = "pointer to persistent buffer";
    char const* const customValue = "custom delete";
    char const sizedValue[] = { 's', 'i', 'z', 'e', 'd' };

    StringValue literal( "literal" );               // No alloc.
    StringValue pointer( ptrValue, NoDelete() );    // No alloc.
    StringValue custom( mallocStr( customValue ), myDeleter );
    StringValue sized( sizedValue, sizeof( sizedValue ) );
    StringValue dynamic( dynValue );
    StringValue stdval( std::string( "std::string" ) );

    std::cout << literal << std::endl;
    std::cout << pointer << std::endl;
    std::cout << custom << std::endl;
    std::cout << sized << std::endl;
    std::cout << dynamic << std::endl;
    std::cout << stdval << std::endl;
}
</code>
Code currently available (especially if you want to help testing and or
discussing functionality or coding, whatever) at
<url: home.no.net/alfps/cpp/lib/alfs_v00.zip> (lawyers, if any: note
that I retain copyright etc., although usage is of course permitted).
A lot of established Open Source licenses like MIT, new BSD, or ISC are available.
Thank you.
I think I heard something about the Apache license, too.
Cheers, & thanks for your constructive feedback (much I hadn't thought
about!),
- ALf
David Wincelberg runs FileJockey Software and can be reached at or filejockey@compuserve.com.
In one of my programs, there is an Import Favorites or Bookmarks feature. Users could type in the pathname of a bookmark file, press the Browse button to select one such file, or press the Find button to search for these files. Since users may want to cancel the searching before the selected disk is fully searched, the Find button becomes a Stop button. This brings up two questions: How do I make the dialog respond to a stop command and which cursor should be displayed?
The first question has been addressed in several articles. In Wicked Code Jeff Prosise presents a class called CWaitDialog. Using this class would put a dialog box on top of the current dialog (or window) that contains a progress bar and a Cancel button. You would include a BOOL flag in your long-running routine that is tested periodically. You would also add code to update the progress bar and to pump messages. The last part is needed so that clicking on the Cancel button sets this flag to FALSE while this routine is running.
For file searching, you don't know how far you are into the task (unless you have information from a previous search). In addition, I'd rather not cover the import dialog box since it displays which files have been found.
In C++ Q&A Paul DiLascia presents two approaches. The first involves a class called CCancelDlg. This is similar to the CWaitDialog class in Prosise's article. When using this class, a dialog would appear above your dialog (or window). Unlike when using CWaitDialog, testing for the stop signal involves a function call that also pumps messages; see Example 1.
// 1998 Microsoft Systems Journal.
// If this code works, it was written by Paul DiLascia.
// If not, I don't know who wrote it.
// Compiles with Visual C++ 5.0 on Windows 95.
// July 1998, pp. 83-91,

// Test for abort. This is my chance to run peek/pump message loop;
// ie, to process any messages that may be waiting for me
// or--in Windows 3.1--for other apps as well.
BOOL class_name::Abort()
{
    MSG msg;
    while (::PeekMessage (&msg, NULL, NULL, NULL, PM_NOREMOVE)) {
        AfxGetThread()->PumpMessage();
    }
    return m_bAbort;
} // Abort
The other approach uses a class called CThreadJob. With this class, the long-running routine runs in a separate thread. This allows for a Stop button to be included in the dialog that contains the routine's start button. Another advantage is that the dialog is more responsive to being dragged since mouse messages don't have to wait until they are pumped.
For the file-searching feature, I used part of the first approach of DiLascia's so that a Stop button could be in the same dialog box. The routine for this feature periodically calls Abort() to pump messages and check if m_bAbort has changed as a result of the user pressing the Stop button.
Now that you have seen several ways to allow for a routine to be interrupted, how do you signal to users that your program is working on a long task but could be stopped? Having a Stop or Cancel button helps. To complete the picture, the cursor should convey both messages. The wait cursors in Figure 1 don't imply that a task could be interrupted.
The standard arrow cursor doesn't indicate that a long task is running. Instead, a combination of these cursors is needed. Windows comes with such cursors (see Figure 2).
To make using this cursor convenient, I wrote a class called CInterruptibleCursor, which is in Listing One.
// InterruptibleCursor.cpp
// Copyright (c) 2008 by David Wincelberg
//////////////////////////////////////////////////////////////////////

#include "stdafx.h"
#include "InterruptibleCursor.h"

#ifdef _DEBUG
#undef THIS_FILE
static char THIS_FILE[]=__FILE__;
#define new DEBUG_NEW
#endif

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

// Action: Sets the cursor to one with an arrow and a wait indicator.
CInterruptibleCursor::CInterruptibleCursor()
{
    m_curNormal = ::LoadCursor (NULL, IDC_ARROW);
    m_curMixed  = ::LoadCursor (NULL, IDC_APPSTARTING);

    if (m_curNormal == NULL)
        m_curMixed = NULL;  // changes cursor only if normal one is loaded

    Restore();
} // CInterruptibleCursor

// Action: Restores the normal cursor.
CInterruptibleCursor::~CInterruptibleCursor()
{
    if (m_curNormal != NULL)
        ::SetCursor (m_curNormal);
} // ~CInterruptibleCursor

// Action: Sets or restores the interruptible cursor.
void CInterruptibleCursor::Restore()
{
    if (m_curMixed != NULL)
        ::SetCursor (m_curMixed);
} // Restore
To use this class, create an instance of CInterruptibleCursor on the stack. Inside a loop (or other long process), call ic.Restore() after each call to Abort(). This is necessary since Abort() resets the cursor to the default one. When the CInterruptibleCursor object goes out of scope, the cursor is set back to the normal one; see Example 2.
#include "InterruptibleCursor.h"

CInterruptibleCursor ic;
do {
    ic.Restore();
    // long task
} while (!Abort() && other_conditions);
The demo program StopTaskCursor shows how a long process looks when using various cursors -- the normal one, the wait cursor, and the mixed one; see Figure 3. The complete source code and related files are available accompanying this article here.
GODDESS OF AVALON Magazine
Online Ezine
Print Version $12.60 or Digital (iPad) $2.95
2201-1803 SPRING 13

Earth | Fire | Water | Air | Spirit

Green Man Spirit of Nature
A Walk with Wendy Rule
Scrying with She DeMontford
Magical Sea Witch
The Empowered Goddess
Invoking the Spirit of Boudicca
Contemplating the Triple Goddess pt 2

Goddess Conference Sydney 2013, Oct 18-21
ADVERTISING
Rates are Quarterly (every 3rd month). To place your advert, email your specs & details to: email@sroyarose.com
Advert Page Size             Dimensions                    *Casual Rate  *Yearly Rate
Outside Back Cover           209.55mm W x 273.05mm H       $550          $500 per issue
Inside Front & Back Covers   209.55mm W x 273.05mm H       $450          $400 per issue
Normal Full Page             203.2mm W x 260.35mm H        $400          $350 per issue
1/2 Page Horizontal          203.2mm W x 130.30mm H        $250          $200 per issue
1/2 Page Vertical            101.6mm W x 260.35mm H        $250          $200 per issue
1/3 Page Horizontal          203.2mm W x 86.86mm H         $190          $150 per issue
1/3 Page Vertical            67.81mm W x 260.35mm H        $190          $100 per issue
1/4 Page Horizontal          101.6mm W x 130.30mm H        $140          $100 per issue
1/8 Page (Biz Card)          101.6mm W x 65.02mm H         $75           $50 per issue
1/8 Page Banner              203.2mm W x 32.51mm H         $75           $50 per issue
Directory (12th page) Large  50.8mm W x 65.02mm H          $55           $35 per issue
Directory (24th page) Small  50.8mm W x 32.51mm H          $35           $25 per issue
*Directory Online Link       50.8mm W x 65.02mm H          N/A           $20 a year*
* SAVE! by paying for x6 adverts
*Credit/Debit Card Payment Online at:
*Pay Direct Debit to S’Roya Rose - NAB Bank: BSB 084835 Account no: 146 309 238
Writers make a NOTE in your Diary Please!
*Deadlines for GODDESS of AVALON Magazine:
SPRING - 1st of July
SUMMER - 1st of October
AUTUMN - 1st of January
WINTER - 1st of April

(Diagram: sample advert sizes - half page, third page, quarter page, eighth page, banner, large and small directory.)
Call S’Roya on Mob: +61 (0) 422361040 or Email: email@sroyarose.com ABN 60589837311
S’RoyaRose Publishing
Welcome to our New Design
Goddess of Avalon Publisher & Editor in Chief: S'Roya Rose, Email: email@sroyarose.com. Sub-Editor: Ellen May Long. Blessings to all who work on Goddess of Avalon, as they do so on a part-time voluntary basis!
Disclaimer
This publication and its entire contents are protected by copyright. No part of this publication may be stored, reproduced or transmitted in any form or by any means without prior written consent from the publisher. Publication of an article or advertisement does not constitute endorsement by the editors, publishers or employees of Avalon Magazine. While every care is taken to provide accurate information the publishers do not accept any responsibility for accuracy of information given. We encourage the highest possible practice of copyright & standards of conscious ethical business.
Copyrighted to S'Roya Rose 2011, all rights reserved. AVALON Magazine is available internationally in Digital & Print formats: www.MagCloud.com. Photos supplied by Shutterstock.com and iStock.com. Artwork: Rachel Hammond. Cover Artist: Ravynne (Michele-Lee) Phelan.
HOW TO Contribute
We welcome all kinds of advertising support and articles related to our focus. All submissions should be supplied as attachments in PDF, rtf or Word format by email to the Editor: email@sroyarose.com. Articles should be 400 to 1,000 words; longer articles will be broken into 2 parts.
our focus includes:
Arthurian Mythology, Druidry, Divination, Transformational Consciousness, Spiritual Ascension, Gurus & Avatars, Mystics, Ancient Wisdoms & New Age Philosophies, Paganism, Wise-Craft, Wicca, Herbology, UFO’s, Paranormal, Crop Circles, Natural Therapies, Gods & Goddesses, Elementals, Fairy Mythologies, Dragon Lore, Earth Care, Spirit Guides, Angels, Spell Craft, Alchemy, Gothicism, Earth Religions, Earth Changes, Courtly Love, Eco-business, Attitudes, Healthy lifestyle, Holistic Foods, Alternative Healing Modalities, Meditation, Emotional Wellbeing, Soul awareness, Metaphysics, Bardism, Animal Spirits, Esotericism and all things Magickal, that inspires the spirit within to grow & play! We reserve the right to edit all material submitted for publication.
Goddess of Avalon Magazine!
S’Roya The Editor
Well, after much ado, some letting go and serious contemplation, I made the transition of joining Goddess Guru together with Avalon magazine, so welcome to the new look GODDESS of AVALON magazine!! It has all the same fantastic features of both magazines. Many of you know how busy I've been with my new centre NAMASTE, which is opening this Spring with a mini EXPO on Oct 5-6th to celebrate; busy and also exciting...!! Spring has sprung and a new cycle begins all over again... I love spring and the new winds that it brings; flowers bloom and hearts are lighter as the sweet scent of blossoms starts to appear, so too must we be rebirthed into the new light of the year as a warmer cycle begins for us all. Halloween's looming just around the corner on 31st October! What will you be doing to celebrate spooky All Hallows Eve...?! Kids love trick-or-treating... such merriment and mischief. Plenty to read in this edition: some of your favourite writers, leaving their footprints on the pages of timeless existence, sharing their wisdom and interesting stories. I share my journey discovering the magnificent leafy faced 'Green Man, the Spirit of Nature'. So grab your favourite cuppa and enjoy the magazine… Happy reading. Blessings, S'Roya )O(
Happy Halloween 2013, 31st October... Dress up, have scary fun, carve a pumpkin, go trick-or-treating... Enjoy folks!
WHOLESALE DISTRIBUTION
We happily supply many Shops, Centers, & Individuals at Wholesale rates in minimum lots of 20 magazines. For all Shop or Wholesale enquiries please contact S’Roya Rose via: email@sroyarose.com
GODDESS OF AVALON Magazine
3
Contents
2   Advertising Rates
6   Green Man The Spirit of Nature, by S'Roya
8   A Walk with Wendy Rule, by Lizzy Rose
10  Invoking the Spirit of Boudicca, by Gemma Lucy Smart
12  Scrying, by She' DeMontford
16  Magical Sea Witch - Sea Magic, by Sharne Tam
18  Avian Guardians of Mystery & History, by Ceri Norman
20  Profile: Galactic Goddess Starina
22  Pagans & Politics, by Lizzy Rose
24  See you on the Dark Side of the Moon, by Kali Cox
26  Empowering the Goddess, by Blanca Beyer
28  Contemplating the Triple Goddess Part 2, by Frances Billinghurst
31  A Rose Faery, by Apple Blossom
32  Searching for Freyja, by ZaKaiRan
34  Good Vibrations, by S'Roya Rose
35  Goddess Book Club, by Zahira Atkins
36  Angelic Messages - Gabriel, by Lit El Star
37  Animal Dreaming, by Scott Alexander King
38  Astrology - The Moon & You, by Sheri-Elizabeth Vaccaro
41  Image - Prayer, by Kerrie Friend
42  Book & CD Reviews, by S'Roya
44  Herblore Elder, by Kelly Ashton
45  AVALON Business Directory
Namaste!
Green Man
The Spirit of Nature
S’Roya Rose
S'Roya teaches Metaphysical Mastery and runs inspirational, self-empowering workshops that help develop the tools needed to live a more peaceful & balanced life. Phone: 0422361040. E: email@sroyarose.com
For many years I've seen the leafy face of the 'Wildman' or 'Greenman', either in drawings or tapestries, paintings or ceramic designs, subtly hiding in carved stone walls or peering down obscurely from Cathedral roofs and ancient Gothic churches, witnessing every move. I even observed him showing up in plays like Shakespeare's 'A Midsummer Night's Dream' as the wonderful character Puck…
(one of my favourites). I've seen him subtly in crevices of trees, and depicted in movies as Robin Hood of the famous Sherwood Forest, but I didn't understand the history or real meaning behind his leafy countenance, even though it felt deeply familiar to me and quite comforting on some level. Then I remembered that when I was a young girl I used to see spirit faces in many of the trees, even naming them sometimes and talking to them. I felt quite happy amongst nature, having grown up as a country girl, riding my horse in large pine forests and along the Aussie gum-tree-lined bush trails. Trees were like kin to me: friendly, familiar, kind, non-judgemental folk who listened and comforted me in times of crisis. Even now I still seek their company when I need to commune with nature, feeling freer and connected to my mystic soul.
However, it was much later, as an adult, that I would understand, as I became personally acquainted with the Greenman in a spiritual way that was both intriguing and delightful. Whilst visiting Glastonbury, staying with a lady friend, an initiating Druid High Priestess, I noticed she had placed a carved green face of him in every room of her house. Druids are very fond of nature, and groves of trees are part of ritual and where gatherings take place. So I couldn't escape him now; wherever I went, there he was, his kind green leafy eyes smiling at me knowingly. Before too long I started greeting him, even sharing conversations with him. Well, he was there, and what can I say, it's what one does when one is alone with no one to chat to. Missing my family and friends in Aussie, I enjoyed his silent acceptance, while his green-eyed reassuring smile became a happy daily sight. He felt like family in a strange way.

It appears that the Green Man has returned because of the growing interest in 'Green Living'. Just as the world's tree cover is being steadily eroded, a love and respect for the Green Man is being rekindled...

Then one day my friend suggested I might like to go and commune with a 2000-year-old Yew tree that resided in Compton Dundon, a little village just outside of Glastonbury. It lived in a small churchyard, quite unassuming really, but majestic in stature, with its huge branches reaching out whilst curving down, and its massive trunk, which had been hollowed out by fires from long ago. I had never seen anything living that was this old before… I felt I was in some royal court, visiting with a high priest of the non-human kind. I sat under this tree in awe of its age, realising that our timeline must be so different; one day for us must seem like one second to it! How much must this tree have witnessed of our human folly, with wars and our bickering over trivialities… It kind of put many things in perspective for me fast. At first I just stood there drinking him all in, feeling his energy, so gentle; then I hugged him for over 4 minutes, then sat underneath, leaning against his trunk. I closed my eyes, connected and meditated.
To my surprise the Yew tree spoke to me telepathically, and I felt such love from it, like a grandfather greeting a grandchild. I spent the whole day just sitting in his energy, letting go, allowing its wisdom of standing still to perform its magic. I was humbled by its ancient timelessness, having witnessed many humans coming and going, and by its message: slow down, just breathe and relax; time always passes; no matter what happens, all will be well; have patience. Humbling indeed…

Now Yew trees are famous, as they live very long lives and have huge branches that become so heavy they eventually touch the ground, creating a new sacred circle of baby Yew trees. This is why they are considered a sacred grove, and the Druids used these groves to meet and hold ceremonies and rituals. Many Yews like this one were planted or moved as younglings to a churchyard to be used as protectors, standing guard over departed loved ones in the graveyards for eons of time.

The Greenman is a symbol of the fertile Spirit of Nature, which lives in all of nature, including us, but is especially felt and seen in trees. He is representative of the natural world, of our connection with our earthly nature, of our animal instinctual wildness that still wants to be free of constraint, to feel the breeze in our hair and grass at our feet, to dance with the flowers, sing with the birds, swim with the salmon, hunt for our food. He represents all that is natural, organic and simple. He reminds us of our strength to endure through all the seasons of time, to weather the storms and stand tall in the face of adversity. I can see why he has become an unofficial icon for the environmental movement, reminding us to connect and interact with the natural world around us. One thing I am sure of is that those who contact this extraordinary being, 'the Green Man', will feel changed by the encounter… I did!
A Walk in the City with Wendy Rule
Lizzy Rose, Australia's Celebrity Psychic. Specialising in Psychic, Clairvoyant, Tarot & Medium Readings. High Priestess of Mumma Moon Women's Circles & Goddess Temple Gatherings. Email: lizzyrosebookings@gmail.com, 0452 479 505.

by Lizzy Rose
Delighted and intrigued with the latest happenings of Australia's own magickal songstress Wendy Rule, together we took a walk, and so, with quill in hand, my mind filled with many a question.

Lizzy: Welcome back to Melbourne, Australia, Wendy; you have been deeply missed. After journeying around the world you've had so many adventures. Do you write music, as many artists do, according to your current life's state and state of mind?

Wendy: Yes, my music is always a reflection of my own emotional and spiritual state. My goal is to be very open and honest in my work. In sharing my own growth and challenges, and my personal relationship with the Universe, I hope that my music becomes a gateway
and a healing tool for my listeners as they follow their own journeys.

Lizzy: You are currently recording your latest album; what's the title?

Wendy: The album is called 'Black Snake'.

Lizzy: What's the nature and theme of this current album?

Wendy: I'm working with the energies of transformation on this album, of Death and Rebirth, and reawakening to the wonders of Nature and the Universe. The past few years have been deeply transformative for me, with lots of big life changes. I got married to my wonderful husband Timothy in November 2011, left my home of 17 years, let go of most of my belongings, and set off to travel the world. We've
essentially been living the troubadour life since then – with no permanent home, and lots of incredible adventures. This album is an opportunity to explore the challenges and rewards that this process has awakened, and especially the process of stripping back in order to create new growth. I explore the mythological themes of descent with songs such as Ereshkigal (based on the Sumerian story of Inanna's descent into the Underworld), and the great flow of Life, Death and Rebirth in songs such as All Life Flows into the Great Mother, and After the Storm. Over the past year I've spent a lot of time in the wilderness of America, and in fact wrote the bulk of the songs when on retreat in the deserts of New Mexico, and the album reflects this strong connection with Earth and her cycles. It's been a deeply healing journey for me.

Lizzy: You have a snake tattooed on your left outer arm; may I ask when you got it, where, and why? I'm wondering how the evolution of the snake unites with the making of your current album?

Wendy: I have always felt a strong connection with snakes. I love them for their amazing physical beauty, and also their mythical and Magical qualities that have woven through so many cultures, from the very dawn of human existence. The snake sheds its skin, its old life, only to be 'reborn' glistening and new. To me, it is a potent symbol of the life force. I always get such a Magical thrill
when I see a snake in the wild (which I do surprisingly often – the latest being a rattlesnake in New Mexico), and I know that this is a sign that things are about to change, or a challenge to change things that are stuck – to shed my skin. I got my tattoo in 2008, but in the 5 or so years preceding that, I had often painted a red snake on my arm during ritual, using red ochre. It made me feel empowered, connected, Magical. When I decided to honour this permanently, I chose the design of the snake from the cover of my second album, Deity (released in 1998). I wanted to capture the symbolic essence of 'snake' rather than have a realistic representation. Every time that I see the tattoo, it is a reminder of my commitment to The Goddess, and Magic, and to my spiritual growth.

I chose to call the album Black Snake after writing the song of the same name. It came about very organically, as I was hiking in the desert one day. I just began singing the first lines, and the song began to flow. In this song the Black Snake is really the embodiment of the Crone energy – gently challenging me to descend into my deep self, explore my emotions and motivation, and not shy away from the feelings of loss and sadness that my process of change was bringing up. But she's not only the challenge; she's also the guide – reminding me of the necessity of this healing journey. That's why, as you'll see in the first verse below, I follow her willingly.

Black Snake leads me down
Down into ground
Down into ground
I'm following her trail
Of spiral scale
Or slither slow
Deep down below
I'm following my instincts
I move like water seeking serpent way to low ground
Lizzy: How many albums have you now released?

Wendy: This will be my 10th studio album, but I've also released 3 live albums, 2 EPs, and a video (yes, way back then!) of a theatrical presentation in 1999 called 'An Underworld Journey'.

Lizzy: Looking back over the course of your career, what's an Australian highlight for you?

Wendy: Oh, there are so many! Every experience has its own beauty – from the big, full band productions of my CD launches, to the small acoustic house concerts. I loved the huge show that I did at the Athenaeum theatre in 2000 for my World Between Worlds CD launch. That was before I began touring internationally, so I was able to put all of my focus on that one big show. I also love all the 'Midsummer Night Faerie Balls' that I have presented over the past 10 years or so. Such fun to dress up, play music, and create some powerful Magic with the community.

Lizzy: Looking back over the course of your career, what's an international highlight for you?

Wendy: Last year's Glastonbury Goddess Conference in the UK was magnificent. I thoroughly enjoyed that. I also loved playing the main stage at the Faerieworlds festival in Oregon, USA, a couple of years ago. But once again, some of my most powerful and joyous experiences have been at very intimate house concerts. Such a wonderful way to really get to know my fans and become part of the community, wherever I may be travelling.
Lizzy: How do you write a song? Can anyone do it? Does it start with a poem? Does the music come first, or the lyrics?

Wendy: My process of writing is very organic. I'll often be taken by a theme, or a story from mythology, and know that I want to explore it. I then just let it incubate – like planting a seed. The bulk of my writing is done when I'm walking in Nature. I love to hike, and will set off for a day in the woods or the seaside or the desert, with a notebook, and my phone to capture any melodies. Then I simply walk and walk, until the everyday dross has settled and I'm in a meditative state. That's when the Magic happens. A tune will often rise up seemingly out of nowhere. I think of them as gifts from the Faeries! Sometimes I'll find myself singing some words with that tune, like I did with Black Snake, but at other times I'll just sing and hike, and then realize that it would be a perfect melody for one of the themes that I've been incubating. Often I'll let the melody and potential words or ideas just hang out together for months before I actually sit down and say 'ok, I'm going to write you now'!

For this latest project I knew that I wanted to go really deep, so I did two separate solitary week-long retreats in the desert, without any distractions like computers or other people, and just let the songs emerge. I'd been so busy on the road that I hadn't had a chance to write deeply in almost a year, and it was really getting me down. I was incredibly relieved to find that the songs just began tumbling out when I honoured the process and gave myself the two Magic ingredients: Nature and solitude.

As my time with Wendy Rule draws to a close, I reflect on how special her life's journey is and how, with an open heart, she has shared so much. I'm very thankful to have connected with such a lovely magickal soul in this lifetime. Blessed may we all be.

Lizzy Rose )O(
Invoking the Spirit of Boudicca
by Gemma Lucy Smart
In stature she was very tall, in appearance most terrifying, in the glance of her eye most fierce, and her voice was harsh; a great mass of the tawniest hair fell to her hips. - Cassius Dio, Roman historian
Have you ever felt so angry about an injustice or circumstance that you aren't sure what to do with that feeling? Confrontation is scary, and it is tempting to do what we can to get through our day pretending that everything is okay when it clearly isn't.
We all have in us the spirit of Queen Boudicca of the Celts.
We have the fire and passion to rise up and speak our truths, to take action and to defy the odds, even in the worst of circumstances. The trick is tapping into that energy and using it with intention, rather than letting anger control us.

When hard times come knocking at our door and we think that life can't get any worse, it's tempting to want to hide under the covers until it's all over. Many of us feel that same sense of hopelessness when we see others experiencing pain or harm... sometimes it's simply too hard, too complex and too intractable to make any sense of. The only solution, it seems, is to run away.

When Queen Boudicca was faced with the death of her husband and King in AD 60 or 61, she was dealt a very rough hand. Her husband had been an ally of the powerful Roman state; however, on his death the Romans ignored his will and the kingdom was annexed as if it had been conquered. Boudicca was flogged, and her daughters raped. When faced with a reality that no woman should ever have to experience, Boudicca chose to direct her energy and anger into action. She led the Celtic tribes of the Iceni and Trinovantes into revolt, successfully taking over or burning to the ground the Roman strongholds of Camulodunum, Verulamium and Londinium. It is estimated that 70,000-80,000 Romans and British were killed by Boudicca's armies in these cities.
Though Boudicca eventually suffered defeat and then died, either by her own hand or by illness, her fierce spirit lives on in the hearts and minds of us today, and in the genetic memory of her descendants. As a woman and military leader, Boudicca was profoundly influential, both in her time and well into the present. She represents the indomitable part of us, the strength at the core of our being that cannot be broken.

Invoking the spirit of Boudicca is about working with the spirit of the warrior in its feminine form. Boudicca is a mother figure – she is the fierce Mother Bear who seeks to bring justice through direct action. She doesn't wallow in self-pity, nor does she act without forethought. She rallies the troops and takes on the armies... no matter how big they are, or how unlikely success seems. Working with this energy is potent and effective when the odds seem stacked against us, or the injustices we face seem beyond our control. It is fiery and passionate, and it inspires the people around us. When we embody Boudicca, it is like a wildfire has started in ourselves and is spreading to our loved ones, our friends and our 'tribes'.

It is important that we use this energy with intent and purpose. Untamed anger is a dangerous weapon, and we may find that the consequences of our actions prove more unwanted than our original circumstances. Before working with this energy, write down exactly what the injustice you are facing is. Ask for it to be
healed through and with your energy, in the way that it needs to be. Do not attach to the outcome. Remember that Boudicca herself succumbed to Rome in the end; however, it was the memory of her bravery and strength that was the most powerful and enduring gift she gave to the world.
How to Invoke the Spirit of Boudicca

1. Allow yourself to sit with righteous anger. Rather than let it consume you, write down your feelings and brainstorm ways to bring about change.
2. When injustice occurs, focus on solutions rather than problems. Rally those around you with inspired action and a passionate vision for a better future.
3. Research Queen Boudicca and other figures that represent the strength of women in positions of power. See them as a part of our history and a testament to the strength of women in adversity.
4. Consider making a simple altar or ritual to the memory of Boudicca. Use it to inspire your actions and bring you hope in dark times.
Gemma Lucy Smart is a writer, creative and academic based in Sydney, Australia. Her passion for nature, ethics and the wonder and beauty of life inspires her work. http://therootsofmanytrees.wordpress.com/
Shé D'Montford is the Editor-in-Chief of "Magick" and now "ESP" magazines. ESP Magazine, available in all leading newsagencies and online, is a unique magazine for the psychic community. Its writers, artists, editors, marketing managers, advertisers and production crew are all well-respected members of the psychic community. Call +61 (0)402 793 604. She is available for readings.
Scrying
by Shé D'Montford
Scrying is the art of seeing and recognizing symbols and their meaning in the world around you. Scrying is a group of divination forms dating back thousands of years. Different modes of scrying have been used throughout history in various civilizations across the world. Scrying is still actively used by many religious and spiritual belief systems. All indigenous cultures on every continent use some form of scrying in their spiritual rites and journeys, including the native Asians, Africans, Americans, Australians and Europeans.
It is also possible to see the future, or prognosticate, using scrying. It can show a querent their past or present. Many great historical kings called upon "seers" to use different methods of scrying to aid them in battle or with love.

Some incorrectly think of scrying as being a skill that is exclusive to witchcraft; however, this is untrue. Several forms of divination are used in the Bible from Genesis to Revelations. Islamic culture honours it (scrying saved Mohammed's life as a child), as do Judaism and Vedic culture.

Scrying can assist you in your daily life. It will help you understand your subconscious processes, and it will open up a dialogue with the universe in which you can be an active participant. You can use scrying to:

Gain security
Get answers
Make difficult choices
Understand déjà vu
See future events
Know outcomes
Predict rises & falls

Anyone can perform scrying. Everyone should. It is much better to open your eyes and try to see where you are going, rather than stumble through life blindly and hope for the best. Using these techniques, you can begin to develop your own ability to directly see the future for yourself, so that you can be your own psychic.

We will discuss 20 simple scrying methods (and one that is a little more complex, so that you can begin to stretch your skills) that you can begin to use today without any specialist training or the need for years of psychic development, such as:

Crystal ball gazing
Tea & coffee cup reading
Mirror scrying
Cloud scrying
Fire scrying
Deep water scrying
Oghams
Runes
Bird & animal omens
Card reading
Dream symbols and more

To perform these methods, an understanding of basic symbology is required. At the end of this book is a list of symbols and their meanings. These symbols are commonly used by psychologists, Jungian analysts, dream interpreters, war propaganda agents, advertising agencies, academics and architects. They transcend cultural boundaries and are a universal form of communication between humans. This is because symbols are the language used by our intelligent universe. We are hard-wired to them. If we become aware of these symbols, we can become aware of universal messages happening around us. This book will give you a better understanding of those symbols, their uses, their meanings and how to relate them to your environment. Once you understand these symbols, you can use the methods in this book to create a way to be clear and receptive to these messages, without the conscious mind doubting, fearing and sabotaging. The techniques, methods and symbols that make up the scrying processes will become tools that you can access at any time and in any place, to help
you get your needs met. Your world will become larger and more fulfilled. You will be more capable of avoiding the potholes in life's highway as you use scrying to open up a two-way dialogue with the universe.

Cloud Scrying

Lying on the beach and enjoying the morphing pictures formed by the clouds is a very ancient form of scrying. The Celtic Druids would look to the clouds for spiritual insight in battles, for spiritual rites, and for the harvest. This was called "Air Scrying." Children love doing this, and it can be a way to make them aware of their spiritual power very early in life.

Method: The secret to successful cloud scrying is to be totally relaxed. Let the interpretations of the symbols flow and float as easily through your mind as the clouds through the sky. Allow your intuition to have full say when interpreting the symbols that you see. For instance, if I am thinking about my business and I notice a cloud that looks like an eagle's head, it could indicate for me to increase my market to the US. Eagles are blessings from the gods and good things on the way.

Cloud Vaporizing - Creating Your Future

Always remember this: a person controls their fate; their fate does not control the person.
If you have seen something in the clouds that may indicate a bad situation ahead, this is one of many techniques you can use to make bad luck vaporize (along with the cloud image that is indicating it). If you can make that cloud disappear, then you will be able to turn away from the path that is not good for you. If the cloud persists and won't dissolve, then you may not be able to avoid this situation. In that case, use the information shown to you to prepare. If you are ready for it, you can make the most of a bad situation. For instance, there were many signs to indicate the economic downturn in 2009-2013. Lack of clouds and long-term drought were good indicators that this was on its way. My clients who listened to my warnings were able to clear their debt and cash up, ready to increase their assets, as others who did not listen were forced to sell theirs.

Method:
Locate a small cloud.
Do the Zi Gi Meditation.
Focus all your intent, with your out-breath through the third eye, on vaporizing that cloud.
Do this Zi Gi with your eyes open, watching the cloud.
Concentrate with your eyes.
Continue to visualize breathing out of your third eye and blowing the cloud away, like blowing out a candle flame.
Say to yourself: 'Make it evaporate, make it disappear.'
Don't let that dark cloud out of your sight until it is gone.
S'Roya Rose - Priestess
Psychic Tarot Readings, Medium, Numerology, Reiki Master
Transformational Healing Therapy
Spiritual Teacher & Wellbeing Coach
Goddess Circles & Workshops
Publisher of numerous Books, Oracle Cards & Magazines
E: email@sroyarose.com
0422 361 040
www.sroyarose.com
Magical Sea Witch - Sea Magic
by Sharne Tam

Sharne Tam is a practicing Magical Sea Witch, Psychic and Healer, and teaches Aromatherapy and Crystal workshops at the Namaste Centre in Lismore, Northern NSW. To book your session or purchase her famous Smudge Spray, call her on 0402 704 536.

Connecting with the Ocean's energy can purify, transform, heal and guide you. You don't have to live near the ocean to work with her power. Tapping into her magical energies with some simple practices, from the privacy of your own home, can take you on a journey within, a journey that can reconnect you with Mother Nature and bring some magic into your everyday life. After all, 70% of the earth's surface is actually water.

There are a number of Goddesses associated with the Sea. The Goddess Yemaya, known as the Mother Goddess of the ocean, has different
names, but appears in folklore stories from all over the globe. Her origin is thought to be in Africa, but she also appears in stories from Brazil, where she is called Yemanja. The Goddess Aphrodite, known as the Greek goddess of beauty and love, was according to legend 'sea foam born'. Her name comes from the word aphros, meaning foam. Scallop shells, dolphins and pearls have all been associated with her. Aphrodite, Ishtar and Venus were all called Stella Maris, or Star of the Sea. Working with a Sea Goddess or God is one of the most direct ways to connect with the sea's magical energy.
Sea Altars

We can use meditation, ritual or prayer to work with sea energy. One of the easiest ways to bring ritual into our everyday life is to set up an altar. Tending to your altar on a daily basis, by lighting candles, changing wilting flowers or refreshing your offerings, grounds us, and it is a physical reminder of the magic we can bring daily into our lives. It doesn't matter what objects we have on our altar; there is no right or wrong. It's the intention behind the object that has the most power. Some of the most powerful ritual magic and spell works I have done have come from using the most basic and humblest of items. Your sea altar can be as simple or extravagant as you like.

My altar never stays the same; it changes like the ebb and flow of the ocean. I walk on the beach quite regularly, summer and winter, and find treasures on each new adventure. There is always something to find, from seashells, feathers and driftwood, and my favourite treasure of all to find is sea glass. Always make sure, if you pick up a shell, that it's not still occupied by a tenant!!

On my beach walks I always carry a green shopping bag; you know, the ones you do the grocery shopping with. Or I have a pink sand bucket that I got from the toy section of a department store. I never carry plastic bags; it's too risky that they fly off and end up in the ocean, and a turtle will swallow one, confusing it with a jellyfish. My promise to the Ocean, Yemaya and Neptune is that I will always clean up the rubbish that I see and find on the beach and surrounds. It's my trade for the treasures that I take home. The rubbish I collect ranges from small pieces of plastic to discarded fishing lures, hooks and fishing lines. It's a sacred trade. Regardless of whether I find any treasures or not, I always collect every piece of rubbish that I see.

On my sea altar I use candles in the colours of the ocean, mainly blues, turquoises and white. I surround the bases of the candles with either sea salt or beach sand. It also has a large scallop shell that I use to burn a charcoal disc for my resins and herbs. There is a glass jar with fresh sea water from my favourite local beach, and a ceramic plate in the shape of a starfish that I fill with sand. In the sand I draw Norse or generic rune
symbols to enhance my spell work or to specifically dedicate my altar. Another precious item is a sand dollar that I found on the first of my many pilgrimages to Hawaii. Sand dollars are not shells, but are related to starfish. They represent awareness, balance, wisdom and transformation, and help you to acquire knowledge.

Sea glass, which is sometimes referred to as 'Mermaid's Tears', is one of my favourite tools, and I fossick for hours to find it. Although not originally from the sea, sea glass is transformed from its broken, sharp-edged shards as it is tumbled and massaged against the sand of the ocean floor over years, and in some cases decades, to be washed up by the tides transformed, smoothed, reshaped and beautiful. I make jewellery, like earrings and pendants, as well as the most beautiful crystal-sounding mobiles that serenade me as they are blown around in the fresh sea air where I live. I have quite recently started experimenting with some of the larger sea glass pieces I have found as a type of seer stone or divination tool whilst doing readings for clients.

Dive in deep, relax, enjoy and be embraced by the Magic of the Sea )O(
Avian Guardians of Mystery & History by Ceri Norman
On my most recent visit to Stonehenge it was not the huge stones that drew my attention, but the vast number of crows perched atop the lintels, which appeared to give the stones a dense crown of jet-black plumage. For the crows the large sarsen stones make a great vantage point, and there must be plenty of food for them in the surrounding area. Watching the crows closely, they appeared to be watching me back with their intelligent eyes, and it struck me that perhaps, just perhaps, they were also guarding this ancient megalithic monument.

This idea may strike you as a little odd, but in Britain the corvid family - which includes crows, ravens, magpies and rooks - has inspired a great deal of folklore and some very varied superstitions. To some they are omens of good fortune or sky-based protectors; to others they are sinister omens of doom, gloom and even
death. The collective noun for crows is the somewhat macabre 'murder of crows'; the rooks have a 'parliament'; while the collective noun for magpies is 'a tiding of magpies', which harks back to the idea of the birds as omens of fortune. There is hardly a child in Britain who does not know one of the variations of the Magpie Rhymes, where the number of magpies seen indicates a specific omen: "One for sorrow, two for joy," etc., and in some parts of Britain people will still greet a single magpie with an almost ritual phrase and action, in an attempt to ward off the sorrow the sighting of a single magpie allegedly portends. Meanwhile, rooks are omens of good luck, as their arrival portends the start of spring, and if a raven landed on your house it was thought to bring prosperity with it.

Most famous of the corvid family folklore is undoubtedly that of the ravens who live at the Tower of London, for it is said that "If the
Tower of London ravens are lost or fly away, the Crown will fall and Britain with it." Extreme lengths have been gone to in order to make sure the ravens remain: their wings are clipped to prevent them flying too far, and when all but one of the ravens died during the Blitz in World War II, Churchill had others brought in to make up the numbers as a matter of urgency. Just how this odd superstition came about is a bit of a mystery; it is now thought to be a Victorian flight of fancy, which has borrowed elements from history and Celtic mythology.

From the ancient Welsh tales of the Mabinogi comes the story of Brân the Blessed, King and Protector of Britain. Brân's sister, Branwen, was married to the Irish King, Matholwch, but he mistreated her. Branwen managed to get a message to her brother, by either a crow or a starling – depending on the version – and Brân and Matholwch went to
war. Brân's name means 'raven' or 'crow', and his sister's name means 'white raven/crow', and these two are likely to be ancient raven-associated Welsh pre-Christian deities. During the battle Brân was injured; he ordered his men to cut off his head and to bury it, facing the sea, beneath the White Hill, so that Brân could continue to protect Britain, especially from invaders. That White Hill is where the Tower of London now stands. For a while this plan worked, until another figure associated with corvids, King Arthur, came along.

King Arthur is most often associated with the dragon (Pendragon means 'Chief Dragon') or the bear, as that is the literal meaning of his name, but in a very strange Welsh tale known as The Dream of Rhonabwy (also within the Mabinogi), Arthur, like Brân, is closely associated with ravens. In Cornwall, King Arthur was said not to have died at the battle of Camlann, nor to have been taken to Avalon, but instead to have transformed into another of the corvid family - the rare red-billed and red-legged chough. Cornish legends say that King Arthur protects Cornwall in this form, and that should the chough ever leave Cornwall and subsequently return (as happened in 2001), King Arthur would return...

Unfortunately for Britain, King Arthur decided that he was so great a
warrior that he could protect Britain himself, though perhaps with the help of his Knights of the Round Table. During his reign Arthur ordered Brân's head to be dug up from the White Hill, and no sooner was that done than the Saxons invaded, followed by the Vikings, Normans, etc. It may be that the idea of keeping ravens at the Tower is symbolic of the presence and protection of Brân, the raven god of Britain. While his head is no longer beneath the hill below the Tower, his sacred birds are still there, continuing to keep watch, as Arthur does over Cornwall in the form of a chough, and just like those crows at Stonehenge... Crows, ravens and the other corvids are beings that live between worlds, walking upon the earth or gliding through the sky. For centuries such birds have been believed to fly between the realms of the living and the dead ancestors, so their congregating upon the ancient stones of Stonehenge seems somewhat apt. While Edgar Allan Poe had his raven saying the pessimistic "Nevermore", the Romans believed its call relayed a more optimistic message; they likened its croak to their word cras, meaning 'tomorrow'. I wonder of what mysteries the corvids speak as their calls echo around the sacred stones, and they continue to guard this special place of the ancestors…
Ceri Norman is a folklorist, historian and freelance writer, with a deep love and reverence for history, folklore, folk tales, oral history, storytelling and writing (both fiction and non-fiction). Her particular areas of interest are the stories and history of Celtic Britain and Scandinavia.
GODDESS OF AVALON Magazine
19
PROFILE
The Galactic Goddess
Every now and then I stumble across something I know will really make a difference because it adds to the evolvement of mankind... well... recently I had the joy of meeting a very special Galactic Goddess named Starina. What a delight it was to meet a Star Sister who, with the use of higher-dimensional galactic frequencies, has created a Sacred Symbol Deck like no other. Some of these symbols caught my eye immediately as being familiar... Yes, much like crop circles that act like keys, so too these geometric designs are a way of our Star Being friends communicating their message of love and healing
to all who work with them. These ancient symbols have a way of communicating their message deep into the cells of your body. Not just a beautiful picture, they are activational healing templates that can be used to dive deeply into the "Rabbit's hole", creating huge transformational inner shifts. While talking with her I experienced one of those deep shifts within my heart for myself... Just amazing! With these symbols you can seek counsel with the Elders, retrieve gifts from past lives and claim back your power. The ways in which you can use
these cards are endless. Starina has also integrated these symbols and created her own brand of Galactic Reiki workshops... Below is the book cover and the first 12 Galactic symbols. Inside her book you will find, interwoven throughout, Healing Meditations; suggested combinations for performing Readings and Healings; and Grounding, Protection and House Clearing techniques. If you would like to know more, or to book an appointment or workshop, contact Starina on her websites:
AUSTRALIAN GODDESS CONFERENCE 2013
‘INTO THE WILD’ On October 18-20, GAIA Inc will be holding our 11th annual Australian Goddess Conference. This year’s beautiful venue, located in a bushland setting, is the Baden Powell Scout Centre in Pennant Hills NSW. Onsite accommodation available. Join us for a weekend of ritual, workshops, dance, storytelling & deep connection to the Wild Goddess. International artist Kellianna & our very own world-renowned Wendy Rule will be our special guests.
We invite you to step into your Wild Wisdom! Goddess Association In Australia (GAIA Inc) is committed to educating the wider community about Goddess, She of 10,000 names and faces who has been present in every culture since the beginning of time... For all information please contact conference@goddessassociation.com.au
Namaste
A Centre for Holistic Wellbeing and Light Healings
Readings
Workshops Coaching
Reiki Meditation Circles
Therapies
Seminar Rooms
Sacred Space
Contact S’Roya email@sroyarose.com 3rd Floor Carrington House, Carrington Street, Lismore 2480
To book space or a session call
0422 361 040
Lizzy Rose
'Pagans & Politics' Interview: Clive Palmer & Doug Hawkins from the new Palmer United Party
Do you ever think about what you would say if you had a voice in Parliament? If your concerns could be addressed? If your Government would answer any question at all? As a Pagan, a Witch and a Psychic, I have often wondered how Politicians in positions of leadership would answer my questions that relate directly to my belief system and spiritual practices. How does the current legislation affect my magickal practices, and what, if anything, would the Australian Government change to aid the Australian Pagan Community? Doug Hawkins was born in the Western General Hospital, Footscray, Victoria on 5th May 1960; Clive Palmer was born in the same hospital in 1954. Doug is now 53 years of age and Clive is 59. Doug Hawkins is a down to earth, kind, friendly, lovable man, and in his younger years was known across the country as an Australian football star. Doug is now running for a seat in the Senate and entering a world he isn't at all familiar with. Clive Palmer, a self-made Billionaire and a strong, direct character, has declared he wants to be the next Prime Minister. Using my initiative and most magickal of ways, I conjured a few meetings with both gentlemen and, with a positive exchange, I share my interviews with them here.
Lizzy; Clive, in your opinion what needs to change in government?
Clive; Government!
Lizzy; Doug, as a working suburban father and a well-known man in your community, what needs to change in Government?
Doug; I know and see many cases of people who can’t afford to turn their heaters on; we need to address rising electricity costs. People shouldn’t be robbing supermarkets for food; some people can’t even afford to pay for a license to learn to drive, to then be able to drive to work. Let’s address employment issues. We have 120,000 homeless in Australia; we need to spend money on our own people. We need to enable people to live more comfortably. We need to look at the situation with the boats – who are these people coming onto our shores? And detention centres – are they safe places? Isn’t there another way? We should be keeping food produce and Australian made products in our country.
Lizzy; Doug, how do you see the result of the 2013 election from the people’s viewpoint?
Doug; I think the people are sick of both major parties; they have had enough.
Lizzy; Doug, I know you to be a sweet, kind, gentle, caring, humble, down to earth gentleman; however, many voters in the alternative world may not know you as I do, or as the footy world knows you, so why should those that don’t know you vote for you?
Doug; I met Clive Palmer only a short time ago. I share Clive’s passion for Australia and for change. I’m a downright, fair dinkum, not scared to laugh at myself, honest bloke. I don’t come across too seriously. I’m just an average guy that played footy well. I’m no better than anybody else; I’m straight up, not complicated, yet at times I’m a larrikin, but above all, I’m a genuine people person with a ‘what you see is what you get’ attitude.
Lizzy; Clive, how do you feel about gay rights? If you had the authority, would you legitimize gay, bi and transgender marriage and equality?
Clive; Our Party Policy is that all members have a conscience vote. We don’t want people to vote against how they feel. For too long Party Leaders have stated fixed positions on social issues designed to intimidate and influence others. I am not going to state my position because I want everyone to exercise their vote in accordance with their conscience.
Lizzy; Tell us more about Clive Palmer. What does he mean for the alternative communities, the pagans, witches, druids, new age and alternative spiritual people of Australia? How will you help us?
Clive; My aim is to help all Australians have a safe and secure
Lizzy Rose Australias's Celebrity Psychic. Specialising in Psychic, Clairvoyant,Tarot & Medium Readings. High Priestess of Mumma MoonWomen's Circles & Goddess Temple Gatherings. Email lizzyrosebookings@gmail.com 0452 479 505
future, to raise the standard of living, and to eliminate discrimination so that all Australians are treated equally under the law.
Lizzy; Clive, when I discuss politics with Spiritual people, hippies, artsy folk, musicians, writers, authors, tradesmen, painters, nurses, medical practitioners, natural healers, meditation teachers and the like, if I ask them who they vote for, they generally answer with THE GREENS. I gather they assume or hope that this party will be better for our environment. How does your party compare?
Clive; Environment doesn’t just mean the external world; it also means our inner spirit, what we believe in our life. We need an environment free from fear and in sync with the natural world around us.
Lizzy; Clive, what will you bring to the community for families, for small business and for the folk who tend to live off-the-grid, i.e. in the hills and forests, the farmers, growers, pickers, packers etc.?
Clive; A better return for their efforts and a safe and secure future.
Lizzy; Clive, what influence do you think Julia Gillard, as a female, had on the Australian people?
Clive; A negative influence, as she lied about the Carbon Tax and, as electricity prices went higher, she seemed to dump those who most needed her help once elected.
Lizzy; Clive, do you think there is a place in government for women?
Clive; As Chairman Mao said, women hold up half of the sky. Women and men need to be in all parts of our lives in Australia. No person or group should be excluded from Government.
Lizzy; Clive, may I ask you with respect, what influence do women have in your life?
Clive; Both my mother and my wife have been my trusted advisors. My first wife was my inspiration while she was alive.
Lizzy; John Dee was a famous astrologer for Queen Elizabeth 1st in 1558. She consulted him and sought
his influence in serious decisions in regards to running England. It is known historically that people in government consulted Clairvoyants with matters of state. Do you see the relevance in using psychics, astrologers and mediums to provide counsel for those in government?
Clive; All citizens of Australia need to be consulted in respect of decisions that affect them. That is a fundamental principle of democracy.
Doug; “Yes I do, if the Psychic was proven to be credible and accurate. If the Psychic was you, yes, I would consult you. I’d be a mug not to use the gift of insight that you have. It’s wise to consult a recognised Psychic; I wouldn’t go to just anyone, but I’d most certainly consult you. Yes, I’m very open to it and would support it 100%. I’d see it as a great advantage for the people and for my party.”
As my time with both men now comes to a close, I think on how important it is for Pagans of all descriptions to be acknowledged and understood in our society. How acceptance and tolerance is such an important part of our nation’s equality. Thank you to Mr Clive Palmer and Mr Doug Hawkins for answering my questions for our readers. Will it be the Palmer United Party who brings in change? For as we Vote, so will it Be. Why not check them out yourself?!
See you on the Dark Side of the Moon... Kali Cox is a Domestic Goddess, an empath, busy working Mum of 2, aspiring collator of the written word, and full-time Solitary Witch. She has appeared on local Brisbane radio, and is the creatrix and moderator of the ‘Witches Of White Magic Unite’ Group Fan Page on Facebook.
I think I must have deviated off the beaten track quite spectacularly when I find myself wanting to clean. Moments of insanity such as these usually hit around waning moon and Solstice times, whereupon each turn of the wheel takes me to a place where I become possessed by the Clean Demon and am powerless to control my body as it hurtles me round the house scrubbing and polishing, exorcising dust devils who like to spite me by returning again as soon as my back is turned. I am compelled to purge by cleansing my environment.
Usually, and much to my dismay, the body often follows suit. The mind, overwhelmed by all of this activity, throws its hands in the air in defeat and jumps on the bandwagon, flooding everything with abundant hot salty oceans that overflow from malfunctioning tear ducts, verification that our bodies are made up of 70% water, and that a certain drought, for me, is imminent. As always, after such a period of unwittingly decadent self indulgence, I am depleted and exhausted, yet fresh again at the turning of the wheel, the rebirth of the sun, and the waxing silvery moon of new promise.
Benjamin Franklin said: "In this world nothing is certain but death and taxes." Life is a constant evolution of change, and when we are not prepared for, or willing to flow with, this change instead of against it, we can sometimes lose our way. The path becomes overgrown and dark, choked in weeds of doubt, and the dust devils dance in the deepest corners of our minds, obscuring the view of the clarity we seek. Sometimes, all we crave as human beings is a little drop of 9 carat reassurance from the people that we love that it will be ok, that we will forge on ahead with scythes and swords through the bracken, creating a path forwards to the light. However, due to circumstance, at times this may remain as elusive as the gold it is likened to. Sometimes we are all we have, and what remains is to find it Within. For all the support our Dear Ones can give us in times of darkness, we should really search first inside our hearts for the faith and assurances we need to find our way, for we know our own truths and what we hold tight to believing in to carry on. What is true will always remain, even if it takes another form. Like our beloved moon, negotiating the way back to whole can also be cyclic. As women in particular we are often in harmony with the moon for our menstrual cycles, our emotions, intuition and our magical practice. By working with her energy and keeping a little faith, we can re-emerge refreshed, restored and
reassured that we are walking in the right direction, head clear, held high, with a sense of purpose. The New Moon for me is time to withdraw, to be held in her dark embrace and allow myself to let go and feel. I crawl inside, away from the world snuggled in the warm cocoon of Self and ride the wave wherever it takes me. Consciously, I have no choice over my body-mind at Dark Moon - about as much as I have at Solstice, truth be told. It's like the switch flips to Emolicious, the slippers stay on, and I take out shares in Kleenex and Cadbury. But I have learnt to go with it, because it too shall pass. It's ok to feel wretched. If things are sub-par in your life, then they are sub-par. Let them be. But don't let it take over. Don’t stay there on extended holiday, no matter how good the peppermint tastes in that chocolate. I really don't think we allow ourselves to feel like rubbish sometimes. We bottle it up, we smile through the pain and we get on with it because life doesn't stop for our sadness. We need to make it stop, before it manifests into illness - physical or mental - and before you know it, the body is ready for a purging marathon that you don’t want to run, and you trip and fall into all the cracks that have opened up on the path while you were busy denying their existence with an upper lip of steel that would make BHP proud. Throughout the moon’s Waxing phase, I like to take all of those rubbishy things that I have been struggling with and collecting rather impatiently during Her Waning phase,
and channel them into my own little personal affirmation mantra:

*Feel *Acknowledge *Release *ACT

*Feel it - Good or bad.
*Acknowledge it - Allow yourself to know it's ok to feel it.
*Release - Let it go. Write it out and burn it. Talk. Cry. Laugh. Exercise. Pour its energy into a stone and throw it with all your might into the water – Just GET RID OF IT!
*ACT - Start moving forward, no more dwelling in the past. CHANGE it for the better.

All the affirmations in the world are wonderful for motivation, but mean nothing unless we make the choice to act upon them. I absolutely LOVE the Full Moon for this. Her revitalising, manifesting energy empowers me to create my desires, restores my faith, and shines her magical light on the way forward. I love our intimate conversations, the way she soothingly strokes my hair with silvery fingers. The Mother: nurturing all the broken parts of me that I am restoring back to whole again. Take a few minutes at night before sleep to allow Her faithful and ever present loyalty to be the true constant that guides you on your path to Self. Treasure the 9 carat moments when they come, and hold close to your heart the faith that they bring. And don’t forget the chocolate – to help with the cleaning, of course!
by Blanca Beyar
The Empowered Goddess
…vision of softness… exquisite aspect of the Goddess, in every moment and in every experience that you partake in!
Blanca Beyar wears many hats in the Goddess field. She is a Spiritual Author, Guru, Intuitive and Medium. To learn more about Blanca, visit her website.
Contemplating the Triple Goddess Part II

By Frances Billinghurst

Within modern Paganism and Goddess-centric spiritual traditions, the Triple Goddess is perceived as being both sexual in nature and a mother. These two aspects have been divorced from each other in the nearest Western equivalent to the Goddess, that of the Virgin Mary of the Catholic Church. The original meaning of virgin may not necessarily refer to a “chaste woman noted for religious piety and having a position of reverence in the Church”, but to a woman who is simply “not married”, without any emphasis being placed upon her sexual behaviour (or lack thereof). It is this original meaning that is meant when referring to the Virgin (or Maiden) Goddess, into which category Artemis, Hestia and Athena fall.

The Mother aspect of the Triple Goddess is represented by the full moon, or the earth. Often this aspect of the Goddess is seen as Prithvi Mata (Sanskrit for “Mother Earth”) or Gaia, the earth itself, and is thought of as the Great Mother - the divine feminine which gives birth to us, nourishes and sustains us, and in which we finally find our rest and rebirth. This aspect of the Goddess, although suppressed by the Christian Church, persisted throughout the medieval period amongst scholars and others who were still orientated to the ways of ancient Paganism. In a 12th century English Herbal, for example, the Goddess is referred to as “Mother Nature, who dost generate all things and bringest forth ever anew the Sun.” It is She who “nurturest life” and “when the spirit of humankind passes, to Thee it returns”.

The last aspect of the Triple Goddess is the Wise Woman, the Crone or Hag, who is the keeper of the mysteries, and is often associated with either the waning or the dark phase of the moon. The Crone Goddess symbolises our own inner wisdom, but also the elders who, in a tribal society, were the living repositories of the history and lore of that tribe. Their role was essential to the successful functioning of society, and age had an honoured place.

One common error that people often tend to make when first coming across the Triple Goddess is relating human years to the various aspects. While the Maiden aspect of the Goddess is more often than not depicted as a youthful woman, it does not mean that someone of a more mature age cannot associate with this youthfulness. It is increasingly common today to see people over 50 years of age enjoying freedom and life in a way they have never experienced before. Likewise, with the pressures of modern living and the breakdown of what is deemed
the “traditional” family unit, it is not unusual for children or even teens to step into the role of the Mother by caring for and nurturing younger siblings. Being able to identify with the Mother Goddess is also not solely reserved for those women who have physically had children, or even those who care for children. There are numerous ways of “mothering”, as well as numerous “things” (animal or other) that can be mothered. The different aspects of the Triple Goddess can therefore be experienced at whatever age we find ourselves.

One obvious complication in aligning the Triple Goddess with the moon phases is that the moon actually goes through four distinct phases during its cycle. This raises the query as to whether, within modern Paganism and Goddess-centric spiritual traditions, there is actually a fourth aspect of the Goddess that Graves overlooked. The late Shekhinah Mountainwater wrote about the concept of the “Dark Maiden”, whom she identified as the Sorceress, who is associated with the waning moon and who was the opposite of the “Light Maiden” of the waxing moon. Goddesses that fall under the category of the “Dark Maiden” include Morgan le Fay and Persephone, as well as those who were associated with magic, shapeshifting and transformation. Franklin also saw the Goddess as four-fold; however, within her tradition these phases are not solely connected with the moon. The four-fold Goddess is also associated with the four seasons. Franklin further points out that within modern Strega (Italian witchcraft) the four phases of the moon are acknowledged, each with an associated aspect of the Goddess: Diana is the Maiden of the waxing moon, Losna is the “Light Mother” of the full moon, Manea is the “Dark Mother” of the waning moon, and Umbrea is the Crone connected with the dark moon.

Within my own work, the Dark Goddess tends to incorporate both Mountainwater’s “Dark Maiden” and modern Strega’s concept of the “Dark Mother”, as well as other aspects that tend not to be easily pigeonholed into the triplicity of “Maiden”, “Mother” and “Crone”. These Goddesses include the ancient creatrix of the heavens and earth (Tiamat in Babylonian myth), the rulers of time (the Hindu Goddess Kali), and the initiators and keepers of the hidden knowledge (the Welsh Arianrhod and Cerridwen). Goddesses who rule the various Underworlds, such as the Sumerian Ereshkigal, also fall into the category of being a “Dark Goddess”.

Despite the connection of the Triple Goddess with the moon being a relatively recent idea, to the followers of modern Pagan and Goddess-centric spirituality it appears to already be anchored deep within our psyche. A reason for this could be that through understanding
the triplicity of the Goddess in this way, we are able to access and gain a deeper insight into the divine feminine as a whole. When we look upon Her many faces and feel Her changing rhythms, we are also able to go within ourselves and gain deeper knowledge about our own changes, from moon phase to moon phase, and throughout the course of the years. If we associate the Goddess with the moon, then I believe that the hidden fourth aspect, that of the “Dark Goddess”, should also be acknowledged. This is because each aspect of the Goddess lives within us, the seen as well as the unseen.

Frances is an initiated Wytch and High Priestess of an active coven based in Adelaide, South Australia. She has an interest in the occult and all things magical, as well as folklore and mythology. A prolific writer, her articles have appeared in over 12 separate publications worldwide, with essays in Unto Herself: A Devotional Anthology to Independent Goddesses and Shield of Wisdom: A Devotional Anthology to Athena, as well as two anthologies yet to be published by Avalonia. Her own book, Dancing the Sacred Wheel: A Journey through the Southern Sabbats, can be purchased by contacting the author. For further information, write to PO Box 2451, Salisbury Downs SA 5108, Australia; visit the Temple's web site; or email frances@templedarkmoon.com.
Wild Woman Weekend! is an annual event held in the bush lands of Western Australia that gifts a sacred, magical space for women to gather together to share stories, wisdom & journeys, to connect with Mother Earth & Goddess Moon, and to empower & encourage one another to embrace the wild within. Now in its 9th year, the custodians Ariana & Larissa have yet to theme this year's inspiration. Keep checking our website for more info... With drumming around the fire, dancing, meditations, creativity, sumptuous magical feasting & weaving the threads of sisterhood, this year promises to sing to your very soul. Accommodation, nourishment & magical shenanigans are all inclusive – all you need to bring is the wild woman within! For more information hop onto the website or find Wild Woman Weekend on Facebook.
The Magical Sea Witch Smudge Spray
Now you can create a blessed atmosphere in your home, office, car or hotel room, with just one spray.
$24.99 +Postage (Anywhere in Aussie)
Call 0402704536
The Rose Faery
Apple Blossom
As I was wandering alone in my English garden, I smelt the most intriguing fragrance, drawing and trapping me in its delicious enticement. My senses opened and I was instantly in love, for it was the energy of love which I felt throughout my whole being. It caressed my skin, my heart, and my entire aura. I was wrapped completely within a tender warmth as all fears dismissed, all negativity wiped away. As I drew closer, there she was. A faery! In her pink petticoat skirt I spotted her dancing eagerly around the petals of the rose. Singing with glee, happiness, and all heartfelt things. She was in love with nature, as I was in love with her energy. Like a bee to pollen, I had been
attracted like a magnet to her inner and outer beauty, as my heart chakra now spun in colourful circles, ready, waiting, wanting…

Attributes of the Rose Faery:
• Nurturer
• Passion
• Warmth
• Fertility
• Desire
• Sexual energy
• Affection
• Gentleness
• Unconditional love
• Feelings
• Relationships
• Trust
• Self acceptance
• Romance
• Gratitude
• Friendship
• Kindness

FAERY BASKET OF LOVE SPELL
Day: Friday - Goddess Freya, day of love.
Moon phase: Any.
Tools:
• Baskets, one per gift.
• Pink ribbon.
• Cardboard.
• Pen.
Ingredients:
• Red or pink roses.
• Apple blossoms.
• Hibiscus.
• Pansies.
• Tulips.
• Violets.
Purpose: By making this gift and giving it to friends you are thanking them for their love, kindness and friendship.
Method: You will need a basket for each person who shall receive this gift. Wrap the pink ribbon around the baskets to decorate them. In the baskets arrange an array of beautiful flowers such as the ones I suggested; each represents the energy of love - most importantly roses! Cut love heart shapes in cardboard and write this message on them to be attached to the handles.
"Abundant-fertile-soil
Love and passion boil
Faery energy come to thee
One, two, three
So mote it be…"

Apple Blossom is a natural psychic, medium, and witch. She writes for many new age magazines in Australia and internationally and is the author of three books. Photo taken by ‘Julz Photography’ 2011
Searching for Freyja
by ZaKaiRan
I have always been a searcher of the truth. This was a natural quest to pursue, since I could see from the very beginning that I was surrounded by illusion when everyone else thought it was all real. And even as alluring as these trappings of life have been, none could sway me from the path of truth
and divinity. Of course the illusions that tantalize us along this journey can be very exciting and look and feel so delicious, and it is perfectly alright to thoroughly enjoy the fruits of the world. In fact, enjoying these fruits without getting trapped by them is where true mastery lies. Denying
and repressing the sensual world is not where true enlightenment lies. It is easy to be enlightened in a cave, but to maintain this serenity in the world of chaos shows true mastery. And being an ascetic is not much fun and way too full of repression for this master. It is my next step in this lifetime to
live in the “real” world, which is the next step of any master’s journey. This is why so many monks and nuns of the east have been “forced” to come to the western world. And we can thank the Chinese invaders for this; otherwise most of the amazing Tibetan monks would still be in their monasteries and we would not be sharing in their amazing wisdom. This is a prime example of how every “tragedy” is a miracle if you can see the gold behind it and allow for new beginnings to take place, rather than clinging to the past. Every step along this amazing journey on earth is a miracle, and it is your choice how you will react to “tragedy” and heartache: whether you will pick yourself up, dust yourself off and boldly continue on your journey with dignity, or whether you will stay down, crawl away and “live” the rest of your life in fear. Notice the quotation marks, because living in fear is not living, it is dying. Either you are moving forward and growing or you are dying; there is no in between. This is why so many people decay and die with regrets, never having expressed even a portion of their true creative potential. You did not come to this planet to struggle! And you do not have to struggle to achieve enlightenment; this is a lie! Yes, you will certainly experience struggles, trials, tribulations, heartbreak and heartache on the yellow brick road of truth, but again – how will you react to these experiences? In an empowered state or a fearful state?
The Goddess Freyja

I have always loved the Norse Goddess Freyja ever since I realized that Friday is named after her (Freyja’s day). She is a ‘warrior’ goddess (a Valkyrie) of beauty, love and destiny. She was a very sensual and compassionate goddess who helped the dying, especially warriors, transition to Valhalla. And Friday is such a typically exciting day because of the feeling that “now I can play”, now I can pursue beauty, love and my destiny, even if it is for only 2 days. Which is a much more natural state of being than the typical drudgery of the ‘work week’, which for most people is not in the pursuit of their destiny, but is just a means to an end. Unless you are masterful enough to be pursuing your destiny every day of the week. Allow the goddess Freyja to help you gracefully transition from the ‘battle field of life’ to the Valhalla of your true destiny!

The Goddess Freyja is guiding us to the truth that we were not meant to be slaves to someone else’s dreams; instead, we were meant to create our own dreams! To be ‘warriors’ and pursue our own destiny. Winners go for their own dream – not anyone else’s. They do not listen to society, or parents, or siblings, or lovers, or friends… about what they should or should not do; they determine this themselves. Achievers have very big dreams and they passionately and patiently create these dreams in their lives through consistent and persistent action. And they do not base reality upon their current circumstances.

“Successful people are not people with no problems, they are people who have learned to overcome their problems.” – Earl Nightingale

In other words, they do not limit themselves based upon their current experience. They allow miracles to take place that are in the realm of mystery and the unknown that is off of their
‘radar screen’. Whereas ‘victims’ only believe what they see and feel. They only see what is on their radar screen. And they can’t imagine how they can change their lives, or that there even is a problem that needs to be changed. Because life has stomped their imagination in to the dirt, so they don’t try. They also base their decisions and actions upon past experience, which is fear based. “If it has happened this way in the past, then it will surely happen that same way in the future.” ‘Losers’ also have dreams, but do not typically believe they are possible. And listen to the judgmental doubters (aka friends and family) in their life, or to the media, governments, etc., rather than their own hearts. Those that listen to the call of their hearts break out of the shackles of society and family to fulfil their dreams! Don’t be afraid of your dreams! You were born to win! You were designed for accomplishment! You were engineered for success! You were endowed with the seeds of greatness! And you can be, do and have whatever you want! Desire is not your enemy! Desire is the first step of all creation! Suffering does not come from desire! Suffering comes from not believing that you can create that desire! Believe and receive, doubt and go without! May the Goddess Freyja suspend you on her wings of grace and truth to discover and manifest your dreams! Infinite blessings of immense abundance! ZaKaiRan www. YourWishIsYourCommand.co GODDESS OF AVALON Magazine
33
Good Vibrations
S’Roya Rose is a Celebrity Psychic, Practicing Priestess & Healer. She runs metaphysical courses & meditations regularly. Her website:
Movie Review: Gabriel
The movie Gabriel depicts ‘purgatory’ quite graphically... the realm between Heaven and Hell, where the two forces – light, being the Archangels, and dark, being the Fallen – battle for our souls. I love the way it keeps to its theme and you never know what is happening until the end... Great cliffhanger ending... a must-watch when wanting something different. It goes something like this... Fallen angel Sammael has claimed the midworld in the name of darkness. With the help of his menacing, gun-toting task force, he has smothered the dark, dreary city in vice, violence and cruelty. Sammael’s victory is assured – until the arrival of Gabriel, the last of Heaven’s seven archangels. Gabriel is young, strong and the mightiest warrior seen since Michael, his predecessor, disappeared. In no time, Gabriel’s bravery and skill threaten to decimate Sammael’s evil henchmen and redeem Jade, the lost angel with a heart of gold. But Sammael has one last card to play: the secret of his own past, the knowledge of which could destroy Gabriel forever... enjoy!
Vibrational Healing Tool: Vajra

I stumbled across these a few years ago whilst travelling through the UK. There are many different types of Vajra. Each Vajra is a combination of a 7” Buddha Maitreya Etheric Weaver® with two 24K gold-plated or silver-plated sacred geometric orbs on either end. There are 8 geometric forms available, creating a wide variety of different Vajras to choose from. Use your intuition when selecting a Vajra and try the one you are most drawn to. This allows for a more soulful, rather than intellectual, choice. As pictured, the forms come from the platonic solids, the divine building blocks found in all matter and every Kingdom of nature. Each Vajra radiates the blessings of Buddha Maitreya to awaken the Soul and heal the personality. Prices vary from $250 to $1,950 USD. Available from
Metatron Vajra $415 USD
Goddess Book Club
There’s nothing like getting together with like-minded Goddess women to share stories and thoughts. However, starting your own Goddess group in your area can feel like a huge task, especially if you don’t have a lot of experience in organising an event or meeting. There are all sorts of things to consider – booking a venue, venue hire (if applicable), advertising costs, insurance, little to no budget, activities, etc. It can be daunting.

However, an easy and inexpensive way to get a group together is to create a Goddess Book Club. Most of the participation takes place at home, so there are no venue hire fees. Once a month (or more if you wish) you can arrange to meet in a coffee shop or local park / barbecue area to discuss the current book. As opinions are expressed and life stories shared, the group will not just be learning more about the books, but each other as well. From there, other activities and events may blossom. Perhaps you are reading a book that features the Goddess Kuan Yin and a member of the group has visited one of her temples. You could invite that member to bring photos or anything she would like to share of her visit. You may also choose to simply stick to reading and discussing the books. It’s entirely up to you and your group. You’ll be expanding your book collection and knowledge of Goddess authors and practices. Feel free to ask the group to make suggestions on book titles or authors.
You do not need to feel that you alone (as the organiser) have to decide on the books. You could also invite the person who suggested the book you are currently reading to prepare some questions to ask the group when it’s time to start off the discussion. Some author suggestions to get you started: Starhawk, Judith Duerk, Clarissa Pinkola Estes, Jane Meredith, Sue Monk Kidd, Charlene Spretnak, Kathy Jones, P.C. Cast and Jamie Sams.

Tips for Running a Goddess Book Club
1. Only choose books that are easily attainable.
2. Do you have some books piled up that you haven’t gotten around to reading yet? This is the perfect opportunity to start on them.
3. Choose your books well in advance so that everyone has ample opportunity to find a copy.
4. Make sure to give everyone adequate time to read their books.
5. Have some questions on hand to begin the book discussion.
6. Decide if the books will be fiction or non-fiction – or both.
7. Allow the group to make suggestions on book titles or authors – you do not need to feel that you alone (as the organiser) have to decide on the books.
8. Stick to Goddess books (being that it’s a Goddess Book Club). If you deviate from them, you risk members dropping out.

Zahira Atkins
Zahira is co-facilitator of Living Goddess workshops in Tasmania, and the current President of the Goddess Association In Australia. )O(
Angelic Messages
There are Seven Main Archangels and Healing Rays for mankind. They each work with our seven main chakras within our templates. Each Archangel can resonate the different colours of the entire light spectrum, and this light essence is alive and is the resonation of pure love. They are called Archangels because they were created by God to arch around planets. There are Seven Main Archangels that hold the Seven Healing Rays for mankind in this Earth Realm. Depending on the various Ages of Mankind and the various cultures and concepts, these Archangels have been known through time by different names. However, even though their names may have changed, the Main Seven are the same Archangels, and they have been with us through time, helping us on our journeys with every sacred step that we take. There are many more Archangels in the Earth Realm; they all resonate on different light rays, and they all have different Divine Purposes.
Goddess Consciousness

This painting vibrates on the Ethereal Pink Ray and is the feminine aspect of the Divine energy of God – the Transcendental Point / Enlightenment. Whether it is Our Lady of The Light, the Kannon Buddha of Compassion, Isis, Maeve, Lakshmi, Athena, Kwan Yin, Aphrodite, White Tara, Green Tara, Mawu, Mother Mary, Ixchel, Kali, Sedna, Dana, Sarasvati, Yemany or Ishtar (just to name a few), this is just your own truth that you have come to experience and your own way of conceptualising the Higher Goddess powers that be – by honouring the Goddess energy within all of creation, within All That Is. It is the Energy of Birth and Creation. You are created from Love and Light.
ANGELIC MESSAGES FROM THE Goddess
The Goddess wishes you to know that you are being showered with the Pink Ray of Love. This will help you with nurturing and honouring the Divine Feminine.
This energy needs to be in a state of flow and movement. By being in your own Divine feminine state of Being. Allow this aspect of your feminine energy to be present and to flow with loving grace and dignity. This is needed to bring balance and acceptance not only of others but of yourself. Open your heart, there is nothing to fear. The most powerful force in the Universe is Love. Love is in the Air.
Lit-El Star is a healer & visionary artist who creates Angelic Portals for the Angelic Realms & Ascended Masters of light & love. The divine purpose of these Windows From Heaven™ is to facilitate heavenly experiences for you. Lit-El Star uses The Angelic Aquas® Vibrational Healing Waters together with crystal pigments & water-based acrylic paints. Check out her new APP on her website.
Animal Dreaming BY SCOTT ALEXANDER KING
Red Panda
Compassion
Sharing the humble, modest and empathetic energy of the Giant Panda, the Red Panda is said to resonate to the energy of Quan Yin, the goddess of compassion and mercy. The Red Panda hears and responds to the cry of all beings. Comfortable with its emotions and unafraid to show them, the Red Panda strengthens courage, enhances creativity and offers a keen and rapid force that brings light into our life and illuminates our path. Red Panda stimulates, quickens and transforms all who are drawn to explore it, while affording enlightened spirituality, wisdom, strength and the divine power of personal transformation. Quan Yin is said to have promised the people that she would not return to Heaven until all living things had discovered and honoured their purpose, while the Red Panda sits midpoint between Heaven and Earth, perched high in the trees, looking down compassionately upon the people.

Because Red Panda has humbly wandered into your life today, it’s probably because you find it hard to punish, blame or accuse anyone, because your heart is free of vengeance and self-importance. Although ever watchful, Red Panda has turned up today to inspire you to grow and heal. It comes promising to fill you with an overwhelming sense of empathy whenever you lose sight of your own divinity and connection to Spirit, and to remind you of the sacredness found in all life and the inherent relationship you share with the world around you. Red Panda energy will eventually deliver you from a place of ego into a state of humility and gratitude whenever you find yourself taking things for granted or when you’ve forgotten to show appreciation for the beauty in your life. So, when Red Panda appears, it’s time to nurture and integrate the beauty and compassion of Quan Yin into your life, while encouraging others to integrate it into theirs. If you yearn to see the world become a better place, with Red Panda’s help you will make your yearning a reality.
Scott Alexander King was born to love animals. He spent much of his childhood observing, drawing and writing
about the animals he experienced, in the small, tattered journal he always carried in his pocket. He would record every animal he saw: where he saw them, what they looked like and what they were doing at the time. This was an activity he never tired of... so much so that, in essence, he’s still doing it today! Scott is the author of the best-selling, internationally recognised Animal Dreaming (a shamanic reference book and field guide that offers spiritual insights into over 200 native and introduced Australian animals and birds) and the ground-breaking Animal Dreaming Oracle Cards. Scott is available for workshops, seminars, readings and interviews. Visit Scott’s official website: or seek him out on Facebook
GODDESS OF AVALON Magazine
37
The Moon & You
Moon Goddess...and LOVE Sept, Oct, Nov 2013
Once upon a time there was a magickal galaxy. This galaxy was surrounded and overlooked by the queen of all the stars. This queen had a lovely solar system situated amidst this whirl of constellations. It spun in whirls too… everything whirled as it all spiralled into the centre. In the very centre, where everything met, a very special act was performed; a translation of troubles and fear, into love. This love was then returned back into the outer of the galaxy… you could say that a refining process was in action… so that one day only love would exist. A very special spirit did this too! In a place where all that existed was love, perhaps we would all be situated safely within the Kingdom of the Queen. Yes… wouldn’t that be lovely. We would all shine like stars and twinkle into eternity and forever, within our complete being.
As we enter Spring, we are approaching a Virgo New Moon on September 5th (9:36pm). Do prepare for this New Moon by thinking about where your spirituality is up to. Something to contemplate includes the healthiness of your emotions…

Healthy Positive Emotion vs. Unhealthy Extremity of Emotion:
Joy vs. Anxiety
Sadness vs. Grief
Caution for Safety vs. Recklessness or Worry
Peace of Mind vs. Over-thinking
Adaptability vs. Frustration or Anger
Security vs. Fear

When we are healthy it is natural to feel the range of healthy emotions. The ‘negative’ aspect is simply a healthy emotion in an extreme state. By contemplating where our emotions can be healthier, we can make a perfect Virgo New Moon WISH on the 5th. Do wish for the translation of all your troubles and fear to be turned into love. The more wishes you make, the more love created. Continue to gradually and consciously put your meditative qualities back together until the Pisces Full Moon on September 19th (9:12pm). We need to prepare spiritually so that we can twinkle as complete stars in heaven. So do keep bringing your conscious awareness back to your healthiness. As you become more of a healthy complete spirit, on and around the Pisces full moon, you will receive as many spiritual gifts as you planted and harvested in the lead up, so revel and bask in this gloriousness.

On October 5th (10:34am), we have a Libra New Moon. Let’s think about some wishes we can make within our partnerships…

Healthy Partnership Quality vs. Unhealthy Partnership Quality:
Feeling Plenty of Love vs. Feeling a Lack of Love
Happiness Sharing vs. Sharing with Difficulty
Flow of Joy and Harmony vs. Stunted Communication
Fun and Happy Times vs. Bitterness in Communication

It’s time to make some wishes upon these and any other qualities within your relationships that you would like to improve upon. Do certain areas of your communication affect you disharmoniously? Consider the areas that you would like to send into the pool of troubles and fear, to be translated into love. We’re trying to make as much love as possible, so the more wishes, the more love…

On October 19th (9:37pm), we have a Full Moon Lunar Eclipse in Aries. With both the Sun and Moon completely opposite, the Earth casts a shadow over our yin polarity. This causes a gentle ripple of change within the emotive aspects of our relationships. Changes should come together quite harmoniously with a child-like twist of fun.

On November 3rd (10:49pm) we have a New Moon in Scorpio with a total Solar Eclipse. Our emotional feelings will be clarified by the Sun, and highlighted by
Sheri-Elizabeth Vaccaro
Sheri-Elizabeth Vaccaro is a philosopher of the natural world. Her new ebook ‘Enchanted Healing Philosophy’ can be found on her website @
the position of the Moon. For this New Moon, do wait until the moment’s transition to be enlightened as to where you feel you would like to make new wishes, rather than prepare in advance. With the Sun and Moon unified together in the same house, expect to understand yourself with the ultimate definition of clarity. You will know your feelings better than you have done in a while. Do you feel comfortable? What is in between you and the comfort of your being? Now make your WISH…

On November 18th (1:16am), we have a Taurean Full Moon. Allow all transitions to slowly whirl into place. All wishes that you made and harvested for your supreme level of comfort within your spirit are coming into place. Just lap up the comfort and love. Do know that everything you have done within your own spirit, you have also done for the galaxy. All this extra love and ease that you feel has been a gift to the entire galaxy and the Queen. As we approach the end of Spring, do say thank you to the Queen for granting your wishes… perhaps we could call her our very own Faerie-Godmother.
We are excited to introduce the Lively Living Ultrasonic Aromatherapy Diffusers, the Aroma Joy. These diffusers are different to regular diffusers as they use advanced, eco-friendly ultrasonic technologies to provide 5-in-1 multi-functions: Aroma Diffuser, Humidifier, Ioniser, Air Purifier and Colour-Changing Night Light – all without the use of steam or heat, making them totally safe for everyone. They also come beautifully boxed and with a travel bag, making a perfect gift. Because no heating element is used, the essential oils are kept in their purest form. Normal heating methods interfere with the molecular structure of the essential oils and reduce their therapeutic qualities. You will immediately feel the difference by using an Aroma Joy diffuser to disperse your essential oils. The stylish Aroma Joy diffuser makes it easy to diffuse oils in any room of the home, even in the workplace. The Aroma Joy diffuser holds 80 ml of water, which will enable the diffuser to run for approx. 4 hours. It cuts off automatically when the water level is low. Safe and easy to use, you can fall asleep whilst using the Aroma Joy. The diffusers are available through leading gift, health and wellbeing stores, and available online. New stockist enquiries are invited.
Contact Julie 0412 521 118, E: julie@livelyliving.com.au
Book Now!
BALI
Goddess Retreat – Relax, Unwind & Rejuvenate on the island of the Gods!
Dates:
2nd - 9th Dec 2013
S’Roya invites you to join her Bali Goddess Time Out Retreat. Your investment is only $2550 (not including airfares). All accommodation & breakfast, meditations and yoga included. 7 days of Heaven you’ll never forget & will want to do regularly! Email: email@sroyarose.com Ph 0422 361 040 to book.
Kerrie Friend
Prayer
Stillness conjures up different meanings and emotions for all of us. We hear of praying for things to change – our family, situation, finances, health and so on. But what does praying really mean? It can mean totally different things to each of us. One person may be content sitting under a tree contemplating nature, another may need to be in a meditative position and be silent, while someone else may be unfulfilled unless in a church environment. Whatever the choice, one thing remains the same: it is a vital part of our existence and one that cannot be refused as a part of having a spiritual life. We are, after all, a spirit; we have a soul and we live in a body. We are tri-beings. And time spent in prayer is something that helps bring stability to our life. While success, family, health and relationships are all important facets of a beautiful life, an active prayer life is the only thing that gives us the silence within ourselves that each of us desperately needs.
Prayer is a technique for achieving unity with God. It is limitless life, substance and intelligence – a mixture of consciousness, rightly concentrated, directed, spiritually orientated positive thought or affirmation. It is the art of unifying ourselves with the creative source of good and the absolute divinity within us and around us. Prayer mobilizes us with divine power. The most important purpose of prayer is that we lift ourselves to a higher level of consciousness where our mind and body can be connected with all-sufficient God. It is not a way to turn on the light in God but a way to turn on the light in us. This existence lives in all of us, but we have to turn the switch on. Mostly we consider God to be outside of us, not realizing that God is also within us. This brings to light the fact that prayer is an inward, outward expression of our inner being; a gift each of us has been freely given.
The great writer of “The Prophet”, Kahlil Gibran, says, “For what is prayer but the expansion of yourself into the living ether”. Prayer conditions us to acknowledge a spiritual realm, allowing us to open ourselves to endless possibilities beyond what we can see with our natural eye. It gives us permission to ask for help as a spiritual being and claim what life has in store for us when we believe in its existence. Prayer should replace worry and anxiety, as they have no positive effect in our lives. They don’t bring solutions to problems or help us achieve good health, whereas prayer does.
Prayer can be illustrated as a plug in a battery being charged quietly, accepting the flow of energy. It gets into the depths of our inner being and connects us with our divinity. All we need to do is learn to accept the flow of energy that is being given. Prayer will dramatically improve your life. Your image needs it to become a part of your everyday life. It is sweet to the soul, vital to your spirit and immensely beneficial for your health. God bless.
Australian celebrity Kerrie Friend was co-host of hit TV shows Perfect Match and Wheel of Fortune. Kerrie and her husband founded Heaven on Earth Media in 2002. She is also an ambassador and spokesperson for a number of societies and authored ‘Image: 52 Weeks to a New You’. Visit:
Book & CD Reviews
A Journey of the Self – A Guided Meditation CD by Lizzy Rose
Music CD - The Ancient Ones by Kellianna
Journey of the Self is the debut guided meditation CD from Australia’s Celebrity Psychic Lizzy Rose. Based upon years of guiding groups of people in their meditations in her role as High Priestess of Witchcraft, Journey of the Self is an insightful contemplative incantation that wraps you in its gentle spell.
Kellianna is an American pagan singer-songwriter, an international artist performing songs and chants inspired by myth, magic, sacred places and ancient times. She will be appearing at this year’s GAIA Goddess Conference, 18-20th Oct in Sydney. With guitar and vocals she brings to life the stories of the Gods and Goddesses. Her latest album ‘The Ancient Ones’, released April of this year, is a celebration of all that is Pagan. Foot-stomping, heart-throbbing music that gets your ancestral blood pumping. Tunes and melodies that stay in your head as you go about your day. Kellianna’s voice has a real earthy power that calls to you in ways nothing else can. I first heard her sing in Glastonbury at a Faery Ball. Hers is a powerful voice that sends chills up your spine as you hear the call of the Ancients ringing out through her songs.
RRP $20.00
The Ocean Oracle by Susan Marte – RRP $39.00, App Store $7.49
The guidance and wisdom offered by the ocean and its surrounds is what forms the basis of The Ocean Oracle. Using Story in the interpretation of the cards allows the information and healing to go deeper into our being. The Stories have many meanings and, depending on the query, the meaning needed to be seen or heard will come forward. The Messages offer up areas to contemplate. Some of the messages will resonate; some won’t. Sometimes the messages may seem cryptic. That’s the nature of oracles.
The hour-long guided meditation, recorded in a single take, retains the natural rhythm of a live performance as she leads you slowly into your hidden depths to reveal your own inner Light. The guided meditation, called the Cutting the Ties meditation, has been among Lizzy Rose’s most popular meditational performances, and her fans have been requesting that she record it for them for over a decade. Combined with the new age backing track by Hellmut ‘The Wolfman’ Wolf, the gentle, hypnotic tones of this experienced High Priestess of the Craft are the perfect way to touch your centre and find a peaceful place in your life.
RRP $30.00
Wendy Rule CD collection
Music CD - Live on Castle Hill By Wendy Rule RRP $19.00 USD.
Music CD - The Wolf Sky By Wendy Rule
RRP $19.00 USD.
Released 2006. Earthy, wild and shamanic, The Wolf Sky is a celebration of the epic powers of Nature. Drawing deeply on Wendy’s extraordinary live shows that blur the line between ritual and performance, the album becomes a ritual in itself. Opening with Wendy’s renowned Circle Song – an invocation to the 4 Elements – and ending with her Elemental Chant, The Wolf Sky takes the listener on a powerful journey through the healing realms of the Underworld.
Music CD - Guided by Venus By Wendy Rule RRP $19.00 USD.
HerbLore
Elder by
Kellie Ashton
Elder’s use in medicine and magic dates back further than I’m sure is even recorded. There are mythologies and tales surrounding this mysterious tree which have survived even to today. It is said there is a dryad called Hylde-Moer, the elder tree mother, who inhabits the branches of the tree. Supposedly if any part of the tree was harmed or cut without the permission of the elder tree mother, she would haunt the offender’s family until the taken items were returned. Mrs Grieve tells of a very old passage which was said to be muttered over the elder before it was harvested: “Lady Ellhorn, give me some of thy wood, and I will give thee some of mine when it grows in the forest.” The tree possesses powerful protective qualities, especially for children. People used to hollow out the twigs of elder and make them into charms for protection, and hang the dried berries in their household to keep them safe from robbers and spirits. Like most
all herbs which are used magically for protection, they more often than not got this reputation because they are fantastic immune herbs or treatments for respiratory conditions, colds and flu. Elder is no different: people valued it as protective against illness, and so these medicinal energies, they found, could also be utilised in magic to protect them in everyday life against things such as grief, and psychic and physical harm. Elder is sacred to Midsummer, when our ancestors would incorporate the herb into their rituals by drinking elderberry wine, burning the dried berries or flowers as incense, making charms out of the twigs (with the elder tree mother’s permission, of course) and inhaling the sweet fumes of the flowers to induce a narcotic effect with which people would travel to ‘the land of fae’. She is ruled by the planet Venus and therefore also the Goddess Venus, as well as her Greek version, Aphrodite, and the Norse Goddess, Freyja. This tree possesses extremely powerful magic, and just as when harvesting from her earthy body, if you use her magic for selfish or impure means, usually the spell will either not work or backfire – or, if it does work, there is a larger lesson to be learnt from the outcome, which may not be the easiest lesson you have learnt. If you are aware of a grove of elders nearby, spend some time with these trees on the full moon to connect and converse with the tree spirits. Ask for their protection and guidance, perform scrying or toast some elderberry wine to the Goddess. Harvest some berries, flowers or twigs (with permission) and use them to protect your children. Elder is renowned as a funeral herb, as it was regularly used as the wood for
funeral pyres. I once attended a pagan funeral service and was asked to create an appropriate incense for the event. Elder flower was the hero of the mix, and it works fantastically in incenses: it is so full of oils that it clumps the mix together nicely, and as an added bonus, it smells delicious. Medicinally, this herb is a staple in my dispensary. I use it as part of a trio of herbs in a tea for colds and flu: yarrow, elderflower and peppermint. These three herbs together are a fantastic team which helps fight fever, is cooling, eliminates catarrh from the respiratory system and helps you to cough up all that yummy phlegm. Elder is specifically an upper respiratory herb and acts as an anti-catarrhal. This means it is particularly useful for sinusitis and sinus congestion, head colds, hay fever and blocked noses. Recent research has confirmed the invaluable qualities it possesses for fighting the influenza virus. The berries in particular, when taken medicinally, may even nip the flu virus in the bud when taken at the very early signs of onset. It may also lower the duration of the flu when the symptoms have already taken hold. The tea is yummy, the medicine is powerful (as is its magic) and the incense is divine. A must-have ally for your dispensary or apothecary!
To contact Kellie email kellie@hunternaturopath.com.au, follow her blog at or visit her Facebook page at
Namaste
A Centre for Holistic Wellbeing and Light Healings
Readings
Workshops Coaching
Reiki Meditation Circles
Therapies
Seminar Rooms
Sacred Space
Contact S’Roya email@sroyarose.com 3rd Floor Carrington House, Carrington Street, Lismore 2480
To book space or a session call
Shaman Eilee
Guide Drawings A4 size plus Reading ~ $45
Psychic Readings
0429 064 948
0422 361 040
SPRING WELLBEING EXPO Oct 5-6th!
Namaste
two day!
Kellie Ashton
Naturopath
Natural Healing & Wellness Therapies
Naturopathy, Massage therapist, Doula
Appointments: 0410841683
Michele-lee Phelan’s art is the key to opening and expressing the realms of earth, spirit, and the imagination. A painter of dreams, dragons, mythology, goddesses and faeries, Michele-lee promotes and sells her book, oracle decks, original artworks, and fine art prints via her website, Dreams of Gaia:
www.dreamsofgaia.com
‘Mystical art and illustration of Michele-lee Phelan’.
Love
Creativity
Wisdom
therootsofmanytrees.wordpress.com
Temple of the Dark Moon – Frances Billinghurst
Full Moon Gatherings
Alexandrian Wicca, Ceremonial Magick, Traditional Craft, Training Available
Apple Blossom
Fantasy Art & Design
Psychic Witch & Counsellor – Dream Interpreter, Tarot Readings, Reiki Healer, Medium
Mobile ~ 0419 872 418
ANIMAL DREAMING ORACLE CARDS
By Scott Alexander King, RRP $34.95. These cards are easy to interpret and are very accurate. There are four different sets of animals within the deck, each set representing a different element: Earth, Air, Fire and Water.
ZaKaiRan’s
Inspirational, Transformational, Divinity-Evoking, Divine Life Creation Mastery & Ascension tools... I hope they inspire you, evoke your Divinity and your Divine Mastery, and help you to remember how Awesome and Magnificent you truly are!
The Magical Sea Smudge Spray Now you can create a blessed atmosphere in your home, office, car or hotel room, with just one spray.
$24.99 +Postage (Anywhere in Aussie)
Call 0402704536
Mikailah
Shamanee Medicine Woman
Astrological Shamanic Star Guidance Conscious Dreaming Workshops
Gaia’s Garden – Creating Goddess Community
• Tarot Readings - Tricia
• Wheel of the Year Festivals
• Journeys with the Goddess
• Goddess Studies Australia
• GAIA Goddess Conference
• Personal Celebrations
Sword & the Serpent
by D G Mattichak Jr, RRP $20.80. The Sword & The Serpent is a unique book about magick that reveals many of the arcane secrets of the occult arts.
Book Early!
Avalon 2014 Goddess Tour
Book & Pay Online Now! Don’t miss this once-in-a-lifetime opportunity!
10 days you’ll never forget! Connect with the Heart of the Goddess in Avalon!
2014 Tour dates are:
Avalon, 15th - 24th of July. Unveil the Mystery within the Sacred Isle of AVALON – connect with the energy portals, vortexes & ley lines! Feed your inner Goddess & let the Lady of the Lake heal the ancestral wounds of the Sacred Feminine. Take part in S’Roya’s Goddess workshops and circles, and receive healing and counselling as you integrate your journey within. Rejoice, have fun, & embrace the way it was.
S’Roya invites you to join her Avalon Goddess 2014 Tour Your tour investment is $5800 *Deposit Only $800 Book your place & take advantage of the payment plan! Email: email@sroyarose.com
ut the
Check o
t Plan Paymen online
e availabyl arose.com | https://issuu.com/sroyarose/docs/goddessavalon_issue13a_print | CC-MAIN-2017-22 | refinedweb | 18,614 | 68.5 |
An IR remote control enables us to control devices from a distance through wireless communication. Home appliances like radios and TV sets can be a pain if you have to always get up from your seat to adjust the volume, change channels or even to turn on and off!
In this tutorial I will show you how IR transmitters and receivers work and give some example applications of IR remote control.
Why use Infrared light?
Infrared light is actually part of the electromagnetic spectrum and is similar to visible light. However, infrared light cannot be detected by the human eye because it has a wavelength of about 900nm, which lies just beyond the red end of the visible spectrum. Infrared literally means “below red”.
At the low power levels used in remote controls, IR light is safe to work with and has no harmful effect on the eyes or skin. This is one of the main reasons why this type of light is preferred for remote control purposes, since we can use it without seeing it! Another reason is that IR LEDs are very easy and cheap to make, which keeps remote control devices inexpensive.
How does an IR remote control work?
An IR remote contains an IR LED which produces pulses of infrared light in order to send signals to another device with an IR receiver for decoding the signals. An IR receiver contains a photodiode and an amplifier for converting the IR light signals to electrical signals.
The remote has different buttons whereby each button produces a different pulse when pressed. When you press a key on your remote, the transmitting IR LED will blink very quickly for a fraction of a second, transmitting encoded data in form of a pulse. This pulse can then be decoded uniquely by the IR receiver so that a specific action can be taken for example decreasing the volume of your radio.
Modulation of IR signals.
A major challenge that we face when using an IR remote control is that there are many other sources of infrared light. Almost everything that emits heat also emits infrared light, so the signals from a remote can suffer interference from anything from sunlight and indoor light bulbs to our own bodies. We therefore have to take some measures to ensure that the IR signal reaches the receiver without errors.
To solve this problem, the light pulses from the IR LEDs have to be modulated just like analog radio modulates a carrier wave to send a signal. Modulation involves making the transmitting IR LED blink with a particular frequency. The IR receiver will then be tuned to that frequency in order to ignore the noise signals from other infrared sources.
The diagram below shows a summary of how the IR remote control system works. The remote contains an encoder that converts a button press into a modulated pulse stream; on the receiving side the signal is demodulated and passed to a decoder, for example an Arduino.
Most IR remotes use a modulation frequency of 38kHz although other frequencies can also be used. There are very few other sources that have the regularity of a 38kHz signal, therefore an IR transmitter sending data at that frequency would easily stand out.
IR receivers are designed to look out for this modulated infrared light and to ignore the rest, hence filtering out the noise from other infrared sources that would otherwise saturate the receiver.
Another thing to keep in mind is that infrared is light and therefore it requires line-of-sight visibility for the best possible operation and can still be reflected by items such as glass and walls.
Troubleshooting a faulty IR remote control. (Detecting infrared light)
In case there is a problem with a project involving an IR remote control, it may be hard to determine whether the problem is with the transmitter or the receiver. This is because our human eyes cannot see infrared light. However, a mobile phone camera can!
To check if a remote control is working or not, just aim the IR LED of the remote at the lens of your mobile phone camera while viewing the screen. For a working remote, when any button is pressed, you will see the flashing IR LED on the phone screen. If no flashing is observed when pressing any button, the remote control may be faulty.
Decoding an IR Remote using Arduino.
The first part of this project involves decoding of the IR remote. This is done in order to know the control codes of your remote control because every button of the remote control generates a unique hexadecimal code that is modulated and sent to the IR receiver.
If you have the datasheet of a specific remote control, you can get the list of the codes corresponding to the various buttons from it. In the absence of a datasheet, however, we can use a simple Arduino program to read and display the codes of most common remote controls on the serial monitor.
This is done by first connecting the IR receiver alone to the Arduino board as shown in the schematic below. There are different types of receivers but all of them have three pins. In my case I am using the TSOP382 IR receiver and the pins are connected to the Arduino as follows;
Pin 1 to Vout (pin 11 on Arduino)
Pin 2 to GND
Pin 3 to Vcc (+5v from Arduino)
Code for decoding the IR remote.
The important library for working with an IR remote on Arduino is the IRremote.h library. Make sure you include this library in the Arduino IDE before writing any code for the IR remote control.
Download Library: IRremote.h
#include <IRremote.h>

int IRpin = 11;
IRrecv irrecv(IRpin);
decode_results results;

void setup() {
  Serial.begin(9600);
  irrecv.enableIRIn(); // Start the receiver
}

void loop() {
  if (irrecv.decode(&results)) {
    Serial.println(results.value, DEC); // Print the received 'results.value'
    irrecv.resume(); // Receive the next value
  }
}
First we create an object called irrecv, specifying the pin where the IR receiver is connected. Then we create an object called results, from the decode_results class, which will be used by the irrecv object to share the decoded information with our application.

To start the IR receiver, we call the IRrecv member function enableIRIn(). The function irrecv.decode() will return true if a code is received, and the program will then execute the code in the if statement. The received code is stored in results.value; in my case the code is going to be stored as a decimal value. At the end we call irrecv.resume() to reset the receiver and prepare it to receive the next code.
The above code is then uploaded to the board. Point the remote control to the receiver. Press the buttons and the respective codes will appear on the serial monitor.
From the photo above, the serial monitor shows the codes for the remote buttons as decimal values. The long values (repeat codes sent while a button is held down) are ignored and only the short ones considered. For example, the code for button 1 from above is 16724175.
After decoding the remote we can now be able to use it in a number of applications. For example in the control of the lighting of LEDs using Arduino. The setup is as shown below.
LED Control using IR remote with Arduino.
Code for controlling LEDs using IR remote.
#include <IRremote.h>

int RECV_PIN = 11; // the pin where you connect the output pin of the sensor
int led1 = 2;
int led2 = 4;
int led3 = 7;
int itsONled[] = {0,0,0,0};
/* the initial state of the LEDs is OFF (zero);
   the first zero must remain zero, but you can change the others to 1
   if you want a certain LED to light when the board is powered */

#define code1 16724175 // code received from button no. 1
#define code2 16718055 // code received from button no. 2
#define code3 16743045 // code received from button no. 3

IRrecv irrecv(RECV_PIN);
decode_results results;

void setup() {
  Serial.begin(9600);   // you can omit this line
  irrecv.enableIRIn();  // Start the receiver
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
  pinMode(led3, OUTPUT);
}

void loop() {
  if (irrecv.decode(&results)) {
    unsigned long value = results.value; // the codes do not fit in a 16-bit int
    switch (value) {
      case code1:
        if (itsONled[1] == 1) {      // if the first LED is on,
          digitalWrite(led1, LOW);   // turn it off when the button is pressed
          itsONled[1] = 0;           // and remember that it is off
        } else {                     // otherwise,
          digitalWrite(led1, HIGH);  // turn it on when the button is pressed
          itsONled[1] = 1;           // and remember that it is on
        }
        break;
      case code2:
        if (itsONled[2] == 1) {
          digitalWrite(led2, LOW);
          itsONled[2] = 0;
        } else {
          digitalWrite(led2, HIGH);
          itsONled[2] = 1;
        }
        break;
      case code3:
        if (itsONled[3] == 1) {
          digitalWrite(led3, LOW);
          itsONled[3] = 0;
        } else {
          digitalWrite(led3, HIGH);
          itsONled[3] = 1;
        }
        break;
    }
    Serial.println(value); // you can omit this line
    irrecv.resume();       // Receive the next value
  }
}
When the code above is uploaded to the Arduino board and the remote control is pointed towards the setup, the first led lights when button 1 is pressed, the second lights when button 2 is pressed and the third lights on pressing button 3. When button 1 is pressed again, the first led goes off and the result is the same for the other buttons and their corresponding leds.
Controlling a 5V 4 Channel Relay using an IR remote.
The infrared remote can be used to control high voltage appliances in homes, for example lights. This is possible with the use of relays. In this case I am using a 5V 4-channel relay module that I am going to connect to an infrared receiver, so that I can use the Arduino to control the relay module.
If you are not very sure of how to use the relay, you can check out my other tutorial on how to interface the 4 channel relay module with Arduino from the link below:
The connections are going to be made as shown in the schematic below.
Code for controlling relay with IR remote and Arduino.
#include <IRremote.h>

int RECV_PIN = 11; // the pin where you connect the output pin of the sensor
int relay1 = 2;
int relay2 = 3;
int relay3 = 4;
int relay4 = 5;
int relayState[] = {0,0,0,0,0}; // the initial state of the relays

#define code1 16724175 // code received from button no. 1
#define code2 16718055 // code received from button no. 2
#define code3 16743045 // code received from button no. 3
#define code4 16716015 // code received from button no. 4

IRrecv irrecv(RECV_PIN);
decode_results results;

void setup() {
  irrecv.enableIRIn(); // Start the receiver
  pinMode(relay1, OUTPUT);
  pinMode(relay2, OUTPUT);
  pinMode(relay3, OUTPUT);
  pinMode(relay4, OUTPUT);
}

void loop() {
  if (irrecv.decode(&results)) {
    unsigned long value = results.value; // the codes do not fit in a 16-bit int
    switch (value) {
      case code1:
        if (relayState[1] == 0) {      // if the first relay is off,
          digitalWrite(relay1, HIGH);  // turn it on when the button is pressed
          relayState[1] = 1;           // and set its state as on
        } else {                       // else, if the first relay is on,
          digitalWrite(relay1, LOW);   // turn it off when the button is pressed
          relayState[1] = 0;           // and set its state as off
        }
        break;
      case code2:
        if (relayState[2] == 0) {
          digitalWrite(relay2, HIGH);
          relayState[2] = 1;
        } else {
          digitalWrite(relay2, LOW);
          relayState[2] = 0;
        }
        break;
      case code3:
        if (relayState[3] == 0) {
          digitalWrite(relay3, HIGH);
          relayState[3] = 1;
        } else {
          digitalWrite(relay3, LOW);
          relayState[3] = 0;
        }
        break;
      case code4:
        if (relayState[4] == 0) {
          digitalWrite(relay4, HIGH);
          relayState[4] = 1;
        } else {
          digitalWrite(relay4, LOW);
          relayState[4] = 0;
        }
        break;
    }
    irrecv.resume(); // Receive the next value
  }
}
I have demonstrated the use of IR remote control in some other projects, like controlling a robot car and a stepper motor. You can check them out for further practice.
2018-12-05 21:28:13 8 Comments
I'm new to JavaFX, and I'm sure I'm missing something obvious here.
I'm trying to make an animation of a car that travels from left to right across a window, wrapping around when it reaches the right side. The user should be able to hit up/down to adjust the speed of the animation. I had the animation going when I used a PathTransition object, but found out you can't adjust the Duration of a PathTransition, so I redid it in a Timeline.

With Timeline, though, I'm stuck. The car does not display on the screen when I launch the application. Here's what I'm hoping is a concise code snippet:
public class Project12 extends Application {

    public static void main(String[] args) {
        launch();
    }

    @Override
    public void start(Stage primaryStage) {
        //Create Pane to hold the car animation
        Pane pane = new Pane();

        //Create the RaceCarPane
        RaceCarPane raceCar = new RaceCarPane();
        pane.getChildren().add(raceCar); //Adds the race car to the main pane

        //Create the VBox to hold components
        VBox displayPane = new VBox();
        displayPane.getChildren().addAll(pane, userInstructions, btnPause);
        displayPane.setSpacing(15);
        displayPane.setAlignment(Pos.CENTER);

        //Create scene for display and add the display pane
        Scene scene = new Scene(displayPane);

        //Add the scene to the stage and display
        primaryStage.setTitle("Project 12");
        primaryStage.setResizable(false); //disable resizing of the window
        primaryStage.setScene(scene);
        primaryStage.show();
    }
}
And the RaceCarPane:
public class RaceCarPane extends Pane {
    //Declare origin for determining polygon point locations
    private double originX = 10;
    private double originY = getHeight() - 10;

    private Timeline carAnimation;

    //Set the Timeline for the car in the constructor
    public RaceCarPane() {
        carAnimation = new Timeline(
                new KeyFrame(Duration.millis(100), e -> moveCar()));
        carAnimation.setCycleCount(Timeline.INDEFINITE);
        carAnimation.play();
    }

    private void paint() {
        //Create a polygon for the body
        Polygon body = new Polygon();
        body.setFill(Color.BLUE);
        body.setStroke(Color.DARKBLUE);

        //Add points to the body
        ObservableList<Double> bodyList = body.getPoints();
        /* (code omitted, just adding coordinates to the ObservableList for all
           parts. I don't believe the bug is here since it displayed when I was
           using a PathTransition animation) */

        //Add to pane
        getChildren().addAll(body, roof, frontWheel, rearWheel);
    }

    public void setOrigin(double x, double y) {
        this.originX = x;
        this.originY = y;
    }

    @Override
    public void setWidth(double width) {
        super.setWidth(width);
        paint();
    }

    @Override
    public void setHeight(double height) {
        super.setHeight(height);
        paint();
    }

    public void moveCar() {
        //Check that the car is in bounds
        if (originX <= getWidth()) {
            originX += 10;
            paint();
        } else {
            originX = 0;
            paint();
        }
    }
}
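As an aside for anyone debugging something similar (not part of the original post): the wrap-around arithmetic in moveCar() can be exercised in isolation with plain Java, no JavaFX required. The pane width and step values below are made-up test inputs; note that a Pane reports a width of 0 until it has been laid out, which makes the wrap branch fire immediately.

```java
public class MoveCarModel {
    // Same branch structure as moveCar(): advance by 10 until the car
    // passes the right edge, then wrap back to 0.
    static double step(double originX, double paneWidth) {
        if (originX <= paneWidth) {
            return originX + 10;
        } else {
            return 0;
        }
    }

    public static void main(String[] args) {
        // Normal case: a laid-out pane 200px wide.
        if (step(0, 200) != 10) throw new AssertionError();
        if (step(210, 200) != 0) throw new AssertionError();

        // Before layout, getWidth() is 0, so the position just
        // oscillates between 10 and 0 instead of sweeping across.
        if (step(0, 0) != 10) throw new AssertionError();
        if (step(10, 0) != 0) throw new AssertionError();

        System.out.println("moveCar model checks passed");
    }
}
```

The logic itself is sound for a laid-out pane, which supports the idea that the fault lies elsewhere in how the Timeline-driven repainting interacts with the scene graph.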
EDIT: Per @Jai's comment below, my solution was to revert to the PathTransition object and use its rateProperty bound to a SimpleDoubleProperty. While maybe not what the project was looking for as a solution, it does the trick, so I'm happy!
@Jai 2018-12-06 02:35:27

@swpalmer was right to say that you shouldn't add it repeatedly.

That aside, you could definitely use PathTransition via Animation.rateProperty() (PathTransition extends Animation).

Also, as the PathTransition documentation notes, the transition animates a node by updating its translateX and translateY properties. Therefore, if you are using Timeline, you should also be setting the translateX and translateY of the whole node (i.e. RaceCarPane) to perform the animation; trying to add polygons repeatedly is definitely the wrong approach. If you were also adding a lot of polygons when you were using PathTransition, you were probably not doing it quite right as well. Even when the animation turns out to be visually correct, it doesn't always mean that it is done correctly.
@meyer9168 2018-12-06 03:09:25

Hmm. While I looked into the documentation for PathTransition, I didn't think to check out what I could do from Animation. If I can use the observable rate and just update the rate value instead of the immutable Duration, then I should be golden. I'll give this a shot tomorrow and update if it does the trick. Thanks!
@meyer9168 2018-12-06 15:24:05

It works! Still no idea why Timeline wouldn't work for me, but using PathTransition and then adjusting its rateProperty per a SimpleDoubleProperty does the trick. Now I can wrap this up and finally get to studying for next week's finals, haha...
@swpalmer 2018-12-05 21:46:38

It looks like you missed a closing brace in the paint method.

There are a few things "wrong". For one, you should not be creating new objects and adding them to the scene graph every time you want to move your car. You've also left out how you use originX, originY.

I suggest you use a Group to hold the parts of the car. Add it once to the RaceCarPane and use a transform on the group object to move it.

This would probably be easier if you based it on an AnimationTimer instead of a Timeline, particularly if you want to be able to dynamically adjust the speed of the car.
@meyer9168 2018-12-05 21:51:08

I missed that closing brace when I was editing out unnecessary code. Those edits were where I used originX/Y; they're just origin points for determining where the Polygon shapes that build the car come from. The car drew just fine when I was using PathTransition, so I'm 90% sure that my problem isn't with drawing the car itself. I can look into an AnimationTimer, but that's not covered in our textbook, so I can only assume that I'm supposed to figure out how to do this with Timeline.
@swpalmer 2018-12-07 01:57:20
@meyer9168 Modifying coordinates in your Polygon is not the right way to do this. You want to define a static object using fixed points for the polygon that is shaped like a car and then move where that is drawn as a group. You want one car node in the scene graph hierarchy that you translate as needed. | https://tutel.me/c/programming/questions/53641041/javafx++timeline+animation+object+not+displaying | CC-MAIN-2019-09 | refinedweb | 1,142 | 60.04 |
Bionic: support for Solarflare X2542 network adapter (sfc driver)
Bug Description
[Impact]
* Support for Solarflare X2542 network adapter
(Medford2 / SFC9250) in the Bionic sfc driver.
* This network adapter is present on recent hardware,
at least HP 2019 and Dell PowerEdge R740xd systems.
* On recent-hardware deployments that would rather use
the Bionic LTS / GA supported kernel and cannot move
to HWE kernels this adapter is non functional at all.
[Test Case]
* The X2542 adapter has been exercised with iperf3 and nc
across 2 hosts on 25G link speed w/ MTUs 1400/1500/9000
on both directions, for 1 week.
Its performance is on par with the Cosmic 4.18 kernel
(which contains all these patches) and the out-of-tree
driver from the vendor.
* The 7000 series adapter (for regression testing an old model,
supported previously) has been exercised with iperf and netperf
(TCP_STREAM, UDP_STREAM, TCP_RR, UDP_RR, and TCP_CRR) in one
host (client/server in different adapter ports isolated with
network namespaces, so traffic goes through the network switch),
on 10G link speed on MTUs 1500/9000, for 1 weekend.
No regressions observed between the original and test kernels.
[Regression Potential]
* The patchset touches a lot of the sfc driver, so the potential
for regression definitely exists. Thus, a lot of consideration
and testing happened:
* It has been tested on other adapter which uses the old code,
and no regressions were found so far (see 7000 series above).
* The patchset is exclusively cherry-picks, no single backport.
* The patchset essentially moves the Bionic driver up in the
upstream 'git log --oneline -- drivers/
- since commit d4a7a8893d4c ("sfc: pass valid pointers from efx_enqueue_
- until commit 7f61e6c6279b ("sfc: support FEC configuration through ethtool")
- except for 2 commits (not needed / unrelated)
- commit 42356d9a137b ("sfc: support RSS spreading of ethtool ntuple filters")
- commit 9baeb5eb1f83 ("sfc: falcon: remove duplicated bit-wise or of LOOPBACK_SGMII")
- plus 2 more recent commits (fixes)
- commit 458bd99e4974 ("sfc: remove ctpio_dmabuf_start from stats")
- commit 0c235113b3c4 ("sfc: stop the TX queue before pushing new buffers")
Regression test results/log/script,
for documentation purposes.:/
Regression testing done on an older/previously supported adapter, SFC 7000 series.
The netperf suite of TCP/UDP STREAM and RR, and TCP_RR ran for ~2 days,
with results in the same ballpark as the original kernel and test kernels.
Now waiting for test results with the new/requested adapter
before marking verification done/successful.
Summary: test name, MTU sizes, then results for the original (ORIG), test (TEST), and proposed (PROP) kernels.
TCP_CRR
1500/1500
ORIG 4550-4560
TEST 4550-4580
PROP 5260-5316
9000/9000
ORIG 4557
TEST 4570
PROP 5260-5300
TCP_RR
1500/1500
ORIG 32531
TEST ~31k,32k
PROP 32180-34277
9000/9000
ORIG 31620
TEST 27k-30k-36k
PROP 27k-33k-34k
TCP_STREAM
1500/1500
ORIG 9406
TEST 9403
PROP 9405
9000/9000
ORIG 9883
TEST 9887
PROP 9887
UDP_RR
1500/1500
ORIG ~36k/~37k
TEST ~36k/~37k
PROP ~36k
9000/9000
ORIG ~35k/~37k
TEST 33k-37k
PROP ~35.8k/~36.6k
UDP_STREAM
1500/1500
ORIG 8.6k/8.9k
TEST 8.9k
PROP 8.6k/8.7k
9000/9000
ORIG 8.7k
TEST 8.7k/8.8k
PROP 8.7k
Verification done on the new adapter, Solarflare X2542.
The tester reports that iperf3 tests ran solid for 24h-72...
Attaching debian-installer debdiff to update the kernel version
so that the netboot images include this driver update on Bionic.
The kernel version eventually settled on is 4.15.0-62 after bug
searching on versions after 4.15.0-58 (which releases this 'fix').
This builds correctly on all architectures on PPA [1].
[1] https:/
Sponsored in Bionic.
Hello Mauricio, or anyone else affected,
Accepted debian-installer.
Verification successful with netboot image from bionic-proposed:
http://
Same testing as done in comment #8.
The verification of the Stable Release Update for debian-installer has completed successfully.
[Bionic][PULL] sfc: patches for LP#1836635
https://lists.ubuntu.com/archives/kernel-team/2019-July/102196.html
The database has been created and an interface class (DAO) has been created. Now I need to get some data into the system. To do that I need some kind of data input screen that will take in text and multimedia files (images, videos, etc.) In the interests of making things easy on the user I didn’t want to use an open file dialog. I find that they are unpleasant to use, especially for non-technical users. Drag and drop is much more intuitive.
According to the hype, it’s very easy to drag and drop files from a user’s desktop into an AIR application. I figured a five minute Google would give me a dozen or so examples of how to do it. Well, yes and no. There were many examples (875,000 returns on AIR drag and drop), but most were written for the Beta versions of AIR. It seemed like every sample I found used the DragManager object which (I found out later) was transformed into the NativeDragManager object when AIR was released. Other samples were so complicated that peeling back the layers to a simple sample took more time that it was worth.
What I need to do is really quite simple. Have an image field onto which a user can drag and drop an image file. Once the file was dropped onto the field I want the image to change to the new one and the file contents to be available in binary form. The last requirement is so the file can be uploaded to the database.
Step 1 – Create a target for the operation – an image field:
<mx:Image
Step 2 – Add a listener for both the drag and the drop operations. This was done by adding a simple function that is called on the createComplete event of the canvas:
public function initDrop():void{
//Listeners to manage the drop of files on the first panel
displayImage.addEventListener(NativeDragEvent.NATIVE_DRAG_ENTER,acceptDrag);
displayImage.addEventListener(NativeDragEvent.NATIVE_DRAG_DROP,dragDrop);
}
Step 3 – Add some code to allow the drag operation. Nothing too complicated here; its just a function that allows the drag operation into the image field:
//Accept drag and drop on the first panel
public function acceptDrag(event:NativeDragEvent):void{
NativeDragManager.acceptDragDrop(displayImage);
}
Step 4 – Add some code to handle the drop operation. Here I want to change the image to the new file and to put the contents of the file into a byte array. To prove to myself that it was working, I echoed the file size to the screen. This will be replaced later with a call to the DAO for a database update:
import flash.filesystem.File;
private var imageBytes:ByteArray = new ByteArray(); //the byte array can not be null!!!
…..
//On drop collect the files
public function dragDrop(event:NativeDragEvent):void{
var imageList:Array = event.clipboard.getData(ClipboardFormats.FILE_LIST_FORMAT) as Array;
var droppedFile:File = imageList[0];
//load the file into the image
displayImage.source= droppedFile.url;
//get the byte array for the image
var fileStream:FileStream = new FileStream();
fileStream.open(droppedFile, FileMode.READ);
fileStream.readBytes(imageBytes);
fileSize.text = imageBytes.length.toString();
}
There are a couple of things to note about the dragDrop function. First, the result of the drag and drop is always an array of files, because the system allows you to drag more than one file at a time (very useful in some circumstances). I took the cheap and cheesy route of just using the first element in the array.

Second, I'm not using a Loader object to change the image contents, as is suggested on many web pages. I tried this at first and the image was not replaced; instead, the second image was overlaid on top. I fiddled around a bit with the image layers but I couldn't quite get it to work properly. Simply re-setting the image source was a simple solution to the problem.
There you go. Not much to it, but it works well and functions properly with the 1.0 release of AIR.
I split this bit of the application into a separate app so you can download it without all of the rest of the overhead. You can find it here. | http://blogs.adobe.com/steampowered/2008/12/drag_and_droop.html | CC-MAIN-2015-48 | refinedweb | 705 | 63.7 |
#include <libcgroup/groups.h>
#include <features.h>
#include <stdbool.h>
Flags for cgroup_change_cgroup_uid_gid().
Flags for cgroup_register_unchanged_process().
Applications can use the following functions to simply put a task into a given control group and to find the groups where a given task is.
libcgroup can move tasks to control groups using simple rules loaded from a configuration file.

See the cgrules.conf man page for the format of the file. The following functions can be used to load these rules from a file.
See the cgrules.conf man page for the format of the file. Applications can move tasks to control groups based on these rules using the following functions.
Users can use the cgrulesengd daemon to move tasks to groups based on the rules automatically whenever a task's UID, GID or executable name changes.
The daemon allows tasks to be 'sticky', i.e. all rules are ignored for these tasks and the daemon never moves them.
Use cached rules, do not read rules from disk.
The daemon must not touch the given task, i.e. it never moves it to any control group.
How Do I Hate Thee?
November 3, 2004
For a group of people who spend so much time working with and talking about XML, it's not surprising that the members of the XML-DEV mailing list know exactly what it is that they dislike about XML. Over the last week, we've had a festival of complaint about and hate of XML's misfeatures. But amongst the bile, there's also a very interesting debate: what exactly is it about namespaces that makes people mad?
Let Me Count the Ways
Two of XML-DEV's stalwarts, Mike Champion and Len Bullard, started the trouble. Champion posted to the list observing that five years on from the XML simplification effort in 1999, it might be time to revisit the topic.
So, five years later ... is it NOW time to think seriously about cleaning up the core XML specs to address the challenges that real-world non-XML geeks face?
As usual, Champion's mail sparked a lengthy series of responses, not all of which can be covered in this article. What I will follow is the entertaining thread about XML's faults provoked by Len Bullard. In fitting with the season, Bullard somewhat impishly suggested everybody list their top five problems with XML. Everybody loves lists, and the results should turn out to be interesting. Bullard forecast that the real problem would be XML namespaces.
Of the cases presented, isn't the really gnarly one namespaces? In other words, if the edges of that were tidied, how much pain would go away?
Robin Berjon was the first to oblige, picking on DTDs as his bête noir.
- DTDs
- other legacy cruft
- DTDs
- more legacy cruft you always forget is there
- DTDs
Bill de hÓra again mentioned DTDs, but had a broader mix in his top five:
- Default namespaces
- DOM
- No Clark notation in XPath (or XML)--see 1 for details.
- Whitespace
- DTDs
While Robin Berjon agreed that the namespace notation was a pain in XPath, he didn't want to see "Clark notation" (where the namespace URI is written in full) but instead use of the xmlns() XPointer scheme.
Eric Hanson picked up on one of my own wishlist items, XML packaging, though I'm not sure I'd characterize it as a major flaw.
Sean McGrath has had more opportunity to polemicize over the faults of XML than most, and unsurprisingly, his list of faults has a broader outlook. Now, if you asked XML-DEV for a list of five things and were actually expecting to get replies with five unique items, you'd be crazy. So McGrath gave six and a neologism or two to boot.
- The lack of sane, simple roundtrippability. I read in some XML, I write it straight back out again. I [lose] stuff on the way ...
- Namespaces--specifically defaulting and the "declare 'em anywhere you like buddy" aspects.
- No sane, simple pull based XPath 1.0 subset.
- W3C XML Schema--pretty much everything about it.
- Doctype. We should have left assertions about schema compliance (and consequently the entire idea of an embedded document type declaration subset) on the clipping room floor...
- Fuzziness over the use of terms like "XML parser" and "XML Editor" and "XML aware" and "XML compliant" ... Interop problems are the inevitable result.
Rick Jelliffe also picks up on the conformance issue and the general pain around DTDs. Jelliffe comes from a document-oriented XML processing background, so his list of XML faults brings a different perspective to the debate.
- Needs adjusted conformance levels: no-DTD, or DTD+validating ...
- Needs to reserve ISO standard entity names with ISO meanings, so that no-DTD processors can be used in the publishing industry ...
- Need to have namespace-aware DTDs. Even just to allow that @xmlns and @xmlns:* do not need to be declared in the DTD would be a giant step forward ...
- Needs xml:space="strip" for use with no DTD.
- W3C needs to endorse ISO Schematron ...
- Whingers who dissipate real opportunities for change ... I certainly think it is time for XML 2.0, but to remove specific problems with the existing syntax, not to reduce the infoset or adopt some different syntax or disenfranchise publishing people further.
Jelliffe's list will certainly strike a fellow feeling with anyone who's ever written more than handful of documents in XML. His point of view is a welcome reminder of XML's role in the publishing world and the seeming blindness of the W3C working groups to its applications there.
So far, the sword of simplicity dangles dangerously over both namespaces and DTDs. But what else is on the chopping block? A recent weblog post from Derek Denny-Brown, a Microsoft developer working on XML products, attempted to document where "XML goes astray." In a fascinating post, Denny-Brown explains the difficulties of XML, designed as a document format, applied in data scenarios.
Dare Obasanjo helpfully pulled out the main points from Denny-Brown's article.
- XML's treatment of whitespace confuses developers.
- The limitation in the range of allowed characters in XML is a hassle which the Microsoft XML team sees customers complain about on a weekly basis.
- Namespaces are close to a disaster [but not quite, that dubious honor goes to W3C XML Schema]
Elliotte Rusty Harold however was unequivocal in his disagreement with Denny-Brown.
This article is absolute crap, and a typical example of Microsoft think. It blames XML for the very problems Microsoft created and which don't exist in other tools and on other platforms.
He goes on, in a similar vein, to assert that many of the problems Microsoft's customers face are due to a misunderstanding of XML as implemented in Microsoft's APIs, not problems with XML itself. Read the full post if you wish to steep yourself in vitriol. One thing that Harold picked up on that is worth mentioning is the second point as summarized by Obasanjo, the restriction on character ranges in XML 1.0, which would seem to be solved by XML 1.1. That is, assuming it wasn't a confusion between characters permitted in XML text content and XML names.
As the W3C's Liam Quin noted, we'll no doubt expect rapid deployment of XML 1.1 from Microsoft.
Now, onto the most oft-cited XML fault: namespaces.
What Exactly Is the Problem with Namespaces?
Adding his bugbears to the list, Robert Koberg mentioned that he doesn't see what the problem is with namespaces. Peter Hunsberger clearly doesn't agree, citing namespaces five times over as his favorite problem with XML. So, what exactly is the issue?
Joe English writes that his complaint is the hassle of carrying around a namespace URI and a local name.
Robin Berjon highlighted another common problem, the expectation generated by the use of URIs as namespace names.
People [think] the URIs resolve to something magical (which they should, but usually don't). Then they think that they inherit to descendant elements or to attributes. This is usually dealt with by repeating ten times over that namespaces are dumb.
Michael Kay explained that having something as fundamental as naming as an added extra to XML 1.0 was a bad idea.
Naming is architecturally fundamental: changing the naming architecture of XML by means of a bolt-on to the core standard was an incorrect layering that was bound to lead to many practical problems.
Kay continues, identifying more issues:
- It has always been ambiguous whether prefixes are significant or not.
- The indirection between prefixes and URIs makes the interpretation of many textual fragments (XML entities, XPath expressions, XQueries, even schema documents) context-dependent.
- The use of URIs as namespace names has always been fuzzy around the edges, as exemplified by the "relative URI" debacle.
If namespaces are so bad, can we live without them? Gavin Thomas Nicol thinks so, and said so in his list of XML bugbears:
- Namespaces (who *needs* them?)
- DTD's (should be broken out of the core)
- External Entities (not really necessary)
"But what about XSLT?" asked Robin Berjon. Nicol expands:
XSLT does not need namespaces as such, and could have got along fine with just alpha-renaming (i.e. like elisp packages), and even that wasn't strictly necessary.
History shows this isn't the first time Nicol has suggested this idea. Alpha-renaming is the process of rewriting names to scope them locally, but it's not entirely clear what Nicol is proposing. Any explanations will be gratefully received.
Michael Kay presented a more tangible solution, echoing Bill de hÓra's earlier wish for a "Clark notation" for namespaces.
Births, Deaths, and Marriages
The latest announcements from the XML-DEV mailing list.
- JAXP 1.3 RI is public on java.net
JAXP 1.3 Reference Implementation showcases a variety of new features in the Java API for XML processing, including DOM L3 Core, DOM L3 LS, SAX 2, XML 1.1, XInclude, and a new Schema independent validation framework
- freebXML CC
freebXML CC is "a set of tools developed to facilitate the work of domain experts managing data dictionaries," designed to work with ebXML Core Components and interoperate with ebXML implementations.
- W3C updated XQuery/XPath working drafts
The new working drafts include a number of changes made in response to comments received during the Last Call period that ended on Feb. 15, 2004.
Scrapings
In return, can I get my name spelt correctly? ... can I ... please? ... XML pedants in a swing state ... a firestorm of 297 messages to XML-DEV last week, Len rating 8% (firestarter!) ... more enforcement of schema philosophy in XML editors ... and more talk of namespaces not to everybody's taste ... the LMNL meme will never die.
Persisting and accessing data forms one of the most routine yet core functionalities of any application. In the world of JEE, there are many APIs as well as frameworks to achieve the same. The Spring Framework is no exception. This article will explain how to use this framework for persisting and accessing data in your applications.
Next let us see how to use Spring's JDBC support. Since JdbcTemplate is the 'lowest-level' of all the types, the steps required for JdbcTemplate become part of the steps for all other types. The following are the steps required to make use of JdbcTemplate:
1. Develop the bean
2. Configure the bean and DataSource
3. Develop the client
Of these, the second and third steps can be divided into sub-steps. Here are the details.
Develop the bean: The bean or POJO is similar to any other POJO used with the Spring Framework except for one difference. The POJO developed to be used with JdbcTemplate requires a setter for the DataSource object. The following is an example of a POJO that can be used with JdbcTemplate. The setter will pass the DataSource object to the instance of JdbcTemplate.
public class JdbcEventDao
{
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }

    public DataSource getDataSource()
    {
        return dataSource;
    }
}
The POJO can contain other methods that can work with the DataSource.
Configure the DataSource and the bean: The configuration is done using the XML file. Let's call it beans.xml. This step can be further divided into configuring the DataSource, and configuring the bean. We will look at the DataSource first.
A DataSource is configured by declaring it as a bean and providing the required information as the child nodes of the bean declaration. The configuration is done as follows:
First, a bean is declared whose class is mapped to an implementation of DataSource. One of the commonly-used implementations is org.apache.commons.dbcp.BasicDataSource. For example, to declare a bean with its class mapped to BasicDataSource class as the DataSource implementation class, the statement will be
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    ...
</bean>
Second, the required details for the DataSource such as driver name, URL of the DataSource, credentials etc. can be passed onto the DataSource through property tags. The property tags are the children of the <bean> tag. In versions prior to 2.5, the property tags had the <value> tag as the child tag. For example, to pass "com.mysql.jdbc.Driver" as the value of a property named "driverClassName," the statement in version 2.5 and above will be
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
</bean>
and in previous versions it will be
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName">
        <value>com.mysql.jdbc.Driver</value>
    </property>
</bean>
The next step is configuring the bean.
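Putting the two configurations together, a minimal beans.xml wires the DataSource into the DAO through its dataSource property (the bean ids, URL, and credentials below are illustrative):

```xml
<beans>
    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/events"/>
        <property name="username" value="user"/>
        <property name="password" value="secret"/>
    </bean>

    <bean id="jdbcEventDao" class="JdbcEventDao">
        <property name="dataSource" ref="dataSource"/>
    </bean>
</beans>
```

With this in place, the container calls setDataSource() on the DAO at startup, which is exactly the setter the POJO exposed in step 1.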
Options to configure a DataLoader.
#include <dataloader_options.h>
Definition at line 13 of file dataloader_options.h.
The number of worker threads to launch.
If zero, the main thread will synchronously perform the data loading.
The maximum number of jobs to enqueue for fetching by worker threads.
Defaults to two times the number of worker threads.
Whether to enforce ordering of batches when multiple are loaded asynchronously by worker threads.
Set to false for better performance if you do not care about determinism.
Whether to omit the last batch if it contains less than batch_size examples.
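Taken together, these options are typically set through chainable setters when constructing a loader. A sketch in C++, assuming libtorch's torch::data API (the batch size and the dataset named in the comment are illustrative):

```cpp
#include <torch/torch.h>

int main() {
  // Each option documented above has a chainable setter of the same name.
  auto options = torch::data::DataLoaderOptions()
                     .batch_size(64)
                     .workers(2)              // 0 would load on the main thread
                     .max_jobs(4)             // defaults to 2 * workers if unset
                     .enforce_ordering(false) // trade determinism for speed
                     .drop_last(true);        // omit a final short batch
  // A loader would then be created with
  // torch::data::make_data_loader(std::move(dataset), options);
}
```

The enforce_ordering(false) choice only matters with workers > 0, since with a single main thread batches are produced in order anyway.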
That's not correct.
That's because without them the variable is not defined.
MiKKaV
Use a break statement, which will exit the loop; you will then have the proper value for "newestLogFile".
mb...
File[] logFiles = someDirectory.listFiles();
List logFilesAsList = Arrays.asList(logFiles);
Collections.sort(logFilesAsList, new Comparator() {
    public int compare(Object o1, Object o2) {
        File f1 = (File)o1;
        File f2 = (File)o2;
        return (int)(f1.lastModified() - f2.lastModified());
    }
});
File latest = (File)logFilesAsList.get(logFilesAsList.size() - 1);
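On current Java versions the same "newest file" task can be written without a raw Comparator, using streams (a self-contained sketch; the directory and file names are invented):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Comparator;

public class NewestFile {

    // Returns the most recently modified regular file in dir, or null if none.
    static File newest(File dir) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null || files.length == 0) {
            return null;
        }
        return Arrays.stream(files)
                     .max(Comparator.comparingLong(File::lastModified))
                     .get();
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        File dir = Files.createTempDirectory("logs").toFile();
        new File(dir, "a.log").createNewFile();
        Thread.sleep(1100); // some filesystems store timestamps at 1s granularity
        new File(dir, "b.log").createNewFile();
        System.out.println(newest(dir).getName()); // b.log
    }
}
```

Comparator.comparingLong avoids the int overflow lurking in the (int)(f1.lastModified() - f2.lastModified()) subtraction above.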
Dear LAMMPS users and developers,
High time to start another open-ended discussion. This time I would like to discuss some recently added features in LAMMPS: support for creating outputs in YAML format for easy reading and post-processing with Python or other scripting languages. This originally started from discussions on adapting the example files in the LAMMPS distribution for regression testing.
One of the challenges is to reliably extract only the thermodynamic data for creating a reference and storing and analyzing them efficiently. There is a logfile analyzer tool written in Python as part of Pizza.py that can do the extraction part, but it is rather fragile and not fully compatible with the needs for testing. Based on what was written in the first part of 8.3.8. Output structured data from LAMMPS — LAMMPS documentation the idea arose to add a new thermodynamic output style that does the YAML format automatically. The structure of YAML files also makes it easier to extract only the contents in YAML syntax and skip over contents that are not thermodynamic output.
Another contributing factor was the request to have the option to rename the column headers, especially for computes and fixes, so they will be more descriptive of what data they contain. Combined with some necessary refactoring to modernize the code and get rid of some complexity those changes were all implemented recently.
There are two ways to enable YAML style thermo output: a) use thermo_style yaml, where you get a fixed set of properties similar to the default output; b) use thermo_style custom followed by thermo_modify line yaml.
Now extracting and plotting the data is extremely simple in Python when using the pyyaml, pandas, and matplotlib modules.
import re
import yaml
from yaml import CSafeLoader
import pandas as pd
import matplotlib.pyplot as plt

# extract YAML format part from log file
docs = ""
with open("log.lammps") as f:
    for line in f:
        m = re.search(r"^(keywords:.*$|data:$|---$|\.\.\.$| - \[.*\]$)", line)
        if m:
            docs += m.group(0) + '\n'
thermo = list(yaml.load_all(docs, Loader=CSafeLoader))

# convert list of lists to a pandas data frame and plot
df = pd.DataFrame(data=thermo[0]['data'], columns=thermo[0]['keywords'])
fig = df.plot(x='Step', y=['E_bond', 'E_angle', 'E_dihed', 'E_impro'],
              ylabel='Energy in kcal/mol')
plt.savefig('thermo_bondeng.png')
In combination with the thermo_modify colname option to rename columns, creating a plot of the thermodynamic data in high quality should be very easy. Certainly easier than with the older tools.
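Where installing PyYAML is not an option, the flow-style data rows that YAML thermo output uses are simple enough to pull apart with the standard library alone. A minimal sketch (the embedded log text below is invented for illustration):

```python
import ast
import re

# Invented stand-in for a log file containing one YAML thermo document.
LOG = """\
Some preamble from LAMMPS
---
keywords: ['Step', 'Temp', 'PotEng']
data:
  - [0, 300.0, -1234.5]
  - [100, 298.7, -1240.2]
...
Some trailing output
"""

keywords, rows = [], []
for line in LOG.splitlines():
    if line.startswith("keywords:"):
        # the keyword list is a valid Python literal
        keywords = ast.literal_eval(line.split(":", 1)[1].strip())
    elif re.match(r"^\s*- \[.*\]$", line):
        # each data row is a flow-style sequence, also a valid literal
        rows.append(ast.literal_eval(line.strip()[2:]))

print(keywords)    # ['Step', 'Temp', 'PotEng']
print(rows[1][0])  # 100
```

This only works because the thermo rows are flat flow-style sequences; for anything more nested, a real YAML parser is the right tool.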
In addition, we also now have a dump style yaml that can import data in a similar fashion, and work on fix ave/time and other averaging fixes to support YAML format output has started.
What is most compelling to me about these feature is that this is built entirely on well supported widely used support software (pyyaml, pandas, numpy, matplotlib) and since pandas uses numpy storage underneath it is also very effective and fast for processing large amounts of data.
What do people think about this?
Do you see any other applications that can be built on top of this, or parts of LAMMPS that could benefit from interfacing with YAML format data?
Are there alternatives worth looking into?
A multiple-tau algorithm for Python/NumPy.
Project description
Multiple-tau correlation is computed on a logarithmic scale (less data points are computed) and is thus much faster than conventional correlation on a linear scale such as numpy.correlate.
Installation
multipletau supports Python 2.6+ and Python 3.3+ with a common codebase. The only requirement for multipletau is NumPy (for fast operations on arrays). Install multipletau from the Python package index:
pip install multipletau
Usage
import numpy as np
import multipletau
a = np.linspace(2, 5, 42)
v = np.linspace(1, 6, 42)
multipletau.correlate(a, v, m=2)
array([[   0.        ,  569.56097561],
       [   1.        ,  549.87804878],
       [   2.        ,  530.37477692],
       [   4.        ,  491.85812017],
       [   8.        ,  386.39500297]])
Citing
The multipletau package should be cited like this (replace “x.x.x” with the actual version of multipletau that you used and “DD Month YYYY” with a matching date).
Paul Müller (2012) Python multiple-tau algorithm (Version x.x.x) [Computer program]. Available at (Accessed DD Month YYYY)
You can find out what version you are using by typing (in a Python console):
>>> import multipletau
>>> multipletau.__version__
'0.1.4'
Revision history for Perl extension XML::LibXML.

1.88 Wed Sep 21 12:54:33 IDT 2011

- Sat Aug 27 14:05:37 IDT 2011
- Fix t/49callbacks_returning_undef.t to not read /etc/passwd, which may not be valid XML. Instead, we're reading a local file while using URI::file (assuming it exists - else - we skip_all.)

1.86 Thu Aug 25 11:42:55 IDT 2011

- Changed SvPVx_nolen() to SvPV_nolen() in LibXML.xs for better compatibility.
- SvPVx_nolen() appears to be undocumented API.
- Resolves
- Thanks to Paul for the report.

1.85 Wed Aug 24 17:05:19 IDT 2011

- Sat Jul 23 23:12:28 IDT 2011
- Fix for perl 5.8.x before 5.8.8:
- "You can now use the x operator to repeat a qw// list. This used to raise a syntax error."
- fixes .
- thanks to paul@city-fan.org for the report.

1.83 Sat Jul 23 14:28:40 IDT 2011

- Wed Jul 20 23:43:53 IDT 2011
- Sat Jul 16 18:30:02 IDT 2011
- Tue Jul 12 23:35:03 IDT 2011
- Fri Jul 8 20:02:32 IDT 2011
- Wed Jul 6 20:23:58 IDT 2011
- Fri Jul 1 22:27:56 IDT 2011
- Thu Jun 30 20:58:46 IDT 2011
- Fri Jun 24 19:00:40 IDT 2011
- Thu Jun 23 16:20:42 IDT 2011
- Sat Jun 18 10:53:44 IDT 2011
- Thu Jun 16 19:26:13 IDT 2011
- Removed a stray file from the MANIFEST
- Warned on "kit not complete".
- Thanks to obrien.jk

1.71 Tue Jun 14 19:43:50 IDT 2011

- provide context and more accurate column number in structured errors
- clarify license and copyright
- support for Win32+mingw+ActiveState

1.69_1

- fix incorrect output of getAttributeNS and possibly other methods on UTF-8
- added $node_or_xpc->exists($xpath) method
- remove accidental debug output from XML::LibXML::SAX::Builder

1.68

- compilation problem fixes

1.67

- fix reconciliation

*NOTE:* Version 1.54 fixes potential buffer overflows that were possible with earlier versions of the package.

1.54

1.53

- fixed some typos (thanks to Randy Kobes and Hildo Biersma)
- fixed namespace node handling
- fixed empty Text Node bug
- corrected the parser default values.
- added some documentation

1.51

- Removed C-layer parser implementation.
- Added support for prefixes in find* functions
- More memory leak fixes (in custom DOMs)
- Allow global callbacks

1.30

- Full PI access
- New parser implementation (safer)
- Callbacks API changed to be on the object, not the class
- SAX uses XML::SAX now (required)
- Memory leak fixes
- applied a bunch of patches provided by T.J. Mather

1.00

- Added SAX serialisation
- Added a SAX builder module
- Fixed findnodes in scalar context to return a NodeList object
- Added findvalue($xpath)
- Added find(), which returns different things depending on the XPath
- Added Boolean, Number and Literal data types

0.99

- Added support for $doc->URI getter/setter

0.98

- New have_library implementation

0.97

- Addition of HTML parser
- getOwner method added
- Element->getAttributes() added
- Element->getAttributesNS(URI) added
- Documentation updates
- Memory leak fixes
- Bug Fixes

0.94

- Some DOM Level 2 cleanups
- getParentNode returns XML::LibXML::Document if we get the document node

0.93

- Addition of DOM Level 2 APIs
- some more segfault fixes
- Document is now a Node (which makes lots of things easier)

0.92

- Many segfault and other bug fixes
- More DOM API methods added

0.91

- Removed from XML::LibXSLT distribution
- Added DOM API (phish)

0.01 Sat Mar 3 17:08:00 2001

- original version; created by h2xs 1.19
Core Java coding questions frequently asked in written tests and interviews - part 2: equals Vs ==
Core Java Coding Questions and Answers for beginner to intermediate level
Q2. What will be the output of the following code snippet?
Object s1 = new String("Hello");
Object s2 = new String("Hello");
if (s1 == s2) {
    System.out.println("s1 and s2 are ==");
} else if (s1.equals(s2)) {
    System.out.println("s1 and s2 are equals()");
}

A2. The answer is:
s1 and s2 are equals()
Here is the explanation with a diagram.
So, the above question tests your understanding of "==" versus "equals( )" applied to objects in Java. One compares the references and the other compares the actual values.
Here are some follow on questions:
Q. What will be the output for the following code snippet?
Object s1 = "Hello";
Object s2 = "Hello";
if (s1 == s2) {
    System.out.println("s1 and s2 are ==");
} else if (s1.equals(s2)) {
    System.out.println("s1 and s2 are equals()");
}

A. The answer is
s1 and s2 are ==
Now the above answer violates the rule we saw before. This is a special case (or rule) with the String objects and a typical example of the flyweight design pattern in action. This design pattern is used to conserve memory by reducing the number of objects created. The String object creates a pool of string and reuse an existing String object from the pool if the values are same instead of creating a new object. The flyweight pattern is all about object reuse. This rule applies only when you declare the String object as Object s = "Hello"; without using the keyword "new".
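The pool behavior can also be probed explicitly with String.intern(), which returns the pooled instance for a given character sequence (a small self-contained demo):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "Hello";             // literal: interned in the string pool
        String b = new String("Hello"); // explicit new: a distinct heap object
        System.out.println(a == b);          // false - different references
        System.out.println(a.equals(b));     // true  - same character content
        System.out.println(a == b.intern()); // true  - intern() returns the pooled copy
    }
}
```

This is why "==" on strings sometimes appears to work: it compares references, and literals with equal content share one pooled reference.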
This is a very popular Java interview question.
More Java coding questions and answers
- Core Java coding questions frequently asked in written tests and interview - part 3
- Reviewing a given Java code at job interviews and code review sessions
17 Comments:
good examples........
really useful info. thank you.
Can u please clarify
1. if i create Object s3 = new Object("Hello");
where this Hello object will be placed in Heap memory or in String pool or in both?
2. If so, how about Object s1 = "Hello";
It will be placed in only in String pool or in both?
Try this.
public class Temp {
    public static void main(String[] args) {
        Object o1 = "Hello";
        Object o2 = "Hello";
        Object o3 = new String("Hello");
        Object o4 = new String("Hello");
        if (o1 == o2) {
            System.out.println("o1 is in String pool as both point to the same object");
        }
        if (o3 == o4) {
            System.out.println("o3 is in String pool as both point to the same object");
        }
        // If it does not print, then the objects are created in the heap.
    }
}
Thanks. Was so helpful.
o1 is in String pool as both point to the same object
thanx nic example
I really like that you are providing information on core and advance java , being enrolled in, i was looking for such information online to assist me on core and advance java concepts and your post helped me a lot .thAnks
Very Useful information...
Thanks
thank you so much..
very useful info
very useful...
Thak you
it's more useful information......
1. "Hello" will be placed both in the heap and in the string constant pool. So whenever we create a string object using the new operator, two objects get created.
Anything between " " is stored in the string constant pool.
2. In the second case, "Hello" is stored only in the string constant pool, because that way only one object is created.
thank u friend ,,...your site helped me to get the job.
Bravo.............!
Object s3 = new Object("Hello")
it wont compile
Thanks for this wonderful site
Nicely presented.. easy to understand concepts.. :)
XML Source
The XML source reads an XML data file and populates the columns in the source output with the data.
The data in XML files frequently includes hierarchical relationships. For example, an XML data file can represent catalogs and items in catalogs. Before the data can enter the data flow, the relationship of the elements in XML data file must be determined, and an output must be generated for each element in the file.
The XML source uses a schema to interpret the XML data. The XML source supports use of an XML Schema Definition (XSD) file or inline schemas to translate the XML data into a tabular format. If you configure the XML source by using the XML Source Editor dialog box, the user interface can generate an XSD from the specified XML data file.
The schemas can support a single namespace only; they do not support schema collections.
The data in the XML files frequently includes hierarchical relationships. The XML Source Editor dialog box uses the specified schema to generate the XML source outputs. You can specify an XSD file, use an inline schema, or generate an XSD from the specified XML data file. The schema must be available at design time.
The XML source generates tabular structures from the XML data by creating an output for every element that contains other elements in the XML files. For example, if the XML data represents catalogs and items in catalogs, the XML source creates an output for catalogs and an output for each type of item that the catalogs contain. The output of each item will contain output columns for the attributes of that item.
To provide information about the hierarchical relationship of the data in the outputs, the XML source adds a column in the outputs that identifies the parent element for each child element. Using the example of catalogs with different types of items, each item would have a column value that identifies the catalog to which it belongs.
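For example, a hypothetical data file like the following would produce three outputs — one for Catalog, one for Book, and one for Magazine — with each Book and Magazine row carrying a generated column that identifies its parent Catalog (element and attribute names below are invented):

```xml
<Catalogs>
  <Catalog name="Spring2015">
    <Book isbn="1-234">
      <Title>Sample Book Title</Title>
    </Book>
    <Magazine issue="7">
      <Title>Sample Magazine Title</Title>
    </Magazine>
  </Catalog>
</Catalogs>
```

Downstream components can then join the Book or Magazine output back to the Catalog output on that generated parent-id column to rebuild the hierarchy.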
The XML source creates an output for every element, but it is not required that you use all the outputs. You can delete any output that you do not want to use, or just not connect it to a downstream component.
The XML source also generates the output names, to ensure that the names are unambiguous. These names may be long and may not identify the outputs in a way that is useful to you. You can rename the outputs, as long as their names remain unique. You can also modify the data type and the length of output columns.
For every output, the XML source adds an error output. By default the columns in error outputs have Unicode string data type (DT_WSTR) with a length of 255, but you can configure the columns in the error outputs by modifying their data type and length.
If the XML data file contains elements that are not in the XSD, these elements are ignored and no output is generated for them. On the other hand, if the XML data file is missing elements that are represented in the XSD, the output will contain columns with null values.
When the data is extracted from the XML data file, it is converted to an Integration Services data type. However, the XML source cannot convert the XML data to the DT_TIME2 or DT_DBTIMESTAMP2 data types because the source does not support these data types. For more information, see Integration Services Data Types.
The XSD or inline schema may specify the data type for elements, but if it does not, the XML Source Editor dialog box assigns the Unicode string data type (DT_WSTR) to the column in the output that contains the element, and sets the column length to 255 characters.
If the schema specifies the maximum length of an element, the length of output column is set to this value. If the maximum length is greater than the length supported by the Integration Services data type to which the element is converted, then the data is truncated to the maximum length of the data type. For example, if a string has a length of 5000, it is truncated to 4000 characters because the maximum length of the DT_WSTR data type is 4000 characters; likewise, byte data is truncated to 8000 characters, the maximum length of the DT_BYTES data type. If the schema specifies no maximum length, the default length of columns with either data type is set to 255. Data truncation in the XML source is handled the same way as truncation in other data flow components. For more information, see Error Handling in Data.
You can modify the data type and the column length. For more information, see Integration Services Data Types.
The XML source supports three different data access modes. You can specify the file location of the XML data file, the variable that contains the file location, or the variable that contains the XML data.
The XML source includes the XMLData and XMLSchemaDefinition custom properties that can be updated by property expressions when the package is loaded. For more information, see Integration Services (SSIS) Expressions, Use Property Expressions in Packages, and XML Source Custom Properties.
The XML source supports multiple regular outputs and multiple error outputs.
SQL Server Integration Services includes the XML Source Editor dialog box for configuring the XML source. This dialog box is available in SSIS Designer.
You can set properties through SSIS Designer or programmatically.
For more information about the properties that you can set in the XML Source Editor dialog box, click one of the following topics:
XML Source Editor (Connection Manager Page)
XML Source Editor (Columns Page)
AWS Developer Blog

Introducing the new test client for AWS Chalice

The new test client allows you to test REST APIs as well as all the event handlers for AWS Lambda supported by Chalice.
Basic Usage
To show how to use this new test client, let’s start with a hello world REST API. If you haven’t used Chalice before, you can follow our quickstart guide which will walk you through installing, configuring, and creating your first Chalice application.
First, we’ll update our
app.py file with two routes.
from chalice import Chalice

app = Chalice(app_name='testclient')

@app.route('/')
def index():
    return {'hello': 'world'}

@app.route('/hello/{name}')
def hello(name):
    return {'hello': name}
To test this API, we’ll create a
tests directory and create a
tests/__init__.py and a
tests/test_app.py file with two tests, one for each route.
$ mkdir tests
$ touch tests/{__init__.py,test_app.py}
$ tree
.
├── app.py
├── requirements.txt
└── tests
    ├── __init__.py
    └── test_app.py
Your
tests/test_app.py should look like this.
import app
from chalice.test import Client

def test_index_route():
    with Client(app.app) as client:
        response = client.http.get('/')
        assert response.status_code == 200
        assert response.json_body == {'hello': 'world'}

def test_hello_route():
    with Client(app.app) as client:
        response = client.http.get('/hello/myname')
        assert response.status_code == 200
        assert response.json_body == {'hello': 'myname'}
In our test file above, we first import our test client from
chalice.test. Next, in order to use our test client, we instantiate it and use it as a context manager. This ensures that our test environment is properly set up, and that on teardown we cleanup and replace any resources we needed to modify during our test, such as environment variables.
The test client has several attributes that you can use to help you write tests:
- client.http – Used to test REST APIs.
- client.lambda_ – Used to test Lambda functions by specifying the payload to pass to the Lambda function.
- client.events – Used to generate sample events when testing Lambda functions through the client.lambda_ attribute.
To run our tests, we’ll install pytest.
$ pip install pytest
$ py.test tests/test_app.py
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-5.3.1, py-1.5.3, pluggy-0.12.0
rootdir: /tmp/testclient
plugins: hypothesis-4.43.1, cov-2.8.1
collected 2 items

test_app.py ..                                                           [100%]

============================= 2 passed in 0.32s ================================
Testing AWS Lambda functions
To test Lambda functions directly, we'll use the client.lambda_.invoke() method. First, let's test a Lambda function that isn't connected to any events. Add this function to your app.py file.
@app.lambda_function()
def myfunction(event, context):
    return {'event': event}
Our test for this function will look similar to our REST API unit tests, except we'll use the client.lambda_.invoke() method instead.
def test_my_function():
    with Client(app.app) as client:
        response = client.lambda_.invoke('myfunction', {'hello': 'world'})
        assert response.payload == {'event': {'hello': 'world'}}
Testing event handlers
In the previous example, we're creating our own event payload to pass to our Lambda function invocation. We can use the client.events attribute to generate sample events for specific services that Chalice supports. To learn more about Lambda event sources with Chalice, see our event sources documentation.
Suppose we wanted to test an event handler connected to an Amazon SNS topic.
@app.on_sns_message(topic='mytopic')
def myfunction(event):
    return {'message': event.message}
In order to test this function we need to generate an event payload that matches the schema expected by this event handler. We can use the client.events.generate_sns_event() method to do this for us:
def test_my_function():
    with Client(app.app) as client:
        event = client.events.generate_sns_event(message='hello world')
        response = client.lambda_.invoke('myfunction', event)
        assert response.payload == {'message': 'hello world'}
Testing with the AWS SDK for Python
Finally, let’s look at an example that involves the AWS SDK for Python. In this example, we have a REST API that takes the request body and forwards it to Amazon S3 using the AWS SDK for Python, boto3.
import boto3
from chalice import Response

_S3 = None

def get_s3_client():
    global _S3
    if _S3 is None:
        _S3 = boto3.client('s3')
    return _S3

@app.route('/resources/{name}', methods=['PUT'])
def send_to_s3(name):
    s3 = get_s3_client()
    s3.put_object(
        Bucket='mybucket',
        Key=name,
        Body=app.current_request.raw_body
    )
    return Response(status_code=204, body='')
In our test we want to verify that when we make a PUT request to /resources/myobject, the request body is sent as the object body for a corresponding PutObject S3 API call. To test this, we'll use the Chalice test client as well as the Botocore Stubber. The Botocore Stubber allows us to stub out requests to AWS so you don't actually send requests to AWS. It will validate that your stubbed input parameters and response values match the schema of the service API. To use the stubber, we wrap our S3 client in a Stubber instance and specify the expected API calls. We can use the stubber as a context manager, which will automatically activate our stubber as well as clean up once we're finished using it. Here's what this test looks like.
import json from botocore.stub import Stubber def test_send_to_s3(): client = app.get_s3_client() stub = Stubber(client) stub.add_response( 'put_object', expected_params={ 'Bucket': 'mybucket', 'Key': 'myobject', 'Body': b'{"hello": "world"}', }, service_response={}, ) with stub: with Client(app.app) as client: response = client.http.put( '/resources/myobject', body=json.dumps({'hello': 'world'}).encode('utf-8') ) assert response.status_code == 204 stub.assert_no_pending_responses()
We can one again run these tests using pytest.
$ py.test tests/test_app.py ============================= test session starts ============================== platform darwin -- Python 3.7.3, pytest-5.3.1, py-1.5.3, pluggy-0.12.0 rootdir: /tmp/testclient plugins: hypothesis-4.43.1, cov-2.8.1 collected 5 items test_app.py ..... [100%] ============================= 5 passed in 0.43s ================================
Next Steps
For more information on using the test client in Chalice, you can check out our testing documentation as well as our API reference. Let us know what you think. You can share feedback with us on our GitHub repository. | https://aws.amazon.com/blogs/developer/introducing-the-new-test-client-for-aws-chalice/ | CC-MAIN-2020-40 | refinedweb | 999 | 61.53 |
Details
- Type:
Improvement
- Status: Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: None
-
- Component/s: modules/join
- Labels:None
- Lucene Fields:New
Description
Have similar scoring for query time joining just like the index time block join (with the score mode).
Issue Links
Activity
- All
- Work Log
- History
- Activity
- Transitions
Hi David, I just committed
LUCENE-4704. Query instances returned from JoinUtil will implement equals and hashcode in future versions.
I got suggested to extend the Query class and return a Hash myself.
My query class contains:
@Override
public int hashCode()
It seems to work now.
Same problem if I use:
IndexSearcher searcher = new IndexSearcher(req.getSearcher().getIndexReader());
I can't override the class since the constructor is private. This is probably to only use the static methods.
I've used an output to see the hashcode and it stays the same. Can I change this behaviour somehow?
I think this happens b/c the Query that the JoinUtil returns doesn't override the equals and hashcode method (See TermsIncludingScoreQuery). This should be fixed, otherwise this query can never be cached (this is what the SolrIndexSearcher does). Can you check if the following works:
IndexSearcher searcher = new IndexSearcher(req.getSearcher().getIndexReader()); return JoinUtil.createJoinQuery("pageId", true, "fileId", q, searcher, ScoreMode.Max);
Could it be that you're JoinUtil doesn't clear a cache or something?(Could also be my code) My plugin always works once when I start Solr. It joins and gives good scores. The second query it always returns the same as the first one? After restarting Solr it works 1 time and then does the same.
public class TestParserPlugin extends ExtendedDismaxQParserPlugin {
@Override
public QParser createParser(String string, SolrParams sp, SolrParams sp1, SolrQueryRequest sqr)
@Override
public void init(NamedList nl) {
}
public class TestParser extends QParser {
public TestParser(String qstr, SolrParams localParams, SolrParams params, SolrQueryRequest req){ super(qstr, localParams, params, req); }
@Override
public org.apache.lucene.search.Query parse() throws org.apache.lucene.queryparser.classic.ParseException {
//IndexReader reader;
try
catch (IOException ex){ Logger.getLogger(TestParserPlugin.class.getName()).log(Level.SEVERE, null, ex); }
return null;
}
@Override
protected void finalize() throws Throwable
}
}
Solr uses a different joining implementation. Which doesn't support mapping the scores from the `from` side to the `to` side. If you want to use the Lucene joining implementation you could wrap this in a Solr QParserPlugin extension.
Hi, Is this possible to use in solr I tried setting{!scoreMode=Avg}
, but it doesn't seem to have any effect.
Committed to trunk and branch4x.
Oops... I see. I'll commit soon!
I still see one omitted
Otherwise this looks great: +1 to commit!
Thanks for reviewing Mike! I've updated the patch.
You don't need to use your own growFactor ... just call ArrayUtil.grow directly (it already oversizes under the hood for you).
Sure. (I didn't release that the ArrayUtil#oversize() was doing this).
Fix omitted to emitted in the comment on top of "class MVInnerScorer".
Done.
Probably javadocs should somewhere explain about the "first time doc is emitted it gets that score"?
Done.
Maybe explain added RAM requirements when scores are tracked in the javadocs?
Done
Maybe rename TermsWithScoreCollector.MV.Avg.ordScores -> .scoreCounts (and .scores -> .scoreSums?).
Done.
Can we put back the non-wildcard imports?
Done. (IDE was trying to be smart... I'll change my settings...)
Patch looks great!
You don't need to use your own growFactor ... just call ArrayUtil.grow
directly (it already oversizes under the hood for you).
Maybe remove @throws IAE from createJoinQuery's javadocs? (But, still
throw it... in case we add a new ScoreMode and forget to fix this
code, in the future). Because today all ScoreMode enum values work...
Fix omitted to emitted in the comment on top of "class MVInnerScorer".
Probably javadocs should somewhere explain about the "first time doc
is emitted it gets that score"?
Maybe explain added RAM requirements when scores are tracked in the
javadocs?
Maybe rename TermsWithScoreCollector.MV.Avg.ordScores -> .scoreCounts
(and .scores -> .scoreSums?).
Can we put back the non-wildcard imports?
Updated patch.
- Fixed random tests.
- Added support for explain.
- Added ScoreMode support for documents that relate to more than one document.
I think it is ready to be committed.
Updated patch.
- Started adding randomizing score mode in TestJoinUtil test class.
- Made ScoreMode a public enum in join package.
Draft patch. Added ScoreMode as parameter to JoinUtil#createJoinQuery(...).
Maybe ScoreMode should be a public enum inside the join package.
Thanks alot!
I'll try the patch when 4.2 get's released. | https://issues.apache.org/jira/browse/LUCENE-4043?focusedCommentId=13270623&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-48 | refinedweb | 752 | 61.12 |
Java varargs was introduced in Java 1.5. Java varargs is also known as java variable arguments.
Table of Contents
Java varargs
varargs in java enables a method to accept variable number of arguments. We use three dots (…) also known as ellipsis in the method signature to make it accept variable arguments. For example;
Copypublic static int sum(int i, int...js ){ //do something }
Important points about varargs in java
Few points to know about varargs in java are;
- We can have only one varargs in the method.
- Only the last argument of a method can be varargs.
- According to java documentation, we should not overload a varargs method. We will see why it’s not a good idea.
How java varargs work?
When we invoke a method with variable arguments, java compiler matches the arguments from left to right. Once it reaches to the last varargs parameter, it creates an array of the remaining arguments and pass it to the method. In fact varargs parameter behaves like an array of the specified type.
Copy//method with variable arguments public static int sum(int i, int...js ){ int sum = i; for(int x : js){ sum+=x; } return sum; } //method with same implementation as sum with array as argument public static int sumArray(int i, int[] js ){ int sum = i; for(int x : js){ sum+=x; } return sum; }
If you will look at both
sum and
sumArray methods, you will see that the implementation body is exactly same. So we should use varargs when API offers them, for example
java.io.PrintStream.printf() method but we should not take it as a replacement for array.
Why we should not overload varargs method
Let’s look at an example why overloading java varargs method is not a good idea.
Copypackage com.journaldev.misc; public class VarargsExample { public static void main(String[] args) { System.out.println(sum(1)); System.out.println(sum(1,2)); //compiler error, ambiguous method } public static int sum(int i, int...js ){ System.out.println("sum1 called"); int sum = i; for(int x : js){ sum+=x; } return sum; } public static int sum(int i, int k, Object...js ){ System.out.println("sum2 called"); int sum = i+k; for(Object x : js){ sum+=1; } return sum; } }
In above example, you will notice that compiler will not complain when we overload methods with varargs. But when we try to use it, compiler get’s confused which method to use when mapping the second argument.
If there is only one argument, compiler is smart to use first method because it can work with minimum one argument but second method needs at least two arguments. Below image from Eclipse shows the error message as
The method sum(int, int[]) is ambiguous for the type VarargsExample.
That’s all about java varargs. It’s good to know feature but you don’t need to use it when writing code yourself.
Reference: Official Documentation
Adam says
Hie Thanks you very much. I am new to java and i hope i will get to learn lot from these lessons.
Priya says
varargs is generally known as variable arguments. It uses internal mechanism of eclipse.
siddu says
The above program works fine, no compile time error,
omprakash says
Hi, in the above overloaded method use “int… js” instead of “Object…js” as 3rd parameter
Larry says
Hi Pankaj,
Thanks for sharing it!!! Well explained about the use of the Java Varargs and how one need to use it without overloading the compiler…This was informative. | https://www.journaldev.com/1257/java-varargs | CC-MAIN-2019-30 | refinedweb | 588 | 63.39 |
>>."
Analog Computers (Score:3, Informative)
I seems to me that this problem would pop up any time you worked with an irrational number.
Back in the early days the analog computer was used for things like ballistic calculations. I would think that they would be less prone to this type of problem.
Linearity may still be an issue (analog systems have their own set of problems).
Re: : (Score:2): (Score:3, Insightful)
Sure, you can make it a problem, but it isn't particularly insidious.
And the part where I say "The bigger issue is how the errors combine when doing calculations" is a pretty compact version of what you said.
Re:Analog Computers (Score:5, Informative)
No, irrationality has nothing to do with it. It's a matter of numeric systems, i.e. binary vs. decimal. For example, 0.2 is a rational number. Express it in binary floating point and you'll see the problem: 2/10 is 1/5 is 1/101 in binary. Let's calculate the mantissa: 1/101=110011001100... (long division: 1/5->2/5->4/5->8/5=1,r3->6/5=1,r1->2/5->4/5->8/5...)
All numeric systems have this problem. It keeps tripping up programmers because of the conversion between them. Nobody would expect someone to write down 1/3 as a decimal number, but because people keep forgetting that computers use binary floating point numbers, they do expect them not to make rounding errors with numbers like 0.2.
Re: (Score:2)
True, but irrational numbers are those that cannot be written down exactly in *any* base - not even if you use recurring digits.
Re: (Score:3, Informative)
True, but irrational numbers are those that cannot be written down exactly in *any* base
... except the irrational number's own base.
;)
Re: (Score:3, Funny)
Nobody would expect someone to write down 1/3
I use base 3, so 0.1 is a perfectly easy number to express in floating point..
Re: .
Interval arithmetic (Score:5, Insightful).
.9999999984 Post (Score:5, Funny)
Damn...Missed it! lol
Re:.9999999984 Post (Score:5, Funny)
I see you're still using that Pentium CPU.
Re: .
Re: (Score:2)
The lack of associativity is a bigger problem than you might think, because the compiler can rearrange things. If you're using x87 FPU code, you get 80-bit precision in registers, but only 64-bit or 32-bit precision when you spill to the stack. Depending on the optimisations that are run, this spill happens at different times, meaning that you can get different results depending on how good your register allocator is. Even upgrading the compiler can change the results.
If you are using floating point v
Re: (Score:2)
If you're using the x87, just give up. It is very hard to efficiently conform to IEEE on that evil beast. (even setting the control register to mung precision only affects the fraction, not the exponent, so you still have to store to memory and reload to properly set precision.)
A former colleague described it (the entire x87 unit) as "Satan incarnate in silicon".
:-)
Re: (Score:2)
strictfp (Score:4, Informative)
Re: (Score:2)
Done
:)
Re: (Score:3, Insightful)
Not really. It might point to BigDecimal, but leave strictfp out of it. Remember, this is for starting programmers, not creators of advanced 3D or math libs.
If you want accuracy... (Score:3, Interesting)
use BCD math. With h/w support it's fast enough...
Why don't any languages except COBOL and PL/I use it?:4, Interesting)
also it would absolutely be very slow
Depends on the architecture. IBM's most recent POWER and System-Z chips have hardware for BCD arithmetics.
Re: (Score:3, Informative)
It didn't, however, as you imply, have BCD hardware. In fact, it had no hardware at all for arithmetic. At the start of the core memory, you had two lookup tables, one for addition and
Re: (Score:2)
Maybe because BCD is the worse possible way to do 'proper' decimal arithmetic,
"0.1 + 0.2 = 0.30000000000000004" doesn't really seem all that proper to me.
But BCD *does* do "0.1 + 0.2 = 0.3".
also it would absolutely be very slow.
Without h/w support.
Instead you can put 20 decimal digits in 64bits (3.2 bits per db) and do math much more faster
I want accurate math, not estimates.
Exactly
Do you pride yourself a Rational Man, or a low down dirty bigot?
Re: (Score:2)
How is that Hardware Support going? Just curious.
Re: (Score:2)
How is that Hardware Support going?
Very well, on machines designed for business (i.e., mainframes and VAXen).
Re: : (Score:3, Insightful)
You completely missed my point.
I'm not comparing BCD to floating point, I'm comparing BCD with other ways of encoding decimal numbers in a computer: (Score:2)
A rational number class seems like a better solution, though there are some issues with representing every number as a ratio of integers... For instance, you need extremely large numerators as your denominators get large. For another, you need to keep computing gcfs to reduce your fractions. Still, this is the approach used in some calculator programs.
I wonder if a continued fraction [wikipedia.org] representation would have advantages -- and if it has ever been used as a number representation in practice? It seems lik
Re: (Score:2, Informative)
Continued fractions are a straightforward way to implement exact computable real arithmetic. So yes, it's been used. And it's slow. But it is exact.
and Ada (Score:3, Informative)
Correction: COBOL, PL/I and Ada. Ada has both fixed and floating point BCD arithmetic. And yes I too wonder why it is not in wider use. Perhaps it has to do with the ill conceived notion of "light weight languages" - most of which are not light weight at all any more once they are on the market for decade or so.
Martin
Re: (Score:2)
$x_numerator = 1;
$x_denominator = 3;
Algorithms do exist for fractional number arithmetic. If the denominator gets unwieldy, who cares, its a computer and its fast and memory is "free".
Re: (Score:2)
how do you express 1/3 in BCD?
Just as in "paper" decimal arithmetic, you must truncate it somewhere.
Since I've long forgotten how to "punch" the sign, this is what it would look like in 8(2) imaginary "unsigned" BCD, in hex: 00000033.
Re: (Score:2, Insightful)
Another potential solution is Interval arithmetic (Score:4, Insightful)
Re: (Score:2)
Why don't you write it up yourself and give me a github pull request?
:)
Re: (Score:2)
Because I don't know how to use github and it looks to me as a really, really complicated way to make a wiki..
Re: (Score:2)
Well, I'll do it when I get around to it. Doing it as a wiki would mean that I'd have to deal with vandalism and spam, and it's really intended more as small self-contained site than a community thing.
Re: (Score:2)
I'm very sorry for whatever horrible things happened to you that makes you read that into my words. They were meant as an acceptance of the suggestion that a section about interval arithmetic would be a good addition to the site and an invitation to contribute.
Re:Another potential solution is Interval arithmet (Score:4, Informative)
Internal arithmetic always includes the exact solution, but only the rarest circumstances does it actually give the exact solution. For example, an acceptable interval answer for 1/3 would be [0.33,0.34]. That interval includes the exact answer, but does not express it.
Before we get (Score:5, Informative)
this will save a lot of time & questions to most beginning (and maybe mediocre) programmers..
Re:I'd just avoid it (Score:5, Informative)
The non-trivial problems with floating-point really only turn up in the kind of calculations where *any* format would have the same or worse problems (most scientific computing simply *cannot* be done in integers, as they overflow too easily).
Floating-point is an excellent tool, you just have to know what it can and cannot do.: (Score:2)
And once you've finished writing your algorithm in manually coded fixed point to avoid the "complexities" of float-point you can sit down and tuck into a tasty plate of quiche.
An intuitive description of floating-point (Score:2): (Score:2)
Actually, I tried to make it very clear in several places that base 10 has just the same problems. I am open to any suggestions for improvement, though.
Re: (Score:2)
The article gives the impression that base 10 arithmetic is somehow "more accurate". It's not. You still get errors for, say, 1/3 + 1/3 + 1/3. It's just that the errors are different.
What kind of errors are you referring to?
Stop with the educational articles (Score:5, Funny)
Re: : (Score:3, Funny)
Except for the fact that companies don't care about floating point they are looking for 3+ years on windows 7. 20 years of Linux. and 15 years of
.NET.
Not sure it belongs in an intro explanation, but (Score:5, Informative)
CS stuff (Score:2)
This sounds a lot like the stuff i heard in CS/Mathematics classes more than 20 years ago (Hackbusch, Praktische Analysis, Summer term 1987/88). That course was mandatory then for any CS, Mathematics and Physics student. Has that changed yet? It's about the differences between a number at it's binary representation (and examples about consequences).
I completely agree, that every programmer should know about that. But this is nothing new, it was already important 40 years ago. I'm pretty sure some space prob
Please look here (Score:5, Informative)
According to my personal experience the paper by David Goldberg cited in the post isn't that difficult after all. Plenty of interesting materials can also be found in the Oppenheim & Shafer [amazon.com] textbook about digital signal processing.
Re:Please look here (my horror story) (Score:2)
I was brought in a bit after the start of a state project to write a system to track about a half billion dollars in money for elderly and disabled indigent care. I was horrified to find that all the money variables were float. After raising the issue and explaining the technical details, I proposed a Money class and if they didn't want that gave them the option of a simple fix: just change all the floats to long and keep the amounts in pennies, inserting the decimal point only when displaying numbers. The]
Thank goodness for IEEE 754 (Score:2)
Most of you are too young to have dealt with architectures like VAX and Gould PowerNode with their awful floating point implementations.
IEEE 754 has problems but it's a big improvement over what came before.
Not equivalent to Spolsky's article. (Score:3, Funny)
It's missing the irritating cutesy "humor".
ACM Digital Library link (Score:2)
If anyone with ACM digital libray access wants the DOI link to the original article, rather than the edited version Sun/Oracle's site it's [acm.org].
It is an old article though, so it's a 44 page scanned PDF.]
Re:#1 Floating Point Rule (Score:5, Informative)
I think the original poster was referring to this piece by the father of floating point, William Kahan, and Joe Darcy
"How Java's Floating-Point Hurts Everyone Everywhere" [berkeley.edu]
Re: : (Score:3, Informative):#1 Floating Point Rule (Score:5, Informative)
Java have a strictfp keyword for strict IEEE-754 arithmetic.
Re: (Score:2)
True, but in many cases (those that don't require too much performance) it is probably better to use BigDecimal anyway. I've never used that keyword, IMHO it's for inner libs only. It's certainly not something I would learn starting programmers (just as you should not learn new programmers the use of wait() or sleep(), rather CountDownLatch and the different Queue's).
Re: (Score:3, Informative)
Actually, the linked article says exactly the opposite, and up above I posted a link to the JVM definition that verifies it. So you are 100% incorrect.
Re: (Score:2)
Re: (Score:2)
That's what he said, it just got rounded.
Re: (Score:2)
Java is a good programming language if you know how to use it and you can write some very efficient and small code. I'm tired of people attacking it for being slow when the people who use don't have a clue how to use it properly and just iterate their way through array lists (probably the most common mistake I see).
You should always use the right language for the right job.
Re: (Score:3, Informative)
>>> (1.0/3)*3
1.0
Re: (Score:2)
Re: (Score:2)
I'm completely baffled by this (in Python):
>>> print (1/3)*3
0
I expected 1, and my FPU is from AMD, not Intel
;)
You didn't use the FPU. Try this: print (1.0 / 3) * 3;
Re: (Score:2)
That's just integer division, nothing to do with FPU. In Python3 they actually changed this behaviour, so that now floating point is used:
>>> print((1/3)*3)
1.0
A thing that I found a little more surprising in Python is:
>>> -5 / 2
-3
When you come from C you'd expect that to be -2.
Re: (Score:2)
These days, not everyone who writes programs has some sort of formal CS education. And those who do may have forgotten about it by the time they run into this kind of problem.
Re: (Score:2)
The other day I had to explain ICMP to someone who was trying to ping a specific port on some server.
He actually did have a CS degree...from China.
Re:float are over (Score:5, Funny)
Really, the best answer is to store all numbers on the cloud, and just use a 256-bit GUID to look them up when needed.
Re: (Score:2)
Whatever floats your boat.
Re: (Score:2, Informative)
And wrong. I don't know how to use Github and if he won't bother to post an email address, I won't bother to learn about Github just for this.
The comparison [floating-point-gui.de] page is wrong. Take nearlyEqual(0.0000001, 0) for example. As the author said, using Epsilon can be bad if you don't know what you are doing. The correct form of the function is:
epsilon = 0.00001;
function nearlyEqual(a,b)
{
return (Math.abs(b) < epsilon) ? (Math.abs(a) < epsilon) : (Math.abs((a-b)/b) < epsilon);
}
Also,
Re: (Score:2)
Discuss.
Re: (Score:2)
Fails on a=epsilon/2 and b=0
Re: (Score:2)
Your criticism is correct.
While your algorithm would say that plank's constant (6.6E-34) and the inverse speed of light in a vacuum (3.33E-9) are the same. You just underscore the fact that it isn't easy to write a good nearlyEqual. But I would expect the author of an article on the topic to write a good one. After all, it is what people are most likely to go to look for.: (Score:3, Informative)
In case you weren't just being funny, that == is correct, as it's meant to prevent NaN or Infinity results from the division, which can only happen with the actual "zero" values.
Re: (Score:2)
I would suggest you heed your own advice: do some research before mouthing off. Yes, there's a lot of stuff about FP math. But there didn't seem to be anything that is both novice-friendly and comprehensive.
Re: :Simple, effective and useful (Score:5, Funny)
That would be because 0.1 + 02 is 2.1.
:-)
Re: wit
Re: (Score:2)
Actually, it will yield the correct result because (nonzero)/0 = Infinity. But it will return the wrong result for a smaller than epsilon. Will correct that.
Re: | https://developers.slashdot.org/story/10/05/02/1427214/what-every-programmer-should-know-about-floating-point-arithmetic?sdsrc=nextbtmprev | CC-MAIN-2017-47 | refinedweb | 2,694 | 64.81 |
Quickly create HTML reports using a set of JINJA templates
This is a simple package to easily build HTML reports using JINJA templating.
pip install reports
Here is a simple example that creates an empty report based on the generic templates provided:
from reports import Report

r = Report()
r.create_report(onweb=True)
The next step is for you to copy the templates into a new directory, edit them, and fill the jinja attribute to fit your needs:
from reports import Report

r = Report("myreport_templates")
r.jinja["section1"] = "<h1></h1>"
r.create_report()
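Under the hood this is plain template substitution: each entry of the jinja dict fills a named section of the template. Here is a rough stand-in for the idea using only the standard library (this is an illustration, not the package's actual engine — reports renders real Jinja templates):

```python
from string import Template

# Minimal stand-in: named sections substituted into an HTML skeleton,
# much like filling r.jinja["section1"] above.
skeleton = Template("<html><body>$section1$section2</body></html>")

sections = {
    "section1": "<h1>Summary</h1>",
    "section2": "<p>All tests passed.</p>",
}

html = skeleton.safe_substitute(sections)
print(html)
# -> <html><body><h1>Summary</h1><p>All tests passed.</p></body></html>
```

With Jinja the placeholders would be `{{ section1 }}` blocks instead, but the flow — one dict of named fragments, one render call — is the same.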
See Sphinx documentation for more details
package org.apache.myfaces.renderkit.html;

/**
 * @author Manfred Geiler (latest modification by $Author: matze $)
 * @version $Revision: 1.10 $ $Date: 2004/10/13 11:51:00 $
 * $Log: HtmlListboxRenderer.java,v $
 * Revision 1.10  2004/10/13 11:51:00  matze
 * renamed packages to org.apache
 *
 * Revision 1.9  2004/07/01 22:05:07  mwessendorf
 * ASF switch
 *
 * Revision 1.8  2004/05/18 14:31:39  manolito
 * user role support completely moved to components source tree
 *
 */
public class HtmlListboxRenderer
        extends HtmlListboxRendererBase
{
}
There is a typo in some printings; the code for returning a floating-point random number in the interval [0,1) should be
#define frand() ((double) rand() / (RAND_MAX+1.0))
If you want to get random integers from M to N, you can use something like
M + (int)(frand() * (N-M+1))
``[Setting] the seed for rand'' refers to the fact that, by default, the sequence of pseudo-random numbers returned by rand is the same each time your program runs. To randomize it, you can call srand at the beginning of the program, handing it some truly random number, such as a value having to do with the time of day. (One way is with code like
#include <stdlib.h>
#include <time.h>

srand((unsigned int)time((time_t *)NULL));

which uses the time function mentioned on page 256 in appendix B10.)
One other caveat about rand: don't try to generate random 0/1 values (to simulate a coin flip, perhaps) with code like
rand() % 2

This looks like it ought to work, but it turns out that on some systems rand isn't always perfectly random, and returns values which consistently alternate even, odd, even, odd, etc. (In fact, for similar reasons, you shouldn't usually use rand() % N for any value of N.) A good way to get random 0/1 values would be
(int)(frand() * 2)

based on the other frand() examples above.
This page by Steve Summit // Copyright 1995, 1996 // mail feedback | http://c-faq.com/~scs/cclass/krnotes/sx10n.html | CC-MAIN-2015-18 | refinedweb | 251 | 65.05 |
NAME
fsync, fdatasync - synchronize a file's in-core state with storage device
SYNOPSIS
#include <unistd.h>

int fsync(int fd);
int fdatasync(int fd);
DESCRIPTION
fsync() transfers ("flushes") all modified in-core data of (i.e., modified buffer cache pages for) the file referred to by the file descriptor fd to the disk device (or other permanent storage device) so that all changed information can be retrieved even after the system crashed or was rebooted. The call blocks until the device reports that the transfer has completed. It also flushes metadata information associated with the file (see stat(2)).

fdatasync() is similar to fsync(), but does not flush modified metadata unless that metadata is itself needed in order to allow a subsequent data retrieval to be correctly handled.
RETURN VALUE

On success, these system calls return zero. On error, -1 is returned, and errno is set appropriately.
CONFORMING TO

4.3BSD, POSIX.1-2001.
AVAILABILITY
On POSIX systems on which fdatasync() is available, _POSIX_SYNCHRONIZED_IO is defined in <unistd.h> to a value greater than 0. (See also sysconf(3).)
NOTES
SEE ALSO
bdflush(2), open(2), sync(2), sync_file_range(2), hdparm(8), mount(8), sync(8), update(8)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/precise/man2/fsync.2.html | CC-MAIN-2016-30 | refinedweb | 105 | 68.87 |
Object Integrity & Security: Duplicating Objects, Part 3
Multiple Heads
To take care of this problem, you must physically create a second head object and assign it to the new Dog object, spot. As usual, there are several ways that you could do this. In this case, you will explicitly create a new head when you copy the Dog object, inside the actual copy( ) method.
ref = (Dog)clone();
Head h1 = new Head(no);
ref.head = h1;
This is a 3-step process. First, the Dog object is cloned. Second, the new Head object is created. Third, you assign the newly created head object to the reference of the cloned Dog object. The Head class is presented in Listing 6 and the completely redesigned Dog class is seen in Listing 7. The Duplicate class is contained in Listing 8.
Listing 6: The Head class

// Class Head
class Head {
    String nose;

    public Head(String no) {
        nose = no;
    }

    public String getNose() {
        return nose;
    }

    public void setNose(String no) {
        nose = no;
    }
}
Listing 7: The new Dog class
// Class Dog
class Dog implements Cloneable {
    String name;
    String breed;
    Dog ref;
    Head head;

    public Dog(String n, String b, String s) {
        name = n;
        breed = b;
        head = new Head(s);
    }

    public Dog copy(String n, String b, String no) {
        ref = (Dog)clone();
        Head h1 = new Head(no);
        ref.setName(n);
        ref.setBreed(b);
        ref.head = h1;
        return ref;
    }

    public Object clone() {
        try {
            System.out.println("Inside Dog Clone");
            return super.clone();
        } catch (CloneNotSupportedException e) {
            throw new InternalError(e.toString());
        }
    }

    public String getName() { return name; }

    public void setName(String n) { name = n; }

    public String getBreed() { return breed; }

    public void setBreed(String b) { breed = b; }
}
Listing 8: The new Duplicate class
// Class Duplicate
public class Duplicate {
    public static void main(String[] args) {
        Dog fido = new Dog("fido", "retriever", "black");
        System.out.print("fido : = ");
        System.out.print("name = " + fido.getName());
        System.out.print(", breed = " + fido.getBreed());
        System.out.println(", nose = " + fido.head.getNose());

        Dog spot;
        spot = (Dog)fido.copy("spot", "mutt", "pink");

        System.out.print("fido : = ");
        System.out.print("name = " + fido.getName());
        System.out.print(", breed = " + fido.getBreed());
        System.out.println(", nose = " + fido.head.getNose());

        System.out.print("spot : = ");
        System.out.print("name = " + spot.getName());
        System.out.print(", breed = " + spot.getBreed());
        System.out.println(", nose = " + spot.head.getNose());
    }
}
When the redesigned application is executed, you can see by the results in Figure 3 that you now retain the proper value of the nose attributes for the fido object. The spot object also has the proper value for its nose attribute.
Figure 3: Separate Head objects
Finally, Diagram 3 provides a graphical representation of what the final version of the code looks like in memory. You have two Dog objects, fido and spot. And each of these objects has its own, separate Head object; fido has a black nose and spot has a pink nose.
Diagram 3: Separate Head Objects
Conclusion
Attempting to keep track of complicated object relationships is a daunting task. Consider the responsibility of modeling complex objects. Think about one of the more common objects, an automobile. This provides a real-world example of how object relationships are modeled. Each car contains so many separate parts that it is hard to even list them. On top of that, take into account how many different parts are manufactured at different sites.
The obvious example is that of an engine. Each car (an object) contains an engine (another object); however, the engine is made up of other parts, like pistons (an object) and camshafts (another object). Pistons are made up of even more objects. And I could go on and on.
In a world of hardware, there are physical objects to help with the manufacturing process; for example, you can hold a piston in your hands. Yet, when modeling an engine in software, you have to be aware of all these relationships and how you will duplicate them. It is important to pay attention to this process; otherwise, it can lead to major confusion.<< | https://www.developer.com/lang/article.php/10924_3687321_5/Object-Integrity-amp-Security-Duplicating-Objects-Part-3.htm | CC-MAIN-2018-22 | refinedweb | 658 | 56.96 |
In “IoT: FreeRTOS Down to the Micro Amps” I’m using an application with FreeRTOS to get down in micro amps low power mode. Well, nearly all or my applications are using FreeRTOS because it makes the application scalable and extensible. Still, for anyone not used to an RTOS, that might be a hard start. So here we go: how to get into the Kinetis Low Power LLS Mode *without* an RTOS.
Outline
In this project I create a very simple bare-metal application for the Freescale FRDM-KL25Z board. All what it does is to blink an LED from the Low Power wake-up interrupt to show that the application is running. Right after the wake-up interrupt, it enters again the LLS low power mode. So I enter LLS low power mode, and the timer interrupt will wake me up every second. As in LLS mode only a few timer and wake-up sources are available, I’m going to use the 1 kHz Low Power clock source.
The project and all the settings is available on GitHub at the link provided at the end of this article.
Processor Expert Components
In this project, I’m using the following components:
- 3 LED components (RGB)
- TimerInt with TimerUnit_LDD for the low power wake-up timer
- WAIT component to flash different LED’s after power-up with a delay.
Timer
As timer I use the LPTMR with a period of one second:
Inside the period settings, I configure it to use the LPO 1 kHz timer as clock source, as this clock still will run in LLS mode:
CPU Component
First, we need to configure the CPU component to be ready for low power mode. I enable the methods SetOperationMode() and GetLLSWakupFlags() (so the black x is removed):
In the CPU properties, I enable all low power modes, configure LPTMR0 as wake-up source, and enable the INT_LLW wake-up interrupt:
Then I configure the WAIT, SLEEP and STOP modes (these are the three different low power modes in Processor Expert):
Source Files
Time to write the application code. I’m using Application.h and Application.c:
/* * Application.h * * Created on: Mar 16, 2014 * Author: Erich Styger */ #ifndef APPLICATION_H_ #define APPLICATION_H_ /*! \brief callback called from the LLS wake-up interrupt */ void APP_OnLLSWakeUpInterrupt(void); /*! \brief callback called from low power timer interrupt */ void APP_TimerInterrupt(void); /*! \brief application main entry point */ void APP_Run(void); #endif /* APPLICATION_H_ */
The the Application.c looks like this:
/* * Application.c * * Created on: Mar 16, 2014 * Author: Erich Styger */ #include "Application.h" #include "LED1.h" #include "LED2.h" #include "LED(); /* red LED */ } void APP_Run(void) { LED2_On(); WAIT1_Waitms(1000); LED2_Off(); for(;;) { //LED3_Neg(); /* blue LED */ //Cpu_SetOperationMode(DOM_WAIT, NULL, NULL); /* next interrupt will wake us up */ //Cpu_SetOperationMode(DOM_SLEEP, NULL, NULL); /* next interrupt will wake us up */ Cpu_SetOperationMode(DOM_STOP, NULL, NULL); /* next interrupt will wake us up */ } }
From main() in ProcessorExpert.c I call APP_Run():
#include "PE_Types.h" #include "PE_Error.h" #include "PE_Const.h" #include "IO_Map.h" /* User includes (#include below this line is not maintained by Processor Expert) */ #include "Application. ***/ APP!!! ***/
And from Events.c I call my hooks:
/* ** =================================================================== ** Event : TI1_OnInterrupt (module Events) ** ** Component : TI1 [TimerInt] ** Description : ** When a timer interrupt occurs this event is called (only ** when the component is enabled - and the events are ** enabled - ). This event is enabled only if a ** is enabled. ** Parameters : None ** Returns : Nothing ** =================================================================== */ void TI1_OnInterrupt(void) { APP_TimerInterrupt(); } /* ** =================================================================== ** Event : Cpu_OnLLSWakeUpINT (module Events) ** ** Component : Cpu [MKL25Z128LK4] */ /*! ** @brief ** This event is called when Low Leakage WakeUp interrupt ** occurs. LLWU flags indicating source of the wakeup can be ** obtained by calling the [GetLLSWakeUpFlags] method. Flags ** indicating the external pin wakeup source are automatically ** cleared after this event is executed. It is responsibility ** of user to clear flags corresponding to internal modules. ** This event is automatically enabled when [LLWU interrupt ** request] is enabled. */ /* ===================================================================*/ void Cpu_OnLLSWakeUpINT(void) { APP_OnLLSWakeUpInterrupt(); }
Summary
This application shows how to set up an application for Low Power LLS mode, in a bare-metal mode. So it should be useful for you if you want to have a starting point for your own project without an RTOS.
The project and sources are available on GitHub.
PS: the KL25Z should use about 3 micro amps in LLS mode. I measure twice as much on the FRDM-KL25Z RevD board, because other components on the board are making the measurement inaccurate.
Happy LowPowering 🙂
Hi Erich
Thanks for this timely project. I am also working on getting low power operation on a FRDM-KL25Z so it is interesting to compare my results with yours.
My own code does pretty much the same thing as yours, and I can get about 1.2uA current on the FRSM-KL25Z (Rev D) in LLS mode, and 1.9uA on a modified version where I change the crystal to a 32kHz crystal.
I have run your LowPower_LLS project on my FRDM-KL25Z and get a current of about 1.7uA (measured on two different boards). There is a spike every second when the timer wakes up.
So I am interested to know why you seem to measure higher currents than I do. Is it the PCB revision – are you using a Rev E? Do you have the debugger interface active? I am measuring the current by cutting the JP4 link and bridging it with a 10 ohm resistor and measuring the voltage with a volt meter.
The KL25Z is not perfectly isolated from the rest of the circuit: it is not impossible that the debug interface and the accelerometer interface (powered on the USB side of JP4) could be influencing our readings. I am normally meticulous about ensuring that GPIO pins are in the right state – e.g. not floating. Mis-configured pins can have a big effect on sleep current. The Kinetis family seems to have a novel feature where if the PORTx_PCRn register MUX bits are set to disabled, you can overlook how the pins are terminated. (I think…)
FYI: I changed the toggling LED in your app with code that clears PTB8 in APP_OnLLSWakeUpInterrupt() and sets it in APP_TimerInterrupt(). This makes no different to the power consumption (since the LEDs draw their current on the USB side of JP4) but allows me (monitoring PTB8 on a scope) to see that there is about 10us between these two events.
I will now return to FreeRTOS – so far I have not managed to get it into a low current state. For me your Freedom_LowPower project draws 610uA. (Would you care to revisit this?)
I used in that post an older RevD board. I know these boards make it hard to measure things as there are other components drawinig current. I have re-run that LLS mode application on a KL25Z RevE board, with R73. R81 and R74 removed (with jumper headers installed). Now I measure the current flow thorugh J4 with a multimeter, and I get 2.05 uA (with the spike you report during timer wake-up).
Hi Erich,
How do I use two timers at the same time?
One is counter in wake status and the interrupt period is 1ms
The other is for LLWU source and the interrupt period is 300ms.
Thanks.
BR,
Sean
Hi Sean,
not sure if I understand your problem, as there should be no problem to use multiple timers the same time?
Erich
Dear Sir,
Sorry for reply too late.
I used this example. And I used interrupt period 1ms in run status. But I want to change the interrupt period to 300ms in sleep status to wake up the device, and then change back to 1ms after wake up. How could I do?
Thanks.
BR,
Sean
If you are using my latest FreeRTOS component, then it has two additional hooks: vOnPreSleepProcessing() and vOnPostSleepProcessing(): you can use the hooks to do whatever you want, including changing the clock frequency.
Great info here, Erich!
I have successfully used LLS mode on the KL25Z. Now I was trying out the same on the FRDM-K64F Board, but ran into the following problem:
In Processor Expert –> CPU component Properties –> Low Power mode settings
…there are only two categories: “Allowed Power modes” and “Operation mode settings”. Where is the “LLWU settings” category?
Without that category it seems difficult to select a “wake-up source”, etc…
Any suggestions? Have you tried using the LLS mode on the K64F Board?
Regards,
Erik
Hi Erik,
thanks! But I have not used low power modes for the K64F yet. Simply because the boards I have used with it did not need in low power mode (always connected to power). But looking at the CPU component, it looks it now only supports the Kinetis SDK :-(. Which means a completely different way and method has to be used. I would need to wrap my head around this first (speaking in pictures)….
Erich
Hi Mr Styger
I am currently trying to use the CMP module in LLS mode. The CMP should wake up the MCU on rising and falling edge. As soon as the LLS is entered, the CMP module throws numerous interrupts (with a delay of 49us between the interrupts) through NVIC, altought there is no edge in the analog signal. The LLWU interrupt is only sporadically called by CMP, not on every edge as it should.
What i have already checked:
– voltage reference of CMP
– LLS works in combination with low power timer. If I add the CMP module, the LTPM does not work reliable any more.
– Resetting the interrupt flags of CMP (similar to LPTM) in LLWU interrupt routine.
– I considered the errata sheet, which contains two entrys of CMP and LLS => Recommended solutions do not solve the problem.
Project setup:
– Kinetis K22P64M120
– Eclipse with Processor Expert plugin
– IAR 6.6 (debug)
Have you ever used the CMP with LLS? Any suggestions?
Regards,
Mathias
Hi Mathais
I have not used the comparator, but here are a few extra things you could check:
– As you leave and enter low power mode the processor current will change and this may lead to changes in supply voltage. Could this be causing the comparator to trigger?
– Do you have plenty of power decoupling capacitance, and decoupling on your analog inputs?
– Have you enabled the comparator output function, so you can monitor the comparator’s output on a pin? Then you could be certain whether the interrupts you see coincide with real comparator transitions or some other reason.
– Explore settings for filtering and hysteresis. The KL15 data sheet says “The window and filter functions are not available in LLS”. Are you relying on some feature that vanishes when you sleep?
– Are you comparing two external voltages or one voltage and the internal DAC? What voltages do you expect these to be?
Hi
– Supply voltage looks smooth
– Decoupling capacitors are similar to TWRK21-Evalboard
– CMP works correctly => The problem must on NVIC or LLWU
– No filter or window functions are enabled. Hysteresis is set to 30mV.
– I compare an external votlage with the internal reference. I already changed it to comparing two external voltages, internal bandgap,.. with no effect.
I tested some more cases:
– If the CMP event is disabled in Processor Expert, it no longer throws interrupts through NVIC (makes sense). But the LLWU does not work too, respectively only throws ISR’s randomly.
– If the CMP event is disabled and it triggers only on rising edges, everything is functioning.
– IF the CMP event is enabled and it triggers only on rising edges, the LLWU functions and the NVIC only throws interrupts if the voltage at the CMP has a high level.
– If all events of Processor Expert are disabled and two CMP’s should trigger independently at either falling or rising edge, the LLWU stops working again. I’m really confused.
It seems that there are two problems:
1. NVIC interrupt throwing during LLS. Maybe a problem of priority..
2. LLWU fails, if two wakeup sources are active.
Have you ever used two wakeup sources of LLWU?
Regards,
Mathias
Hi Erich,
I am working with Freescale KL-15 processor and PE version (Version: 10.4.0
Build: b140319). Initially I am trying to use your FRDM KL-25Z LLS example, when I import the project in PE 10.4.0, I do not see the following three Methods under the CPU (SetClockConfiguration, GetClockConfiguration and SetOPerationMode). Any suggestions or ideas why these methods are not appearing under the CPU?
regards,
Mo
Hi Mo,
I created a project under DS 10.4.2 (means: I have the update 10.4.2 installed) for KL15Z128VLK4, and I do have these methods. Maybe it is because you do not have the 10.4.2 installed?
Erich
Hi Erich,
I changed the PE setting from Basic to Advance, now I can see the Methods…
regards,
Mo
Hi Mo,
ah, I did not think about this! Because I have set my default to Advanced all the time.
Erich
Hi Erich,
I am using FRDM-KL25Z and trying to use the RTC with KSDK 1.3 and PE. Mi idea is to wake up the processor from LLS mode with an alarm.
My problem appears when trying to set a date, no matter the fields in the structure rtc_datetime_t, the date always sets to:
{year = 2106, month = 2, day = 6, hour = 6, minute = 28, second = 15 ‘\017’}
I have connected PTC3 to PTC1 to clock the RTC with the 32kHz source. I am able to blink an LED in trtcTimer1_SecIRQHandler,so the RTC is working but problem is saving the correct date.
When I call RTC_DRV_SetDatetime, and step into it, I notice that in function CLOCK_SYS_GetExternalRefClock32kFreq it goes through this case:
case kClockEr32kSrcRtc: /* RTC 32k clock */
freq = g_xtalRtcClkFreq;
break;
And it returns a frequency of 0 as the variable g_xtalRtcClkFreq is global and never initialized. So the variable seconds in RTC_DRV_SetDatetime is set to 0 too. Do you know how can I solve this?
Thanks in advance.
Valentín
I could solve it by assigning 32768 to g_xtalRtcClkFreq in fsl_mcg_hal.c. But is it ok that I have to do it manually?
Thanks
Hi Valentin,
I have not used the SDK RTC component (yet?). But the advantage of the SDK is that you can do such things manually. So I think setting that manually is ok.
Hello Erich,
I would like to use the sleep mode (STOP) in KL25Z128 but to wake up use the different interrupts (ExtInt) that I have, Is this possible? Do you have some example? I have to use another sleep mode? I was checking the manual an it says “Normal stop mode (STOP): Asynchronous Wakeup Interrupt Controller (AWIC) is used to wake from interruptions.”
Thanks for all the help,
Iñaki
Here we are using the DOM_SLEEP -> Normal Stop (VPLS)?
Link 1:
Link 2:
I check that this project use DOM_STOP->LLS, but I have to use the SOM_SLEEP->Normal stop(VLPS) in order to use the interrupts? DO you have same example using this one?
I don’t have an example ready for that transition.
Yes, you can wakeup from STOP with external interrupts. You need to configure the controller that it uses your interrupt as wakeup source.
Hi Erich, very good post as always!
I’m having an issue here. I used a delay to blink a LED when the MCU wakes up. Firts time (before the first sleep), its blinks normal. But when the MCU sleeps, and wakes up after I send something on my UART port, the blink function takes a long time. I’m using the WAIT component with a 250ms delay. Seems like there’s an issue with my clock (after the sleep). It’s taking much more than 250ms to blink the LED.
Hi Henrique,
are you using a Cortex-M4(F) or the M0(+)? If you are using the M4(F) and have enabled the ‘Use Cycle Counter’ option, can you try turnig it off (I just fixed an issue with this today).
Thanks for the reply Erich! I’m using Cortex-M4 (MK20). I fixed the issue half an hour after I posted here. Inside APP_OnLLSWakeUpInterrupt(), I called Cpu_SetOperationMode() with DOM_RUN, and seems to work normal now.
Cpu_SetOperationMode(DOM_RUN,NULL,NULL);
I know this thread is very old but I am having issues finding a solution to my problem.
I am using the K10 series and would like to use the 4MHz internal clock while awake and the 32.768kHz internal clock while asleep.
Looking in the user manual doesn’t seem to be helping.
Am I missing something?
I cannot comment on the K10 as I have not used it in my designs, but I would think that if you use Processor Expert, things should be pretty much possible. You would need to go through the clock gear shifting. Processor Expert has the option to allow multiple different clock settings, and it generates the correct code to shift between them. Otherwise you might look a the MCUXpresso SDK (if it does support your device) if there are examples for this?
Hi,
It seems you have removed your code from git. Can you please share your kinetis code?
Thanks,
Bee
Hi Bee,
Ups! That project has been moved to a different folder on Git, it is now at. Thanks for pointing out that wrong link, I have fixed it now in the article!
Erich
Hi, Thanks, I am trying to repeat what you have done as a practice but I find several tiny issues with CodeWarrior that is slowing me down. I know this might not be directed to your post but your answer can be very valuable to me. I cannot find some of the components in my CodeWarrior. For example, I cannot find the WAIT or LED. I have read some of your pages about missing components. I refreshed the pages and also my software is updated. Do you have any suggestions?
Thanks,
Bee Zee
All these components are not part of CodeWarrior or Kinetis Design Studio (Processor Expert), but available on SourceForge as download. See for currently the latest release of the components. Installation instructions are on
I hope this helps,
Erich
Thanks alot, it was useful. I managed to add the components. I repeated your work but with a different FRDM board (KL-46Z256). In this case the current consumption is 18mA at 3V before the LED blinks and it is 23mA at 3V when the LED blinks. I think it is not gone to the low power mode. I am not expert and probably I have done something wrong but the only thing I changed compare to your code is the LEDs pin numbers (Since it is different for KL-46Z256). I can see on the board the LED is blinking, that means interrupt has happened. Do you have any example for KL-46Z256 or how to debug with CodeWarrior to find if the system is really gone to low power sleep or not? I used this application.c code (changed your code slightly) :
/*
* Application.c
*
* Created on: Mar 16, 2014
* Author: Erich Styger
*/
#include “Application.h”
#include “LED1.h”
#include “LED(); /* GREEN LED */
}
void APP_Run(void) {
//LED2_On(); /* RED LED */
//WAIT1_Waitms(1000);
//LED2_Off();
for(;;) {
//LED2_Neg(); /* RED LED */
//Cpu_SetOperationMode(DOM_WAIT, NULL, NULL); /* next interrupt will wake us up */
//Cpu_SetOperationMode(DOM_SLEEP, NULL, NULL); /* next interrupt will wake us up */
Cpu_SetOperationMode(DOM_STOP, NULL, NULL); /* next interrupt will wake us up */
} // IS THIS FOR LOOP CORRECT?
}
that current amount indicates that your *not* in sleep mode. Without seeing the full picture it is hard to say what is missing. But have you measured it with or without the debug cable attached?
Thanks, I looked at the problem in details. I am sure my current reading is correct as I have done it several times in the past, with other MCU as well and I read a correct value.
I have saved 2 videos and photos here:
I share a video with you which shows that the current consumption changes slightly when an interrupt happen (when the yellow LED is on). This is shown in the “FirstVideo” and the code is in a png file as “CodeForFirstVideo.png”. Then secondly I put and LED2_Neg(); /* RED LED */ inside the for(;;) loop and run the code. In the “SecondVideo” you can see the RED LED is blinking. I also put the code “CodeForSecondVideo.png”.
So I guess I am in the for loop and also the interrupt happens but something is wrong then.
My question is: APP_OnLLSWakeUpInterrupt() function is responsible for interupt or APP_TimerInterrupt()? In my case APP_OnLLSWakeUpInterrupt() is not sending an interupt.
Thanks,
Bee Zee
Hi,
APP_TimerInterrupt() is only used to toggle the RED LED to show the system is running. It gets called by the T1 (LPTMR_CMR) interrupt which wakes up from LLS mode.
In case the CPU wakes up from LLS, it calls Cpu_OnLLSWakeUpINT() which then calls APP_OnLLSWakeUpInterrupt(). If APP_OnLLSWakeUpInterrupt() does not happen, your system is not waking up from LLS.
I hope this helps,
Erich
Hi Erich,
Do you know if there is anyway to look at the peripherals in CodeWarior and see that low power mode has happened? I am looking for a method that shows we are in low power without measuring the current every time. I have done your work but it seems Cpu_SetOperationMode(DOM_STOP, NULL, NULL); /* next interrupt will wake us up */
does not do anything and I am not in low power stop mode, even if I go to the function I do not know what to check or look for to find the problem.
Thanks,
Markus
To my understanding, there is no such information on the device itself. What I did is measuring the current to check if it really enters low power mode. Keep in mind that you should disconnect the debugging cable (and not debugging it while in low power mode).
I hope this helps,
Erich
Hi Erich,
I make low power in tiny KL05 with full success (2uA in stop mode). But when I added SynchroMaster component for NRF24L01 it will stop working. The MCU doesn’t wake UP. Any suggestion to resolve this problem? My repo
I’m not aware of such a problem, but I admit I have not used the KL05Z in such a configuration. Are you sure the wake-up settings for the interrupts are not changed somehow?
Does it wake up with the other wake-up sources?
This is exactly the same example which you describe in this article, but with SynchroMaster component. I check fiew time and when I add this SM1 component, DOM_STOP doesn’t work – with DOM_SLLEP and DOM_WAIT work fine but with DOM_STOP doesn’t. I add Cpu_SetOperationMode(DOM_RUN, NULL, NULL) on wake event up but it does’t change anything 😦
UPDATE:
My Gosh! I found stupid mistake! In TimerInt and Interrupt period I didn’t change clock source to LPO_1kHzSrc 😦
Uff, now it works perfect
:-). Glad to hear that it has been sorted out, and thanks for putting up the solution here!
Hi Erich,
I try to use interrupt period with interval or list of values. When I change Runtime settings type from fixed to interval or list in Timing dialog I have message: “Inherited component does not support this feature: Runtime setting type interval”. I would like to use SetPeriodMS method in TimerInt but I can’t change it to enable in Component Inspector (the method is gray).
I would be very grateful for your help.
I found this solution
I use SetPeriodTicks and it works fine for my project.
Ok, that works too! | https://mcuoneclipse.com/2014/03/16/starting-point-for-kinetis-low-power-lls-mode/ | CC-MAIN-2020-40 | refinedweb | 3,913 | 73.27 |
NAME
utime, utimes - change file last access and modification times
SYNOPSIS
#include
<sys/types.h>
#include <utime.h>
int utime(const char *filename, const struct utimbuf *times);
#include <sys/time.h>
int utimes(const char *filename, const struct timeval times[2]);
DESCRIPTION
Note:
CONFORMING TO
utime(): SVr4, POSIX.1-2001. POSIX.1-2008 marks utime() as obsolete.
utimes(): 4.3BSD, POSIX.1-2001.
NOTES
Linux does not allow changing the timestamps on an immutable file, or setting the timestamps to something other than the current time on an append-only file.
SEE ALSO
chattr(1), touch(1), futimesat(2), stat(2), utimensat(2), futimens(3), futimes(3), inode(7)
COLOPHON
This page is part of release 5.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://man.cx/utime(2) | CC-MAIN-2020-29 | refinedweb | 144 | 66.64 |
Makes a soft body from the object(s) passed on the command line or in the selection list. The geometry can be a NURBS, polygonal, lattice object. The resulting soft body is made up of a hierarchy with a particle shape and a geometry shape, thus: T / T G / P Dynamics are applied to the particle shape and the resulting particle positions then drive the locations of the geometry’s CVs, vertices, or lattice points. With the convert option, the particle shape and its transform are simply inserted below the original shape’s hierarchy. With the duplicate option, the original geometry’s transform and shape are duplicated underneath its parent, and the particle shape is placed under the duplicate. Note that any animation on the hierarchy will affect the particle shape as well. If you do not want then, then reparent the structure outside the hierarchy. When duplicating, the soft portion (the duplicate) is given the name copyOf+ original object name. The particle portion is always given the name original object name+ Particles.None of the flags of the soft command can be queried. The soft -q command is used only to identify when an object is a soft body, or when two objects are part of the same soft body. See the examples.
Derived from mel command maya.cmds.soft
Example:
import pymel.core as pm pm.sphere() # Result: [nt.Transform(u'nurbsSphere1'), nt.MakeNurbSphere(u'makeNurbSphere1')] # pm.soft( 'nurbsSphere1', c=True ) # Result: [u'nurbsSphere1Particle'] # # Creates a sphere named nurbsSphere1 and converts nurbSphere1 into # a soft object. The particle portion of the soft object will # be parented (with its own transform) under nurbsSphere1. pm.sphere() # Result: [nt.Transform(u'nurbsSphere2'), nt.MakeNurbSphere(u'makeNurbSphere2')] # pm.soft( 'nurbsSphere1', d=True ) # Same as the previous example, except that the soft command will make # a duplicate of nurbsSphereShape1. The resulting soft body will be # completely independent of nurbSphere1 and its children. Input connections # to nurbsSphereShape1 will be duplicated, but not any upstream history # (in other words, just plain "duplicate"). pm.sphere() pm.soft( 'nurbsSphere1', dh=True ) # Same as the previous example, except that upstream history on # nurbsSphereShape1 will be duplicated as well (equivalent to # "duplicate history"). pm.sphere() pm.soft( 'nurbSphere1', g=0.3 ) # This will make a duplicate of the shape under nurbSphere1 (as for -d), # and use it as the shape for the newly created soft object. # The original nurbsSphereShape1 will be made a goal for the particles of # softy, with a goal weight of 0.3. This will make those particles try to # follow nurbSphere1 loosely as it moves around. pm.soft( 'foobar', q=True ) # Returns true if foobar is a soft object. pm.soft( 'foobar', 'foobarParticles', q=True ) # Returns true if foobar and foobarParticles are parts of the same # soft object. This is useful because when you select a soft body, # both the overall transform and the particle transform get put into # the selection list. | http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.effects/pymel.core.effects.soft.html#pymel.core.effects.soft | crawl-003 | refinedweb | 487 | 56.35 |
Recently Technical Fellow Anders Hejlsberg visited MDCC. He presented the latest landmark he has been working on: TypeScript. TypeScript enables compile time checking of JavaScript code – namespaces, OO classes etc. Further it gives you great tooling – such as auto-complete and refactoring. TypeScript is implemented in JavaScript and can be used in any existing JavaScript application – and it can be enabled as gradually, making adoption easier. And it is Open Source. If you are writing JavaScript you will love this.
For more information on TypeScript visit:
I’m consistently impressed with Anders Hejlsberg and his team’s accomplishments. On a less serious note, I was quite amused while watching this talk. We got optional parameters and dynamic variables in C# 4.0. Features X++ has had since Day 1. TypeScript also supports optional parameters, and JavaScript is by nature a dynamic language. Some of the value offered by TypeScript relies on Duck-typing (“If it quacks like a duck – it is a duck”). For better or for worse X++ behaves quite identically. Now; if you pay close attention to the video, you will notice about 38:06 into the video, that the dangling semicolon also found its way to JavaScript. Perhaps X++ is really the superset of all languages? :-)
FYI - Anders Hejlsberg covered the same material on Channel 9. | http://blogs.msdn.com/b/mfp/archive/2012/11/05/typescript-techtalk-at-mdcc.aspx | CC-MAIN-2015-06 | refinedweb | 219 | 67.25 |
#include <FilterWriter.h>
List of all members.
For example:
StdFile outfile("file.dat", StdFile::e_write_mode); FilterWriter fwriter(outfile); fwriter.WriteBuffer(buf, buf_sz); fwriter.Flush();
Write a single character to the output stream.
Write an integer to the output stream.
Write an long integer to the output stream.
Write an unsigned long to the output stream.
Write a string to the output stream.
Write a null terminated string.
Write the entire input stream to the output stream (i.e.
to this FilterWriter).
Write out a null terminated 'line' followed by a end of line character default end of line character is carriage return.
This number may less than buf_size if the stream is corrupted.
Attaches a filter to the this FilterWriter.
Reports the current read position in the stream relative to the stream origin.
Forces any data remaining in the buffer to be written to input or output filter.
Forces any data remaining in the filter chain to the source or destination. | http://www.pdftron.com/net/html/classpdftron_1_1Filters_1_1FilterWriter.html | crawl-001 | refinedweb | 161 | 62.95 |
Red Hat Bugzilla – Bug 798806
RHLogin merged 2 users into 1 account?
Last modified: 2015-05-14 21:04:55 EDT
Description of problem:
This was on #openshift 2/29/2012 @2PM PST with gastaldi (George Gastaldi).
He had a login that has always been associated with a specific domain where he was testing Express:
- RHLogin: gegastaldi@gmail.com
- namespace: gastaldi
After this weekend, logging in with gegastaldi@gmail.com no longer associates with gastaldi namespace. It now associates to 'gegastaldi' namespace which he created cause he noticed that all his work was lost.
He has another way to login with username 'george.gastaldi' but that is also associated with 'gegastaldi' namespace.
Shouldn't these be 2 different logins with 2 different domains? Why the sudden merge? He has no other logins to OpenShift.
Also, blentz checked the backend. Seemed the 2 domains were associated to the correct rhlogin too:
- gastaldi ==> gegastladi@gmail.com
- gegastladi ==> george.gastaldi
Update from our user:
I managed to reproduce that bug that I told you, about my accounts disappearing.
The fact is that I can login using the same e-mail but with different password.
If I login using username ="gegastaldi@gmail.com", with password "123456", I can see the applications on the gegastaldi namespace
If I login using username ="gegastaldi@gmail.com", with password "654321", I can see the applications on the gastaldi namespace.
You may login on my account using these credentials to check it out.
Got around to testing this only today and as of now the username with password 654321 is the only set of credentials that works. The username with 123456 as password returns invalid credentials (authentication failed).
Haven't heard of this case from any other user. Pushing this to ON_QA for QE to decide whether we can close this bug for now or keep it open.
Can not reproduce the issues above, and 1 account with 2 password is not valid now, so verify this bug. | https://bugzilla.redhat.com/show_bug.cgi?id=798806 | CC-MAIN-2017-34 | refinedweb | 329 | 67.45 |
*
Designing a Java DSEL
Garrett Rowe
Ranch Hand
Joined: Jan 17, 2006
Posts: 1296
posted
Jun 03, 2007 17:37:00
0
Off and on for the past few weeks, I been playing around with making a
Craps
simulator in
Java
. Everything was going along pretty well, I had a DiceRoller object that was responsible for rolling the dice, and publishing events to all RollListeners that were registered to it.
public interface RollListener { void rollOccurred(RollEvent event); }
I then created a Bet interface that extended the RollListener interface.
public interface Bet extends RollListener { public enum Status { ACTIVE, WON, LOST; } Status getStatus(); BetType getType(); int getAmount(); CrapsPlayer getPlayer(); int getRollCount(); int getPoint(); }
The tedious portion would be coding the myriad of concrete classes to represent the logic for each Bet:
public class PassBet implements Bet{} //... public class DontPassBet implements Bet{} //... public class Hard6Bet implements Bet{}
The Bets would all basically do the same thing, when a RollEvent was fired, they would examine the RollEvent and see if it meant they won, lost, or just continued on. If they won, they would calculate the payout based upon the amount bet, and then transfer that amount to the player via CrapsPlayer.acceptWinnings(int). If they lost, they would do nothing but change their status to Status.LOST, and the CrapsTable would take care of unregistering them as listeners to the DieRoller. After coding about three of the concrete Bet implementations, I just got bored with the whole idea. Then I came across
this
paper from the jMock guys about creating an EDSL in Java. Inspired by the coolness of it all, I decided to develop my Bet system to be a blatant plagiarism of jMock's declarative style. Now I can declare all my Bets using their own *special-syntax* like:
/** Win when 7, or 11 comes on the first roll, or when a point comes after the first roll. Lose when 2, 3, 0r 12 comes on the first roll, or when a 7 comes after the first roll*/ Bet passLineBet = new BetBuilder(BetType.PASS) .amount(AMOUNT) .winOn(or( and(or(any(7), any(11)), firstRoll()), and(point(), not(firstRoll())) )) .loseOn(or( and(or(any(2), any(3), any(12)), firstRoll()), and(any(7), not(firstRoll())) )) .payout(evenMoney()) .payWinningsTo(player) .toBet(); /*Win if the first roll is 2, 3, 4, 9, 10, 11, or 12. If you don't win, you automatically lose (its a one-roll bet). If you win, even money gets paid to PLAYER.*/ final BetEvaluator winners = and(or(any(2), any(3), any(4), any(9), any(10), any(11), any(12)), firstRoll()); Bet fieldBet= new BetBuilder(BetType.FIELD) .amount(AMOUNT) .winOn(winners) .loseOn(not(winners)) .payout(evenMoney()) .payWinningsTo(PLAYER) .toBet();
any(int) is a static method of BetBuilder which returns a BetEvaluator object
public interface BetEvaluator { Boolean evaluate(Bet b, RollEvent re); }
or(BetEvaluator...) and and(BetEvaluator...) are also static methods of the BetBuilder class which accept an arbitrary number of BetEvaluators and return a BetEvaluator.
I think its clearer when I read the code exactly what the rules are for the Bet, and it makes it simple to create new Bet types with new expectations.
I'm interested in hearing what others think of this declarative style embedded in a Java program.
[ June 03, 2007: Message edited by: Garrett Rowe ]
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter
I agree. Here's the link:
subject: Designing a Java DSEL
Similar Threads
Finding a suitable abstraction
Help with Control Structures.
Help with this code
Trouble changing variable within loop
Annoying Error
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/100522/patterns/Designing-Java-DSEL | CC-MAIN-2014-52 | refinedweb | 627 | 59.23 |
- Ext.view.View error "no method addBodyCls"
- [MVC] How to get grid action column with component query on this.control()
- Increase height of htmlEditor on change in extJS 4
- How i can do two or more line's on tips?
- problem tab listener getting grid.hasselection() extjs 4.2
- Dynamically Changing Combobox displayField Property
- What can I do to speed up rendering?
- Saving Store/Model in Memory
- Grid Cell Editing Editor - selectOnFocus
- How to enter text into editable grid as all UPPERCASE?
- ExtJs components disappear on Chrome resize to full screen
- fire custom event on image el not working
- Get response headers from inside model.save()'s failure callback.
- animate initial display of a component?
- Nested Card Layouts and TreeGrid
- Chart & RTL/Right To Left Mode
- calling parent method from child instance
- Image setSrc() does not update image.
- Geolocation Google Maps API in Extjs 4.2
- Cannot sync store with hasmany association
- Rendering custom xtype widget to a container
- Incorrect colors of pie charts extjs
- Extending the enter App class
- How to create master page for mvc application
- ExtJS 4.2.1 - Issues with sync on deleted records and on newly-created/edited records
- IE11 support
- Zeroclipboard with ExtJS 4
- How to extend customizing class which extens ExtJS class ?
- Read/write sharepoint list data using Ext JS
- Unwanted white space after panel boudary
- Extjs 4.2.1 apply filter on treeCombo
- Help! Upgrade Extjs from 4.2.1 to 4.2.2 performance problem
- In IE when first click on Grid with selType: checkboxmodel, page scrolling jump up
- How to make Extjs 4.2 Tree panel as accessible with ARIA properties?
- Recommended architecture for Desktop application in Ext JS 4
- Why in the grouping grid new records added to the end?
- loadRecord into a radio button
- How to prevent new but unedited(empty) grid record from being submitted on save?
- iframe z-index
- Why does making window background transparent mess up the layout?
- ExtJS Grid on refresh
- How to remove tools in panel while doing collapsible when panel in expand mode
- grid cellmousedown event is creating error
- How add request id
- TreeStore / NodeInterface issue - Why is response data not take for new node?
- Ext.util.Format
- Updating container with text - Line Breaks
- Get/Send value from/to Database
- Raphael Wrapper needed for 4.1
- Grid header filter plugin using remote store for 4.2?
- Migrating Extensible calendar app from Ext3 to Ext4.1
- Grouping Property Grid
- Nested tab panel in grid rowexpander
- Sort column by dynamic column value
- Tree-column icon tooltip
- Events and Mixins - Where to add things
- close after submit event
- Custom Drag And Drop
- Panel connect with arrows
- Submit form target in extjs 4
- Setting a fixed top style for window/panel ,and also enable Dragging
- Ext 4.2 and IE8
- Collapse tool placement on collapsed panel headers with icon
- Setting "defaultUnit" to use percentage
- Panel scrolls to top when child components are dynamically shown/hidden
- UX Development: Add Controller causes in typeError
- Problem with IE 9 while using Extjs 4.2.1
- new Date not working for Korean format
- how to know when you loaded the last store?
- Accessing the container of a XTemplate object
- Extjs4.2.1- after loading data manually, store.group() function of store not working.
- Anywhere specific for logging Accessibility bugs ?
- Where can I set the STRIPEROW color in a theme
- ExtJS Load Data on Ajax return (json) and load store for grid.panel
- Uncaught TypeError: Object prototype may only be an object or null
- Issue with expand all when using rowexpander+bufferedgrid
- Is GNU GPL license v3 free to use for a school project?
- Getting error as "Ext.Error: You're trying to decode an invalid JSON String:"
- Drag-and-Drop to select locked and unlocked grid columns
- Treepanel expand/collapse on a node without children
- Are there any tool available for automated code review and checkstyle for extjs
- KNX visualisation client
- Add item to RadioGroup with ExtJS 4.1
- Delegate listener to ExtJS component inside view
- Need urgent help on expand/collapse on Ext.grid.Panel
- Switchable ARIA / accessibility support?
- Checkbox template example
- how to simulate a click on a menu in web desktop
- drag an drop not giving same results as store.add(record)... some records.data undef
- Unable to read edited data in extjs Grid
- Accessibility Application Sample in Extjs 4.2.1
- Ext JS 4.2 not working in IE8
- load sore and grid with data returned from ajax request
- Extj4.2.1 - BoxSelct Component not avialable.
- How to fire event when element is added to panel
- Column Reorder not working on Grids
- Drag and Drop with Ext-JS 4 Charts?
- Adding container to container
- Adding a close button to a window
- How to send updated PropertyGrid feilds
- Multiselect ComboBox's getValue() returns different types
- Reconfigure infinite grid?
- Fully Editable Grid At Once, Has Anyone Ported or Developed?
- Using shared models for two apps
- got Splitter working by using custom theme instead of default
- Why is the event I fire in my grid panel action column handler not heard by grid.
- Why does clicking icon in grid panel actioncolumn not select the row?
- Why the subTable can not operate like a normal table
- remove + and - icon in row expander
- Grid is not displaying store data
- Dynamically call a function inside Ext js class
- Encrypting the data
- Why not refresh after calling Ext.create?
- Using Textarea with Grid.RowEditing plugin
- Can't find the correct layout solution to build my page
- store revert to initial sort?
- Parsing very large JSON files
- autoSize()-ing grid columns after a reconfigure()
- Change tool tip at runtime
- Does beginEdit() still help when using rec.data?
- Creating dynamic textfields then inserting them into form?
- Dynamic Model's Best Practices
- toggle button's event handler not called when toggle() called
- Extjs tree error: Cannot call method 'getRootNode' of undefined
- What event for when click tree and tab and different panel displayed.
- Remove fbar in panel
- How to listen for event on Xtemplate element inside controller?
- Grid Panel data is not displaying properly
- Google Maps API not showing
- How to use the loaded library files in application
- CSS link missing (loader problems in IE)
- How to use disabledCls style in 'actioncolumn'?
- Ext.tree.Panel afteritemexpand event is not working in Mozilla Firefox browser
- After deleting a record in grid key board up & down arrows are not working
- Error for dynamic text in nested view, non-nested view is ok.
- Auto show/hide scrollbar
- stateful window position?
- Unable to zoom chart in ExtJS 4.2.2
- In Extjs4.2.2 Column header height will get increase in IE-7,IE-8 for ColumnLock Grid
- JSON issue
- how to responsive panel?
- How to do multiple grouping in grid?
- Slider component with width
- How to display a hasMany association in a Datagrid
- RowExpander - template is not rendered
- CDN for 4.2.2
- Access nested(belongsTo) JSON in Grid/column:templatecolumn/tpl:
- Problem with Accordian Layout
- Component with iFrame tag is not loading in IE and FireFox
- Semantic Url
- Problem with bar chart
- iPhoneSwitch plugin error.
- How integrate ExtJS to IPB 3.4.X ???
- mixins help
- Ext.Queryable - child and down function warnings when more than 1 match
- Locked grid columns messing with Grid Printer and LiveSearchGridPanel plugins
- Reset combobox without initiating Beforequery event
- Display field Value above label when using CSS
- combobox background transparent / opacity
- Howto get Application.name out of UX
- How to send data into server-side by using Ext.data.DirectStore
- Extjs 4.2.2 - BoxSelect(Ext.ux.form.field.BoxSelect) - Default selection and width
- Help on Grid with RowExpander include Nested Grids and Editable fields
- store.sync() called twice causes duplicate insert
- highlighting the current menu with changing the background color
- Initial server load, all calls after are from local.
- Drop between rows
- Grid filters titles
- load time increases progressively
- Seeing the following warning on Chromes Javascript Console
- Howto access loading Class out of Mixins Function (this; parent; ...?)
- How to float text over panel
- Understanding Sencha Cmd
- Ext.form.TextArea: how to disable spellcheck?
- Ext.form.TextArea: how to disable spellcheck?
- Testing viewcontroller event handler is fired using Deft, EXTJS4 and Jasmine
- Resize view port
- Moving vertically (top-down) on Grid using Tab Key
- table layout problem
- TreeStore sorting behavior
- Why does not api property(create, remove) work property ?
- Bug fieldLabel - ComboBox
- How do I set a validator for a field outside of the initialization? (in Controller)?
- Grid Cell Editing not working in MVC 4.2
- Delete with rest proxy sends whole record
- How to add a custom sort for table column click event
- How to Reload/refresh current page only on clicking Browser refresh button (Chrome)
- ExtJS chart issue for DST changes
- Override associations getter method.
- Extjs 4 Combobox with check boxes and select All option
- How to access two levels of data in grid model?
- textfield with place holder and icon
- need New Trigger Field Component for tab flow issue
- Customized trigger field component is not able to disabl the textfield alone,
- How to dynamically add / remove panel tools item ?
- Image resizing in HtmlEditor doesn't work on chrome.
- Datefield format 'm/Y' error
- hasMany
- EXTJS: How do you indicate that a UI supports drag and drop?
- How to dynamically generate or reload a tree in treepanel?
- How can I get my tree data loaded in my tree panel?
- "store" some temporary defined user data
- Problem with tree panel, treestore load and mvc
- How to get all child components from a parent container and hide them
- How do I auto expand/auto select added node when adding nodes to ExtJS 4.2 treepanel?
- ExtJS 4.2 grid loading problem
- Strange issue with DateField
- Dynamic loading in production.
- protect application against CSRF attack.
- Save dynamically created stores to local array and just save them on "Save changes"
- capturing scrolling in GridPanel
- Experimental card flip transition - nearly working
- Not able to listen to click event on a div a treecolumn
- Extjs adding an click event listener on a div for treecolumn not working
- Deploy ExtJS 4.2.2 to production for app not initially created with Sencha Cmd.
- how to properly extend, clone, array, define, create, copy, duplicate, add, remove
- How to Hide the Button in Docked Items of Grid?
- disable the back end windows of popup window
- How do I handle server push within ExtJS 4.2.2 app?
- How to extend 'Ext.grid.feature.Summary'?
- what is the difference between Ext.onReady() and using Ext MVC
- image event handling issue
- Ext JS 4 and CORS
- What should be the response time of the support team?
- Horizontal Data inside Ext.view
- Drag and drop: work out index position of dropped element
- Sencha web server
- TabPanel theme bug
- Not able to access the window which i created in controller file itself.
- When mouse over first row-Uncaught TypeError: Cannot call method 'removeCls' of null
- How to refresh an infinite grid?
- Bar chart columns animate twice
- Return the modified checkbox for the treepanel
- Read XML data issue
- Unable To Acces Column Search tab by the Touch Pad Using Safari browser
- extjs phantomjs maven integration
- Error on using floating Tabpanel/GroupTabPanel inside Window
- Position of action column not well-defined
- Uncaught InvalidCharacterError: The string contains invalid characters. model.set()
- ExtJs 4.2.1: Two Load masks are seen when a panel is loading
- Destroying Grids
- Ext.DomQuery.select failing if comma in selector
- Parse XML data question
- splash screen not displayed until application loaded
- How does Sencha keep its codebase so clean and organized?
- Applying Load Mask To Grids in Tabs using CSS.
- Paging Toolbar Help
- Combining Theme Mixins for 1 UI?
- Please help! problem with tree grid! | https://www.sencha.com/forum/archive/index.php/f-87-p-65.html?s=4afd0cf4f78e737f9af26d00bf104c19 | CC-MAIN-2020-05 | refinedweb | 1,942 | 54.12 |
The Data
We will train and test on anything that’s easy to parse. Up today is a basic English part-of-speech tagging for Twitter developed by Kevin Gimpel et al. (and when I say “et al.”, there are ten co-authors!) in Noah Smith’s group at Carnegie Mellon.
The relevant resources are:
- Paper: Gimpel, Kevin et al., 2011, Part-of-Speech Tagging for Twitter: Annotation, Features and Experiments. ACL. [pdf]
- Google Code Page: ARK Tweet NLP Home Page
- Corpus: ARK Tweet Manually-Annotated POS Corpus, Version 2 [tarred,gzipped]
Their paper describes their tagging scheme as well as their CRF-based tagger. It uses Stanford’s CRF tagger with baseline features as a performance comparison. The code for their tagger’s also in the distribution. I’m not sure what the license is — it’s listed as “other open source” (I didn’t even know Google Code let you do that — I thought it was “free beer” or nothing with them).
Training and Evaluating a LingPipe POS Tagger
Their corpus was very easy to parse (thanks, I really appreciate it). It only took me about an hour or so to download the data, parse it, and evaluate LingPipe’s baseline POS tagger on it. (It helps to be the author of code. The patterns feel awfully comfortable.)
Our performance was 85.4% accuracy on their train/test split using the default parameters for tagging in LingPipe. In contrast, the Stanford CRF tagger with default features was 85.9% accurate, whereas Gimpel et al.’s tagger achieved 89.4% accuracy. As usual, LingPipe’s HMM tagger is competitive with out-of-the-box CRFs and a few percentage points behind tuned, feature-rich CRFs.
Their paper (on page 5) says the annotator agreement is 92.2%. They also break accuracy out per tag, which LingPipe’s output also does; you can see this yourself if you run it.
LingPipe’s Baseline POS Tagger
The baseline POS tagger in LingPipe is a bigram HMM with emissions defined by a bounded character language model. Estimation is with simple additive smoothing (i.e., MAP estimates given symmetric Dirichlet priors) for the initial state and transition probabilities and Witten-Bell smoothing for the character LMs. Our main motivation for doing things this way is that (a) it’s online, letting us train an example at a time, and (b) it’s reasonably fast when it runs. We should be able to decode this tag set at well over 500K tokens/second by turning on caching of character LM results and pruning.
We could also implement their approach using LingPipe’s CRFs. It’s just that it’d take a bit longer than an hour all in.
Run it Yourself
You can get their code from their project home page, linked above.
All of my code’s checked into the LingPipe Sandbox in a project named “twitter-pos”. You can check it out anonymously using Subversion:
svn co
The code’s in a single file, stored under the
src subdirectory of the package:
package com.lingpipe.twpos; import com.aliasi.classify.*; import com.aliasi.corpus.*; import com.aliasi.io.*; import com.aliasi.hmm.*; import com.aliasi.tag.*; import java.io.*; import java.util.*; public class Eval { public static void main(String[] args) throws IOException { System.out.println("Reading Corpus"); TwitterPosCorpus corpus = new TwitterPosCorpus(new File(args[0])); System.out.println("Training Tagger"); HmmCharLmEstimator hmm = new HmmCharLmEstimator(); corpus.visitTrain(hmm); HmmDecoder tagger = new HmmDecoder(hmm); System.out.println("Evaluating"); boolean storeTokens = true; TaggerEvaluator evaluator = new TaggerEvaluator(tagger,storeTokens); corpus.visitTest(evaluator); System.out.println(evaluator.tokenEval()); } static List<Tagging> parse(File f) throws IOException { List<Tagging> taggings = new ArrayList<Tagging>(); FileLineReader reader = new FileLineReader(f,"UTF-8"); List tokens = new ArrayList(); List tags = new ArrayList(); for (String line : reader) { String[] tokTag = line.split("\\s+"); if (tokTag.length != 2) { taggings.add(new Tagging(tokens,tags)); // System.out.println("tokens=" + tokens); // System.out.println("tags=" + tags); tokens = new ArrayList(); tags = new ArrayList(); } else { tokens.add(tokTag[0]); tags.add(tokTag[1]); } } return taggings; } static class TwitterPosCorpus extends ListCorpus<Tagging> { public TwitterPosCorpus(File path) throws IOException { for (Tagging t : parse(new File(path,"train"))) addTrain(t); for (Tagging t : parse(new File(path,"dev"))) addTrain(t); for (Tagging t : parse(new File(path,"test"))) addTest(t); } } }
LingPipe’s pretty fast for this sort of thing, with the entire program above, including I/O, corpus parsing, training, and testing taking a total of 5 seconds on my now ancient workstation.
Although it wouldn’t be a fair comparison, there’s usually a percent or so to be eked out of a little tuning in this setting (it would’ve been fair had I done tuning on the dev set and evaluated exactly once). This was just a straight out of the box, default settings eval. In general, one shouldn’t trust results that report post-hoc best settings values as they’re almost always going to overestimate real performance for all the usual reasons.
Finally, here’s the confusion matrix for tags in the first-best output:
,D,E,#,!,G,&,@,A,$,L,N,O,,,U,T,V,P,S,R,~,^,X,Z D,446,0,0,1,0,0,0,4,0,0,0,7,0,0,0,0,11,0,7,0,8,1,0 E,0,53,0,1,2,0,0,0,1,0,0,0,5,0,1,0,0,0,0,0,0,0,0 #,0,0,44,0,1,0,0,0,0,0,10,0,0,0,0,3,0,0,0,0,20,0,0 !,0,0,1,140,1,0,0,5,0,1,15,5,0,0,0,3,1,0,7,0,7,0,0 G,1,1,5,2,14,0,0,1,3,0,10,0,10,0,0,4,1,0,1,2,15,0,0 &,0,0,0,0,0,122,0,1,0,0,1,0,0,0,0,0,1,0,1,0,1,0,0 @,0,0,0,0,0,0,328,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0 A,0,0,0,1,0,1,0,248,3,0,44,0,0,0,2,30,2,0,24,0,12,0,0 $,0,0,0,0,0,0,1,0,79,0,2,0,0,0,0,0,3,0,0,0,0,0,0 L,2,0,0,0,1,0,0,0,0,120,3,1,0,0,0,2,0,0,0,0,0,0,0 N,1,0,1,5,1,0,0,49,1,1,783,2,0,0,2,52,6,0,14,0,63,0,0 O,4,0,0,0,1,0,0,2,0,0,2,456,0,0,1,0,0,0,2,0,4,0,0 ,,0,4,0,0,2,0,0,0,0,0,0,0,861,0,0,2,0,0,0,11,0,0,0 U,0,0,0,1,0,0,0,0,0,0,0,0,1,114,0,0,0,0,0,0,1,0,0 T,0,0,0,0,0,0,0,0,0,0,0,1,0,0,24,0,9,0,1,0,1,0,0 V,0,1,0,0,0,0,0,21,0,1,69,1,0,0,0,921,9,0,7,2,21,0,0 P,2,0,0,1,0,0,0,4,1,0,1,0,0,0,11,6,571,0,12,0,4,0,0 S,0,0,0,0,0,0,0,0,0,0,3,0,0,0,0,0,0,2,0,0,1,0,0 R,4,0,0,1,0,0,0,13,0,0,20,1,0,0,1,6,15,0,269,0,8,1,0 ~,0,0,0,1,1,0,0,0,0,0,0,0,32,0,0,1,0,0,0,177,0,0,0 ^,1,0,4,1,2,0,0,29,2,0,101,0,2,0,0,16,4,0,1,0,331,0,1 X,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2,0,0,3,0 Z,1,0,0,0,0,0,0,0,0,1,4,0,0,0,0,0,0,1,0,0,13,0,2
I should really figure out how to format that a bit more neatly.
November 4, 2011 at 2:10 pm |
Nice! BTW it’s Apache License.
November 14, 2011 at 7:15 pm |
[…] Twitter POS Tagging with LingPipe and ARK Tweet Data by Bob Carpenter. […] | https://lingpipe-blog.com/2011/11/04/twitter-pos-tagging-with-lingpipe-ark-tweet-data/ | CC-MAIN-2019-35 | refinedweb | 1,453 | 56.66 |
News aggregator
Douglas M. Auclair (geophf):
Jasper Van der Jeugt: Patat and Myanmar travels
At work, I frequently need to give (internal) presentations and demos using video conferencing. I prefer to do these quick-and-dirty presentations in the terminal for a few reasons:
- I don’t spend time worrying about layout; terminal stuff always looks cool.
- I want to write markdown if possible.
- You can have a good “Questions?” slide just by running cowsay 'Questions?'
- Seamless switching between editor/shell and presentation using tmux.
The last point is important for video conferencing especially. The software we use allows you to share a single window from your desktop. This is pretty neat if you have a multi-monitor setup. However, it does not play well with switching between a PDF viewer and a terminal.

Introducing patat
To this end, I wrote patat – Presentations And The ANSI Terminal – because I was not entirely happy with the available solutions. You can get it from Hackage: cabal install patat.
patat screenshot
You run it simply by doing:

    patat presentation.md
The key features are:
Built on Pandoc:
The software I was using before contained some Markdown parsing bugs. By using Pandoc under the hood, this should not happen.
Additionally, we get all the input formats Pandoc supports (Literate Haskell is of particular importance to me) and some additional elements like tables and definition lists.
Smart slide splitting:
Most Markdown presentation tools seem to split slides at --- (horizontal rulers). This is a bit verbose since you usually start each slide with an h1 as well. patat will check if --- is used and if it’s not, it will split on h1s instead.
Live reload:
If you run patat --watch presentation.md, patat will poll the file for changes and reload automatically. This is really handy when you are writing the presentation, I usually use it with split-pane in tmux.
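Concretely, the workflow might look like this (a sketch only; it assumes patat and tmux are installed, and the split-window command is run from inside an existing tmux session):

```
# Left pane: your editor. Right pane: patat re-renders whenever the
# file is saved.
tmux split-window -h 'patat --watch presentation.md'
```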
An example of a presentation is:

    ---
    title: This is my presentation
    author: Jane Doe
    ...

    # This is a slide

    Slide contents. Yay.

    # Important title

    Things I like:

    - Markdown
    - Haskell
    - Pandoc
    - Traveling

How patat came to be
I started writing a simple prototype of patat during downtime at ICFP2016, when I discovered that MDP was not able to parse my presentation correctly.
After ICFP, I flew to Myanmar, and I am currently traveling around the country with my girlfriend. It’s a super interesting place to visit, with a rich history. Now that NLD is the ruling party, I think it is a great time to visit the country responsibly.
Riding around visiting temples in Bagan
However, it is a huge country – the largest in south-east Asia – so there is some downtime traveling on domestic flights, buses and boats. I thought it was a good idea to improve the tool a bit further, since you don’t need internet to hack on this sort of thing.
Pull requests are welcome as always! Note that I will be slow to respond: for the next three days I will be trekking from Kalaw to Inle Lake, so I have no connectivity (or electricity, for that matter).
Sunset at U Bein bridge
Sidenote: “Patat” is the Flemish word for “potato”. Dutch people also use it to refer to French Fries but I don’t really do that – in Belgium we just call fries “Frieten”.
JP Moresmau: Everything is broken.
Well-Typed.Com: Hackage reliability via mirroring
In the last several years, as a community, we’ve come to greatly rely on services like Hackage and Stackage being available 24/7. There is always enormous frustration when either of these services goes down.
I think as a community we’ve also been raising our expectations. We’re all used to services like Google which appear to be completely reliable. Of course these are developed and operated by huge teams of professionals, whereas our community services are developed, maintained and operated by comparatively tiny teams on shoestring budgets.

A path to greater reliability
Nevertheless, reliability is important to us all, and so there has been a fair bit of effort put in over the last few years to improve reliability. I’ll talk primarily about Hackage since that is what I am familiar with.
Firstly, a couple years ago Hackage and haskell.org were moved from super-cheap VM hosting (where our machines tended to go down several times a year) to actually rather good quality hosting provided by Rackspace. Thanks to Rackspace for donating that, and the haskell.org infrastructure team for getting that organised and implemented. That in itself has made a huge difference: we’ve had far fewer incidents of downtime since then.
Obviously even with good quality hosting we’re still only one step away from unscheduled downtime, because the architecture is too centralised.
There were two approaches that people proposed. One was classic mirroring: spread things out over multiple mirrors for redundancy. The other proposal was to adjust the Hackage architecture somewhat so that while the main active Hackage server runs on some host, the core Hackage archive would be placed on an ultra-reliable 3rd party service like AWS S3, so that this would stay available even if the main server was unavailable.
The approach we decided to take was the classic mirroring one. In some ways this is the harder path, but I think ultimately it gives the best results. This approach also tied in with the new security architecture (The Update Framework – TUF) that we were implementing. The TUF design includes mirrors and works in such a way that mirrors do not need to be trusted. If we (or rather end users) do not have to trust the operators of all the mirrors then this makes a mirroring approach much more secure and much easier to deploy.

Where we are today
The new system has been in beta for some time and we’re just short of flipping the switch for end users. The new Hackage security system is in place on the server side, while on the client side, the latest release of cabal-install can be configured to use it, and the development version uses it by default.
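For reference, a secure remote repository is selected in cabal-install’s config file (typically ~/.cabal/config) with a stanza along these lines. The exact stanza below is a sketch: recent cabal-install versions ship with suitable defaults for hackage.haskell.org built in, so most users need not write it by hand.

```
repository hackage.haskell.org
  url: http://hackage.haskell.org/
  secure: True
```

With secure: True, cabal-install fetches and verifies the TUF metadata, and picks up the published list of public mirrors automatically.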
There is lots to say about the security system, but that has (1, 2, 3) and will be covered elsewhere. This post is about mirroring.
For mirrors, we currently have two official public mirrors, and a third in the works. One mirror is operated by FP Complete and the other by Herbert Valerio Riedel. For now, Herbert and I manage the list of mirrors and we will be accepting contributions of further public mirrors. It is also possible to run private mirrors.
Once you are using a release of cabal-install that uses the new system then no further configuration is required to make use of the mirrors (or indeed the security). The list of public mirrors is published by the Hackage server (along with the security metadata) and cabal-install (and other clients using hackage-security) will automatically make use of them.Reliability in the new system
Both of the initial mirrors are individually using rather reliable hosting. One is on AWS S3 and one on DreamHost S3. Indeed the weak point in the system is no longer the hosting. It is other factors like reliability of the hosts running the agents that do the mirroring, and the ever present possibility of human error.
The fact that the mirrors are hosted and operated independently is the key to improved reliability. We want to reduce the correlation of failures.
Failures in hosting can be mitigated by using multiple providers. Even AWS S3 goes down occasionally. Failures in the machines driving the mirroring are mitigated by using a normal decentralised pull design (rather than pushing out from the centre) and hosting the mirroring agents separately. Failures due to misconfiguration and other human errors are mitigated by having different mirrors operated independently by different people.
So all these failures can and will happen, but if they are not correlated and we have enough mirrors then the system overall can be quite reliable.
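As a back-of-envelope illustration (the numbers here are illustrative, not from the post): if mirror failures really are independent, the chance that every mirror is down at once shrinks geometrically with the number of mirrors.

```haskell
-- Illustrative only: probability that all n mirrors are simultaneously
-- unavailable, assuming each is down a fraction pDown of the time and
-- failures are independent.
pAllDown :: Double -> Int -> Double
pAllDown pDown n = pDown ^ n

-- One mirror with 99% uptime:  pAllDown 0.01 1  ~ 1e-2
-- Three such mirrors:          pAllDown 0.01 3  ~ 1e-6
```

Correlated failures (same hosting provider, same mirroring agent, same operator) would erode this quickly, which is exactly why the mirrors are hosted and operated independently.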
There is of course still the possibility that the upstream server goes down. It is annoying not to be able to upload new packages, but it is far more important that people be able to download packages. The mirrors mean there should be no interruption in the download service, and it gives the upstream server operators the breathing space to fix things.
Neil Mitchell: Full-time Haskell jobs in London, at Barclays.
Don Stewart (dons): Haskell dev roles with Strats @ Standard Chartered
The Strats team at Standard Chartered is growing. We have 10 more open roles currently, in a range of areas:
- Haskell dev for hedging effectiveness analytics, and build hedging services.
- Haskell devs for derivatives pricing services. Generic roles using Haskell.
- Web-experienced Haskell devs for frontends to analytics services written in Haskell. PureScript and/or data viz and user interface skills desirable
- Haskell dev for trading algorithms and strategy development.
- Dev/ops role to extend our continuous integration infrastructure (Haskell+git)
- Contract analysis and manipulation in Haskell for trade formats (FpML + Haskell).
- Haskell dev for low latency (< 100 microsecond) components in soft real-time non-linear pricing charges service.
You would join an existing team of 25 Haskell developers in Singapore or London. Generally our roles involve directly working with traders to automate their work and improve their efficiency. We use Haskell for all tasks, either GHC Haskell or our own (“Mu”) implementation. This is a rare chance to join a large, experienced Haskell dev team.
We offer permanent or contractor positions, at Director and Associate Director level, with very competitive compensation. Demonstrated experience in typed FP (Haskell, OCaml, F#, etc.) is required.
All roles require some physical presence in either Singapore or London, and we offer flexibility within these constraints (with work from home available). No financial background is required or assumed.
More info about our development process is in the 2012 PADL keynote, and a 2013 HaskellCast interview.
If this sounds exciting to you, please send your PDF resume to me – donald.stewart <at> sc.com
Tagged: jobs
Well-Typed.Com: Sharing, Space Leaks, and Conduit and friends
We use large lazy data structures in Haskell all the time to drive our programs. For example, consider

main1 :: IO ()
main1 = forM_ [1..5] $ \_ -> mapM_ print [1 .. 1000000]
It’s quite remarkable that this works and that this program runs in constant memory. But this stands on a delicate cusp. Consider the following minor variation on the above code:

ni_mapM_ :: (a -> IO b) -> [a] -> IO ()
{-# NOINLINE ni_mapM_ #-}
ni_mapM_ = mapM_

main2 :: IO ()
main2 = forM_ [1..5] $ \_ -> ni_mapM_ print [1 .. 1000000]
This program runs, but unlike main1, it has a maximum residency of 27 MB; in other words, this program suffers from a space leak. As it turns out, main1 was running in constant memory because the optimizer was able to eliminate the list altogether (due to the fold/build rewrite rule), but it is unable to do so in main2.
But why is main2 leaking? In fact, we can recover constant space behaviour by recompiling the code with -fno-full-laziness. The full laziness transformation is effectively turning main2 into

longList :: [Integer]
longList = [1 .. 1000000]

main3 :: IO ()
main3 = forM_ [1..5] $ \_ -> ni_mapM_ print longList
The first iteration of the forM_ loop constructs the list, which is then retained to be used by the next iterations. Hence, the large list is retained for the duration of the program, which is the aforementioned space leak.
The full laziness optimization is taking away our ability to control when data structures are not shared. That ability is crucial when we have actions driven by large lazy data structures. One particularly important example of such lazy structures that drive computation are conduits or pipes. For example, consider the following conduit code:

import qualified Data.Conduit as C

countConduit :: Int -> C.Sink Char IO ()
countConduit cnt = do
    mi <- C.await
    case mi of
      Nothing -> liftIO (print cnt)
      Just _  -> countConduit $! cnt + 1

getConduit :: Int -> C.Source IO Char
getConduit 0 = return ()
getConduit n = do
    ch <- liftIO getChar
    C.yield ch
    getConduit (n - 1)
Here countConduit is a sink that counts the characters it receives from upstream, and getConduit n is a conduit that reads n characters from the console and passes them downstream.
To illustrate what might go wrong, we will use the following exception handler throughout this blog post5:

retry :: IO a -> IO a
retry io = do
    ma <- try io
    case ma of
      Right a -> return a
      Left (_ :: SomeException) -> retry io
The important point to notice about this exception handler is that it retains a reference to the action io as it executes that action, since it might potentially have to execute it again if an exception is thrown. However, all the space leaks we discuss in this blog post arise even when an exception is never thrown and hence the action is run only once; simply maintaining a reference to the action until the end of the program is enough to cause the space leak.
If we use this exception handler as follows:

main :: IO ()
main = retry $ C.runConduit $ getConduit 1000000 C.=$= countConduit 0
we again end up with a large space leak, this time consisting of values of conduit’s internal Pipe type and the function closures hanging off them:
Although the values that stream through the conduit come from IO, the conduit itself is fully constructed and retained in memory. In this blog post we examine what exactly is being retained here, and why. We will finish with some suggestions on how to avoid such space leaks, although sadly there is no easy answer. Note that these problems are not specific to the conduit library, but apply equally to all other similar libraries.
We will not assume any knowledge of conduit but start from first principles; however, if you have never used any of these libraries before this blog post is probably not the best starting point; you might for example first want to watch my presentation Lazy I/O and Alternatives in Haskell.

Lists
Before we look at the more complicated case, let’s first consider another program using just lists:

main :: IO ()
main = retry $ ni_mapM_ print [1..1000000]
This program suffers from a space leak for similar reasons to the example with lists we saw in the introduction, but it’s worth spelling out the details here: where exactly is the list being maintained?
Recall that the IO monad is effectively a state monad over a token RealWorld state (if that doesn’t make any sense to you, you might want to read ezyang’s article Unraveling the mystery of the IO monad first). Hence, ni_mapM_ (just a wrapper around mapM_) is really a function of three arguments: the action to execute for every element of the list, the list itself, and the world token. That means that

ni_mapM_ print [1..1000000]
is a partial application, and hence we are constructing a PAP object: the runtime representation of a partial application of a function. It records the function we want to execute (ni_mapM_), as well as the arguments we have already provided. It is this PAP object that we give to retry, and which retry retains until the action completes because it might need it in the exception handler. The long list in turn is being retained because there is a reference from the PAP object to the list (as one of the arguments that we provided).
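To make the state-monad reading of IO concrete, here is a schematic model of my own (the names IO', World, tick and mapM_' are all illustrative; GHC's real IO lives in GHC.Types and threads an unboxed State# RealWorld token rather than a data value):

```haskell
-- Schematic model of GHC's IO: a state transformer over a world token.
newtype IO' a = IO' (World -> (World, a))

-- Stand-in for RealWorld; it carries a counter so we can observe that
-- effects really are threaded through in order.
newtype World = World Int deriving (Eq, Show)

-- A trivial "effect": bump the counter in the world.
tick :: a -> IO' ()
tick _ = IO' $ \(World n) -> (World (n + 1), ())

-- Under this model, mapM_-like functions take three arguments: the
-- action, the list, and finally the world token.
mapM_' :: (a -> IO' b) -> [a] -> IO' ()
mapM_' _ []     = IO' $ \w -> (w, ())
mapM_' f (x:xs) = IO' $ \w ->
    let IO' g   = f x
        (w', _) = g w
        IO' h   = mapM_' f xs
    in h w'

runIO' :: IO' a -> World -> (World, a)
runIO' (IO' f) = f

main :: IO ()
main = print (fst (runIO' (mapM_' tick [1 .. 5 :: Int]) (World 0)))
```

In this model, `mapM_' tick [1 .. 5]` applied to its first two arguments but not yet the world is exactly the kind of partial application described above: a PAP retaining a reference to the list.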
Full laziness does not make a difference in this example; whether or not that [1 .. 1000000] expression gets floated out makes no difference.

Reminder: Conduits/Pipes
Just to make sure we don’t get lost in the details, let’s define a simple conduit-like or pipe-like data structure:

data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r
A pipe or a conduit is a free monad which provides three actions:
- Yield a value downstream
- Await a value from upstream
- Execute an effect in the underlying monad.
The argument to Await is passed an Either; we give it a Left value if upstream terminated, or a Right value if upstream yielded a value.1
This definition is not quite the same as the one used in real streaming libraries and ignores various difficulties (in particular exception safety, as well as other features such as leftovers); however, it will suffice for the sake of this blog post. We will use the terms “conduit” and “pipe” interchangeably in the remainder of this article.

Sources
The various Pipe constructors differ in their memory behaviour and the kinds of space leaks that they can create. We therefore consider them one by one. We will start with sources, because their memory behaviour is relatively straightforward.
A source is a pipe that only ever yields values downstream.2 For example, here is a source that yields the values [n, n-1 .. 1]:

yieldFrom :: Int -> Pipe i Int m ()
yieldFrom 0 = Done ()
yieldFrom n = Yield n $ yieldFrom (n - 1)
We could “run” such a pipe as follows:

printYields :: Show o => Pipe i o m () -> IO ()
printYields (Yield o k) = print o >> printYields k
printYields (Done ())   = return ()
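Since a pure source contains no Effect nodes, it is just a data structure, and we can also run it without IO at all. In this self-contained sketch (collectYields is my own name, not from the post; the definitions from above are repeated so it compiles on its own) we simply collect the yielded values into a list:

```haskell
-- Simplified Pipe type from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

-- Source yielding [n, n-1 .. 1]
yieldFrom :: Int -> Pipe i Int m ()
yieldFrom 0 = Done ()
yieldFrom n = Yield n $ yieldFrom (n - 1)

-- Walk the chain of Yield constructors, collecting the values
-- (illustrative helper, not part of the post's code)
collectYields :: Pipe i o m () -> [o]
collectYields (Yield o k) = o : collectYields k
collectYields (Done ())   = []
collectYields _           = error "collectYields: not a pure source"

main :: IO ()
main = print (collectYields (yieldFrom 5 :: Pipe () Int Maybe ()))
```

This makes the in-memory shape of a source concrete: it really is just a “list” of Yield nodes terminated by Done.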
If we then run the following program:

main :: IO ()
main = retry $ printYields (yieldFrom 1000000)
we get a space leak. This space leak is very similar to the space leak we discussed in section Lists above, with Done () playing the role of the empty list and Yield playing the role of (:). As in the list example, this program has a space leak independent of full laziness.

Sinks
A sink is a conduit that only ever awaits values from upstream; it never yields anything downstream.2 The memory behaviour of sinks is considerably more subtle than the memory behaviour of sources and we will examine it in detail. As a reminder, the constructor for Await is

data Pipe i o m r = Await (Either r i -> Pipe i o m r) | ...
As an example of a sink, consider this pipe that counts the number of characters it receives:

countChars :: Int -> Pipe Char o m Int
countChars cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countChars $! cnt + 1
We could “run” such a sink by feeding it a bunch of characters; say, 10000000 of them:

feed :: Char -> Pipe Char o m Int -> IO ()
feed ch = feedFrom 10000000
  where
    feedFrom :: Int -> Pipe Char o m Int -> IO ()
    feedFrom _ (Done r)  = print r
    feedFrom 0 (Await k) = feedFrom 0 $ k (Left 0)
    feedFrom n (Await k) = feedFrom (n-1) $ k (Right ch)
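The same feeding logic can also be expressed purely, which is handy for exercising sinks in tests. feedAll below is my own illustrative helper, not part of the post's code; its z argument plays the role of the upstream result passed in the Left case, just like the 0 in feedFrom above:

```haskell
-- Simplified Pipe type from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

-- Sink counting the characters it receives
countChars :: Int -> Pipe Char o m Int
countChars cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countChars $! cnt + 1

-- Feed a whole list into a sink purely; once the list is exhausted,
-- signal upstream termination with Left z (illustrative helper)
feedAll :: r -> [i] -> Pipe i o m r -> r
feedAll _ _      (Done r)  = r
feedAll z []     (Await k) = feedAll z [] (k (Left z))
feedAll z (i:is) (Await k) = feedAll z is (k (Right i))
feedAll _ _      _         = error "feedAll: sinks only"

main :: IO ()
main = print (feedAll 0 "hello world" (countChars 0 :: Pipe Char () Maybe Int))
```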
If we run this as follows and compile with optimizations enabled, we once again end up with a space leak:

main :: IO ()
main = retry $ feed 'A' (countChars 0)
We can recover constant space behaviour by disabling full laziness; however, the effect of full laziness on this example is a lot more subtle than the example we described in the introduction.

Full laziness
Let’s take a brief moment to describe what full laziness is, exactly. Full laziness is one of the optimizations that ghc applies by default when optimizations are enabled; it is described in the paper “Let-floating: moving bindings to give faster programs”. The idea is simple; if we have something like

f = \x y -> let e = .. -- expensive computation involving x but not y
            in ..
full laziness floats the let binding out over the lambda to get

f = \x -> let e = ..
          in \y -> ..
This potentially avoids unnecessarily recomputing e for different values of y. Full laziness is a useful transformation; for example, it turns something like

f x y = ..
  where
    go = .. -- some local function
into

f x y = ..
f_go .. = ..
which avoids allocating a function closure every time f is called. It is also quite a notorious optimization, because it can create unexpected CAFs (constant applicative forms; top-level definitions of values); for example, if you write

nthPrime :: Int -> Int
nthPrime n = allPrimes !! n
  where
    allPrimes :: [Int]
    allPrimes = ..
you might expect nthPrime to recompute allPrimes every time it is invoked; but full laziness might move that allPrimes definition to the top-level, resulting in a large space leak (the full list of primes would be retained for the lifetime of the program). This goes back to the point we made in the introduction: full laziness is taking away our ability to control when values are not shared.

Full laziness versus sinks
Back to the sink example. What exactly is full laziness doing here? Is it constructing a CAF we weren’t expecting? Actually, no; it’s more subtle than that. Our definition of countChars was

countChars :: Int -> Pipe Char o m Int
countChars cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countChars $! cnt + 1
Full laziness is turning this into something more akin to

countChars' :: Int -> Pipe Char o m Int
countChars' cnt =
    let k = countChars' $! cnt + 1
    in Await $ \mi -> case mi of
         Left  _ -> Done cnt
         Right _ -> k
Note how the computation of countChars' $! cnt + 1 has been floated over the lambda; ghc can do that, since this expression does not depend on mi. So in memory the countChars 0 expression from our main function (retained, if you recall, because of the surrounding retry wrapper) develops something like this. It starts off as a simple thunk:
Then when feed matches on it, it gets reduced to weak head normal form, exposing the top-most Await constructor:
The body of the await is a function closure pointing to the function inside countChars (\mi -> case mi ..), which has countChars $! (cnt + 1) as an unevaluated thunk in its environment. Evaluating it one step further yields
So where for a source the data structure in memory was a straightforward “list” consisting of Yield nodes, for a sink the situation is more subtle: we build up a chain of Await constructors, each of which points to a function closure which in its environment has a reference to the next Await constructor. This wouldn’t matter of course if the garbage collector could clean up after us; but if the conduit itself is shared, then this results in a space leak.
Without full laziness, incidentally, evaluating countChars 0 yields
and the chain stops there; the only thing in the function closure now is cnt. Since we don’t allocate the next Await constructor before running the function, we never construct a chain of Await constructors and hence we have no space leak.

Depending on values
It is tempting to think that if the conduit varies its behaviour depending on the values it receives from upstream the same chain of Await constructors cannot be constructed and we avoid a space leak. For example, consider this variation on countChars which only counts spaces:

countSpaces :: Int -> Pipe Char o m Int
countSpaces cnt = Await $ \mi -> case mi of
    Left  _   -> Done cnt
    Right ' ' -> countSpaces $! cnt + 1
    Right _   -> countSpaces $! cnt
If we substitute this conduit for countChars in the previous program, do we fare any better? Alas, the memory behaviour of this conduit, when shared, is in fact far, far worse.
The reason is that both the expression countSpaces $! cnt + 1 and the expression countSpaces $! cnt can be floated out by the full laziness optimization. Hence, now every Await constructor will have a function closure in its payload with two thunks, one for each alternative way to execute the conduit. What’s more, both of these thunks are retained as long as we retain a reference to the top-level conduit.
We can neatly illustrate this using the following program:

main :: IO ()
main = do
    let count = countSpaces 0
    feed ' ' count
    feed ' ' count
    feed ' ' count
    feed 'A' count
    feed 'A' count
    feed 'A' count
The first feed ' ' explores a path through the conduit where every character is a space; so this constructs (and retains) one long chain of Await constructors. The next two calls to feed ' ' however walk over the exact same path, and hence memory usage does not increase for a while. But then we explore a different path, in which every character is a non-space, and hence memory usage will go up again. Then during the second call to feed 'A' memory usage is stable again, until we start executing the last feed 'A', at which point the garbage collector can finally start cleaning things up:
What’s worse, there is an infinite number of paths through this conduit. Every different combination of space and non-space characters will explore a different path, leading to combinatorial explosion and terrifying memory usage.

Effects
The precise situation for effects depends on the underlying monad, but let’s explore one common case: IO. As we will see, for the case of IO the memory behaviour of Effect is actually similar to the memory behaviour of Await. Recall that the Effect constructor is defined as

data Pipe i o m r = Effect (m (Pipe i o m r)) | ...
Consider this simple pipe that prints the numbers [n, n-1 .. 1]:

printFrom :: Int -> Pipe i o IO ()
printFrom 0 = Done ()
printFrom n = Effect $ print n >> return (printFrom (n - 1))
We might run such a pipe using3:

runPipe :: Show r => Pipe i o IO r -> IO ()
runPipe (Done r)   = print r
runPipe (Effect k) = runPipe =<< k
In order to understand the memory behaviour of Effect, we need to understand how the underlying monad behaves. For the case of IO, IO actions are state transformers over a token RealWorld state. This means that the Effect constructor actually looks rather similar to the Await constructor. Both have a function as payload; Await a function that receives an upstream value, and Effect a function that receives a RealWorld token. To illustrate what printFrom might look like with full laziness, we can rewrite it as

printFrom :: Int -> Pipe i o IO ()
printFrom n =
    let k = printFrom (n - 1)
    in case n of
         0 -> Done ()
         _ -> Effect $ IO $ \st -> unIO (print n >> return k) st
If we visualize the heap (using ghc-vis), we can see that it does indeed look very similar to the picture for Await:

Increasing sharing
If we cannot guarantee that our conduits are not shared, then perhaps we should try to increase sharing instead. If we can avoid allocating these chains of pipes, but instead have pipes refer back to themselves, perhaps we can avoid these space leaks.
In theory, this is possible. For example, when using the conduit library, we could try to take advantage of monad transformers and rewrite our feed source and our count sink as:

feed :: Source IO Char
feed = evalStateC 1000000 go
  where
    go :: Source (StateT Int IO) Char
    go = do
        st <- get
        if st == 0
          then return ()
          else do put $! (st - 1) ; yield 'A' ; go

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = do
        mi <- await
        case mi of
          Nothing -> get
          Just _  -> modify' (+1) >> go
In both definitions go refers back to itself directly, with no arguments; hence, it ought to be self-referential, without any long chain of sources or sinks ever being constructed. This works; the following program runs in constant space:

main :: IO ()
main = retry $ print =<< (feed $$ count)
However, this kind of code is extremely brittle. For example, consider the following minor variation on count:

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = withValue $ \_ -> modify' (+1) >> go

    withValue :: (i -> Sink i (StateT Int IO) Int)
              -> Sink i (StateT Int IO) Int
    withValue k = do
        mch <- await
        case mch of
          Nothing -> get
          Just ch -> k ch
This seems like a straightforward variation, but this code in fact suffers from a space leak again4. The optimized core version of this variation of count looks something like this:

count :: ConduitM Char Void (StateT Int IO) Int
count = ConduitM $ \k ->
    let countRec = modify' (+ 1) >> count
    in unConduitM await $ \mch -> case mch of
         Nothing -> unConduitM get k
         Just _  -> unConduitM countRec k
In the conduit library, ConduitM is a codensity transformation of an internal Pipe datatype; the latter corresponds more or less to the Pipe data structure we’ve been describing here. But we can ignore these details: the important point is that this has the same typical shape that we’ve been studying above, with an allocation inside a lambda but before an await.
We can fix it by writing our code as

count :: Sink Char IO Int
count = evalStateC 0 go
  where
    go :: Sink Char (StateT Int IO) Int
    go = withValue goWithValue

    goWithValue :: Char -> Sink Char (StateT Int IO) Int
    goWithValue _ = modify' (+1) >> go

    withValue :: (i -> Sink i (StateT Int IO) Int)
              -> Sink i (StateT Int IO) Int
    withValue k = do
        mch <- await
        case mch of
          Nothing -> get
          Just ch -> k ch
Ironically, it would seem that full laziness could have helped us here by floating out that modify' (+1) >> go expression for us. The reason that it didn’t is probably related to the exact way the k continuation is threaded through in the compiled code (I simplified a bit above). Whatever the reason, tracking down problems like these is difficult and incredibly time consuming; I’ve spent many, many hours studying the output of -ddump-simpl and comparing before and after pictures. Not a particularly productive way to spend my time, and this kind of low-level thinking is not what I want to do when writing application-level Haskell code!

Composed pipes
Normally we construct pipes by composing components together. Composition of pipes can be defined as

(=$=) :: Monad m => Pipe a b m r -> Pipe b c m r -> Pipe a c m r
{-# NOINLINE (=$=) #-}
_         =$= Done r    = Done r
u         =$= Effect d  = Effect $ (u =$=) <$> d
u         =$= Yield o d = Yield o (u =$= d)
Yield o u =$= Await d   = u =$= d (Right o)
Await u   =$= Await d   = Await $ \ma -> u ma =$= Await d
Effect u  =$= Await d   = Effect $ (=$= Await d) <$> u
Done r    =$= Await d   = Done r =$= d (Left r)
The downstream pipe “is in charge”; the upstream pipe only plays a role when downstream awaits. This mirrors Haskell’s lazy “demand-driven” evaluation model.
Typically we only run self-contained pipes that don’t have any Awaits or Yields left (after composition), so we are only left with Effects. The good news is that if the pipe components don’t consist of long chains, then their composition won’t either; at every Effect point we wait for either upstream or downstream to complete its effect; only once that is done do we receive the next part of the pipeline and hence no chains can be constructed.
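To see this reduction concretely, we can compose a pure source with a pure sink and observe that the whole pipeline collapses to Done. This is a self-contained sketch of mine (yieldFromR, countIn and runClosed are my own names); note that the source here returns an Int, a tweak needed because the simplified (=$=) insists the two result types match:

```haskell
-- Simplified Pipe type and composition from the post
data Pipe i o m r =
    Yield o (Pipe i o m r)
  | Await (Either r i -> Pipe i o m r)
  | Effect (m (Pipe i o m r))
  | Done r

(=$=) :: Monad m => Pipe a b m r -> Pipe b c m r -> Pipe a c m r
_         =$= Done r    = Done r
u         =$= Effect d  = Effect $ (u =$=) <$> d
u         =$= Yield o d = Yield o (u =$= d)
Yield o u =$= Await d   = u =$= d (Right o)
Await u   =$= Await d   = Await $ \ma -> u ma =$= Await d
Effect u  =$= Await d   = Effect $ (=$= Await d) <$> u
Done r    =$= Await d   = Done r =$= d (Left r)

-- Source yielding [n, n-1 .. 1]; returns an Int so its result type
-- matches the sink's (a tweak for the simplified (=$=))
yieldFromR :: Int -> Pipe i Int m Int
yieldFromR 0 = Done 0
yieldFromR n = Yield n $ yieldFromR (n - 1)

-- Sink counting the values it receives
countIn :: Int -> Pipe Int o m Int
countIn cnt = Await $ \mi -> case mi of
    Left  _ -> Done cnt
    Right _ -> countIn $! cnt + 1

-- A fully composed pure pipeline reduces all the way to Done
runClosed :: Pipe i o m r -> Maybe r
runClosed (Done r) = Just r
runClosed _        = Nothing

main :: IO ()
main = print (runClosed (yieldFromR 5 =$= countIn 0 :: Pipe () () Maybe Int))
```

Evaluating the composition forces the (=$=) equations step by step: each downstream Await is answered by an upstream Yield, until Done 0 on the left meets the sink's Left case and the whole pipeline is Done 5.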
On the other hand, of course composition doesn’t get rid of these space leaks either. As an example, we can define a pipe equivalent to the getConduit from the introduction

getN :: Int -> Pipe i Char IO Int
getN 0 = Done 0
getN n = Effect $ do
    ch <- getChar
    return $ Yield ch (getN (n - 1))
and then compose getN and countChars to get a runnable program:

main :: IO ()
main = retry $ runPipe $ getN 1000000 =$= countChars 0
This program suffers from the same space leaks as before because the individual pipeline components are kept in memory. As in the sink example, memory behaviour would be much worse still if there were different paths through the conduit network.

Summary
At Well-Typed we’ve been developing an application for a client to do streaming data processing. We’ve been using the conduit library to do this, with great success. However, occasionally space leaks arise that are difficult to fix, and even harder to track down; of course, we’re not the first to suffer from these problems; for example, see ghc ticket #9520 or issue #6 for the streaming library (a library similar to conduit).
In this blog post we described how such space leaks arise. Similar space leaks can arise with any kind of code that uses large lazy data structures to drive computation, including other streaming libraries such as pipes or streaming, but the problem is not restricted to streaming libraries.
The conduit library tries to avoid these intermediate data structures by means of fusion rules; naturally, when this is successful the problem is avoided. We can increase the likelihood of this happening by using combinators such as folds etc., but in general the intermediate pipe data structures are difficult to avoid.
The core of the problem is that, while it is possible in principle to write code that avoids these space leaks, in practice the resulting code is too brittle and writing code like this is just too difficult. Just to provide one more example, in our application we had some code that looked like this:

go x@(C y _) = case y of
    Constr1 -> doSomethingWith x >> go
    Constr2 -> doSomethingWith x >> go
    Constr3 -> doSomethingWith x >> go
    Constr4 -> doSomethingWith x >> go
    Constr5 -> doSomethingWith x >> go
This worked and ran in constant space. But after adding a single additional clause to this pattern match, we suddenly reintroduced a space leak:

go x@(C y _) = case y of
    Constr1 -> doSomethingWith x >> go
    Constr2 -> doSomethingWith x >> go
    Constr3 -> doSomethingWith x >> go
    Constr4 -> doSomethingWith x >> go
    Constr5 -> doSomethingWith x >> go
    Constr6 -> doSomethingWith x >> go
This was true even when that additional clause was never used; it had nothing to do with the change in the runtime behaviour of the code. Instead, when we added the additional clause some limit got exceeded in ghc’s bowels and suddenly something got allocated that wasn’t getting allocated before.
Full laziness can be disabled using -fno-full-laziness, but sadly this throws out the baby with the bathwater. In many cases, full laziness is a useful optimization. In particular, there is probably never any point allocating a thunk for something that is entirely static. We saw one such example above; it’s unexpected that when we write

go = withValue $ \_ -> modify' (+1) >> go
we get memory allocations corresponding to the modify' (+1) >> go expression.

Avoiding space leaks
So how do we avoid these space leaks? The key idea is pretty simple: we have to make sure the conduit is fully reconstructed on every call to runConduit. Conduit code typically looks like

runMyConduit :: Some -> Args -> IO r
runMyConduit some args =
    runConduit $ stage1 some =$= stage2 args ... =$= stageN
You should put all top-level calls to runConduit into a module of their own, and disable full laziness in that module by declaring

{-# OPTIONS_GHC -fno-full-laziness #-}
at the top of the file. This means the computation of the conduit (stage1 =$= stage2 .. =$= stageN) won’t get floated to the top and the conduit will be recomputed on every invocation of runMyConduit (note that this relies on runMyConduit having some arguments; if it doesn’t, you should add a dummy one).
This might not be enough, however. In the example above, stageN is still a CAF, and the evaluation of the conduit stage1 =$= ... =$= stageN will cause that CAF to be evaluated and potentially retained in memory. CAFs are fine for conduits that are guaranteed to be small, or that loop back onto themselves; however, as discussed in section “Increasing sharing”, writing such conduit values is not an easy task, although it is manageable for simple conduits.
To avoid CAFs, conduits like stageN must be given a dummy argument and full laziness must be disabled for the module where stageN is defined. But it’s more subtle than that; even if a conduit does have real (non-dummy) arguments, part of that conduit might still be independent of those arguments and hence be floated to the top by the full laziness optimization, creating yet more unwanted CAF values. Full laziness must again be disabled to stop this from happening.
If you are sure that full laziness cannot float anything harmful to the top, you can leave it enabled; however, verifying that this is the case is highly non-trivial. You can of course test the code, but if you are unlucky the memory leak will only arise under certain specific usage conditions. Moreover, a small modification to the codebase, the libraries it uses, or even the compiler, perhaps years down the line, might change the program and reintroduce a memory leak.
Proceed with caution.

Further reading
- Reddit discussion on the original version of this article which had some incorrect advice on how to avoid space leaks, and the reddit discussion on the erratum
- GHC ticket #12620 tracks some suggestions for alternative ways to address these issues.
- Blog post Fixing a space leak by copying thunks by Philipp Schuster, on duplicating thunks to avoid unwanted sharing of “control structures”.
- Neil Mitchell has a more introductory-level SkillsMatter talk, Plugging Space Leaks, Improving Performance, on space leaks and how to debug them. It doesn’t cover the kinds of leaks we discuss in this blog post, however.
Addendum 1: ghc’s “state hack”

Let’s go back to the section about sinks; if you recall, we considered this example:

countChars :: Int -> Pipe Char o m Int
countChars cnt =
    let k = countChars $! cnt + 1
    in Await $ \mi -> case mi of
         Left  _ -> Done cnt
         Right _ -> k

feedFrom :: Int -> Pipe Char o m Int -> IO ()
feedFrom n (Done r)  = print r
feedFrom 0 (Await k) = feedFrom 0 $ k (Left 0)
feedFrom n (Await k) = feedFrom (n - 1) $ k (Right 'A')

main :: IO ()
main = retry $ feedFrom 10000000 (countChars 0)
We explained how countChars 0 results in a chain of Await constructors and function closures. However, you might be wondering, why would this be retained at all? After all, feedFrom is just an ordinary function, albeit one that computes an IO action. Why shouldn’t the whole expression

feedFrom 10000000 (countChars 0)
just be reduced to a single print 10000000 action, leaving no trace of the pipe at all? Indeed, this is precisely what happens when we disable ghc’s “state hack”; if we compile this program with -fno-state-hack it runs in constant space.
So what is the state hack? You can think of it as the opposite of the full laziness transformation; where full laziness transforms

\x -> \y -> let e = <expensive> in ..
  ~~>
\x -> let e = <expensive> in \y -> ..
the state hack does the opposite

\x -> let e = <expensive> in \y -> ..
  ~~>
\x -> \y -> let e = <expensive> in ..
though only for arguments y of type State# <token>. In general this is not sound, of course, as it might duplicate work; hence, the name “state hack”. Joachim Breitner’s StackOverflow answer explains why this optimization is necessary; my own blog post Understanding the RealWorld provides more background.
Let’s leave aside the question of why this optimization exists, and consider the effect on the code above. If you ask ghc to dump the optimized core (-ddump-stg), and translate the result back to readable Haskell, you will realize that it boils down to a single line change. With the state hack disabled the last line of feedFrom is effectively:

feedFrom n (Await k) = IO $ unIO (feedFrom (n - 1) (k (Right 'A')))
where IO and unIO just wrap and unwrap the IO monad. But when the state hack is enabled (the default), this turns into

feedFrom n (Await k) = IO $ \w -> unIO (feedFrom (n - 1) (k (Right 'A'))) w
Note how this floats the recursive call to feedFrom into the lambda. This means that

feedFrom 10000000 (countChars 0)
no longer reduces to a single print statement (after an expensive computation); instead, it reduces immediately to a function closure, waiting for its world argument. It’s this function closure that retains the Await/function chain and hence causes the space leak.

Addendum 2: Interaction with cost-centres (SCC)
A final cautionary tale. Suppose we are studying a space leak, and so we are compiling our code with profiling enabled. At some point we add some cost centres, or use -fprof-auto perhaps, and suddenly find that the space leak disappeared! What gives?
Consider one last time the sink example. We can make the space leak disappear by adding a single cost centre:

feed :: Char -> Pipe Char o m Int -> IO ()
feed ch = feedFrom 10000000
  where
    feedFrom :: Int -> Pipe Char o m Int -> IO ()
    feedFrom n p = {-# SCC "feedFrom" #-}
      case (n, p) of
        (_, Done r)  -> print r
        (0, Await k) -> feedFrom 0 $ k (Left 0)
        (_, Await k) -> feedFrom (n-1) $ k (Right ch)
Adding this cost centre effectively has the same result as specifying -fno-state-hack; with the cost centre present, the state hack can no longer float the computations into the lambda.

Footnotes
1. The ability to detect upstream termination is one of the characteristics that sets conduit apart from the pipes package, in which this is impossible (or at least hard to do). Personally, I consider this an essential feature. Note that the definition of Pipe in conduit takes an additional type argument to avoid insisting that the type of the upstream return value matches the type of the downstream return value. For simplicity I’ve omitted this additional type argument here.
2. Sinks and sources can also execute effects, of course; since we are interested in the memory behaviour of the individual constructors, we treat effects separately.
3. runPipe is (close to) the actual runPipe we would normally use; we connect pipes that await or yield into a single self-contained pipe that does neither.
4. For these simple examples the optimizer can actually work its magic and the space leak doesn’t appear, unless evalStateC is declared NOINLINE. Again, for larger examples problems arise whether it’s inlined or not.
5. The original definition of retry used in this blog post was

retry io = catch io (\(_ :: SomeException) -> retry io)
but as Eric Mertens rightly points out, this is not correct as catch runs the exception handler with exceptions masked. For the purposes of this blog post however the difference is not important; in fact, none of the examples in this blog post run the exception handler at all.
Michael Snoyman: Respect
As I'm sure many people in the Haskell community have seen, Simon PJ put out an email entitled "Respect". If you haven't read it yet, I think you should. As is usually the case, Simon shows by example what we should strive for.
I put out a Tweet referring to a Gist I wrote two weeks back. At the time, I did not put the content on this blog, as I didn't want to make a bad situation worse. However, especially given Simon's comments, now seems like a good time to put out this message in the same medium (this blog) that the original inflammatory messaging came out in:
A few weeks back I wrote a blog post (and a second clarifying post) on what I called the Evil Cabal. There is no sense in repeating the content here, or even referencing it. The title is the main point.
It was a mistake, and an offensive one, to use insulting terms like evil in that blog post. What I said is true: I have taken to using that term when discussing privately some of the situation that has occured. I now see that that was the original problem: while the term started as a joke and a pun, it set up a bad precedent for conversation. I should not have used it privately, and definitely should not have publicized it.
To those active members in projects I maligned, I apologize. I should not have brought the discourse to that level.
FP Complete: Updated Hackage mirroring
- Update the files in that repo to generate and update the 00-index.tar.gz file
- Update the all-cabal-hashes and all-cabal-metadata repos using the appropriate tools
- run.sh uses the hackage-watcher to run run-inner.sh each time a new version of 01-index.tar.gz is available. It's able to do a simple ETag check.
Ken T Takusagawa: [rotqywrk] foldl foldr
foldl: (x * y) * z
foldr: x * (y * z)
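Because the operation need not be associative, the two groupings can give different results. Here is a quick Python sketch of the same shapes (subtraction is deliberately chosen because it is not associative; `functools.reduce` plays the role of foldl, and the small `foldr` helper is hand-written for illustration):

```python
from functools import reduce

def foldr(f, z, xs):
    # right fold: f(x0, f(x1, f(x2, z)))
    acc = z
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

xs = [1, 2, 3]
left = reduce(lambda x, y: x - y, xs, 0)   # ((0 - 1) - 2) - 3 = -6
right = foldr(lambda x, y: x - y, 0, xs)   # 1 - (2 - (3 - 0)) = 2
print(left, right)  # -6 2
```

The differing results make the grouping difference visible at a glance.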
Also a nice reference:
Functional Jobs: Senior Backend Engineer at Euclid Analytics (Full-time)
We are looking to add a senior individual contributor to the backend engineering team! Our team is responsible for creating and maintaining the infrastructure that powers the Euclid Analytics Engine. We leverage a forward thinking and progressive stack built in Scala and Python, with an infrastructure that uses Mesos, Spark and Kafka. As a senior engineer you will build out our next generation ETL pipeline. You will need to use and build tools to interact with our massive data set in as close to real time as possible. If you have previous experience with functional programming and distributed data processing tools such as Spark and Hadoop, then you would make a great fit for this role!
RESPONSIBILITIES:
- Partnering with the data science team to architect and build Euclid’s big data pipeline
- Building tools and services to maintain a robust, scalable data service layer
- Leveraging technologies such as Spark and Kafka to grow our predictive analytics and machine learning capabilities in real time
- Finding innovative solutions to performance issues and bottlenecks
REQUIREMENTS:
- At least 3 years industry experience in a full time role utilizing Scala or other modern functional programming languages (Haskell, Clojure, Lisp, etc.)
- Database management experience (MySQL, Redis, Cassandra, Redshift, MemSQL)
- Experience with big data infrastructure including Spark, Mesos, Scalding and Hadoop
- Excited about data flow and orchestration with tools like Kafka and Spark Streaming
- Have experience building production deployments using Amazon Web Services or Heroku’s Cloud Application Platform
- B.S. or equivalent in Computer Science or another technical field
Get information on how to apply for this position. | http://sequence.complete.org/aggregator?page=6 | CC-MAIN-2017-04 | refinedweb | 7,584 | 57.81 |
In this article, we will learn about a very important topic: Django template inheritance. We've already learned what templates in Django are; we'll carry that knowledge forward and build on it.
What is Django Template Inheritance?
Template Inheritance is a method to add all the elements of an HTML file into another without copy-pasting the entire code. For example, consider the Facebook homepage.
Here the underlying theme of the web page (background, elements, etc.) is the same for all FB endpoints.
There are two ways to achieve this:
- Add the same CSS/JS/HTML code to all the endpoint templates
- Or, create a single file containing all common elements and then simply include it in others.
The second method is exactly what template inheritance does.
Why Inheritance?
Just like Facebook, most applications have long HTML code for a single page. Writing all of that again and again for every page is impractical and very inefficient.
Thus Django provides the method of Template inheritance to ensure more efficiency and less repetition of code.
Another significant benefit of template inheritance is that if we modify the main file, the change automatically appears everywhere the file is inherited. Thus we don't need to modify anything in the other places.
Hands-on with Django Template Inheritance
Let us create a base HTML file at the project level and then have the Django App templates inherit it.
1) Modify TEMPLATES in settings.py
To make the base file accessible, add the following line into TEMPLATES in settings.py as shown in screenshot below.
'DIRS': [os.path.join(BASE_DIR,'django_project/templates')],
This line executes the following function:
- We get the path of the Django project directory using the predefined variable BASE_DIR (Our Django project folder)
- Then, with the os module, we join it with the django_project/templates directory.
We are basically telling Django to search for templates outside the app, at the project level (the path indicated by the above code) as well.
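As an illustration, here is roughly what that DIRS entry evaluates to (the BASE_DIR value below is a made-up example path, not one from this tutorial):

```python
import os

BASE_DIR = "/home/user/django_project"  # hypothetical project root
# Same expression as in settings.py
template_dir = os.path.join(BASE_DIR, "django_project/templates")
print(template_dir)  # /home/user/django_project/django_project/templates
```

Django will then look for templates in that directory in addition to each app's own templates folder.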
2) Coding the Parent Base HTML file
Create a template base.html in the Templates folder present at the Django project directory level, outside all the apps.
And add the following code into the file:
<h2>Basic File Inheritance</h2>
{% block <block_name> %}
<p> PlaceHolder to be Replaced</p>
{% endblock %}

3) Inheriting the Base File in an App Template

Now create a template AppTemplate.html inside the app's templates folder and add the following code:
{% extends 'base.html' %}
{% block content %}
<h3>Welcome to the App</h3><br>
{% endblock %}
Note: The {% extends ‘base.html’ %} line should always be present at the top of the file.
We need to add the template content in a similar Block with the same names as the parent file. The content of the blocks in the base file will be replaced with the corresponding block contents of this file.
4) Creating an App View to display the Template
We now just need a view to render and display our app template. The code for the view is simply:
from django.shortcuts import render

def TemplateView(request):
    # render the app template (use the path to your template here)
    return render(request, '<app_name>/AppTemplate.html')
The URL path for the view:
path('template/', TemplateView)
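For completeness, that path entry would sit inside the app's urls.py. This is just a configuration sketch; the import locations are assumed and depend on your project layout:

```python
from django.urls import path
from .views import TemplateView  # the view defined above

urlpatterns = [
    path('template/', TemplateView),
]
```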
Implementing Template Inheritance
That’s all with the coding part, let us now implement the Templates in the browser.
Run the server and open the /template/ URL in your browser.
You can now continue to create pages with formatting that’s similar to the main template. In our case, that’s base.html.
If you add the required CSS and HTML formatting options to base.html, the same styles will be applied to all templates that inherit the base file.
Conclusion
That's it for Django template inheritance!! See you in the next article!! Till then, keep practicing!!
Detailed replies below, but first of all, a quick top-level response, because in my previous mail I missed mentioning an obvious point: systemd can easily become the default for Debian GNU/*Linux* without necessarily becoming the default for Debian GNU/kFreeBSD. This seems like the most likely scenario going forward. Packages providing services can then ship .service files (as they currently can), and can ship init scripts if they want to run on other platforms; packages satisfied with systemd's init script support and not interested in providing a .service file can just ship an init script. This effectively matches the status quo, with the sole difference that systemd would get installed by default on Linux and sysvinit would continue to get installed by default on kFreeBSD. I assume that from a kFreeBSD perspective you have no particular problem with that scenario? (I say "from a kFreeBSD perspective" to detach that point from any thoughts you might have about the use of systemd on Linux, which seem entirely orthogonal.) If so, I don't think we disagree on anything, and feel free to simply ignore the rest of this mail.

As an extension of that, I frequently wonder whether it would make more sense to package daemons *without* init scripts or .service files or anything else, and ship those in separate packages that you can install *if* you want to run the default system-wide instance of those services, rather than a custom-configured local service, twelve local instances, per-user instances, or any other configuration you might want. Apache already does this, with apache2.2-bin and apache2.2; git does the same thing for git-daemon, with the git package shipping git-daemon, and git-daemon-run and git-daemon-sysvinit packaging the init scripts for different init systems. Personally, I'd love to see a split like that for almost all daemon packages.
Given such a split, the same people don't necessarily need to maintain the -sysvinit or -systemd packages that maintain the daemons themselves...

Feel free to ignore everything below this point if you agree with the notion that "default for Linux" does not have to mean "default for all Debian ports including non-Linux ports".

On Wed, Mar 07, 2012 at 10:12:49AM +0100, Wouter Verhelst wrote:
> On Tue, Mar 06, 2012 at 02:46:44PM -0800, Josh Triplett wrote:
> > [...]
>
> I think it does.
>
> > [...]
>
> Yes you can.
>
> It's possible to write an init implementation in such a way that it
> provides more features on one architecture than on another. If there are
> daemons that fundamentally depend on the functionality that isn't
> available on non-Linux versions of systemd, then these daemons won't be
> able to work on non-Linux, but that would be the case anyway.

You've just moved the bug reports around, then; instead of reporting a bug on systemd for not working on non-Linux, you'd report a bug on a service for using a systemd feature that works on Linux. That seems like a net loss: instead of having an init system with the explicit goal of making the full functionality of Linux available to services, you have a more complex init system whose featureset varies based on the capabilities of the target platform, and services would have to cope with that, generally by just not using any features other than the lowest common denominator supported on all platforms. Given that this thread exists in large part because people want those features, to make services more reliable, easier to write, and easier to maintain, that doesn't seem like a workable solution.

> > just that you can write a least-common-denominator init system as
> > capable as sysvinit. This thread exists in large part because many
> > people want an init system more capable than sysvinit.
> > To give one particular example: systemd uses Linux-specific features to
> > accurately track all the processes started by a service, which allows
> > accurate monitoring and shutdown of processes which could otherwise
> > disassociate themselves from their parent processes via the usual
> > daemonizing trick.
>
> Yes, that's the cgroups feature.

Right.

> > POSIX doesn't provide features that allow this in general, but Linux
> > does. (Quite possibly other OSes provide those features too, but not
> > in a standardized way.)
>
> Indeed. That means there would need to be multiple implementations of
> the same functionality; one per supported OS. On some systems said
> functionality wouldn't be available, and that'd be fine; on others, it
> could behave slightly different in the undefined parts of the spec, and
> that would also be fine.

No, that's not fine. Systemd services use specific functionality with the expectation that that functionality will work, not "kinda sorta provide best effort if the platform can handle it", and certainly not fail to work at all. And if systemd did provide such inconsistent support, that would just punt the bug reports from systemd ("doesn't work on $PLATFORM") to services ("please don't use this feature in your service because it doesn't work on $PLATFORM").

> Portability doesn't require you to limit yourself to POSIX, it just
> means you should have all functionality covered everywhere.

Same problem. Systemd specifically provides support and integration for functionality only available on Linux. If you want portability, you have to force people not to use those features, or to reimplement them, at which point why do they exist in the first place?
To give just a few examples of functionality systemd provides (and exposes to services) that uses Linux-specific features, and thus does not fall under "all functionality covered everywhere":

- .device units and support for running a service once a device becomes available (including via hotplugging).
- Support for giving a service a private network namespace, or no networking at all. Likewise for other namespaces, such as filesystem mount namespaces.
- CapabilityBoundingSet, and many other features relying on capabilities.
- KillMode=control-group, and many other features relying on cgroups.
- TTYVHangup, TTYVTDisallocate, and various other TTY features.
- Support for resource limits, including Linux-specific resource limits.
- OOMScoreAdjust.
- PipeSize in socket-activated units, which uses the fcntl F_SETPIPE_SZ.

And that just represents the stuff I knew of off the top of my head; a quick check of the systemd manpages turns up many more.

> If you can
> do that with POSIX, then do it with POSIX; if you can't, then you'll
> require #ifdefs or parallel implementations.

Or a new program for another target platform. Nobody expects LXC to work on non-Linux platforms without Linux containers; other platforms can implement container support and corresponding userspace tools.

> It also doesn't mean you
> have to do *everything* yourself, and it's perfectly acceptable to
> request that people who care about the architecture in question do the
> bits specific to that architecture. However, systemd upstream has
> expressed hostility to that concept, so I don't think it's going to
> happen.

systemd upstream has made it clear that they'd recommend implementation of native init systems for other platforms, taking full advantage of the features of those platforms.

> > [...]
>
> Again, portability doesn't have to imply you limit yourself to the lowest
> common denominator.

Yes, it does.
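To make that list concrete: a hypothetical unit file combining several of these Linux-only directives might look like the sketch below (the service name and path are invented for illustration; the directives themselves are real systemd options):

```ini
# /etc/systemd/system/example-daemon.service (hypothetical)
[Unit]
Description=Example daemon using Linux-only systemd features

[Service]
ExecStart=/usr/sbin/example-daemon
# Linux namespaces: no network visible to the service
PrivateNetwork=yes
# Linux capabilities: restrict the capability bounding set
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# cgroups: kill every process in the service's control group
KillMode=control-group
# Linux OOM killer tuning
OOMScoreAdjust=-100

[Install]
WantedBy=multi-user.target
```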
To make the program itself portable, you can't build functionality that relies on features only available on some platforms. Presumably you wouldn't consider this acceptable portability:

#ifdef appropriate_symbol
linux_syscall_that_avoids_an_inherent_race_condition(...);
#else
#error Other platforms provide no way to implement this functionality securely.
#endif

(And writing the same code without the #ifdef and #error amounts to the same thing, just with a different error message.)

And for programs like systemd which provide interfaces to other software, even if you built those interfaces to only expose functionality available on the target platform, you've just punted portability to the next layer up: services can then only rely on the features available on all platforms. Lowest common denominator. systemd promises a given featureset to other software built on top of it, and doesn't just leave out bits of that featureset depending on the platform.

> > [...]
>
> These are definitely options, *if* someone can be found to do that. If
> not, I don't think systemd can expect to be the default init
> implementation in Debian. If systemd advocates want that, they should do
> the hard work themselves, and not inflict such work on the kFreeBSD
> developers.

So, if systemd had gotten packaged first, and Debian daemon packages had all switched over to .service files and not init scripts, you'd find it reasonable for systemd maintainers to say that kFreeBSD developers should do the hard work themselves and not inflict such work on the systemd maintainers? That doesn't seem acceptable either. Regardless of the current status of either technology, I think we can reasonably evaluate both kFreeBSD and systemd, consider to what extent they can coexist, and to the extent they can't, consider what provides users and developers with the most value. The same goes for any other technical decision. Status quo does not automatically provide the most value for users and developers.

> > [...]
> There's a difference.
>
> Udev was written so that some functionality which was statically defined
> in the kernel could be moved into userspace. As a result, today it's
> impossible to use Linux without udev if you want to be able to talk to a
> number of devices. The userspace bits that depend on udev are still
> replaceable, though I'll grant you that it gets harder as time moves on.
> So, what makes udev virtually required today is the kernel, not
> userspace.

udev doesn't just exist to move kernel functionality into userspace. In fact, that part of its functionality (the ability to create devices dynamically in response to hotplugging) no longer exists in udev, and has moved back into the kernel in the form of devtmpfs. udev primarily exists to expose Linux device events to userspace, and allow userspace to wait on those events, extract information from those events, and otherwise take action based on those events.

> Systemd isn't written to move functionality from the kernel to
> userspace; instead, it was written to replace one userspace interface by
> another. While it's possible that systemd may become the leading init
> implementation on Linux, it wouldn't be the case everywhere unless it
> gets ported; software that wants to be portable will have to support
> other init implementations anyway. So I doubt that it will ever become
> impossible to run a Linux system without systemd.

systemd, by design, exposes a large amount of Linux functionality to userspace, so that userspace can make use of that functionality. I didn't say it would become impossible to run a Linux system without systemd; I suggested that upstreams will start relying on it. If more upstreams start relying on features of systemd, and shipping .service files rather than init scripts, Debian will end up with more and more porting work to do, turning five-line .service files into hundred-line init scripts with less functionality and less reliability.
In any case, I don't think we need to argue over that particular scenario. Either it'll happen or it won't, and Debian will have little to do with that either way.

> > > kFreeBSD is already part of Debian. Systemd is not. The answer would
> > > seem to be obvious.
> >
> > "First come, first served" does not inherently represent a sensible
> > problem-solving strategy, and in particular it has no ability to escape
> > local maxima.
>
> From our constitution:
>
> > [...]
>
> If you're saying "I want to do this work, but it introduces a problem
> for those other people. Heck, I don't care, let them deal with it",
> you're "imposing an obligation to do work for the Project" on others.

Not at all. Nothing obligates anyone to do work on kFreeBSD, systemd, or any other technology; people choose to do so. To use an analogous example, consider two packages, libfoo1 and bar, with bar depending on libfoo1. Nothing obligates the maintainer of libfoo1 to update bar to work with libfoo2, nor does anything obligate the maintainer of bar to update bar to work with libfoo2. When libfoo1 goes away because it has inherent bugs that nobody plans to fix (or *can* fix without breaking ABI), nothing obligates the maintainers of either libfoo2 or bar to fix the resulting RC bug on bar, either by porting bar to libfoo2 or maintaining libfoo1 indefinitely. And when that bug gets reassigned to, nothing obligates the ftpmasters to remove bar. :)

> > As this thread has demonstrated, people can very sensibly argue that
> > both kFreeBSD and systemd have value; kFreeBSD does not automatically
> > win that argument by saying "frist psot".
>
> I'm not saying the kFreeBSD people shouldn't have to do anything at all
> to support systemd. However, given the above, I think it's only fair if
> systemd advocates who wish to see systemd become the default in Debian
> will do the hard work to make that happen, and not try to chicken their
> way out because "kFreeBSD is a toy architecture".
You've entirely ignored my point, and then reasserted that you believe kFreeBSD wins the argument by default, by implying that if someone doesn't do the work then systemd loses.

I certainly agree that the best-case scenario involves a peaceful coexistence of systemd and kFreeBSD. I described a likely scenario for such peaceful coexistence at the top of this mail. However, in the event that no such coexistence can happen, neither systemd nor kFreeBSD inherently gets to say "well, we win, go away and don't come back".

- Josh Triplett
19 June 2007 09:30 [Source: ICIS news]
TOKYO (ICIS news)--Japanese polypropylene (PP) producers are looking at increasing PP capacity due to vigorous demand from China, with the country's largest producer considering an ambitious expansion project, they said on Tuesday.
Japan Polypropylene, a joint venture of Mitsubishi Chemical and Chisso Corp, was considering the second phase of a "scrap-and-build project", which involved the closure of old lines and the building of new, more efficient ones, a company spokesman said. Prime Polymer, another major Japan-based PP producer, was mulling a similar scheme.
Japan Polypropylene wanted to increase capacity as supply of PP was tight due to booming demand from China.
The company expected demand would outstrip supply for quite some time, though it mentioned new PP plants were being built overseas.
Japan Polypropylene announced earlier it was constructing a 300,000 tonne/year PP unit in Kashima.
Two existing PP units in Kawasaki, Kanagawa prefecture, with a combined capacity of 138,000 tonnes/year would be scrapped about a year after the new Kashima unit comes on line, the company said.
However, despite the closures, Japan Polypropylene's PP output will rise from about 1.08m tonnes/year to 1.25m tonnes/year due to new capacity.
Meanwhile, Prime Polymer was also mulling a scrap-and-build project with the building of a new unit with a capacity of 200,000-300,000 tonnes/year in Chiba in 2009-2010 on the table, a company spokesman said on Tuesday, adding it could boost overall capacity by about 100,000 tonnes/year.
The planned plant could get its supply of propylene feedstock from a 150,000 tonne/year olefins conversion unit that Idemitsu, Mitsui Chemicals and Sumitomo Chemical were building.
Prime Polymer is a joint venture of Mitsui Chemicals and Idemitsu Kosan. It produces a total of 1.36m tonnes/year of PP.
($1 = Y123 | http://www.icis.com/Articles/2007/06/19/9038290/focus-japan-pp-producers-to-capitalise-on-demand.html | CC-MAIN-2013-48 | refinedweb | 323 | 57.1 |
rogerdpack (Roger Pack), 07/14/2012 03:39 PM
Feature #6731 [add new method "Object.present?" as a counter to #empty?]
Basically Object#present?, "An object is present if it's not #empty?" [2]
or, in other words, "is there any data inside?"
Examples:
>> [].present?
=> false
>> [3].present?
=> true
>> ''.present?
=> false # because it's #empty?
>> 'a'.present?
=> true
>> nil.present?
=> false
Example usage:
button.text=text if text.present? # I only care whether the text actually was set to something, and also don't want to worry about whether it's nil or not.
Thanks.
Basic implementation ([1]):
class Object
  def present?
    !(respond_to?(:empty?) ? empty? : !self)
  end
end
[1]
[2] they also use #blank? but that's for a different feature request.
strategy for structuring app
Hi
Please bear with me, I'm still a bloody C++/Qt beginner even though I've already created some minor apps just for fun and home use.
But I always stumble upon the same difficulty... How shall I structure my app if there is more than one window in it and data needs to be sent between the two windows?
My current goal is to have three classes: a main-mainwindow and a second mainwindow that has as centralwidget a custom derived QChartView. I want to override
a mouse move event in the QChartView and send the current mouse position to a QLabel in the main-mainwindow.
Now I have two ideas in my mind to exchange the data.
First, to send it via Signals&Slots and second, to implement setter/getter
methods between the main-mainwindow and the QChartview class.
Now for the latter, how would you do this (if at all)?
Is it possible that I create a setter method in my mainwindow that sets a QLabel's text? Then, from the QChartView class, if a mouseMoveEvent occurs, I somehow call the setter method of my mainwindow and pass over the mouse position? But I cannot access my mainwindow's object
in main.cpp:
MainWindow w.setLabel(QString mousepos)
from inside the QChartView class?
Where is my fallacy?
I usually start with with "New Project->Application->Qt-Widget Application".
Then I have my mainwindow.h/cpp that gets created in main.cpp (as MainWindow w;). Then I usually do it wrong by implementing any additional windows inside the mainwindow class, and then I wonder why I can't use a setter method of the mainwindow class from the other window.
Where should I create the second mainwindow object, and where the QChartView (as centralwidget)? Should I do it in main.cpp beside the MainWindow w; so that it acts as a third point from which I can use the setter/getter methods of all three classes?
Or should I completely drop this setter/getter idea and use signals and slots?
- mrjj Lifetime Qt Champion last edited by mrjj
Hi
Often using signals and slots makes a better design as you get better
encapsulation than using get/setter since the "other" window do not need to know anything about
the receivers of the signals.
So for your Chart you could define a new signal
void MousePos( QString mousepos ); // (signals have no bodies)
and for mainwindow, define a new slot (both in .h and body in .cpp)
void MainWindow::MousePos( QString mousepos ) {
ui->label->setText(mousepos );
}
and in main.cpp hook that up to a slot in mainwindow
(assuming you new the QChart widget there.)
connect(ChartWinPointer, SIGNAL(MousePos( QString)) , &w, SLOT(MousePos( QString ) ) );
and in mouseMove in QChart child
void xxx::mouseMove(xx) {
QString mp;
mp=xxxx; // convert to text
emit MousePos(mp);// tell mainwin ( or what else ever u hook up)
}
This way, none of the objects in question needs to know much about the others. The QChart will simply emit info to the world, and whatever class hooks up a slot to the signal will get informed.
It's very much like a setter in mainwindow, but free from the types of the setter's holder and more generic, as you can have multiple slots connected that all get the string. You could hook up something other than the first mainwindow by simply giving it a slot that matches.
So there is no real fallacy here, just that Qt offers a nice way to allow unrelated objects to communicate in a generic way, and in most cases it will just work better in the long run than directly calling another window's functions. But since it's not (part of) normal C++, it takes some practice to start thinking in terms of signals and slots.
Thank you, so I will stick with Signals&Slots and try your approach.
I'll report back.
It works!
I did it all from scratch and didn't use the templates.
But I did not wrote anything else to main.cpp than what
is needed for my mainwindow.
main.cpp
#include <QApplication>
#include "mainwindow.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    MainWindow w;
    w.show();
    return app.exec();
}
mainwindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QWidget>
#include <QMainWindow>
#include <QLineSeries>
#include <QChart>
#include <QChartView>
#include <QLabel>

class MainWindow : public QWidget
{
    Q_OBJECT
public:
    explicit MainWindow(QWidget *parent = nullptr);
private:
    QMainWindow *chartWindow;
    QLabel *labelMouse;
signals:
public slots:
    void MousePos(QString mp);
};

#endif // MAINWINDOW_H
mainwindow.cpp
#include "mainwindow.h"
#include "mychartview.h"

MainWindow::MainWindow(QWidget *parent) : QWidget(parent)
{
    this->setWindowTitle("Mainwindow");
    this->resize(100,100);

    chartWindow = new QMainWindow(this);
    chartWindow->setWindowTitle("Chartwindow");

    QChart *chart = new QChart();
    QLineSeries *series = new QLineSeries();
    series->append(1,2);
    series->append(2,4);
    series->append(3,5);
    series->append(4,7);
    chart->addSeries(series);
    chart->createDefaultAxes();

    MyChartView *chartView = new MyChartView(chart);
    chartWindow->setCentralWidget(chartView);
    chartWindow->show();
    chartWindow->move(30,30);

    labelMouse = new QLabel("n/a", this);
    labelMouse->setStyleSheet("font-size:14pt;");
    labelMouse->resize(100,50);

    connect(chartView, &MyChartView::mousePos, this, &MainWindow::MousePos);
}

void MainWindow::MousePos(QString mp)
{
    labelMouse->setText(mp);
}
mychartview.h
#ifndef MYCHARTVIEW_H
#define MYCHARTVIEW_H

#include <QObject>
#include <QChartView>

QT_CHARTS_USE_NAMESPACE

class MyChartView : public QChartView
{
    Q_OBJECT
public:
    explicit MyChartView(QChart *chart, QWidget *parent = nullptr);
protected:
    void mouseMoveEvent(QMouseEvent *event) override;
signals:
    void mousePos(QString mp);
};

#endif // MYCHARTVIEW_H
mychartview.cpp
#include "mychartview.h"

MyChartView::MyChartView(QChart *chart, QWidget *parent)
    : QChartView(chart, parent)
{
}

void MyChartView::mouseMoveEvent(QMouseEvent *event)
{
    QString xpos = QString::number(event->pos().x());
    QString ypos = QString::number(event->pos().y());
    QString mpos = xpos + "," + ypos;
    emit mousePos(mpos);
}
I can now see the x,y position of my mouse in the QChartview.
My ultimate goal is to be able to somehow select an x range from the QLineSeries with the mouse and send the selected x-range to my mainwindow. I currently have no idea how to do that, but I will search; maybe someone has already done it.
a convention followed by the caller and callee. In addition to that, the CLR manages its own stack of frames to mark transitions in the stack (for example, managed-to-native calls and security asserts), and uses the information to mark the addresses of GC roots that are active in the call stack. These are stored
We'll use this set of types in our examples below:
using System;
using System.Runtime.CompilerServices;
class Foo
{
[MethodImpl(MethodImplOptions.NoInlining)]
public int f(string s, int x, int y)
{
Console.WriteLine("Foo::f({0},{1},{2})", s, x, y);
return x*y;
}
[MethodImpl(MethodImplOptions.NoInlining)]
public virtual int g(string s, int x, int y)
{
Console.WriteLine("Foo::g({0},{1},{2})", s, x, y);
return x+y;
}
}
class Bar : Foo
{
[MethodImpl(MethodImplOptions.NoInlining)]
public override int g(string s, int x, int y)
{
Console.WriteLine("Bar::g({0},{1},{2})", s, x, y);
return x-y;
}
}
delegate int Baz(string s, int x, int y);
Furthermore, we'll imagine the following variables are in scope for examples below:
Foo f = new Foo();
Bar b = new Bar();
The CLR's jitted code uses the fastcall Windows calling convention. This permits the caller to supply the first two arguments (including this in the case of instance methods) in the machine's ECX and EDX registers. Registers are significantly faster than using the machine's stack, which is where the remaining arguments are supplied, in right-to-left order (using the push instruction).
Daniel,
> I managed to run mptest.py ("Hello World" of official
> doc) with Apache win32 but I don't manage to run the
> others python scripts :
> When I run the other python scripts, my browser show
> me "hello world"...the same result than mptest.py !!
The answer to this question depends on your configuration.
If your configuration looks something like this
<Directory "/some/directory/">
AddHandler python-program .py
PythonHandler mptest
</Directory>
Then what you're telling Apache is "Run mptest.py whenever
anyone requests a URI in the directory "/some/directory"
which has the extension ".py". So *all* of the following
URIs result in the same call to the "handler" function
of "mptest.py".
ter.py
The important section of the manual to understand is
section 3.2: So what Exactly does Mod-python do?-
what-it-do.html
In particular, you must understand the section at the
bottom called "Some food for thought".
> I think I must add or modify other apache python
> directives. So
> - I would like to get a good httpd.conf for Apache win32
> and Python module which works.
"Works" is a relative thing. "Works" for what purpose is
more the right question to ask. I've found that the bare
minimum changes to a httpd.conf for mod_python are two
changes,
1. to load the modpython module, and
2. to set up a handler module/function.
These are handled as follows
1. Start with a clean httpd.conf from the Apache
installation, and add the following entries.
LoadModule python_module modules/mod_python.{dll|so}
There should be a list of "LoadModule"s in the default
httpd.conf, just put it with them. Probably best to put it
before any other "LoadModule".
Then add a python handler for whichever directories you
wish. The following directive indicates that
myPythonHandler.py should be run whenever any URI ending
in ".py" is requested anywhere in the document tree.
<Directory /my/document/root/>
AddHandler python-program .py
PythonHandler myPythonHandler
</Directory>
So whenever a request arrives for any URI ending in ".py",
anywhere in the document hierarchy, at any level, then
the "handler" function in the module "myPythonHandler.py"
will be run.
> - some information about the "standard" headers of python
> scripts running in Apache web server.
I'm not sure what you mean by "headers". This is not CGI,
so you don't need shebang lines, or anything like that. The
only headers you need to know about are the HTTP headers
sent back with each http response, such as "Content-
type", "Content-length", "Content-encoding", and so on.
These are set in your handler function as follows
def handler(req):
req.content_type = "text/html"
req.headers_out['Content-Length'] = str(length)
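Putting those pieces together, a complete minimal handler looks like the sketch below. In a real deployment you would `from mod_python import apache` and return apache.OK; the OK constant and the FakeRequest class here are stand-ins I added so the handler can be exercised without a running Apache.

```python
OK = 0  # mod_python's apache.OK is the integer 0; hardcoded so this runs anywhere

def handler(req):
    # A minimal mod_python-style handler: set headers, write the body, report OK.
    req.content_type = "text/html"
    body = "<html><body>Hello from mod_python!</body></html>"
    req.headers_out['Content-Length'] = str(len(body))
    req.write(body)
    return OK

class FakeRequest:
    """Stand-in for mod_python's request object (for testing only, not part of mod_python)."""
    def __init__(self):
        self.content_type = None
        self.headers_out = {}
        self.body = ""

    def write(self, data):
        self.body += data
```

With Apache configured as described above, every .py request under the directory would reach this one handler function.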
Hopefully this will help you get configured and running.
Bonne chance,
Alan.
Free Webkinz Guide - Magical Charm Forest
- 9th -- 100 Kinz Cash
- 10th -- 100 Kinz Cash
- 11th -- 200 Kinz Cash
- 12th -- 200 Kinz Cash
- 13th -- 200 Kinz Cash
- 14th -- 300 Kinz Cash
- 15th -- 300 Kinz Cash
- 16th -- Fairy Falls Fountain
- 17th -- 400 Kinz cash
- 18th -- 400 Kinz cash
- 19th -- 400 Kinz cash
- 20th -- 500 Kinz cash
- 21st -- 500 Kinz Cash
- 22nd -- 500 Kinz Cash
- 23rd -- 500 Kinz Cash
- 24th -- Super-Exclusive Charm Forest Item
- Free Webkinz Guide - Tips and Tricks
Your guide to everything Webkinz, from where to find virtual charms, to the quickest ways to earn Kinzcash!
- Free Webkinz Guide - Finding Virtual Charms
Forest Charms
Once you have bought your first charm, every six hours you can search down one of four paths for Forest Charms in the Magical Charm Forest section of Webkinz World. Virtual Charms, and other exciting prizes, are hidden in Pixie Pods, or sometimes carried by the Good Fairies you see throughout the Forest. Clicking on a fairy reveals what she is carrying. If you find a Bad Fairy disguised as a Good Fairy, you don't win anything, but if the fairy drops a key, you can click on it to pick it up, and search the forest for hidden doors which contain anywhere from 2 to 5 Pixie Pods! If you find a charm it will be added to your virtual collection. If you already have that charm, you will be awarded 100 Kinz Cash instead! Collecting entire rows of virtual Forest Charms also earns you Kinz Cash and exclusive or rare items.
hello again stevo i got it from a mate so i think this is there web address
filling address ,ring them if you need them in a hurry ,tell them netsimsy put you on
I actually ment shuts and rare lol and ps everybody who has like 20 pets and think they are super duper lucky[i'm not saying they aren't]i have 83 pets look at my page again please i really need plumy, witch, swirl tiara, or mostly charm please i swear i will send shuts and two kinzstyle or something maybe even swan dress
hi guys if you send me charm i promise i will send shuts i really have them but first add me on webkinz:sdg9132000 and i'll send you shuts or rare
Hey everyone. do u guys know if the swirl tiara is worth plumpy glasses? if you know the answer, plz coment!!! thanks, i appreciate it!!
pinky i will add you
Omigosh, I got plumpy!!!!!!!!!!!!! Just looking for jingle elf, wacky, and purple which now.
Oh PLEASE will someone here trade/send me a charm tiara??? I want one soooooooooooo bad!!! If you trade/send it to me I will send you anything you want back trust me I have AWESOME stuff like a million exclusives items and a TON of kinzstyle exclusive clothes!!! :) ok??? thank you please comment back here if you will be nice/kind enough to trade with me for a charm tiara I will be SOOOO happy!!! and I'm not lying I promise I will trade/send you back a super exclusive or kinzstyle item k??? bye, bye!!! :) im so excited if I get one!!!
Hey does anyone have Charm Forest Tiara or Plumpy Glasses or Orange Army they don't want??? I will even trade you something for them :) I have Pink Bunny Ears and full Lion Costume :) I also have lots of kinz style. I know it may not be worth all most of the stuff I want but if your interested and you don't really wanted your Plumpy, Orange Army, and/ or Charm Tiara, then I am your girl!! :D My user name on webkinz is hanah11 add me and we can trade or if you want to send any of the items lie if your account is going to expire and you dont plan on renewing it. or if you plainly just want to get rid of your stuff because you plan on getting off webkinz. THANKS A BUNCH!! :D please add me and we can/send or whatever you want :) and answer me here! thanks!
i was wondering how i can get the charmf fairy outfit. i have the top and shoes but i want the pants
Smartie76 if you sent me purple which hat wz jeans and plumpy i will send the best three rare and kinzstyle I have. My username is glamourgirl35 and I'm usually in the light blue zone
-
should i trade wacky jeans for swirtl tiara and purple slippers???
Wait sorry Smartie76. I'll send you three pieces of kinz style.
Smartie76, I'll send you two pieces of kinz style. Could you maybe send me maybe Wacky and a purple which hat? And maybe shutters? My user name is wynner7. I'm just looking for Wacky, purple which hat, and plumpy. If you have plumpy, please send it with wacky and purple which.
how do ppl still play webkinz????? im the jakec98 person up at the top who says he needs secrets.. that me omg 2 years ago how can ppl still play? sorry but sad
hey smile dude, what would you like for the swimsuit?
(other than what you have posted asking for)
hello??
omigosh!
thanks a ton katzrdabom!!!!
I just sent them a request and they accepted, then I sent them purple slips and they sent them back with a panda slide!
yayz
ok guys I am going everywhere posting stuff because I just had the greatest thing in my entire life happen to me!!!!!!!
I sent lps3cheese a love puppy couch and she sent it back along with 2 super beds!!!!!!!!!
I am sooooooooooooioo happy!!!
my friend saw a post about this awhile ago, and since I have been giving up on webkinz for awhile I decided to try it.
omigosh it worked!!
then my bff tried it and she sent her gold dress and got it, a golf cart, and a dragon super bed!
we are sooooo happy!!
ok buh-bye!
Any of the things I listed :)
I have the bed wat kind of stuff wud u trade for it
Anyone have priceless cat costume peices?or the mystical panda bed?
I have
Charm tiara,woodland tiara,charmed dress,charm tophat,charm boots,retired green swimsuit*both peices,and a zum fountain for them?
I'm not a scammer.I want to meet in the trading room,because I don't like sending 1st
-
Ps in zone light blue
yes you heard me right i have mystical panda bed
i would like a puple witch hat for itfor trade
if you have 1 meet me in kinz chat plus trading room(2)
number 17, at 5:00 p.m. april 17 2010
wasnt talking about you bueary
i was talking about smartie76(his uer is pink555510)
ps smartie 76 is a lier! he does not send you stuff
I'm the same person ^^^
I meant my user name is Buneary13
NOT Buneary12
Hey guys :D
I'm not even kidding if you send kinz style i swear I will send rare sweet tooth tiger food AND cheeky dog and cat food.
I'm not kidding!I might send coupons too (I have 32)
I've been a member ever since the start of Webkinz
Don't belive me, Ask Milk561
By the way my Username is Buneary12
the person who said he would give items like plumpy and stuff if you sent him rare and kinz style is a LIAR!!!!!!
i sent him 2 purple kinz stlye cow girl hats and a rare item
and he did not send any thing.
and his user is pink555510
do nt send him anything you will just lose the items you send
o if you still visit here (or have your account)
you can pick the rares/kinz style
sometime i will list the ones i have
(and i have been trying for ever to get a purple witch hat i will give any rare/kinz stlye that i have or can get)
hi smartie76 i have some rares and kinz style i don't need
please respond!!!
(ps. this was posted sunday march 21 of 2010)
hi peoples :)
sorry i dont hav 1 ninjagirl!
hey can someone please send me a witch hat i want one so badly my user is Ninjagirl50
make me your friend my user name is 2008heart and my site is going to expire plus i dont play webkinz that much so if any one sends me some exclusive rare trophy clothing any kind of item i will send them some items from the trading card series egiptian and lots more i have (i chosse randomly first to send items gets the best items like beds and couch intead of chairs and tables anyway send me a trophy or something like that and i promise u will ge a rare exclusive pet specific item even in return (i do not scam i am tired of webkinz and would like to give my items away before my account expires
i have 42 webkinz includind the new ice fawn and i have 67 rooms in my house and i have 2 trophy rooms and sooooo many rare itams i also have some extra crown of wondres if anyone wants them but i want good items in return!
thanks for writing stuff about me! usually not many people do that for me.
at first i didnt believe smartie76 , then i sent her one kinz style,pink knit capelet, and she sent me a charm tiara and plumpy glasses! she gained my trust, and now i wil send her more stuff and make my friends jealous!!! i hope you go to heaven, smartie76!!!!
smartie76 is awesome! i sent her a rare swan bed and a wedding dress, and she sent me a swirl leaf tiara and a purple witch hat!!!! im so happy!! im gonna stock up on rare and kinz style:)
thanks smartie76! you gave me the best items!(wacky jeans and red lava and shutter shades, and i only sent you holnwon golf shoes, freeworld rocking pants, and red dotted party dress.) i will SO send you more stuff! you are the best person ever!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
hey guys! my account is expiring, so i am giving all my stuff away, but i wanr to like die rich, sort of like expire rich. i got lots of cool stuff, like 27 red lavas, 14 purple witch hats, 21 wacky jeans, 12 plumpy glasses, 7 charm tiaras, 5 swirl leaf tiaras, 1 year one trophy, 1 year two trophy, every signature pet item, and 30 shutter shades. if you can send me any rare items or kinz style( cuz they sell for alot) i will send you two items i randomly choose(no joke!)i am pink555510 and one more thing. if you send three rare items, three kinz styles, or a mix of kinz style and rare, you get three items, not two. i hope this helps anyone that want priceless items! by the way, i will never run out of items because my parents OWN webkinz but i got tired of webkinz
does anyone have all the charms?
I love webkinz!!!!!!!!!!!!!!!!!!!!!!!!!!!!!It would be terible if there was no such thing as webkinz!!! I have 23 !!!!O even have a webkinz note book. It has everything about webkinz.
HELLO PEEPS! srry i am bored
awesome i will add u and give u somethin good!!! promise! and if u want to add me on msn mine is jadelust@tmsn.com if u want to add me... ill add u on webkinz perdy12! :(
u go on google images tipe wut u want and wen u found wan save and put as pic :)
kool graden? how did u get those pictures of stewie? when i try to get a picture out of google it wont let me cause it said i needed to download something help me plzz!
i one of u ad me on webkins and send me an exlusive item ill send you some thing back and im not lying. :)
does any of you have guitar hero ???????? :)
i have webkinz ill ad you and maybe send you something (maybe) k
maybe u can maybe u wont!
Hey 4 people on webkinz...can I have a rare item or something cool? my username is SnowberryStorm
cool =)
again my website is
u guys i edited my hub page now i figured out how to put pics on!
hehe
yeah lol
i do! o wait, desi u alredy know that!
ok thats fine
no i dont
no problem! :D do you have an email?
Thanx desi
i am one of ur fans
desi7,haenc can you plzz become one of my fans?
oh, ok ill check it out
ok you guys this is kind of a fun site its called............ its Kinda fun but webkinz is alot funner ratings: aplus math 30% webkinz1,000,000,000% more funner
my site is named its pretty cool
my files cool
haenc will you give me the new exclusives if i give you something back
yea i do, haenc is my username, and my other 1 is haenc2
i want the new exclusive items?
class griddle not glass griddle. u get it for finishing level 5 on the cooking class at kinzville academy
what is the glass griddle?
yeah my username is deshero!!!!!!!!! well i do have a superbed and potm (the TV thing)item and a whole bunch of exclusies. i am looking for these items: cooking class griddle, big city window, and any egyptian item. thankssss
whoa, sorry to hear that so is you username actually deshero and what stuff do you have?
whoa, some one was acting me i DO NOT have all that stuff!
no i do not want that if you like, ill trade you the scaucer for the super stunt unicicle
want the elephant fountain?
ann add me my user name is jadelust add me if you send me exclusives and ill send you rare exclusive or pet of the month items please?
i have 107 webkinz and im still getting more! so haha
i hope you win im rooting for you!
sure. ooooh i have a soccer game i am so nervous!!!!!!
i made a mistake on the first line above this ok if anyone will ill trade them something good for the super stunt unicicle!?
ok desi what stuff do u have that u want to trade?
ok ill give somthing real good for the super stunt unicicle even one of my pet of the month items how about my flying saucer {pet of the month} for a super stunt unicicle if u want to trade
or actually the super stunt unicicle please? ill give you something good! i promise!
ok sorry bout that dont worry ill send you pet of the month or better exclusive if you promise to send me something good back tomorrow when i send you something exclusive can i have the eagle chair cause i really want that please?
ok, well for the music box, ill send u an exclusive. i am looking for trading card stuff as well, but i have a webkinz cares tv. i thought of something does anyone have the exclusive frome the silver back gorilla and the webkinz seep cloud blower if so ill send you something like a exclusive or pet of the month bye
I have all of those! my username is deshero, but you have to send me something first.
HI IM JADELUST IF YOU SEND ME A RARE EXCLUSIVE\PETOFTHEMONTH\RARE ILL GIVE YOU SOMETHING BACK IT WILL DEFINITLEY BE EXCLUSIVE OR RARE NOT A SCAM ITS TRUE MY USERNAME ON WEBKINZ IS IS JADELUST
it doesn't have to be from there....... anything you have, just tell me and i will trade some superbeds yay! :-) trust me any rare stuff you have its all good with me
141 | https://hubpages.com/games-hobbies/Webkinz_Charms | CC-MAIN-2018-30 | refinedweb | 2,672 | 85.52 |
How to script the creation of a new database?
We are trying to improve our development process and part of that is ensuring that people create a new database and install our modules in their dev environment as part of their test process before they commit their changes to our git repo.
Using ..
/usr/bin/openerp-server -c /etc/openerp/openerp-server.conf --init=our_module --stop-after-init -d test_database
.. I can script the installation of our modules in a new database that as been created through the web interface, but how can that new test_database be created from the command line?
We must update our modules to a duplicate of our existing database .. which leads to a 2nd related question .. is there a way to duplicate an existing database from the command line?
For Odoo 9, and most likely with every Odoo version higher than 6.1
first:
pip install oerplib
Then create a python script: create_db.py
#!/usr/bin/python
import sys #import sys if you want command line args
import oerplib
oerp = oerplib.OERP(server='localhost', protocol='xmlrpc', port=8069)
oerp.db.create_database('SUPER_ADMIN_PWD', "DB_NAME", False, 'en_US', "DB_PASS")
---------
You can replace DB_NAME and DB_PASS with sys.argv[n] and pass the value from command line or script
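Following that suggestion, here is one way to wire up the command-line version. The two-argument layout is an assumption, so adjust it to taste; oerplib is imported lazily inside create_db so that nothing tries to connect at import time:

```python
import sys

def parse_args(argv):
    # Expected usage: create_db.py DB_NAME DB_PASS (this layout is an assumption)
    if len(argv) != 3:
        raise SystemExit("usage: create_db.py DB_NAME DB_PASS")
    return argv[1], argv[2]

def create_db(db_name, db_pass, super_admin_pwd="SUPER_ADMIN_PWD"):
    # Same call as in the answer above, parameterized from the command line.
    import oerplib  # imported lazily so argument handling works without a server
    oerp = oerplib.OERP(server="localhost", protocol="xmlrpc", port=8069)
    oerp.db.create_database(super_admin_pwd, db_name, False, "en_US", db_pass)

if __name__ == "__main__":
    name, password = parse_args(sys.argv)
    create_db(name, password)
```

Run it as `python create_db.py test_database secret`, then the --init/--stop-after-init command from the question can install your modules into the new database.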
On Odoo 9.0 (and I guess also 8.0), if test_database does not exist, the exact command you wrote will create it automatically through the -d flag...
As far as I can tell this does not work in 8.0: Odoo will fail startup with an OperationalError.
stackoverflow.com/questions/876522/creating-a-copy-of-a-database-in-postgres (can't post links so removed the http part)
From there duplicating db from command line:
createdb -O ownername -T originaldb newdb
Haven't tested this one but would assume that it works because it has been voted as a good answer.
Hmm that is a valid workaround. I was actually trying to work out how to do it via OpenERP. Ultimate goal is to have a single script that runs on the OpenERP server that drops some test databases, creates one 'empty' OpenERP database to install our module and a duplicate of an existing database and updates our modules. Running it via OpenERP (running 'openerp-server' as the 'openerp' user) gets around the idea of having to provide additional credentials to do something on the DB server.
# thanks to
I have been able to create a database using oerplib from, one of the two client libraries I have found "out there", the other being openerplib from.
Using oerplib:
# pip install oerplib
# python
>>> import oerplib
>>> jconn = oerplib.rpc.ConnectorJSONRPC('localhost', port=8069)
>>> jconn.proxy.web.database.get_list()
>>> jconn.proxy.web.database.create(fields=[
...     {'name': 'super_admin_pwd', 'value': 'master-secret'},
...     {'name': 'db_name', 'value': 'my-new-database'},
...     {'name': 'create_admin_pwd', 'value': 'secret'},
...     {'name': 'create_confirm_pwd', 'value': 'secret'},
...     {'name': 'db_lang', 'value': 'en_US'},
... ])
There are more concise mechanisms also, without all the 'name', 'value', etc.
This mechanism as written has indeed created a database for me.
God speed on your database creation automation journey,
ModelChoiceField implementation that can accept lists of objects, not just querysets
Project description
The application provides ModelForm, ModelChoiceField, ModelMultipleChoiceField implementations that can accept lists of objects, not just querysets. This can prevent these fields from hitting DB every time they are created.
The problem
Imagine the following form:
class MyForm(forms.Form):
    obj = ModelChoiceField(queryset=MyModel.objects.all())
Every time you render the form ModelChoiceField field will hit DB. What if you don’t want it? Can’t you just pass the list of objects (from cache) to the field? You can’t. What to do? Use CachedModelChoiceField.
The solution
Form with CachedModelChoiceField:
from cached_modelforms import CachedModelChoiceField

class MyForm(forms.Form):
    obj = CachedModelChoiceField(objects=lambda: [obj1, obj2, obj3])
This field will act like a regular ModelChoiceField, but instead of a queryset you pass it a callable that returns the list of objects. A callable is needed so that the list is not evaluated just once, at form definition time.
A callable can return:
- a list of objects: [obj1, obj2, obj3, ...]. Each obj should have a pk attribute and be coercible to unicode.
- a list of tuples: [(pk1, obj1), (pk2, obj2), (pk3, obj3), ...].
- a dict: {pk1: obj1, pk2: obj2, pk3: obj3, ...}. Note that dict is unsorted so the items will be ordered by pk lexicographically.
Same is for CachedModelMultipleChoiceField.
Warnings
There is no special validation here. The field won’t check that the object is an instance of a particular model, it won’t even check that object is a model instance. And it’s up to you to keep cache relevant. Usually it’s not a problem.
Modelform
But what about modelforms? They still use original ModelChoiceField for ForeignKey fields. This app has its own ModelForm that uses CachedModelChoiceField and CachedModelMultipleChoiceField. The usage is following:
# models.py
class Category(models.Model):
    title = CharField(max_length=64)

class Tag(models.Model):
    title = CharField(max_length=64)

class Product(models.Model):
    title = CharField(max_length=64)
    category = models.ForeignKey(Category)
    tags = models.ManyToManyField(Tag)

# forms.py
class ProductForm(cached_modelforms.ModelForm):
    class Meta:
        model = Product
        objects = {
            'category': lambda: [...],  # your callable here
            'tags': lambda: [...],      # and here
        }
That’s all. If you don’t specify objects for some field, regular Model[Multiple]ChoiceField will be used.
m2m_initials
If you use a ManyToManyField in a ModelForm and load an instance into it, the form will make one extra (JOINed!) DB request to get the initial values for that field. Can we cache this too? Yes. You need a function that accepts a model instance and returns a list of pk's – the initials for the field. Here's a modification of the previous example:
# models.py
class Product(models.Model):
    title = CharField(max_length=64)
    category = models.ForeignKey(Category)
    tags = models.ManyToManyField(Tag)

    def tags_cached(self):
        cache_key = 'tags_for_%(product_pk)d' % {'product_pk': self.pk}
        cached = cache.get(cache_key)
        if cached is not None:
            return cached
        result = list(self.tags.all())
        cache.set(cache_key, result)
        return result

# forms.py
class ProductForm(cached_modelforms.ModelForm):
    class Meta:
        model = Product
        objects = {
            'category': lambda: [...],  # your callable here
            'tags': lambda: [...],      # and here
        }
        m2m_initials = {'tags': lambda instance: [x.pk for x in instance.tags_cached()]}
Compatibility
It works fine with Django 1.2-1.4. Altering ModelForm required some copy-pasting from the Django source code; it couldn't be done with inheritance. I don't think there will be problems with further versions of Django, but don't forget to run the tests if something seems wrong.
The QPolygonF class provides a vector of points using floating point precision. More...
#include <QPolygonF>
Inherits QVector<QPointF>.
Note: All the functions in this class are reentrant.
The QPolygonF class provides a vector of points using floating point precision.
A QPolygonF is a QVector<QPointF>. The easiest way to add points to a QPolygonF is to use its streaming operator, as illustrated below:
QPolygonF polygon;
polygon << QPointF(10.4, 20.5) << QPointF(20.2, 30.2);
In addition to the functions provided by QVector, QPolygonF provides the boundingRect() and translate() functions for geometry operations. Use the QMatrix::map() function for more general transformations of QPolygonFs.
QPolygonF also provides the isClosed() function to determine whether a polygon's start and end points are the same, and the toPolygon() function returning an integer precision copy of this polygon.
The QPolygonF class is implicitly shared.
See also QVector, QPolygon, and QLineF.
Constructs a polygon with no points.
See also QVector::isEmpty().
Constructs a polygon of the given size. Creates an empty polygon if size == 0.
See also QVector::isEmpty().
Constructs a copy of the given polygon.
Constructs a polygon containing the specified points.
Constructs a closed polygon from the specified rectangle.
The polygon contains the four vertices of the rectangle in clockwise order starting and ending with the top-left vertex.
See also isClosed().
Constructs a float based polygon from the specified integer based polygon.
See also toPolygon().
Destroys the polygon.
Returns the bounding rectangle of the polygon, or QRectF(0,0,0,0) if the polygon is empty.
See also QVector::isEmpty().
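What boundingRect() computes can be illustrated without Qt. The sketch below reimplements the idea with plain structs; PointF and RectF are stand-ins for QPointF and QRectF, not Qt types:

```cpp
#include <algorithm>
#include <vector>

struct PointF { double x, y; };
struct RectF  { double x, y, w, h; };

// Same contract as QPolygonF::boundingRect(): {0,0,0,0} for an empty
// polygon, otherwise the smallest rectangle containing every point.
RectF boundingRect(const std::vector<PointF>& poly) {
    if (poly.empty())
        return {0.0, 0.0, 0.0, 0.0};
    double minX = poly[0].x, maxX = poly[0].x;
    double minY = poly[0].y, maxY = poly[0].y;
    for (const PointF& p : poly) {
        minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
    }
    return {minX, minY, maxX - minX, maxY - minY};
}
```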
Returns true if the given point is inside the polygon according to the specified fillRule; otherwise returns false.
This function was introduced in Qt 4.3.
Returns a polygon which is the intersection of this polygon and r.
This function was introduced in Qt 4.3.
Returns true if the polygon is closed; otherwise returns false.
A polygon is said to be closed if its start point and end point are equal.
See also QVector::first() and QVector::last().
Returns a polygon which is r subtracted from this polygon.
This function was introduced in Qt 4.3.
Creates and returns a QPolygon by converting each QPointF to a QPoint.
See also QPointF::toPoint().
Translate all points in the polygon by the given offset.
This is an overloaded member function, provided for convenience.
Translates all points in the polygon by (dx, dy).
Returns a polygon which is the union of this polygon and r.
This function was introduced in Qt 4.3.
See also intersected() and subtracted().
This is an overloaded member function, provided for convenience.
Writes the given polygon to the given stream, and returns a reference to the stream.
See also Format of the QDataStream Operators.
This is an overloaded member function, provided for convenience.
Reads a polygon from the given stream into the given polygon, and returns a reference to the stream.
See also Format of the QDataStream Operators. | http://doc.trolltech.com/4.3/qpolygonf.html | crawl-002 | refinedweb | 491 | 61.83 |
Opened 8 years ago
Closed 8 years ago
#13531 closed (invalid)
django.core.files.File throws an exception on _get_size
Description
The File _get_size() function incorrectly uses self.file.name as a path.
When using a custom FileStorage-like storage object, the file's name and actual path may be different. This will trigger an AttributeError even though the file actually exists on disk and is readable.
Attachments (1)
Change History (6)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
If the file does not have a name (for example, is a StringIO object), the AttributeError is at line 39 in django/core/files/base.py (svn trunk):
def _get_size(self): if not hasattr(self, '_size'): if hasattr(self.file, 'size'): self._size = self.file.size elif os.path.exists(self.file.name): #<-----------HERE self._size = os.path.getsize(self.file.name) else: raise AttributeError("Unable to determine the file's size.") return self._size
Changed 8 years ago by
comment:3 Changed 8 years ago by
The attached patch shows one way to find the size, though now that I think about it, it is broken, because it leaves the file in a different state (you really need to find the initial offset with tell(), seek to the end, get the size with tell(), then seek back to the original place).
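For reference, the tell/seek approach described in that comment, done so the file position is preserved, can be sketched like this (an illustration, not the ticket's attached patch):

```python
import os

def file_size(f):
    """Size of a seekable file-like object, restoring the original position."""
    pos = f.tell()                # remember where the caller was
    try:
        f.seek(0, os.SEEK_END)    # jump to the end...
        return f.tell()           # ...where the offset equals the size
    finally:
        f.seek(pos)               # always seek back
```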
comment:4 Changed 8 years ago by
attempting to verify this bug...
comment:5 Changed 8 years ago by
The File class is just a wrapper for local file operations and has nothing to do with a custom file storage. file.name /is/ the path, so if you're using it for something else, then you're probably using it wrong. Every place that it's used in that class it's assumed to be the path (e.g., see the open() method). If you're trying to do something else, then you may need to create your own independent File class and use that in your custom FileStorage instead.
Closing this as invalid again...
I'm going to mark this invalid because I can't work out the actual bug that is being reported.
What attribute is causing an AttributeError? Why? What exact conditions cause the problem to occur? This is one of those situations where some sample code would be very helpful.
I don't deny you're seeing a problem, but there's a limit to what we can do to fix a problem when you don't give us the detail required to reproduce, diagnose and fix the problem. Feel free to reopen if you can provide more specifics. | https://code.djangoproject.com/ticket/13531 | CC-MAIN-2018-13 | refinedweb | 438 | 72.56 |
Before being introduced to Elixir, a functional programming language built on top of Erlang, I had no idea what pattern matching was. Hopefully, by the end of this article you will have at least a rudimentary understanding of how awesome it is.
In most programming languages, you will assign a value to a variable using something like:
const myVariable = 'my value'; console.log(myVariable); // 'my value'
Now, myVariable is bound to the value you assigned to it and you can continue living your life.
When you need to check the value of a variable, in most other languages you would use conditional “if statements”, which can get unreadable as soon as you add more than 2 or 3. This is because it’s difficult to see the flow of logic, especially if the function spans many lines.
Technically you can do the same thing in Elixir, but how the compiler interprets it is significantly different. The = sign is actually called the ‘match’ operator. It will use the value on the left and compare it to the value on the right to determine if they are a match.
Tuples are used frequently in Elixir code to enable returning multiple values from a function. Typically, you would come across a {status, value} tuple, for example:
{:ok, return_value} = do_stuff()
do_stuff() must return a tuple which matches that structure (otherwise Elixir will raise a ‘MatchError’), and return_value is now bound to the second item in the tuple returned from this function.
This is basically how pattern matching works, but the real beauty is how you use it in various contexts, for example:
- When a function can return multiple values, such as the {status, value} tuple we came across earlier:
case do\_stuff() do {:ok, value} -> value {:error, \_} -> raise "Oh no!" end
- In function heads you can pattern match on parameters, to only run when particular requirements are met:
def my\_func({:ok, value}), do: value def my\_func({:error, \_}), do: raise "Oops!" IO.puts my\_func({:ok, "hello"}) # "hello"
- You can even match on lists:
[first, second, third] = [1, 2, 3]
- And decompose data structures
%{value: value} = map\_func()
There are so many examples of pattern matching in Elixir because it’s incredibly useful and powerful, and also very performant when compared to traditional methods.
In a nutshell, that’s pattern matching!
Discussion (0) | https://dev.to/jackmarchant/elixir-pattern-matching-in-a-nutshell-3cmo | CC-MAIN-2022-21 | refinedweb | 394 | 54.36 |
This document provides information on the Run > Run on Server menu item,
including how it works, and how to make it appear or disappear on a given
object.
The Run, Debug, and Profile menu items are contributed by the Eclipse debug
component. These menus appear on all objects in the UI that are adaptable to
org.eclipse.debug.ui.actions.ILaunchable. Enabling these menus (by adapting
objects to ILaunchable) is a prerequisite to enabling Run on Server.
If the Run menu appears on something that is not runnable, one solution is
to try to remove the ILaunchable adapter to remove the menu entirely.
To determine when the Run on Server submenu appears in the Run menu, the
org.eclipse.wst.server.ui.moduleArtifactAdapters extension point is used.
This extension point provides an enablement expression that allows other
plugins to enable the Run on Server submenu for specific objects.
The Run on Server menu item appears on all objects that are accepted by
at least one moduleArtifactAdapters expression.
Once the Run on Server menu item is selected, the selected object is
adapted to IModuleArtifact. If the object cannot be adapted, an error
message will be displayed (this should never happen unless an external
plugin provides enablement but not adapting for an object). If the object
can be adapted, Run on Server will proceed to try to find an available
server to run on.
The most common problems with Run on Server are it not appearing
on objects that should be runnable, or appearing on objects that aren't
runnable:
This problem occurs when the selected object cannot be adapted (either
directly or via an adapter factory) to ILaunchable. The owner of the
object should provide an adapter to make the menu appear.
This problem occurs when the selected object is incorrectly adaptable to
ILaunchable. The owner of the object should remove the ILaunchable interface
or the adapter factory that supports the object.
Is Run on Server not appearing on an object that you think is runnable?
Contact the development team responsible for the artifact (e.g. J2EE team
for Servlet) and ask them to add support.
This problem can only occur when some plugin is using the moduleArtifactAdapter
extension point to enable for an object that it shouldn't. Use the steps below
to figure out which plugin is causing the problem and open a defect up against
them.
Using a development environment, turn on tracing for org.eclipse.wst.server.ui
and launch a workbench. When you right click on the object and hover over the
Run menu, the following line of trace will be output as Run on Server appears:
"Run On Server for XX is enabled by YY." XX is the object that you have selected,
and YY is the extension point id for the plugin that is doing the enablement.
Track down the id and open a defect against this component.
This problem can only occur when a plugin provides enablement for an object
but does not allow the object to adapt to IModuleArtifact. Use the tracing
steps above to identify the plugin and open a defect to get the object adapted.
The appendix contains some useful hints and tips.
Enablement expressions allow you to create arbitrarily complex expressions using ands, ors,
nots, and Java code. For performance reasons, it's always better to use the expression
support instead of using Java code, which is slower or may be in plugins that aren't loaded
yet. However, simple Java tests can provide much better flexibility and do not need to
cause large delays.
The best practice is to use enablement expressions as much as possible to filter down the
number and types of objects that are applicable. As a final check, a property tester can
then be used. However, it is still important to use expressions first, even if you know
you'll be using a property tester for the last check - this will keep the performance up
for all the objects that are filtered out.
To add a property tester to an enablement expression, try using the following example.
In your plugin.xml, add the propertyTesters extension point:
<extension point="org.eclipse.core.expressions.propertyTesters">
<propertyTester
id="org.eclipse.myplugin.propertyTester"
namespace="org.eclipse.myplugin"
properties="someProperty"
type="java.lang.Object"
class="org.eclipse.myplugin.internal.MyPropertyTester">
</propertyTester>
</extension>
package org.eclipse.myplugin.internal;
import org.eclipse.core.expressions.PropertyTester;
public class MyPropertyTester extends PropertyTester {
public boolean test(Object receiver, String property, Object[] args, Object expectedValue) {
// cast receiver and/or expectedValue
// if (xx) // check some dynamic property
return true;
// else
//return false;
}
}
<test property="org.eclipse.myplugin.someProperty" value="true"/> | http://www.eclipse.org/webtools/wst/components/server/runOnServer.html | CC-MAIN-2015-06 | refinedweb | 774 | 54.52 |
.
Alex,
No big deal but your missing a std:: before endl here
std::cout << "First linenSecond line" << endl;
These may be dumb questions but is it pretty common to have extra data left in the input buffer? Would you ever want to do that in a program? If you can use endl to make sure the output buffer is emptied, is there a way make sure the input buffer is emptied? Would you ever need to do this?
To me char,is probably one of the most confusing basic data types in C++. Since char can be both a number or a letter. Signed or unsigned and both can be the default.
Do you ever have to use the word unsigned with char, isn't char all you need?
Since ASCII uses the positive signed char(int). How do you call out signed negatives -1 to-128?
In this lesson you initialized a char ch1(97)and char ch2('a') and you wrote they both are the same 97. Isn't int8_t(97) also the same?
I'm still a bit confused as to when and where to us the code number(97)and the code character ('a'). Is this right? if you initialize char with the code in the previous paragraph then cout will always outputs the (a) character, unless you assign it to an integer with int i(ch1), then
it willprint out 97.
Thanks for pointing out the typo. Fixed.
Generally input being left in the input buffer is the result of the user entering something unexpected (e.g. a number instead of a letter or vice versa). In such cases, you'll likely want to clean out the extraneous input, which you can do as follows:
You can use unsigned char if you want, but I recommend you avoid it outside of very specific instances such as bit-fields.
> Since ASCII uses the positive signed char(int). How do you call out signed negatives -1 to-128?
I'm not sure what you're asking here.
In most cases, int8_t is treated the same as a signed char. But this isn't guaranteed, so don't depend on it!
97 and 'a' are identical (both resolve to integer 97) -- however, you can use them contextually to provide the reader a clue as to your intent. Use 97 when you intend 97 to be a number, and 'a' when you intend 'a' to be treated as a character.
Alex,
Thanks for the quick reply.
Maybe I'm reading to much into char and confusing myself. Since char is an integer aren't all negative char just used as constants in the range of -1 to -128. What is the proper syntax to us? char(-128) or int(-128)?
Can you show a couple of instance on how to use them maybe that will square me away?
Yes, chars are integers and as such, you can assign integer values to them directly.
You can also assign chars to them:
when i use v in the line
then instead of working as a vertical tab v is replaced by a symbol in output string which is a ring wid a small arrow . I am using visual studio 2015. can u plz tell me the reason
Your Windows console font has the '\v' character mapped to that symbol.
Hello people I need some help in here
I have a homework about the char, we didn't take the lesson yet
Here's what he gave me:
"I want you to try to code a program that does the following:
- Define a variable of type "char".
- Ask the user to enter a capital letter and store it in the variable (using cin).
- Cout the variable.
- Cout (the variable + 32) (Example: cout << c + 32 << endl).
What do you notice? How can you explain this?"
I have done everything correctly
But I just can't preview the small letter.
Please help!!
What do you mean you can't preview the small letter?
Why do these two declarations give different results while printing?
char ch('8');
cout<< ch <<endl; //prints 8, as expected
char32_t ch1('8');
cout<< ch1 <<endl; // prints code point for 8, that is, 56......why?????
I can't say for sure, but I can think of two possibilities. char32_t is supposed to be a distinct type, but:
1) It's possible your compiler has implemented char32_t as a typedef of integer, so cout is treating it as an integer for printing.
2) It's possible that cout has decided that char32_t should printed as an integer since the Windows console font is horrible doesn't have support for most unicode code points.
It's worth noting that C++ really only has partial support for char16_t and char32_t. While there's now a data type to hold the values, the iostream library doesn't provide support for dealing with these types.
So is it best to avoid using char32_t since results can be unpredictable? I am using Xcode. Thanks for these wonderful tutorials and your prompt replies!
Personally, I'd avoid it and use char8_t and UTF-8 encoding if you need unicode support.
I never use "return 0" at the end of my main() and my programs work fine. So why is it used in every program here? Are there some special cases where not using "return 0" will lead to errors or is it just a standard practice?
In C++, the main function must return a value to the operating system. We typically return 0 to indicate that "the program ran okay".
Some compilers (such as Visual Studio) will let you omit the return value in main (they'll return 0 for you), but best practice is not to rely on this.
"cout outputs the char variable as an ASCII character instead of a number", Alex please tell me how cin takes input from a user. As an ASCII code or simple keyboard character. This program is a bit tricky for me:
Suppose someone entered 9 as input for variable ch. When cout is printing result, 9 remains 9 (before static_cast) and cout does not interpret it as ASCII code. When cin is removed from the program and ch is given a value like this:
It produces "b has ASCII code 98". Sorry, if I am so stupid, but why results are different in above cases. Does cin takes input in this form:
One more thing. When I assign 98 with single quotes to a char variable (char ch(’98’)), my compiler warns me that the variable is given a multi-character constant. On running the program, cout only prints the last digit (e.g. 8 for 98). Why so?
In your top example, ch is of type char, so when the user inputs '9', "cin >> ch" will treat the user input as '9' (the character) and not 9 (the integer).
If you had defined ch as an int, "cin >> ch" would have treated the input as 9 (the integer) instead. If ch was defined as a string, "cin >> ch" would have treated the input as "9" (the string) instead.
So, in other words, cin looks at the data type of the variable being input into, and infers what the most appropriate value should be from that.
Remember that chars can only hold a single char. '98' is two chars, which is why you are getting a warning. Visual Studio at least seems to truncate this to the last character. I'm not sure why they made that decision instead of treating it as an error.
is string a char type ?
for example :
can you explane a little about this variable type ?
string isn't a char type. string is a special type of object called a class, and explaining classes at this point would be complicated.
You can think of a string as a sequential collection of chars (each character in "qwerty" is a char).
Where you say
static_cast<new_type>(expression)
the <new_type> gets parsed as html and is not showing
using static_cast<new_type>(expression) would fix this
Thanks! It used to work -- I'm not sure why it broke. It's fixed now.
You have suggested to use std::endl when you need to ensure your output is output immediately and use ‘\n’ in other cases.
But I see the both are line buffered. i.e., both output the full output buffer when used.
When printing to a console, this is generally true. However, when writing to disk, '\n' may not cause any buffered input to be written to disk immediately, whereas std::end will.
Typos.
"When using cout to print a char, cout outputs the char variable as an ASCII characters (character) instead of a number"
"The rest of the user input is left in the input buffer that cin uses, and can be access (accessed) with subsequent calls to cin."
Thanks for catching these.
Even if the value of 'op' the user enters is not ;+; or '-', i do not get the invalid operator output. It assumes op=- and caries out subtraction
I understand that i could use switch case to get the desired result, I am curious as to why this fails.
Thanks!
Your problem is on line 7:
should be:
= is for assignment
== is for comparison
Alex
I want to tell you how much I appreciate this tutorial - and ask one question/clarification.
I am just learning C++ so I can create a stand-alone application for my daughter. I have previously coded in php, bash, java and html, but this is my first crack at a compiled program. I (as usual) just dove in and started trying some things (usually worked best for me) but found numerous topics in C++ confusing so started looking for more help.
I found this site and it has been wonderful! It lets me learn by trying - something I like to do - AND it explains the why of things - something I need to really learn something.
Thank you.
I do have one question.
In the example above where you have:
is that supposed to be:
If so, I understand what you are saying.
If not, I am not sure what you are getting at.
Thanks again for making this interesting, informative and educational.
R
Hey Ramblin
I am just learning also but I think the way it shows is correct.
I don't think Alex was trying to illustrate that they will both print the same. In fact I think he was trying to show how the use of the single quotes changes the value that is initialised.
Why does the questionmark character have an escape sequence? Does "?" do something special in a string or character-typed object? If so, what chapter can I jump to to find out more?
Stack Overflow has an answer.
Your introduction to ASCII took me back decades, when ASCII was new and slowly replacing Baudot code used by teletypes. I was an AF communications tech, we still had tube equipment while integrated circuits were coming out and the biggest fastest communications terminal we had was the IBM 360 to do a job done by a single chip today. Some of those characters below char(32) were vital to digital transmission, which consisted of 80 character blocks transmitted synchronously at speeds up to 1200 baud (about 1.2 kB/s) using very expensive modems. SOH, ETX, EOM, ACK NAK and SYN for example were critical at the time. Of course they are obsolete but they certainly played an important roll in their day. Thanks for the memories. The tutorial is very good, thank you.
I've never heard of Baudot code. I did have a 1200 baud modem though. :)
Thanks for visiting and sharing your story.
I think there is typo in the sentence below(should be 'char' instead of 'chat'):
"The following two initializations both initialize the chat with integer value 97:"
Also, the syntax for static_cast could have been(just saying):
static_cast(expression)
In the last sentence, I wanted to write:
static_cast <new_data_type> (expression)
Thanks, fixed.
You can also print a char as a number using "integral promotion". Basically a unary + or - operator to the left of the variable:
#include <iostream>
int main()
{
char chLetter = 'a';
std::cout << chLetter << std::endl;
// the unary + operator gives the integer value of a char:
std::cout << +chLetter << std::endl;
// the unary - operation gives the negative integer value of a char:
std::cout << -chLetter << std::endl;
// integral promotions are "value preserving" and do not change the value of the variable:
std::cout << chLetter << std::endl;
return 0;
}
Hey
Thanks for this tut, it's very useful for me.
First; after title "Escape sequences" in line 2 you wrote \\. should not that be \?
Second; can you bring an example for \(number)?
Fixed and done. I used the hex escape code instead of the octal escape code because hex is more commonly used in programming than octal. I can't remember the last time I used octal.
Can anyone tell me why the following program prints "-128" to the screen for chValue2?
#include
char chValue1 = 'a';
char chValue2 = 128;
int main()
{
using namespace std;
cout << chValue1 << endl;
cout << (int)chValue2 << endl;
return 0;
}
A char variable is only one byte, which means it can only hold 256 values. If it's signed (most compilers automatically sign variables) that means it can only hold integers from -128 to 127. If you unsign the variable using the unsigned command it'll be able to hold integers from 0 to 255. I ran the program to test it and if you unsign your char variable it works like a charm as long as you don't assign it anything higher than 255.
Your chars are apparently signed by default (which is common).
128 is outside the range for signed chars, so you're overflowing your char and it's wrapping around to the next number, which is -128.
When you cast -128 to an int (also signed by default), cout prints it as -128.
cout << "\a";
best thing ever :D
Please i need some help here.
For example on these lines of code
char = cchar;
cin>> cchar;
if i input 10 at cchar it stores 10 as a string or as an integer?
i made the code below to make a test in order to get the
answer on the question that is above
char cchar;
cin>>cchar;
char cchar2 = 5;
cout<<cchar+cchar2<<endl;
cout<<cchar+(int)cchar2;
i input 5 and it gives me a sum of 58
on both couts, this make me understand that it takes
the input of cchar as string because as you said in the tutorial:
char chValue = '5'; // assigns 53 (ASCII code for '5')
but i cant understand why at the second cout it takes (int)cchar2 as 53 shouldnt it have the value of 5 because of the (int).
Please an answer would be appreciated.
You answered your own question - it stores 53, because as you said, it stores the ASCII code. when you add the (int), nothing changes, because cchar2 already has an int in it (53), hence it still thinks of it as 53.
Do programmers have to memorize escape sequences and stuff? It would be helpful if there was a single page with all the reference tables in it.. I'll just bookmark 'em for now (;
You just remember them naturally after using them dozen of times. Also, there's always google, should you forget any of them :p
Although C++ supports a fair number of escape sequences, in reality you'll rarely use more than \n, \t, \\, \', and \".
In the following example above:
char chValue = '5'; // assigns 53 (ASCII code for '5')
char chValue2 = 5; // assigns 5
Does the bottom statement assign 5 or does it assign the character associated with the ASCII code for 5 (i.e. a club symbol)?
Thanks
The top statement assigns the character '5' (integer 53), which is printed as '5'.
The bottom statement assigns in the integer value 5, which is printed as a club symbol in the Windows console.
Alex, thanks for the info!
From my background I've used binary, octal, hex and decimal when referring to characters, so please make sure above when you say an 'a' is 97 you specify that it's (it is) a decimal 97. Granted the compiler assumes decimal, but readers may not.
Also, "The following program asks the user to input a character, then prints out both the character and it’s ASCII code:" Please check in all the tutorials each use of "it's" which is short for "it is" and is never possessive. "its" is possessive just is his, hers and theirs are possessive and you do not use an apostrophe for them.
respected ALEX, i have been deligently and sincerely trying to follow your "lessons".
i tried to compile one code re. CHAR.
I AM NOT GETTING THE RESULTS AS PROJECTED BY U.
[One word of caution: be careful not to mix up character (keyboard) numbers with actual numbers. The following two assignments are not the same
view sourceprint?
1.
char chValue = '5'; // assigns 53 (ASCII code for '5')
2.
char chValue2 = 5; // assigns 5 ]
What 'cout' gives me is
a)5 for chValue
b)a sign like "club in playing cards"
please advise me---thanks prabhakar
If you print char 53, you'll get the character '5'.
If you print char 5, you'll get the "club in playing cards" on Windows, because the font that Windows uses in the console has this club symbol mapped to code point 5. Why? I guess Microsoft figured that putting a printable symbol at that code point was better than leaving something unprintable there.
Note that this club symbol is not a standard part of ASCII and will not work on other operating systems, like Unix.
Hi. Been having fun going through this tutorial. I was just compiling the ASCII code...
It gave me a warning with visual c++ 2008 express until I changed the top line to...
Thanks for creating this tutorial! :)
I tried declaring a variable as
but it didn't work, how can i create a string variable
See
Ok, i tried the /v for form feed, which in my case could be represented as a new page(would feed lines until the text shown would be gone. But instead it read out as the male gender sign as a character. I just dont want to have to input like 15 /n's to make the same effect. I know an alternative is to make 15 /n's a char constant and just input it when i need a new page but im just wondering if im just doing something wrong. Help, please! :P
I've never tried using the form feed char. I doubt it works on a monitor, though it might still work in printers... maybe.
You can't stick 15 /n's in a char because a char only holds one character, not 15. If you want 15 spaces, for now you'll have to do 15 cout << "\n"; until you learn how to do loops.
cout << "\n";
I was just reading through the comments on this excellent tutorial.
If you want to wipe the text off the terminal you can use the clear screen command, system("cls")
Hope This is what you were lookign for! :)
This solution will only work on some operating systems and is not recommended. system() makes a call to the operating system and runs the program "cls", which happens to clear the console on Windows. But it won't work on Unix, since Unix doesn't have an executable named cls.
Why do you need '/?'.... we can use question mark with cout....
Honestly, I don't know. :)
From my years of C coding, single quotes enclose one character and double-quotes enclose a string. Strings were (are?) far different than a char - mostly in storage (memory) and using them. Strings are one or more characters and terminated (usually with a zero - not the character 0, but all zero bits). Strings always take up more than 8 bits (one byte).
cout may not care, but your variable declaration (compiler) may be particular about it and/or your code may not work.
The slash question mark may be used in case a compiler interprets question marks as something else. This depends on your compiler, IE, it's just to make sure your code really does (compiles to) what you want it to do. Some pattern matching (MS Word?) use a question mark to substitute/match a single character, and to match a question mark you have to escape it. Same thing may be for some compilers.
thank you
I made a program that makes a ASCII chart. Even though you can find this chart in almost any place dealing with coding. I thought it be fun to make my own reference.
Code
#include "stdafx.h"
#include <iostream>
int main()
{
using namespace std;
for(unsigned char nCount = 1; nCount <= 127; nCount++)
{
if (nCount <= 127)
cout << (int)nCount << " : " << nCount << endl;
}
return 0;
}
Oh! you may notice that nCount should be ucCount...thats because when I first wrote the program, it was an exercise that just counted up from 0. I just changed the int integer(int) to a char integer (unsigned char). Leaving the variable the same. Technicality there nothing wrong with that. However if you copy this program. I would suggest trying to use Hungarian Notion the correct way.
Hi Jeffey. Nice idea to construct such a code. Actually you don't need to use the hypothetical statement if. I simplified your code:
Name (required)
Website | https://www.learncpp.com/cpp-tutorial/27-chars/comment-page-1/ | CC-MAIN-2019-13 | refinedweb | 3,622 | 71.95 |
Library code snippets
Working with Excel Files Using VB6
Adil Hussain Raza, published on 01 Feb 2006
The code is totally self explanatory, In the load event we're going to open the new instance of the excel library and our excel file “book1.xls” will be accessible from our code. Then we'll use Command1 to retrieve data from book1, please note that you must have some data in the excel file. Similarly Command2 is used to put/replace the data in the excel sheet cells.
'do declare these variables you need to add a reference 'to the microsoft excel 'xx' object library. 'you need two text boxes and two command buttons 'on the form, an excel file in c:\book1.xls Dim xl As New Excel.Application Dim xlsheet As Excel.Worksheet Dim xlwbook As Excel.Workbook Private Sub Command1_Click() 'the benifit of placing numbers in (row, col) is that you 'can loop through different directions if required. I could 'have used column names like "A1" 'etc. Text1.Text = xlsheet.Cells(2, 1) ' row 2 col 1 Text2.Text = xlsheet.Cells(2, 2) ' row 2 col 2 'don't forget to do this or you'll not be able to open 'book1.xls again, untill you restart you pc. xl.ActiveWorkbook.Close False, "c:\book1.xls" xl.Quit End Sub Private Sub Command2_Click() xlsheet.Cells(2, 1) = Text1.Text xlsheet.Cells(2, 2) = Text2.Text xlwbook.Save 'don't forget to do this or you'll not be able to open 'book1.xls again, untill you restart you pc. xl.ActiveWorkbook.Close False, "c:\book1.xls" xl.Quit End Sub Private Sub Form_Load() Set xlwbook = xl.Workbooks.Open("c:\book1.xls") Set xlsheet = xlwbook.Sheets.Item(1) End Sub Private Sub Form_Unload(Cancel As Integer) Set xlwbook = Nothing Set xl = Nothing End Sub
Related articles
Related discussion
VB6 system conversion using VBA to Word 2007
by b.macgregor@vodamail.co.za (0 replies)
How to open .bat application from excel VBA or VB6
by NaseemAhmed (0 replies)
Outlook VBA query
by James Crowley (1 replies)
ms access report
by Uncle (1 replies)
VB6, SQL 2005 & DMO
by elajaunie3 (1...
Vrom VB6 How would you execute a macro that was embedded in this workbook?!--removed tag-->
1. Click menu "Project"
2. Select "Reference..."
3. Check the item "microsoft excel 'xx' object library"
I wonder if the problem is related to the lines:
in Command1_Click() and Command2_Click(). When I move these two lines to Form_Unload(), before the set nothing statement, the problem solved.
Could somebody please help me on how to add a reference 'to the microsoft excel 'xx' object library. I read your code and seems nice but am a newbie in vb and i don't know how add reference to microsoft excell. I seriosly want to be able to send the data from a datagrid to an excel sheet for anaylsis. Please help.
Hi, I tried your code with 2 command buttons and two text boxes. There is no syntax error. However at run time ONLY 1 command button works at a time. Clicking the 2nd button causes an error. An error is highlighted in debug for the 2nd button, even though there is no error.Closing the program and re-running it , this time clicking on the 2nd command button first and the 1st one second, gives an error message and shows a line that is error free!. What should I doPls help Regds Balachandran
<head><title>Customers</title></head>
<body background="../image/niceblue.gif">
<TABLE cellpadding="5" border="3">
<tr><td>
<h1>Customers</h1>
ECG has approximately 1.4 million customers made up of 120 thousand <br>
Prepaid customers and 1.28 million Credit Customers as at end of May,2006.
<br>
Customers who had been supplied with any of the types of <a href = "meters.html">prepaid meters</a>
are<br> known as prepaid customers.
Customers using credit meters are known as credit<br>
customers,
simply because the former makes payment before power is consumed<br>
where as the latter consumes power before payment is made.<br>
<hr>
<br>
<h2>Types of Customers</h2>
<br>
ECG customers are categorised into 2 broad ones:<BR><BR>
1. Special Load Tarrif -(SLT Customers)
Special Load Tariff (SLT) is paid by <br>customers whose load requirement<br> is 200 KVA and above.
<br><br>
An SLT customers can be one of the following types:
<ul>
<Li> High Voltage - HV</Li>
<Li> Medium Voltage - MV</Li>
<Li> Low Voltage - LV</Li>
</ul>
<hr>
2. Non Special Load Tarrif -NSLT Customers<br>
Non Special Load Tariff (Non SLT) are paid by customers whose load requirements<br> is below 200 KVA.
<br>
SLT & Non SLT receipts form the bulk of the company’s revenue.
<br><BR>
The NSLT Customers are of 2 types:
<ul>
<Li> Residential - HV</Li>
<Li> Non Residential - MV</Li>
</ul>
Non Special Load Tariff (Non SLT) are paid by customers whose load requirements<br>
is below 200 KVA. SLT & Non SLT receipts form the bulk of the company’s revenu
</td>
<td><h3><font color ="red">Read More about:</font></h3><br>
<img src ="../image/nobulls1.gif" width="20" Height="20"><a href="cus.html" target="show"> Customer Population</a><br>
<BR>
<img src ="../image/nobulls1.gif" width="20" Height="20"><a href="servcon.htm" target="show">Getting New Service Connection</a><br>
<BR>
<img src ="../image/nobulls1.gif" width="20" Height="20"><a href="elecuse.htm" target="show">Using Electricity efficiently</a><br><br>
<BR>
<img src="../image/subst.jpg" height="550" width="500" Alt="ECG Customer operating a Prepaid meter"><br>
<i> a Substation ar Achimota
</td>
</tr>
</table>
</body>
If I wanted to read into VB6 from excel, lets say 100 rows (containg 8 or 9 cells of numerical data, e.g. A1, B1, C1...,) and i wanted to use each row's data to do certain calculations and after each row is processed I want to go to the next row. How would write the code to go from one row in excel to the next row automatically. And how would my program know when it reached the end of file (excel file)?
If you could keep the code as simple and clear as the code you have above that would be great.
Thanks,
Donnovan
hey, i cannot get the code snippet to work i guess that i need to import a namespace or something??
It is work, but there's something I want to ask you more. Can you tell me how to set coloumn/cell fillcolor with any condition? Thx
Regards
Ocky
Hi,
Thnx for this piece of code, as this was very usefule for my application development.. Thnx a lot dude
Harish
Chennai, India
Hello,
Would u know how to compare .XLS or .CSV then output the results to a txt file?
This thread is for discussions of Working with Excel Files Using VB6. | http://www.developerfusion.com/code/5322/working-with-excel-files-using-vb6/ | crawl-002 | refinedweb | 1,155 | 65.93 |
As many of you know, Be uses a variation of the Berkeley socket interface. Writing socket code on the BeBox has been something of a black art, since our documentation is only a single page! In this article, I hope to clear up some of the mysteries of Be socket programming. If you aren't already familiar with Berkeley sockets, then this article will be extremely boring for you. For the rest of you, this article will be merely boring.
The most important thing to remember when using Be sockets is that they're not file descriptors the way Berkeley sockets are. This difference is felt in a number of ways:
You cannot call read(), write(), or close() on a Be socket. Use recv(), send(), and closesocket(), respectively, instead.
Be sockets live in a separate namespace from file descriptors, so the first socket you open is "0". Normally, in UNIX, the first socket is "3", since "0", "1", and "2" are opened already for standard I/O. So don't think you've made a mistake when you get back "0" when opening a socket.
Unlike file descriptors, Be sockets are not inherited by child processes, so after a POSIX fork() and exec() call, you lose all of your socket descriptors in the child process (the parent is not affected).
With Be sockets, you aren't limited by the per-team file table space in the kernel. You can pretty much open as many sockets as you want, provided you have enough memory.
The next point has to do with unblocking a blocked socket call. In a single-threaded UNIX system, you can do this with a signal, which will cause the blocking call to unblock. This currently doesn't work under the Be OS, and signals aren't really a very good way of doing this in a multithreaded system anyway. So the rule is that any blocked socket call can be unblocked simply by operating on that same socket in another thread.
For example, if a socket is blocked on recv(), it can be unblocked by another thread calling send() on the same socket. (If you don't actually want to DO anything, call something innocuous like getsockname().)
Currently, an interrupted read returns a status of -1; in DR8, errno will be set to the more descriptive EINTR. This isn't much different from single-threaded UNIX, since in order to operate on the same socket when you're blocked, you must do so in a signal handler that will cause the blocked call to fail and set errno to EINTR.
(For any of you who might be wondering about the thread safety of setting the errno variable, you can relax: In DR8, errno is *not* a global variable. Through the magic of the C preprocessor and function calls, errno returns a value that is unique per thread.)
A Be OS socket blocks if there isn't any data to be read, and unblocks when data arrives. This is precisely the UNIX behavior, as far as it goes, but UNIX also lets you specify nonblocking I/O as an option. This option will be available in DR8: In nonblocking mode, a socket that would otherwise have blocked will, instead, immediately return EWOULDBLOCK (this is the same as System V's EAGAIN).
Another item worth mentioning is select(). Under UNIX, you can put any descriptor into the select mask. Typically, the mask only contains two sockets, but sometimes tty descriptors are thrown in there, as well. The Be OS only supports sockets in the select mask. Also, it only reports descriptors that are blocked on read. The write mask always reports that sockets are ready for writing, even if they may block. (This is something that will probably be fixed in DR8.)
I recommend avoiding select() if you can, because it isn't really the right way to do socket programming in a multithreaded system. It's implemented by spawning a lot of threads that block, and so it isn't any more efficient than spawning the threads yourself. You should spawn threads for blocking calls, and use the trick described above (operating on the same socket in another thread) if you need to unblock any of the threads.
The next point concerns the netdb functions (gethostbyname(), getservbyname(), and so on). The only netdb functions that are completely implemented right now are the host lookup functions, gethostbyname() and gethostbyaddr(). The others, such as getservbyname() and getprotobyname(), are minimally implemented now, but will likely be fully implemented in a future release.
Most of the options that you can set with setsockopt() are not implemented. Again, these will get implemented in future releases.
An infelicity that will likely be cleared up in DR8 concerns server TCP code that binds to the address "0.0.0.0". In UNIX, this is understood to mean "bind to all interfaces." In DR7 of the Be OS, it means "bind to the first interface you find." Typically, the interface that's found first is your Ethernet or PPP interface; your server will NOT bind to the local loopback address of "127.0.0.1". If you need local clients to talk to the server, then you need to explicitly bind to the address "127.0.0.1" with another socket. Thus, your server is actually listening on two sockets: One for the external network and one for the internal loopback network.
In summary, the basic stuff is implemented and many people have been successful writing network code on the BeBox. If you're porting code, you may have some trouble with some of the more obscure stuff that isn't implemented yet, or with the fact that Be sockets aren't file descriptors. This should be cleared up in future releases, although making Be sockets real file descriptors is probably a long way off.
I remember the fateful day last fall when I typed "" into my Web browser and was treated to a vision of the future of personal computing. What I saw and read was breathtaking. I picked my jaw back up off the floor, wiped the drool off my keyboard, and went back for a second look. Could it be true? A dual-processor machine, tons of I/O, a modern OS complete with multiprocessing, multithreading, memory protection, and an integrated database—all at an affordable price?
The BeBox was the first computer that made me want to give up my Macintosh. Coming from a Mac-fanatic like me, that's a pretty serious statement.
I've been working on the Macintosh since 1987, when I co- developed a HyperCard database program for an archaeology research project at the University of Maryland. After getting my anthropology degree in 1991, I plunged straight into—you guessed it—the computer field. What's an anthropologist doing in computers? I'll let you do the math...
In the last five years, I've developed and maintained numerous research databases for a military medical project, using mostly Macintosh computers with statistical analysis done on a VAX. Whereas I started out doing mostly database development, I've expanded my scope to the much more exciting world of multimedia development. My current position focuses almost entirely on multimedia and program development with a special emphasis on medical and emergency medical readiness.
But I digress. Back to me finding Be on the Web... The BeBox and Be OS looked like hot stuff—too good to keep to myself—so I walked down the hall to a friend's office (he's an NT guy) and said, "Point your browser at and check this out."
"Hmmm...interesting...seems a bit like the NeXT though..." he said.
The low price tag and use of the PC-clone organ bank made him take a second look, but it was when I showed him the Be demo video that he became a Be-liever. There's nothing like seeing the BeBox in action to turn skeptics into Be-lievers.
The BeBox presents a unique opportunity for a fresh start. It offers a clean, open highway to the next generation in personal computing—and that's where I want to be: Contributing to the growth of a new OS. It's exciting! The glowing comments in comp.sys.be from Peter Lewis and John Norstad—two Macintosh developers for whom I have the highest regard -- really caught my attention. If THESE guys are hot on the BeBox...
Not only are the BeBox and Be OS pretty special, but so are the people behind Be. Developer support is exquisite, in fact, I'd say it's the best I've ever encountered. Be seems to understand that happy developers are key to their success.
Currently I develop for the BeBox on my own time, although my employer is expressing a growing interest in the platform. Right now I'm working on a text editor and some desktop utility programs to sink my teeth into the Be OS. I'm particularly excited about the BMessaging capability and having applications talk to each other and work together. That's the anthropologist in me.
I'm also interested in supporting a Be Users' Group in the Washington, DC, area. If you're in the DC area and would like to get together, please e-mail me at popernak@io.com.
My beret goes off to JLG and the folks at Be. You all have created something very special. I can't wait to reciprocate.
If you've been reading Be newsletters for a while, it will come as no surprise that our primary focus is on developers. Today, we only sell machines to bona fide developers who are enrolled in our developer program, and we devote all of our sales and marketing efforts to recruiting and supporting new developers. However, we get many enquiries from end users, resellers, VARs, and other nondevelopers who want to buy or resell BeBoxes. So, what are our plans?
By the late fall of this year, we expect tens, and then hundreds, of interesting new applications to ship on the Be OS and BeBox. For our sake, and the sake of our developers, we need to have ways to get product to the end users of these applications and, just as important, ways to provide hardware and software support. To this end, our intention is to offer direct mail/web order sales from Be, with Internet- based support, and to work with qualified resellers who will be able to configure, sell, service, and support products in specific geographic or application markets. Market efficiencies will guarantee that direct and reseller pricing will be very close, so the end user will have a true choice of how and where to buy BeBoxes. A partnership between Be, application developers, and resellers will be able to deliver solutions to end users; there's nothing innovative about this idea, but it has the great advantage that it's known to work well and to create satisfied customers. The most innovative aspect will be the cost (and hence price) reductions that will come from Internet-based software distribution and support services.
There's a class of end user who doesn't want to wait for applications. These are the true geeks (and we use that term with affection, as many of us at Be could easily be described as geeks), who don't necessarily want to develop applications for others but who want a neat platform to experiment with. The only reason that we're not offering BeBoxes to this geek market right now is that we're awaiting FCC Class B approval. We expect to be able to offer BeBoxes to the geek/hobbyist market in August; we'll announce it on our Web page and in this newsletter when the time comes.
We're very enthusiastic about working with the geek market: It's populated with fearless, creative, and innovative individuals who will torture our product and its new applications and figure out ways to use it that we couldn't begin to imagine.
Fifteen years ago the conventional wisdom was that the US computer industry was about to be steamrolled by the Japanese, in the same way they had flattened everything else on their path—from cameras to steel mills, from stereos to autos. They had the determination, the government support, the technology, and the discipline to conquer any market on which they set their sights.
In spite of these dire predictions, the US computer industry is doing very nicely. In some respects it's doing even better than fifteen years ago: It now virtually dominates Western Europe and has made substantial inroads in Japan. US domination is even more striking if we focus on software: Companies such as Microsoft, Oracle, Netscape, and Apple own the OS and application markets world-wide.
If this is the case—if only American software products can succeed -- why do we even bother to evangelize and support developers in Australia, Iceland, France, or Finland? Are we financing yet another remake of "Beau Geste," or do we know something the rest of the industry doesn't know? It's a trick question, which offers little real choice: Foolishness or arrogance. But the trick is well-known. It's hidden in the assumptions that are made about the software trade.
We're told that large US companies dominate the software market because of the huge development and marketing investments required to wage war. The linchpin of this argument is in its mixing of two notions: "US" and "large." We simply don't buy this line. Neither US-centricity nor a work force the size of a small town with a concomitant budget are necessary in the new software world.
Let's look at the size argument: Does good software require a long and expensive gestation and a cast of thousands? If you're an established player updating a legacy application, then yes, it does. That's because existing programming models have become unendurably baroque, and because established companies tend to be conservative: They dedicate most of their resources to conserving their franchise, their existing business, whether it's application software or an operating system. Individual authors and small companies have mixed feelings about entering and playing in the legacy markets; the ocean is wide, but inhabited by nasty predators that can devour small fish faster than these fish can spawn. A fresh developer in the office-automation market has to spend roughly the same amount of time sifting the arcana of a baggage-laden OS as the big guys, but must do so without the benefit of a large and well-paid squad of marketeers and spin artists defending the line and clearing some space on the shelf at Fry's.
Conversely, with the BeBox, one or two individuals can write a major application in a fraction of the time and cost that it would take on an older platform: The programming model is simpler, (still) free from baggage, and code size is correspondingly smaller (by a factor of four or five). As expected, baggage-free code is also faster code. Admittedly, developers could harbor a mix of sentiments towards the Be platform: Exciting and unencumbered, yet unproven. But once we add the Internet, the leap of faith becomes safer for a larger number of developers in a wider range of locations.
Some have called the Internet the new printing press. But it's also a great marketing and distribution equalizer. A software developer in Finland can promote and deliver an application as easily and quickly as a competitor in the heart of Silicon Valley. But is the spreadsheet from a company in San Jose, simply by the fact of the company's location, better than the one from Helsinki? If you look at the demographics of the US software industry, you begin to suspect that the last decade of US domination wasn't entirely home-grown: In a land of immigrants, the software industry is well-populated by programmers from other countries (our company, minuscule as it is, is no exception). In fact, the advantages of developing in the US are mostly historical and coincidental —it's no great surprise that young, well-paid programmers have, in the past, jumped at the chance to live in California.
But when programmers can't come to the US, we'll go to them, via the net. Size and location are no longer the issue. With exciting new technology, a platform unencumbered by large competitors, with inexpensive market access, the most important question is: "Who can write the best code?" It may not be the only question. But it is the first one—before company size and location.
A number of heavy-traffic threads (notably "Call for type/creator reorganization") disappeared this week. The help system thread is still reasonably strong; "GUI/shell replacement" is losing steam. And to answer a question that was posed in one of this week's threads: Yes, Be does listen to the suggestions made in BeDevTalk.
More for amusement than anything else, the age of each thread ("NEW", "WEEK 2", and so on) is announced above the subject line. Older threads are given first.
Thread topics are given by the subject lines as they appear (literally, typos and all) in the mail. We don't summarize every thread, so don't take it personally if your favorite is left out. Better yet: Start a thread.
To subscribe to BeDevTalk, visit the mailing list page on our web site.
This thread has become an HTML battleground. In addition to the staunch pro and con views, there are also opinions regarding the invention of app-specific HTML tags.
We also learned that the perlism "$_" is pronounced "ding-under."
More thoughts on parting the iron curtain that separates shells from GUI windows. This week: Should the Browser provide a name-only mode when displaying items in the dock?
How should the BeBox protect the motherboard from a virus that's loaded into the EEPROM? Latched (once-per-boot) access to the EEPROM was suggested.
THE BE LINE: We're working on it.
Where should the add-ons that apply to a particular Browser item be displayed? Currently, you have to right click and descend the "Add-ons" submenu. It was suggested that the add-ons should be in the item's top-level context menu. Should this decision (submenu versus top-level menu) be settable in a preference?
A few participants lauded the concept of OpenDoc, and have suggested that some of the BeBox's "missing features" would be filled in by supporting ODF. The ensuing negative reaction was countered with a) a request for more descriptive evidence of ODF's flaws and b) the suggestion that designing something better is possible, but, given that OpenDoc exists today, spending the time to design the alternative may be a waste.
This thread started with questions: Are record_refs values maintained through a database reindex? (Yes.) Through a database rebuild? (No.)
It then moved into a larger discussion of what a record_ref actually identifies, whether you should use refs instead of names, what sort of user prefers symbolic (that is, ref-like) identification as opposed to filenames, how refs and filenames compare when fit into the live query world, whether refs are simply new-fangled gizmos that will never replace the reality of pathnames, and so on. A pleasantly heated discussion on a topic that is, after all, at the heart of one of the selling points of the Box.
Jon Watte (Metrowerks) outlined a command-line mail-sending program and opined that it would probably be a one-night hack to get the thing working. A five-minute (shell script) implementation showed up a few days later; also, news of a mail server was posted.
Woven throughout was a discussion of what Be's mail API should look like, and what mail services Be should provide. (A branching thread about add-ons is summarized below.)
THE BE LINE: Be will provide a mail daemon and mail API with DR8. Nothing terribly fancy—we're not out to compete with our developers —we just want to provide a base level of support for certain features in the SMTP/POP world.
Within this thread (sprouted from "Write a mailer!") were technical descriptions of how add-ons work, how to tell CodeWarrior to share data between add-ons, and so on, as well as a discussion of whether/where Be should allow a repository for "global" add-on modules, whether script-like manipulations (provided by an add-on) should be performed on an open document (as opposed to having to drop the document on an add-on icon), and to what extent Be should allow its interface to be customized through add-ons. | http://www.haiku-os.org/legacy-docs/benewsletter/Issue1-30.html | crawl-001 | refinedweb | 3,477 | 60.45 |
Visitor Pattern Tutorial with Java Examples
Learn the Visitor Design Pattern with easy Java source code examples as James Sugrue continues his design patterns tutorial series, Design Patterns Uncovered
Today we're going to take a look at the Visitor pattern. Of all of the patterns that I've used so far, Visitor is by far the most powerful and convenient.
Visitors in the Real World
A real world analogy always helps with the understanding of a design pattern. One example I have seen for the Visitor pattern in action is a taxi example, where the customer orders a taxi, which arrives at his door. Once the person sits in, the visiting taxi is in control of the transport for that person.
Shopping in the supermarket is another common example, where the shopping cart is your set of elements. When you get to the checkout, the cashier acts as a visitor, taking the disparate set of elements (your shopping), some with prices and others that need to be weighed, in order to provide you with a total.
It's a difficult pattern to explain in the real world, but things should become clearer as we go through the pattern definition, and take a look at how to use it in code.
The Visitor Pattern
The Visitor is known as a behavioural pattern, as it's used to manage algorithms, relationships and responsibilities between objects. The definition of Visitor provided in the original Gang of Four book on Design Patterns states:
Allows for one or more operations to be applied to a set of objects at runtime, decoupling the operations from the object structure.
What the Visitor pattern actually does is create an external class that uses data in the other classes. If you need to perform operations across a disparate set of objects, Visitor might be the pattern for you. The GoF book says that the Visitor pattern can provide additional functionality to a class without changing it. Let's see how that can work, first by taking a look at the classic diagram definition of the Visitor pattern:
The core of this pattern is the Visitor interface. This interface defines a visit operation for each type of ConcreteElement in the object structure. Meanwhile, the ConcreteVisitor implements the operations defined in the Visitor interface. The concrete visitor will store local state, typically as it traverses the set of elements. The element interface simply defines an accept method to allow the visitor to run some action over that element - the ConcreteElement will implement this accept method.
Where Would I Use This Pattern?
The pattern should be used when you have distinct and unrelated operations to perform across a structure of objects. This avoids adding in code throughout your object structure that is better kept separate, so it encourages cleaner code. You may want to run operations against a set of objects with different interfaces. Visitors are also valuable if you have to perform a number of unrelated operations across the classes.
In summary, if you want to decouple some logical code from the elements that you're using as input, visitor is probably the best pattern for the job.
So How Does It Work In Java?
The following example shows a simple implementation of the pattern in Java. The example we'll use here is a postage system. Our set of elements will be the items in our shopping cart. Postage will be determined using the type and the weight of each item, and of course depending on where the item is being shipped to.
Let's create a separate visitor for each postal region. This way, we can separate the logic of calculating the total postage cost from the items themselves. This means that our individual elements don't need to know anything about the postal cost policy, and therefore, are nicely decoupled from that logic.
First, let's create our general visitable interface:
//Element interface
public interface Visitable {
   public void accept(Visitor visitor);
}
Now, we'll create a concrete implementation of our interface, a Book.
//concrete element
public class Book implements Visitable {

   private double price;
   private double weight;

   //accept the visitor
   public void accept(Visitor visitor) {
      visitor.visit(this);
   }

   public double getPrice() {
      return price;
   }

   public double getWeight() {
      return weight;
   }
}
As you can see it's just a simple POJO, with the extra accept method added to allow the visitor access to the element. We could add in other types here to handle other items such as CDs, DVDs or games.
Now we'll move on to the Visitor interface. For each different type of concrete element here, we'll need to add a visit method. As we'll just deal with Book for now, this is as simple as:
public interface Visitor {

   public void visit(Book book);

   //visit other concrete items
   public void visit(CD cd);
   public void visit(DVD dvd);
}
The implementation of the Visitor can then deal with the specifics of what to do when we visit a book.
public class PostageVisitor implements Visitor {

   private double totalPostageForCart;

   //collect data about the book
   public void visit(Book book) {
      //assume we have a calculation here related to weight and price
      //free postage for a book over 10
      if (book.getPrice() < 10.0) {
         totalPostageForCart += book.getWeight() * 2;
      }
   }

   //add other visitors here
   public void visit(CD cd) {...}
   public void visit(DVD dvd) {...}

   //return the internal state
   public double getTotalPostage() {
      return totalPostageForCart;
   }
}
As you can see it's a simple formula, but the point is that all the calculation for book postage is done in one central place.
To drive this visitor, we'll need a way of iterating through our shopping cart, as follows:
public class ShoppingCart {

   //normal shopping cart stuff
   private ArrayList<Visitable> items;

   public double calculatePostage() {
      //create a visitor
      PostageVisitor visitor = new PostageVisitor();
      //iterate through all items
      for (Visitable item : items) {
         item.accept(visitor);
      }
      double postage = visitor.getTotalPostage();
      return postage;
   }
}
Note that if we had other types of item here, once the visitor implements a method to visit that item, we could easily calculate the total postage.
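Collapsing the snippets above into one self-contained sketch makes the flow easier to see. CD and DVD are omitted for brevity, and the prices and weights are made-up demo values:

```java
import java.util.ArrayList;
import java.util.List;

interface Visitor { void visit(Book book); }

interface Visitable { void accept(Visitor visitor); }

class Book implements Visitable {
    private final double price;
    private final double weight;
    Book(double price, double weight) { this.price = price; this.weight = weight; }
    public void accept(Visitor visitor) { visitor.visit(this); }
    double getPrice() { return price; }
    double getWeight() { return weight; }
}

class PostageVisitor implements Visitor {
    private double total;
    // free postage for a book of 10 or more
    public void visit(Book book) {
        if (book.getPrice() < 10.0) total += book.getWeight() * 2;
    }
    double getTotalPostage() { return total; }
}

public class Demo {
    public static void main(String[] args) {
        List<Visitable> cart = new ArrayList<>();
        cart.add(new Book(8.0, 1.5));   // under 10: postage 1.5 * 2 = 3.0
        cart.add(new Book(25.0, 3.0));  // over 10: free postage
        PostageVisitor visitor = new PostageVisitor();
        for (Visitable item : cart) item.accept(visitor);
        System.out.println(visitor.getTotalPostage());
    }
}
```

Running it prints 3.0: only the 8.0 book is under the free-postage threshold, so only its weight contributes.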
So, while the Visitor may seem a bit strange at first, you can see how much it helps to clean up your code. That's the whole point of this pattern - to allow you to separate out certain logic from the elements themselves, keeping your data classes simple.
Watch Out for the Downsides
The arguments and return types for the visiting methods need to be known in advance, so the Visitor pattern is not a good fit when the classes in the object structure are likely to change: every new element type means a new visit method in every visitor.
Next Up
Later on this week, we're going to visit the Proxy pattern.
There is a little-overlooked feature of Visual Studio that lets you generate classes directly by pasting JSON or XML.
Using "Paste JSON As Classes" or "Paste XML As Classes", you can paste into Visual Studio and automatically have your classes generated.
Let’s have a quick look at an example. Consider you have following JSON.
{ "id": 1, "name": "Product1", "price": 12.50, "tags": ["tag1", "tag2"] }
Now, you want to have a class for this JSON. Instead of going through the process of manual creation, do the following:
- Create an empty class in Visual Studio
- From main menu Edit -> Paste Special -> Paste JSON As Classes
Paste JSON As Classes
With that, you will find a class generated from the selected JSON, as shown in the image above.
Similarly, the same works for XML files. You can use "Paste XML As Classes" to generate the class from an XML file.
Consider the following sample XML file
<book id="1"> <author>Book Author</author> <title>Book Title</title> <price>49.95</price> <description>Book description</description> </book>
Now, you can generate the complete class by selecting "Paste XML As Classes".
Paste XML As Classes
This is not a new feature of Visual Studio, it is there since long. Give it a try, if you are not using it.
With this feature you can make creating your classes simpler, faster, and more interesting. Isn't that great?
Hope this helps!
Very good tip, I was not aware of it. Thank you for sharing.
Nice one!
The example would be more impressive if the JSON/XML had more than one level, requiring VS to generate multiple classes:
{
"id": 1,
"name": "Product1",
"price": 12.50,
"tags": ["tag1", "tag2"],
"someObject": {"id": "1a", "value": 1234}
}
public class Rootobject
{
public int id { get; set; }
public string name { get; set; }
public float price { get; set; }
public string[] tags { get; set; }
public Someobject someObject { get; set; }
}
public class Someobject
{
public string id { get; set; }
public int value { get; set; }
}
I have "Paste XML as classes" but no "Paste JSON as classes"? Any ideas? I DO have ASP.NET installed, btw.
@Jim Delaney
It's not that intuitive, but if you only see "Paste XML As Classes", simply create a new class file, then go to Edit -> Paste Special, and "Paste JSON As Classes" will become visible.
I don't know why it isn't available in the menu otherwise.
fnmatch - match a filename or a pathname
#include <fnmatch.h>
int fnmatch(const char *pattern, const char *string, int flags);
The fnmatch() function shall match patterns as described in the Shell and Utilities volume of IEEE Std 1003.1-2001, Section 2.13.1, Patterns Matching a Single Character, and Section 2.13.2, Patterns Matching Multiple Characters. It checks the string specified by the string argument to see if it matches the pattern specified by the pattern argument; the flags argument modifies the interpretation of pattern and string. If string matches the pattern specified by pattern, fnmatch() shall return 0. If there is no match, fnmatch() shall return FNM_NOMATCH, which is defined in <fnmatch.h>. If an error occurs, fnmatch() shall return another non-zero value. When FNM_PERIOD is set in flags, a leading period in string shall be matched only by an explicit period in pattern, a period at the beginning of a filename.
This function replaced the REG_FILENAME flag of regcomp() in early proposals of this volume of IEEE Std 1003.1-2001. See also glob(), the Base Definitions volume of IEEE Std 1003.1-2001, <fnmatch.h>, and the Shell and Utilities volume of IEEE Std 1003.1-2001.
First released in Issue 4. Derived from the ISO POSIX-2 standard.
Moved from POSIX2 C-language Binding to BASE. | https://pubs.opengroup.org/onlinepubs/009696699/functions/fnmatch.html | CC-MAIN-2020-29 | refinedweb | 160 | 69.68 |
I'm starting out a new vue.js project so I used the vue-cli tool to scaffold out a new webpack project (i.e.
vue init webpack).
As I was walking through the generated files I noticed the following imports in the
src/router/index.js file:
import Vue from 'vue' import Router from 'vue-router' import Hello from '@/components/Hello' // <- this one is what my qusestion is about Vue.use(Router) export default new Router({ routes: [ { path: '/', name: 'Hello', component: Hello } ] })
I've not seen the at sign (
@) in a path before. I suspect it allows for relative paths (maybe?) but I wanted to be sure I understand what it truly does.
I tried searching around online but wasn't able to find an explanation (prob because searching for "at sign" or using the literal character
@ doesn't help as search criteria).
What does the
@ do in this path (link to documentation would be fantastic) and is this an es6 thing? A webpack thing? A vue-loader thing?
Thanks, Felix Kling, for pointing me to another Stack Overflow question/answer about this same question.
While the comment on the other Stack Overflow post isn't the exact answer to this question (it wasn't a Babel plugin in my case), it did point me in the correct direction to find what it was.
In the scaffolding that vue-cli cranks out for you, part of the base webpack config sets up an alias for .vue files, mapping @ to the src directory.
This makes sense both in the fact that it gives you a relative path from the src file and it removes the requirement of the
.vue at the end of the import path (which you normally need).
Thanks for the help!
This is done with Webpack's resolve.alias configuration option and isn't specific to Vue.
In the Vue Webpack template, Webpack is configured to replace @/ with the src path:
const path = require('path'); ... resolve: { extensions: ['.js', '.vue', '.json'], alias: { ... '@': path.resolve('src'), } }, ...
The alias is used as:
import '@/<path inside src folder>';
Also keep in mind you can create variables in tsconfig as well:
"paths": { "@components": ["src/components"], "@scss": ["src/styles/scss"], "@img": ["src/assests/images"], "@": ["src"], }
This can be utilized for naming convention purposes:
import { componentHeader } from '@components/header';
I got it working with the following combination:

import HelloWorld from '@/components/HelloWorld'
=> import HelloWorld from 'src/components/HelloWorld'

The IDE stops warning about the path, but this produces an invalid path at compile time unless you also add a matching alias in build/webpack.base.conf.js:

resolve: {
  extensions: ['.js', '.vue', '.json'],
  alias: {
    'src': resolve('src'),
  }
},
Bingo! resolve('src') didn't work for me, but path.resolve('src') did:

resolve: {
  alias: {
    'vue$': 'vue/dist/vue.esm.js',
    '@': path.resolve('src')
  },
  extensions: ['*', '.js', '.vue', '.json']
},
If you're using Laravel Mix, try adding the alias through mix.webpackConfig:

mix.webpackConfig({
  resolve: {
    alias: {
      '@imgSrc': path.resolve('resources/assets/img')
    }
  }
});

And then use it in Vue:

<img src="@imgSrc/logo.png" />
Something must have changed. The answer given here is no longer the whole story: the project in Chapter09 uses the @ sign in its import statements, but its webpack.config.js doesn't contain a path.resolve alias. Instead it delegates to the Vue CLI service, which sets up the @ alias internally:

let service = process.VUE_CLI_SERVICE

if (!service || process.env.VUE_CLI_API_MODE) {
  const Service = require('./lib/Service')
  service = new Service(process.env.VUE_CLI_CONTEXT || process.cwd())
  service.init(process.env.VUE_CLI_MODE || process.env.NODE_ENV)
}

module.exports = service.resolveWebpackConfig()
In the two-pass solution, we have to scan through the string twice, which is O(2n). If the unique character appears at the end of the string, the two-pass solution does a lot of redundant work.

We can instead scan the string just once, recording both the count of each character and the index at which it first occurs, in O(n). Then we scan the aggregated result in O(26). The total complexity is O(n) + O(26).

When the string is very long, this method can save roughly half the scanning time.
public class Solution {
    // // two-pass solution
    // public int firstUniqChar(String s) {
    //     int[] counts = new int[26];
    //     for (int i = 0; i < s.length(); i++) {
    //         counts[s.charAt(i) - 'a']++;
    //     }
    //     for (int i = 0; i < s.length(); i++) {
    //         if (counts[s.charAt(i) - 'a'] == 1) return i;
    //     }
    //     return -1;
    // }

    // one-pass solution
    public int firstUniqChar(String s) {
        // each row records how many times a char occurs and the first index where it appears
        // countsIndexes[i][0]: number of times this char occurs
        // countsIndexes[i][1]: the index where the char first occurs
        int[][] countsIndexes = new int[26][2];
        // scan through the string, O(n)
        for (int i = 0; i < s.length(); i++) {
            int index = s.charAt(i) - 'a';
            int[] countIndex = countsIndexes[index];
            // if the char has not occurred yet
            if (countIndex[0] == 0) {
                countsIndexes[index][0] = 1;
                countsIndexes[index][1] = i;
            } else {
                // if this char occurs multiple times, set the index to -1
                countIndex[1] = -1;
            }
        }
        int minIndex = s.length();
        // scan through countsIndexes, O(26)
        for (int[] countIndex : countsIndexes) {
            if (countIndex[0] != 0 && countIndex[1] >= 0)
                minIndex = Math.min(minIndex, countIndex[1]);
        }
        return minIndex == s.length() ? -1 : minIndex;
    }
}
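For a quick sanity check, here is a standalone, slightly condensed version of the one-pass approach with a few sample inputs. The FirstUniqDemo class name and the test strings are made up for this sketch; it assumes, like the original, that the input contains only lowercase letters 'a'–'z'.

```java
public class FirstUniqDemo {
    // Condensed one-pass variant: one row per letter holding
    // [seen flag, first index or -1 if repeated].
    static int firstUniqChar(String s) {
        int[][] ci = new int[26][2];
        for (int i = 0; i < s.length(); i++) {
            int idx = s.charAt(i) - 'a';
            if (ci[idx][0] == 0) {
                ci[idx][0] = 1;   // mark the letter as seen
                ci[idx][1] = i;   // remember its first position
            } else {
                ci[idx][1] = -1;  // repeated letter: disqualify it
            }
        }
        int min = s.length();
        for (int[] c : ci)
            if (c[0] != 0 && c[1] >= 0) min = Math.min(min, c[1]);
        return min == s.length() ? -1 : min;
    }

    public static void main(String[] args) {
        System.out.println(firstUniqChar("leetcode"));     // 0 ('l' is unique)
        System.out.println(firstUniqChar("loveleetcode")); // 2 ('v' is unique)
        System.out.println(firstUniqChar("aabb"));         // -1 (no unique char)
    }
}
```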
NUNIT
You’re probably familiar with the unit-testing framework NUnit, the .NET variant of the original Java-based JUnit. Just in case you don’t do unit testing at all (and you should, of course!), here’s how NUnit can be used to implement a test:
public static class Calculator {
public static int Add (int a, int b) {
return a + b;
}
}
...
[TestFixture]
public class Tests {
[Test]
public void AddTest1( ) {
Assert.AreEqual(20 + 10, Calculator.Add(20, 10));
}
}
Using a test runner, a program that loads the assembly with the tests inside and executes all the methods marked with the attributes, you can now execute the tests against your implemented functionality. When a test fails, the Assert.AreEqual() method (or one of the other methods in the Assert class) throws an exception, and the test runner outputs information about the test that has failed and the exception it has caught.
For quite a while now, NUnit has had a fluent API in addition to the standard one. Here’s how you can rewrite the preceding test with that fluent API:
[Test]
public void AddTest1f( ) {
Assert.That(Calculator.Add(20, 10),
Is.EqualTo(10 + 20));
}
The idea of fluent APIs is that the user of the API can chain together calls to various API functions. These APIs are quite hard to write because of the complex interactions that are possible between the return values and parameters of all the functions involved. Here are a few more examples:
public static class DataSource {
public ...
Hi everyone, and thanks for looking to help me. I am working on a program involving arrays. I need to write one that takes about 10 numbers and reports the minimum, the maximum, and the average.

The program I have below runs fine and finds the average. But when I try to add a loop to find the minimum and maximum numbers, it just repeats.

How do I write a loop to find the minimum and maximum numbers? Please let me know. Thank you.
#include <iostream>
using namespace std;

const int arraymax = 10;
typedef int arraytype[arraymax];

float findaverage(arraytype numarray, int count);
void array2(arraytype numarray, int & count);

int main(void)
{
    int array1;
    arraytype array3;
    float avg;

    array2(array3, array1);
    avg = findaverage(array3, array1);
    cout << endl << "The average is: " << avg << endl << endl;
    return 0;
}

void array2(arraytype numarray, int & count)
{
    int k;
    cout << "How many numbers would you like to enter? ";
    cin >> count;
    for (k = 0; k < count; k++) {
        cout << "Enter your numbers " << k << " ";
        cin >> numarray[k];
    }
}

float findaverage(arraytype numarray, int count)
{
    int k;
    float sum;
    sum = 0.0;
    for (k = 0; k < count; k++)
        sum = sum + numarray[k];
    if (count > 0)
        return sum / count;
    else
        return 0.0;
}