A React component goes through different phases as it lives in an application, though it might not be evident that anything is happening behind the scenes. Those phases are:

- mounting
- updating
- unmounting
- error handling

There are methods in each of these phases that make it possible to perform specific actions on the component during that phase. For example, when fetching data from a network, you'd want to call the function that handles the API call in the componentDidMount() method, which is available during the mounting phase.

Knowing the different lifecycle methods is important in the development of React applications, because it allows us to trigger actions exactly when they're needed without getting tangled up with others. We're going to look at each lifecycle phase in this post, including the methods that are available to it and the types of scenarios in which we'd use them.

The Mounting Phase

Think of mounting as the initial phase of a component's lifecycle. Before mounting occurs, a component has yet to exist; it's merely a twinkle in the eyes of the DOM until mounting takes place and hooks the component up as part of the document.

There are four methods we can leverage once a component is mounted: constructor(), render(), componentDidMount() and static getDerivedStateFromProps(). Each one is handy in its own right, so let's look at them in that order.

constructor()

The constructor() method is needed when state is set directly on a component, or when methods need to be bound to it. Here is how it looks:

```javascript
// Once the input component is mounting...
constructor(props) {
  // ...call the parent constructor with the props...
  super(props);
  // ...set some initial state, which, in this case is a blank username...
  this.state = { username: '' };
  // ...and then bind the method that handles a change to the input
  this.handleInputChange = this.handleInputChange.bind(this);
}
```

It is important to know that the constructor is the first method that gets called as the component is created.
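To see why that bind() call matters, here is a minimal plain-JavaScript sketch. FakeInput is a made-up stand-in for a component, not React itself; it only demonstrates what happens when a bound method is handed off as a detached callback, the way a JSX onChange prop would receive it:

```javascript
// A stand-in for a component class -- no React required for this demo
class FakeInput {
  constructor() {
    this.state = { username: "" };
    // Without this line, calling the method as a detached callback
    // would throw, because `this` would be undefined inside it.
    this.handleInputChange = this.handleInputChange.bind(this);
  }

  handleInputChange(value) {
    // Thanks to bind, `this` is still the FakeInput instance
    this.state = { username: value };
  }
}

const input = new FakeInput();
const detached = input.handleInputChange; // how a JSX prop receives it
detached("kingsley");
console.log(input.state.username); // "kingsley"
```

The same reasoning applies to the real handleInputChange in the constructor above: React calls event handlers detached from the instance, so the binding has to happen up front.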
The component hasn't rendered yet (that's coming), so this isn't the place where we'd call setState() or introduce any side effects because, well, the component is still in the phase of being constructed!

I wrote up a tutorial on refs a little while back, and one thing I noted is that it's possible to set up a ref in the constructor when making use of React.createRef(). That's legit because refs are used to change values without props or having to re-render the component with updated values:

```javascript
constructor(props) {
  super(props);
  this.state = { username: '' };
  this.inputText = React.createRef();
}
```

render()

The render() method is where the markup for the component comes into view on the front end. Users can see it and access it at this point. If you've ever created a React component, then you're already familiar with it, even if you didn't realize it, because it's required to spit out the markup.

```javascript
class App extends React.Component {
  // When mounting is in progress, please render the following!
  render() {
    return (
      <div>
        <p>Hello World!</p>
      </div>
    )
  }
}
```

But that's not all that render() is good for!
It can also be used to render an array of components:

```javascript
class App extends React.Component {
  render() {
    return [
      <h2>JavaScript Tools</h2>,
      <Frontend />,
      <Backend />
    ]
  }
}
```

…and even fragments of a component:

```javascript
class App extends React.Component {
  render() {
    return (
      <React.Fragment>
        <p>Hello World!</p>
      </React.Fragment>
    )
  }
}
```

We can also use it to render components outside of the DOM hierarchy (a la React Portal):

```javascript
// We're creating a portal that allows the component to travel around the DOM
class Portal extends React.Component {
  // First, we're creating a div element
  constructor() {
    super();
    this.el = document.createElement("div");
  }

  // Once it mounts, let's append the component's children
  // (portalRoot is an existing DOM node defined elsewhere in the app)
  componentDidMount = () => {
    portalRoot.appendChild(this.el);
  };

  // If the component is removed from the DOM, then we'll remove the children, too
  componentWillUnmount = () => {
    portalRoot.removeChild(this.el);
  };

  // Ah, now we can render the component and its children where we want
  render() {
    const { children } = this.props;
    return ReactDOM.createPortal(children, this.el);
  }
}
```

And, of course, render() can (ahem) render numbers and strings…

```javascript
class App extends React.Component {
  render() {
    return "Hello World!"
  }
}
```

…as well as null or Boolean values:

```javascript
class App extends React.Component {
  render() {
    return null
  }
}
```

componentDidMount()

Does the componentDidMount() name give away what it means? This method gets called after the component is mounted (i.e. hooked to the DOM). In another tutorial I wrote up on fetching data in React, this is where you want to make a request to obtain data from an API.
We can define our fetch method:

```javascript
fetchUsers() {
  fetch(``)
    .then(response => response.json())
    .then(data =>
      this.setState({
        users: data,
        isLoading: false,
      })
    )
    .catch(error => this.setState({ error, isLoading: false }));
}
```

Then call it in the componentDidMount() hook:

```javascript
componentDidMount() {
  this.fetchUsers();
}
```

We can also add event listeners:

```javascript
componentDidMount() {
  el.addEventListener()
}
```

Neat, right?

static getDerivedStateFromProps()

It's kind of a long-winded name, but static getDerivedStateFromProps() isn't as complicated as it sounds. It's called right before the render() method, both during the mounting phase and during the update phase. It returns either an object to update the state of a component, or null when there's nothing to update.

To understand how it works, let's implement a counter component which will have a certain value for its counter state. This state will only update when the value of maxCount is higher. maxCount will be passed from the parent component.

Here's the parent component:

```javascript
class App extends React.Component {
  constructor(props) {
    super(props)
    this.textInput = React.createRef();
    this.state = {
      value: 0
    }
  }

  handleIncrement = e => {
    e.preventDefault();
    this.setState({ value: this.state.value + 1 })
  };

  handleDecrement = e => {
    e.preventDefault();
    this.setState({ value: this.state.value - 1 })
  };

  render() {
    return (
      <React.Fragment>
        <section className="section">
          <p>Max count: { this.state.value }</p>
          <button onClick={this.handleIncrement}>+</button>
          <button onClick={this.handleDecrement}>-</button>
        </section>
        <section className="section">
          <Counter maxCount={this.state.value} />
        </section>
      </React.Fragment>
    )
  }
}
```

We have a button used to increase the value of maxCount, which we pass to the Counter component.
```javascript
class Counter extends React.Component {
  state = {
    counter: 5
  }

  static getDerivedStateFromProps(nextProps, prevState) {
    if (prevState.counter < nextProps.maxCount) {
      return {
        counter: nextProps.maxCount
      };
    }
    return null;
  }

  render() {
    return (
      <div className="box">
        <p>Count: {this.state.counter}</p>
      </div>
    )
  }
}
```

In the Counter component, we check to see if counter is less than maxCount. If it is, we set counter to the value of maxCount. Otherwise, we do nothing.

You can play around with the following Pen below to see how that works on the front end:

See the Pen getDerivedStateFromProps by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

The Updating Phase

The updating phase occurs when a component's props or state changes. Like mounting, updating has its own set of available methods, which we'll look at next. That said, it's worth noting that both render() and getDerivedStateFromProps() also get triggered in this phase.

shouldComponentUpdate()

When the state or props of a component changes, we can make use of the shouldComponentUpdate() method to control whether the component should update or not. This method is called before rendering occurs, as new state or props are being received. By default, it returns true, meaning the component re-renders on every change. To re-render only when the value in state has actually changed, we'd do something like this:

```javascript
shouldComponentUpdate(nextProps, nextState) {
  return this.state.value !== nextState.value;
}
```

When false is returned, the component does not update and the render() method is skipped.

getSnapshotBeforeUpdate()

One thing we can do is capture the state of a component at a moment in time, and that's what getSnapshotBeforeUpdate() is designed to do. It's called after render() but before any new changes are committed to the DOM. The returned value gets passed as a third parameter to componentDidUpdate().
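Because static getDerivedStateFromProps() and shouldComponentUpdate() receive plain objects and return plain values, their logic can be exercised outside React entirely. A minimal sketch (derivedCounterState and shouldUpdate are made-up standalone names, not React APIs) that mirrors the two checks above:

```javascript
// Mirrors static getDerivedStateFromProps(nextProps, prevState) in Counter
function derivedCounterState(nextProps, prevState) {
  if (prevState.counter < nextProps.maxCount) {
    return { counter: nextProps.maxCount }; // partial state to merge in
  }
  return null; // null means "leave state as-is"
}

// Mirrors the shouldComponentUpdate(nextProps, nextState) body above
function shouldUpdate(state, nextState) {
  // Re-render only when the value actually changed
  return state.value !== nextState.value;
}

console.log(derivedCounterState({ maxCount: 10 }, { counter: 5 })); // { counter: 10 }
console.log(derivedCounterState({ maxCount: 3 }, { counter: 5 }));  // null
console.log(shouldUpdate({ value: 1 }, { value: 2 }));              // true
```

Keeping these methods as pure functions of their inputs, the way React expects, is what makes them this easy to reason about and test.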
getSnapshotBeforeUpdate() takes the previous props and state as parameters:

```javascript
getSnapshotBeforeUpdate(prevProps, prevState) {
  // ...
}
```

Use cases for this method are kinda few and far between, at least in my experience. It is one of those lifecycle methods you may not find yourself reaching for very often.

componentDidUpdate()

Add componentDidUpdate() to the list of methods where the name sort of says it all. If the component updates, then we can hook into it at that time using this method, which receives the previous props and state of the component.

```javascript
componentDidUpdate(prevProps, prevState) {
  if (prevState.counter !== this.state.counter) {
    // ...
  }
}
```

If you ever make use of getSnapshotBeforeUpdate(), its returned value arrives as a third parameter to componentDidUpdate():

```javascript
componentDidUpdate(prevProps, prevState, snapshot) {
  if (prevState.counter !== this.state.counter) {
    // ...
  }
}
```

The Unmounting Phase

We're pretty much looking at the inverse of the mounting phase here. As you might expect, unmounting occurs when a component is wiped out of the DOM and no longer available.

We only have one method in here: componentWillUnmount(). This gets called before a component is unmounted and destroyed. This is where we would want to carry out any necessary cleanup after the component takes a hike, like removing event listeners that may have been added in componentDidMount(), or clearing subscriptions.

```javascript
// Remove event listener
componentWillUnmount() {
  el.removeEventListener()
}
```

The Error Handling Phase

Things can go wrong in a component and that can leave us with errors. We've had error boundaries around for a while to help with this. An error boundary component makes use of a couple of methods to help us handle the errors we could encounter.

getDerivedStateFromError()

We use getDerivedStateFromError() to catch any error thrown from a descendant component, which we then use to update the state of the component.
```javascript
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      return (
        <h1>Oops, something went wrong :(</h1>
      );
    }
    return this.props.children;
  }
}
```

In this example, the ErrorBoundary component will display "Oops, something went wrong" when an error is thrown from a child component. We have a lot more info on this method in a wrap-up of goodies that were released in React 16.6.0.

componentDidCatch()

While getDerivedStateFromError() is suited to updating the state of the component, side effects, like error logging, belong in componentDidCatch(), because it is called during the commit phase, when the DOM has already been updated.

```javascript
componentDidCatch(error, info) {
  // Log error to service
}
```

Both getDerivedStateFromError() and componentDidCatch() can be used in the ErrorBoundary component:

```javascript
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    return { hasError: true };
  }

  componentDidCatch(error, info) {
    // Log error to service
  }

  render() {
    if (this.state.hasError) {
      return (
        <h1>Oops, something went wrong :(</h1>
      );
    }
    return this.props.children;
  }
}
```

And that's the lifecycle of a React component!

There's something neat about knowing how a React component interacts with the DOM. It's easy to think some "magic" happens and then something appears on a page. But the lifecycle of a React component shows that there's order to the madness, and it's designed to give us a great deal of control to make things happen from the time the component hits the DOM to the time it goes away.
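To make that ordering concrete, here is a toy plain-JavaScript harness that walks a component-like object through mount, update and unmount, recording which hooks fire in which order. This is only for illustration: React's real scheduler is far more involved, and the run() driver and the methods-as-plain-functions are made up here.

```javascript
const calls = [];

// A fake component whose "lifecycle methods" just record their own names
const component = {
  construct: () => calls.push("constructor"),
  getDerivedStateFromProps: () => { calls.push("getDerivedStateFromProps"); return null; },
  render: () => calls.push("render"),
  componentDidMount: () => calls.push("componentDidMount"),
  shouldComponentUpdate: () => { calls.push("shouldComponentUpdate"); return true; },
  getSnapshotBeforeUpdate: () => { calls.push("getSnapshotBeforeUpdate"); return null; },
  componentDidUpdate: () => calls.push("componentDidUpdate"),
  componentWillUnmount: () => calls.push("componentWillUnmount"),
};

function run(c) {
  // Mounting: constructor -> getDerivedStateFromProps -> render -> componentDidMount
  c.construct();
  c.getDerivedStateFromProps();
  c.render();
  c.componentDidMount();

  // Updating: getDerivedStateFromProps -> shouldComponentUpdate -> render
  //           -> getSnapshotBeforeUpdate -> componentDidUpdate
  c.getDerivedStateFromProps();
  if (c.shouldComponentUpdate()) {
    c.render();
    c.getSnapshotBeforeUpdate();
    c.componentDidUpdate();
  }

  // Unmounting: componentWillUnmount
  c.componentWillUnmount();
}

run(component);
console.log(calls.join(" -> "));
```

Running it prints the hooks in exactly the order the phases above describe, which is a handy mental model to keep when deciding where a piece of logic belongs.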
We covered a lot of ground in a relatively short amount of space, but hopefully this gives you a good idea of not only how React handles components, but what sort of capabilities we have at various stages of that handling. Feel free to leave any questions at all if anything we covered here is unclear and I'd be happy to help as best I can!
https://css-tricks.com/the-circle-of-a-react-lifecycle/
essence | Issue 78 | FEBRUARY 2017 | Price £3.95

Also inside this issue: PACIFIC COAST HIGHWAY, La La Land's iconic Route 1 | REBIRTH OF AN ICON, Aston Martin's DB11 | SPRING COLLECTION, Chester Barrie | Plus One Gallery, hyperrealism's showpiece | WEALTH MANAGEMENT. EXECUTED BEAUTIFULLY. | THE ICE AGE IS OVER

Contents

8 | Art | PLUS ONE GALLERY
Plus One Gallery is widely recognised as the showplace for hyperrealism from all parts of the world. Founders Maggie Bollaert and Colin Pettit recently relocated the gallery to Battersea. Andrew Peters took a peek.

16 | Travel | CALIFORNIA
For those who believe California is just about Hollywood, they're missing out, as travel writer Chantal Borciani finds out.

22 | Garden design | ALLADIO SIMS
Emanuela Alladio advocates planning outdoor spaces from the beginning of any design project.

26 | Motoring | ASTON MARTIN
The first product launched under the company's 'second century' plan, DB11 is the bold new figurehead of the illustrious 'DB' bloodline, and Euan Johns is suitably impressed.

30 | Men's fashion | CHESTER BARRIE
Chester Barrie's Spring Summer 2017 collection is designed to make sure men look great when it matters most.

36 | Fashion | KALITA
Kalita al Swaidi and Raechel Temily have developed a resort wear brand that offers statement holiday pieces for women, a reflection of the duo's personal style.

42 | Wine review | SWEET BORDEAUX
Don't keep sweet Bordeaux for dessert: this wonderful wine can be paired with more than just pudding. Food and wine writer Nick Harman goes to meet the makers.

46 | Food | CRATES LOCAL PRODUCE
Seasonal and local food comes in the form of kale and mussels with recipes to try.

54 | Legal | MUNDAYS
Mitchell Thompson, Associate at Mundays LLP, discusses how best to prepare for one certain aspect of life: death.

56 | Finance | PMW
Simon Lewis, CEO at Partridge Muir & Warren Ltd, believes that 2017 is likely to be an eventful year for both the global economy and financial markets.

58 | Education | ACS EGHAM
Jeremy Lewis, Head of School at ACS Egham International School, discusses the importance of teaching children to be 'global citizens' from a young age.

60 | Leisure breaks | VENICE
Despite crowds in summer, visitors are never more than a bridge or alley away from a secluded square in Venice, as Rebecca Underwood finds out.

64 | Events | SURREY
Linda Seward's detailed diary of the best of what's on in theatre, music, exhibitions, arts and the countryside.

72 | Ceramics | BITOSSI
The passion and skill of the hands of those who work the ceramics is at the heart of the renowned Bitossi Ceramiche.

78 | Interiors | 1508
The London-based design studio 1508 is known for creating exceptional residences and interior spaces. Creative Director Louise Wicksteed shares her best advice.

Summer Passion, 91 x 204cm, Oil on canvas, by Francois Chartier
COVER: Susan the Cow by Alexandra Klimas, Oil on canvas, courtesy Plus One Gallery

Route 1

With the inauguration of Donald Trump, a worldwide collective drawing in of breath appears to have been taken. Americans have, perhaps (and unfairly), been regarded as being rather overly forceful, but I think the description would fit in this instance. Whilst in the business world, Trump was used to taking Route 1. He may not find it so easy this time round, but time will tell. America has another association with Route 1, and this one is well worth experiencing as essence this month takes a trip along California's iconic Pacific Coast Highway. If all the political behaviour over the past year seems a little dreamlike and surreal, we offer a dose of realism through art, hyperrealism to be precise.
This art form is championed by Plus One Gallery in Battersea, which has become an international focal point for collectors and artists alike. Spring is just around the corner and Chester Barrie of Savile Row offers a welcome insight into its Spring Summer 2017 collection, whilst we look even further ahead with Kalita and its travel orientated designers: Kalita al Swaidi and Raechel Temily. Style is never far away in essence and Aston Martin has embarked upon a new chapter in its history, repositioning itself as more than a car maker with DB11, a simply exquisite and beautiful vehicle. If thoughts are turning back to the garden, we suggest planning your outside space from a different viewpoint with Alladio Sims. So, as we peer past the remaining gloomy days and forward into 2017, essence offers up beauty, legal, financial and educational advice, together with the pick of activities highlighting food and events to enjoy.

The essence team
© Maple Publishing 2017

Art | PLUS ONE GALLERY

Summer Passion, 91 x 204cm, Oil on canvas, by Francois Chartier

ARTWORK IMAGES USED IN THIS ARTICLE HAVE BEEN PROVIDED BY PLUS ONE GALLERY.

How do you define the term 'hyperrealism', which is not universally understood outside the world of collectors and connoisseurs?
Clive Head, British artist and curator of the first in a series of exhibitions at Plus One Gallery in 2008, christened the initial show 'Exactitude' and called it a figurative art form of "breathtaking precision and remarkable clarity."

Maggie Bollaert and Colin Pettit, who founded Plus One in 2001, travelled to New York to visit major galleries at the forefront of the movement showing photorealism, as it was then known. Here they met and discussed the art with such luminaries of the art world as the renowned Ivan Karp at OK Harris Gallery, as well as Louis K. Meisel, to learn from the experience and widen their own knowledge.

Hyperrealist painters depict textures, spaces and detail with astonishing creative skill and technical virtuosity. Their subjects can be as humble as a frankfurter, Coke can, choc bar or gaudy wrapper de rigueur, or as grandiose as a vast cityscape, underwater nymph or lifesize portrait, as cinematic as a downtown diner or a lonely automobile on a deserted highway.

So how did hyperrealism evolve? Flash back to sixties' New York City and west coast San Francisco for the birth of photorealism, home to a series of twentieth century 'isms' echoing around the world. Photorealism led to hyperrealism and, as Maggie Bollaert affirms, the origin of this genre of art goes back 'in an unbroken line' to the earliest realist masters of the seventeenth century Dutch Golden Age.

"Artists influenced photographers, photographers artists. The camera influenced everybody" LOUIS K. MEISEL

The late, great art dealer Ivan Karp, mentor to many, originally spearheaded the movement with legendary dealer Leo Castelli and later in his own eponymous gallery in Soho, New York.
For the next 45 years Karp was described by the New York Times as "New York's deftest and most enthusiastic salesman of the new art." He helped find, popularise and market Pop Art, including Andy Warhol, Roy Lichtenstein and Robert Rauschenberg (currently winning raves for a retrospective at Tate Modern until April). Karp made the New York Soho district come alive, opening up a decrepit neighbourhood and making it world famous. He played a fundamental role in discovering and promoting hyperrealist painting, following on from photorealism. He actively promoted major exponents of the new art such as Ralph Goings, Robert Bechtle, Richard Estes, Chuck Close and John Salt.

British Pound by Diederick Kraaijeveld, Salvaged wood, 130cm diameter

Karp as scholar and historian attempts to analyse hyperrealism. What was the spark that set it alight, what do hyperrealist painters have in common, what direction are today's realists taking? As he explained: "Hyperrealism is related to minimalism, but is bereft of intellectual pretensions. At its best it is starkly 'matter of fact', 'anti sensibility'. Its realism records the previously unrecorded places and objects of everyday life. Humanity is rarely in evidence."

Over the past four decades, art dealer and author Louis K. Meisel has also worked assiduously to champion mainly American photorealism, a term he claims to have coined in the late sixties, with it first appearing in print in 1969. In 1970, the first show of the decade at the Whitney Museum of American Art was '22 Realists' and the catalogue foreword also used the word 'photorealism.' That was the beginning. Meisel describes the roots of photorealism thus: "The photorealists made it legitimate to use the camera again after artists had rejected it, saying they didn't need realism to document faces, places and things because the camera can record it.
There were great realist painters before who recorded early twentieth century American urban life and landscape with emotional truth. Among them are Thomas Eakins, Edward Hopper, and Winslow Homer, who denied using the camera, or didn't want to talk about it or admit it. It didn't become acceptable for an artist to say, 'Hey, the camera and photograph are tools, and we're using them' until the photorealists came along in the nineteen seventies. Artists influenced photographers, photographers artists. The camera influenced everybody."

The new genre became a movement and many of the first photorealists grouped together and became friends. Robert Bechtle took a picture of himself in the mirror with a car outside, and then painted it. A contemporary, Audrey Flack, used a picture out of a magazine and painted it. That was 'Kennedy Motorcade', taken from a 1963 news photo. John Salt photographed abandoned car wrecks and then made paintings of his images. Nobody taught them to do it. It all just evolved as they went along. The birth of the instant camera gave artists a tool to snap away at images from which they gained inspiration. How they turned that inspiration into a painting was each artist's own calling. What they shared was the excitement of a new way of finding inspiration and depicting the world in which they lived at that time.

The creation of work that is ultra perfection is labour intensive, and achieving this degree of high definition means most of the artists can only produce a very few perfect paintings a year. Some, in fact, can and still do take much more than a year to produce a single work of art to the exacting standards they impose on themselves. Collectors bought them because they liked the paintings to look at, to take home, hang on the wall and view. To see a perfect image in paint is fascinating and although it is a 'real image', it possesses an almost surreal quality that stands the test of time and endures.

Panta rei, Oil on canvas, 100 x 120cm, by Francesco Stile
Macaroon Sensations, Oil on canvas, 150 x 150cm, 2016, by Pedro Campos

The most fascinating aspect of hyperrealism is its return (perhaps not intentional) to the seventeenth century masters of the Dutch Golden Age. Most scholars agree that Vermeer, one of the most sublime painters the world has ever known, though he barely made ends meet in his own lifetime, used the camera obscura (a sort of precursor of the modern photographic camera) as an aid to his painting. Artists in that time also used perspective manuals, drawing frames and almost any form of technology or scientific knowledge which might help them achieve one of their prime objectives: the highest degree of mimetic illusion.

Realism, with its roots so firmly embedded in Europe with the Dutch masters, and in Germany and Middle Europe, has a rich history of highly realistic art that almost defies definition. Plus One Gallery today has sought out the very best in European hyperrealist art, as well as American, and now shows art from all corners of the world, but always with the common theme of highly realistic perfect art. In short, Exactitude in all its many forms and widely ranging subject matters.

But technical devices, such as today's high-definition digital cameras, computer graphics and virtual reality simulation, are only tools. The creation of a work of art is always, said Ivan Karp: "ineffable, a kind of miracle no matter what the process or equipage." The extraordinary powers of these hyperrealist artists cannot be explained by training or apprenticeships; the most plausible and yet simple explanation is a 'blessing of nature'. Critics have tried to explain the essence of hyperrealism in a few words, using neat phrases such as an "embellished, heightened sense of reality" or "a layer of vision that would otherwise remain unseen."

So, 'exactitude' with a small 'e' has played an important part in art history, ever since the discovery of perspective. Writer on fine art and cinema John Russell Taylor, describing the origins of realism in the definitive book 'Exactitude: Hyperrealistic Art Today', reminds us that even before Vermeer, Aristotle in the fourth century BC knew all about the camera obscura, so everything goes back to the Greeks, yet again.

Profile: Plus One Gallery

Plus One Gallery began trading in 2001 in Marylebone, moved to Pimlico Road in 2006, and last year relocated to the vast St George development at Battersea Reach. The gallery is built on several levels, in a dazzling modern design, with white walls, pale timber floors and large windows, creating an atmosphere of light and air. When owner-founders Maggie Bollaert and Colin Pettit started to plan their gallery, they did not have a strong focus on the type of art they wanted to handle. They were drawn towards art that was precise and meticulous: conceptual art held no attractions, only works with a lifelike quality. They knew they wanted to work with living artists and to sell only 'truly contemporary art'. After crystallising their ideas by talking to the dynamic, far-sighted leaders of the New York modern art scene, those erudite gallery owners who brought Pop Art and photorealism to the world in the sixties and seventies, they became dedicated to the discovery and promotion of hyperrealist artists, whom Maggie seeks out around the world.

Maggie believes in the increasing importance of hyperrealism (could it succeed conceptualism as the Next Big Thing?) to investors and collectors. The Walker Gallery Liverpool, BBK Gallery in Bilbao and Museo del Tabac Andorra all held hyperrealist exhibitions in 2015. Three of Maggie's artists are BP Award winners and are represented in the National Portrait Gallery: Craig Wylie, Philip Harris and Andrew Tift. Sculptor Paul Day won a competition to produce the Queen Mother's Memorial in the Mall and his Battle of Britain relief has been installed on the Embankment. Hyperrealism has given the gallery a strong identity. Maggie explains that for her one of the compelling attractions of hyperrealism is: "the connection with different art movements of the past, such as Dutch still lifes of the Golden Age and American Pop Art." Plus One Gallery is widely recognised as the showplace for hyperrealism from all parts of the world. In stock is art from the best of the original American photorealists, as well as work from artists based in Europe, Latin America and the Far East. The gallery has remained pure in its endeavours to promote realism in all its many aspects and has become a destination point for artists and collectors alike.

When I Grow Up, Lime wood, acrylics, LED, 55 x 65 x 15.5cm, by Peter Demetz
Plus One Gallery is still at the forefront of the ever growing and evolving movement promoting the very best of realism in all its forms, and its list of artists continues to grow, as do the ranks of discerning collectors. It is a continuing project that constantly changes, can never be out of date and is a chronicle of life itself.

essence INFO
Plus One Gallery, B&C Trafalgar House, Juniper Drive, Battersea Reach, York Road, London SW18 1GY
Website:
Winter exhibition from 25 January to 25 February 2017. Plus One Publishing is responsible for Exactitude and several books dedicated to specific artists and their work.

Major art trends | BARNEBYS

GROWTH OF ONLINE MEANS THE DEATH OF THE PRINTED CATALOGUE
Ten major trends evident from 2016

Barnebys, the leading search engine aggregator for art and antique auctions, covering 1,600 auction houses and carrying half a million objects at any one time, has taken a snapshot of 2016, highlighting trends.

1. The increasing importance of online bidding. Anecdotal as well as researched evidence from leading international auction houses shows that on average some 35% of bids now come in over the internet.

2. The widening of online bidding to include younger wealthy buyers. A new generation is logging on to buy instead of searching the high street, so we can expect growth among the millennials. When it comes to the volume market, they will be central to its growth, motivated in part by quality and also the environmental aspect of buying on the second-hand (i.e. auction) market. Recycling is going beyond cans and bottles, and it has ever greater strength. Auction houses and antique dealers offer millennials qualities that appeal to them: environmental sensitivity as well as quality, durability and sustainability.

3. Partly as a result of this online revolution, auctioneers are cutting back on the numbers of catalogues they print; some are doing without printed catalogues at all, using online catalogues instead.

6. Emerging art markets: China, Africa and other emerging regions.

8. The provenance of an item becomes ever more important as celebrity connections add value. During 2016 we saw a lot of 'white glove' auctions, especially celebrity-led sales or collections of famous people. That will continue to grow and increase in revenue.

9. Collectables such as watches, coins and classic cars, areas that win on globalisation and increased online bidding, will continue to grow.

10. Female artists. The search for new names and the best works continues to grow. The high prices for female artists who have written the history of art in their own time will be central to future museum exhibitions.

essence INFO
As the world's leading auction search service (or aggregator), Barnebys features, at any given time, over half a million items for sale through auction houses worldwide. Its revenue is split 80 per cent international vs. 20 per cent US. Overall, Sweden (Barnebys' home country) represents one per cent of the global auction market (vs. 20 per cent for the UK and 38 per cent for the United States). Website:

Travel | CALIFORNIA

Bixby Bridge, California

La La Land's ultimate road trip

For those who believe California is just about Hollywood, they're missing a whole lot, as travel writer Chantal Borciani found out.

Ballard Canyon, Los Olivos District and the Sta.
Rita Hills – and this American Viticulture Area recreates conditions very similar to French growing regions such as Bordeaux, the Rhône and Burgundy, so it’s a great spot to test the tastebuds. Some favourite wine rooms include Municipal Winemakers (municipalwinemakers.com), Kunin Winery and AVA Santa Barbara (avasantabarbara.com).

Where to stay
Santa Barbara attracts a cool crowd, many of whom head for Bacara (meritagecollection.com/bacararesort), five minutes north of Santa Barbara in Goleta. From the hushed reception adorned with a bounty of white flowers to the 42,000 sq. ft. spa (the largest on the west coast) and the fairylit terraces, this five star hotel oozes coastal class. Low lying buildings sit in harmony with the coastal surrounds and the private sliver of beach is picture-perfect. Many come to enjoy the spa – the Gaviota Herbal Therapy treatment is heavenly and uses a warm compress of regional herbs to soothe jet-lagged muscles – or to dine at the exceptional Oak Grill restaurant. Sip a pinot noir by firepits on the terrace before devouring the aged steaks and fresh seafood on offer.

Gaviota coast
While the views out across Bacara’s idyllic beach may whet the appetite, there is nothing better than getting out and enjoying the Pacific coast. Refugio beach is one of the few undeveloped coastlines to explore and lies 20 minutes north of the hotel. Learn about the beaches, wildlife, flora and fauna on kayak tours, paddle boarding and surf lessons with the super friendly Santa Barbara Adventure Company ().

State Street, downtown Santa Barbara
Old Mission, Santa Barbara

Perfect Pismo
Past Santa Barbara, it’s a pick ‘n’ mix of picture-perfect towns, including sleepy Solvang, artisan shops and – of course – incredible scenery. Head north () to laid-back Pismo where visitors can fish, kayak, surf or simply soak up the views. Sunsets don’t get better than at Pismo’s Dolphin Bay Resort (), which sits pretty on an ocean bluff.
With capacious suites and family rooms, a large pool and enough to do for kids big and little, it’s a great place to relax for a few days.

Forget Sonoma!
Just over an hour inland from Pismo, Paso Robles () is a veritable nirvana for wine connoisseurs. Less crowded than Napa and Sonoma, the town (and its surrounding 200 wineries) is not to be missed. It’s easy to see why this is a weekend hotspot for Californians – days can be spent at the wineries, evenings sauntering around the pretty tree-lined avenues in town, dining at the farm-to-table restaurants, browsing the pretty shops and drinking cocktails alfresco overlooking the park. Niner Wine Estates () boasts a spectacular setting – overlooking its heart shaped valley – and a fantastic vineyard restaurant where the wonderfully engaging executive chef Maegen Loring delivers an exciting, seasonal and widely celebrated menu inspired by Niner’s kitchen garden. Wine tasting flights start from $15, and the vineyard now runs gourmet evenings and cookery nights.

Dolphin Bay Resort pool at sunset

TRAVEL TIP: Sign up to the free Hertz Gold Plus Rewards programme prior to booking and a car will be waiting with a range of free extras, deals and discounts.

The eclectic La Bellasera () is a great overnight spot – ideally located between town and vineyards and with a pool for the oh-so-warm days and a revered in-house restaurant.

The big one
The stretch of coast winding around to Big Sur and the iconic Bixby Bridge is undoubtedly one of the highlights of any PCH road trip, and those in the know will tell you to time the weather and take it slow. The coastal road winds like a snake along the coast, past rugged cliffs, golden stretches of beach and redwood forests laced in mist.
On a clear day, there isn’t much to beat the ocean-view patio at Nepenthe, sitting pretty on land once owned by Hollywood legends Orson Welles and Rita Hayworth.

Charming Carmel
Fortunately, the views don’t stop there. Cruising down towards the seaside, chocolate-box town of Carmel () is another roof-down, wind-in-the-hair, grins-as-far-as-the-eye-can-see moment. The beautiful ‘17 Mile Drive’ twists and turns around one of the most picturesque (and expensive) headlands in California, and the town is a chic hub of boutiques, independent restaurants (chains have been banned) and wine rooms. We stayed at Quail Lodge (), a slice of heaven in the valley just inland from Carmel’s golden sands. Our suite overlooked the hotel’s Mallard Lake and, as with all ‘lodges’, it boasts an exterior deck for quiet contemplation. For golf enthusiasts, Quail Lodge has one of the many excellent ranges along the PCH and for those who aren’t partial to a round, the views, open fires (it gets cool here out of peak season) and cosy snugs are perfect for some R&R. And so to San Francisco, under three hours north of Carmel, with its cosmopolitan neighbourhoods, cool cafés and fabulous shopping. This road trip may be bookended with two of the most exciting and vibrant cities in the world, but the real pleasure and heart of this holiday lies in the captivating vistas, beautiful towns and cliff top drives in between. Put the roof down (a convertible is a must if you can), hit this magical road and don’t look back.

essence INFO
Hertz
Website:
One week’s car hire in LA with Hertz starts from £301.

View along Route 1, Big Sur

Box mentality
A broader approach to any design project can achieve a property’s full potential. Emanuela Alladio of Alladio Sims Garden Landscape Design Limited advocates planning outdoor spaces right from the beginning of any project.
When planning a new extension or build, we tend to think inside the box. We visualise the house as the main box, sitting somewhat alone on a piece of landscape, and then divide this empty box into separate rooms, each with its own very specific function to fulfil – each a smaller, yet still empty, box (furnishings are often brought into action at a later stage). Only once the main box is finished do we stop to wonder how it relates to the space around it, and only at the end do we think of ways to soften the building and make it sit more naturally within the outside space. The result is that often the finished box doesn’t connect with the neighbourhood or the wider landscape, and the inside/outside flow is seriously compromised and its potential lost. Yet when we admire images of houses and gardens conceived with an integrated approach, we are in awe. So why consider the relation between house and landscape as an afterthought? Wouldn’t it be better if someone was in charge of thinking outside the box from the very beginning of a project? We could do so much more if we engaged the building with its surrounds from the very beginning. If, at the early planning stage, client, architect, garden designer and interior designer all sat at the same table, the result would mean far fewer lost opportunities, well-integrated solutions and useful economies of scale.

This early diagram shows how important the connection is between house and landscape and how vital it is that the two communicate and flow in and out of each other. Sketch by Alladio Sims Garden Landscape Design, Farnham Glass House

We hire architects to create forms from interconnected spaces, focusing on concepts such as flow and aesthetic; we hire interior designers to introduce the right mood and texture to each and every one of these spaces. All our energy is spent worrying about what happens inside – floors, furniture, curtains, light fittings, kitchens and bathrooms – forgetting that this beautiful flow will stop as soon as those brand new bi-fold doors open and we are faced with an empty and alien back garden.

In big schemes with open views, it’s important to create quiet, intimate spaces for relaxation. Sketch courtesy of Alladio Sims Garden Landscape Design, Farnham Glass House

Garden design | ALLADIO SIMS
Integrating the new box into the landscape at design stage throws up exciting opportunities

Yet the solution is out there. Bringing in a skilled garden designer can continue the dialogue outside. A skilled designer will absorb information from all sources and develop the outside space to extend the link with the house. Your brilliant, new, glass-clad, sleek kitchen living area will no longer open to an uninspiring and empty back garden. You will discover a new world of potential and create a stunning outdoor room. Some tricks are simple: choose the same porcelain tiles installed in the kitchen for the patio area – in a different finish to add slip resistance outside – to achieve that instant, seamless, indoor/outdoor transition. Make the most of the expanse of glass walls in your new extension by controlling the views out, creating new ones, adding light and water for a touch of drama.

In this example, the angle projected by the terrace lines up with the view and intensifies the connection between inside and outside. The sunken seating keeps the view clear from the house. The outdoor kitchen frames a view to the surrounding ancient trees that can be appreciated both from the interior of the house and whilst dining outside.
Sketch courtesy of Alladio Sims Garden Landscape Design, Farnham Glass House

Of course, just like a good architect or interior designer, a great garden designer will guide you through this process, looking at the ‘outside box’ and dividing it up into a series of meaningful layers, each with a different function: privacy, drama, entertaining, framing the view and so on. And the difference will be in the small details: identifying the best aspect for dining or enjoying a swim or a view, making the space feel much bigger and more inviting thanks to directional paving or the right materials and plant palettes, choosing the best plants for the site given the local soil, drainage and exposure to the elements. Once this process is complete, the indoor/outdoor flow will be seamless. Despite this enormous potential, so often garden designers are called to ‘intervene’ right at the end of the renovation, new build or extension, missing out on some earlier opportunities. Considering the outer environment can bring so many tangible advantages to any development, for example, by making the most of an existing level, framing a borrowed view from the landscape and creating a positive link between the building and its surrounds. This can be easily achieved if the garden designer is engaged from the beginning in a three-way conversation with the architect and client. It would often mean saving on costs too, as later ‘interventions’ are minimised. This holistic approach to an extension or a new build is already very established across the ocean and is being adopted here too, producing some amazing results. Next time we admire a stunning new build, if we ask ourselves why our eyes are so drawn by what they see, it will no doubt be the unique connection that the building has managed to establish with its surrounds, the creative use of local materials, the effective and functional use of space, the clever yet understated details. This very elegant product will be the result of clever thinking outside the box.

Emanuela and Jon in the show garden they created for the Istanbul Flower Festival in 2016

essence INFO
Alladio Sims Garden Landscape Design Limited
Unit C Willow House, Dragonfly Place, London SE4 2FJ
Website:
Email: Hello@alladiosims.co.uk

Literature | LEAPING HARE PRESS

Community Gardening Handbook
(Published in association with Big Dig and the Soil Association)
ISBN: 9781782404491
RRP: £9.99
Community gardening is a growing revolution taking root in towns and cities worldwide. Groups of like-minded people are transforming neglected plots of land into green, flourishing spaces for everyone to enjoy. In The Community Gardening Handbook, Ben Raskin shares his expertise in an invaluable introduction to a new wave of collective self-sufficiency. A look into different types of inspirational community gardens from all over the world is followed by a practical guide where planning advice is laid out alongside essential etiquette tips for running a successful site and proven ideas for involving the whole neighbourhood.
“For budding growers setting up a new plot, to experienced green thumbs looking for new inspiration.” – Maddie Guerlain, The Big Dig

A Family Guide to Growing Fruit & Veg
(Published in association with the Soil Association)
ISBN: 9781782404514
RRP: £9.99
Have you ever wondered how plants work? Or why we eat the fruit of one plant, but the leaves of another? What’s the big deal about growing things – and how do we decide what we need to grow in the space we have? In GROW, there is all the inspiration and knowledge needed to get out there and start planting.

essence INFO
Both books by Ben Raskin, head of horticulture at the Soil Association
Published by Leaping Hare Press
Website:
Motoring | ASTON MARTIN

“Exceptional design and cutting edge technology lie at the heart of Aston Martin. We are extremely proud of the work that went into the design of DB11, so it is a real pleasure to accept the T3 award.”
MILES NURNBERGER, CREATIVE DIRECTOR OF EXTERIOR DESIGN, ASTON MARTIN

“This is not only the most important car that Aston Martin has launched in recent history, but also in its 103 year existence. DB11 rightfully places Aston Martin once again as a leading brand in the luxury automotive market.”
DR ANDY PALMER, PRESIDENT AND CEO, ASTON MARTIN

The engine is a silent monster, though, taking the car from a standstill to 60mph in 3.9 seconds, making this the quickest DB ever made. Built upon a new lighter, stronger and more space efficient bonded aluminium structure, DB11 is the most powerful, efficient and dynamically gifted DB model in Aston Martin’s history. As such, it’s the most significant new Aston Martin since the introduction of the DB9 in 2003. The DB line has provided icons such as the DB2/4, DB5 and, most recently, the DB10 developed specifically for you know who. DB11 re-imagines the relationship between form and function with a series of fresh design features. Foremost amongst these are the front-hinging clamshell bonnet, distinctive LED headlights and accentuated lines of the iconic Aston Martin grille. The profile is stunning, in no short measure due to the roof strakes that flow uninterrupted. The clean lines continue to the rear, with a sloping decklid that blends into sculpted taillights to create a new and unmistakeable look. The epitome of Aston Martin’s progressive design principles, DB11 is the ultimate partnership of design and cutting edge technology.
In recognition of this, the car won the T3 Design of the Year Award, presented annually for the world’s best technology products. The award seeks the “most pleasing looking device, or one that reimagines what a product of its type can look like”. DB11 became available in the UK at the end of last year, so for those whose New Year resolutions included buying a new car, what are you waiting for? Above everything, four things stand out: the engine, the looks, the steering and the ride. What more is there to ask?

essence INFO
DB11 recommended retail price: from £154,900 in the UK
Website:

Men’s fashion | CHESTER BARRIE

Gentleman’s world
Chester Barrie is a quintessentially British brand. Based in London’s Savile Row, it was founded in 1935 and offers affordable luxury to the sartorially minded, bringing the style of Savile Row, and the cut of the best British tailors, to men who understand good dressing. Chester Barrie’s Spring Summer 2017 collection is designed to make sure men look great when it matters most. Whether it’s a wedding, a garden party or even Royal Ascot, there is clothing to ensure that gentlemen look the part. Chester Barrie offers suits that can be broken down into separates, offering more choice. The new range of suit separates is designed to be mixed and matched for versatility and maximum impact.

essence INFO
Chester Barrie
Website:

Summer suitcase all wrapped up
Kalita al Swaidi and Raechel Temily have developed a resort wear brand that offers statement holiday pieces for women.
A reflection of the duo’s personal style, the aesthetic and vision is uncomplicated: modern resort wear that goes beyond the limited ‘holiday’ genre with desirable pieces to wear in any location. Kalita first started out creating beautiful, hand-made embroidered lingerie and quickly garnered attention from the London press. Celebrity clients such as Naomi Campbell, Kate Moss, Kylie Minogue and Poppy Delevingne all snapped up bespoke pairs of the pretty lingerie. After a hiatus from fashion design, Kalita has spent considerable time crafting her first foray into women’s ready to wear, entering the market with a clear, concise aesthetic that reflects both her personal style and what she feels women around the world will also love to wear. A lot of work goes into how the pieces are cut, how they fit, how the wearer moves in them. The silks all breathe and are very comfortable to wear. It’s resort wear, clothes to take away on holidays, for special weekends away or on trips to amazing locations for when the wearer needs to look her best, even if barefoot, still salty from the sea. 
essence INFO
Stockist: Matches Fashion Marylebone, Notting Hill, Richmond, Wimbledon
Websites: and

Fashion | KALITA
Fonteyn and the Slipper Apron Dress – Tomato Red £445
Tasha Maxi Skirt – Shark Blue Silk Crepe £000
St Barths Tee – Tomato Red £170
Camille Reversible Maxi – Shark Blue Silk Crepe £390
Casablanca Beach Cover Up – White Silk Cotton £258
Uschi and the Wild Sky Dress – White Silk Cotton £648
For the Love of Leonie Maxi – Washed Sky Cotton £390
Berenson Tunic – Caramel £258

Beauty | EPSOM SKIN CLINICS

Valentine’s gift
A recent survey revealed that a high proportion of women would appreciate beauty treatment, including anti-wrinkle injections or fillers, as a special gift for Valentine’s Day. Jacqui Casey of Epsom Skin Clinics found out more.

Valentine’s Day is a perfect excuse to dress up and spend time with a loved one, whilst looking and feeling in tip top condition. Unfortunately, for some people, hearing the words ‘wrinkle-relaxing injections’ paints a picture of a woman with a frozen face. This is perhaps due to the obsession with ‘beauty’ within the media, causing certain celebrities to look ‘overdone’. To start, it is very important to research the clinic and therapist administering these injections, and choosing a certified and experienced doctor is paramount. Our tip is to have a few different consultations with nurses and doctors at a recommended practice to really understand any procedure beforehand.

Pucker up for smooth lips
All Epsom Skin Clinics’ injectable fillers are based on a naturally occurring substance called hyaluronic acid, found in the body, which helps to lubricate joints, nerves, hair, skin and eyes. The ability to produce hyaluronic acid declines with age, making the skin drier and more wrinkled.
Injected ‘HA’ is a crystal-clear, non-animal gel, helping to bind water and hydrate skin from within whilst maintaining a youthful, dewy glow to create a natural enhancement which is gentle and safe to the skin. The result is instant and long-lasting, but not permanent: over time hyaluronic acid is absorbed by the body and gradually disappears. A favourite product to achieve super soft lips is Jane Iredale’s Sugar and Butter Lip Exfoliator/Plumper. The exfoliating side uses turbinado sugar to gently remove any dull, dry skin, while the tinted lip plumper infused with shea butter restores moisture to thirsty lips.

Dare to bare: don’t forget the cleavage
The effects of too much sun, lack of SPF cream and perfume can all contribute to a wrinkled, mottled cleavage. For those concerned with the décolleté area, don’t panic, as there are treatments available to prevent the skin from worsening and to bring back some life to the skin. Combination treatments are highly recommended, but there are only a few which can be combined effectively without damaging the skin. Enhanced Skin Rejuvenation is a revolutionary skin treatment using multiple laser wavelengths to eliminate and reduce pigmentation and broken capillaries, soften wrinkles and improve skin tone. It is most often performed on the face and neck, but the décolletage, hands and arms can be treated as well. A series of three to six treatments spaced two to four weeks apart is our standard recommendation. During a course of treatment for pigmentation, thread veins and the general mottled look to the décolleté or chest, our newest ‘SupErficial™ Laser’ can be applied two to four weeks before or after Enhanced Skin Rejuvenation. This revolutionary laser is a gentle resurfacing laser ‘peel’ that offers a pearl finish to skin. Immediately after this treatment, the skin will benefit from a Dermalux.
This light-based treatment uses three evidence-based wavelengths of near-infrared, blue and red light which are absorbed into the skin at different levels. The wavelengths provide energy and stimulate a variety of natural processes within the skin to help produce vitamin D and serotonin, whilst encouraging cell growth and collagen rejuvenation to heal and repair skin post laser and minimise downtime. Each treatment lasts approximately 20 minutes and is a pleasant, relaxing experience. To prevent further damage and protect the rejuvenating process, always apply a good sun protection factor every day as part of a daily routine. Heliocare was designed to protect users against damaging UVA and UVB sun rays, and taking a supplement such as Heliocare Capsules will boost protection. The key ingredient is Polypodium leucotomos, a fern extract, which studies have shown helps guard skin from UV damage and even decreases redness after sun exposure. No supplement could ever replace the need for sunscreen, so it is advisable to use topical application to maximise sun protection.

From within
Skinade™ is a next generation liquid food supplement that boosts the body’s natural collagen production and improves the appearance of skin in as little as 30 days. Skinade™ has been developed by leading UK scientists and is manufactured in the UK using only the highest quality EU-approved ingredients. Skinade™ provides a perfect ratio of liquid ingredients working to create one of the most advanced anti-ageing skincare products on the market today. So book a consultation with Epsom Skin Clinics now to find out how to get the best out of skincare.
essence INFO
Epsom Skin Clinics
Website:
Telephone: 01372 737280 (Epsom) or 020 8399 5996 (Surbiton)

Breaking the mould
Don’t keep sweet Bordeaux for dessert: this wonderful wine can be paired with more than just pudding. Food and wine writer Nick Harman goes to meet the makers.

The road sign emerges in the car’s headlights: ‘Sauternes’. A name previously familiar only from the labels on a bottle suddenly becomes a real place of bricks and mortar. Or, to be exact, ancient stones. Sauternes village is a huddle of centuries-old houses unchanged by time. Everywhere are signs for its most famous product and it’s a must-see stop for anyone on the Sweet Bordeaux wine trail. And it’s a bigger trail than some imagine. There are in fact eleven sweet wines produced in the region: Sauternes, of course, but also Cadillac, Barsac and Loupiac to name just a few. The differing soils, microclimates and elevations provide a wonderfully wide variation of wine styles across the area and within individual chateaux, all of which can be explored at the Maison des Vins de Cadillac, a fascinating combination of museum and tasting centre. The unifying factor is the all-important autumn fogs created by the nearby Garonne and Ciron rivers. “Folklore says that hundreds of years ago a white wine grower didn’t harvest before the fog came and his grapes got covered in a grey mould that made the berries shrivel up. Not wanting to lose all his money, he made wine anyway and it was a revelation: sweet, complex and delicious.” Jean-Christophe Barbe is telling me this as he offers me different vintages in the kitchen of his home, the vineyard Château Laville in Preignac.
He isn’t just a winemaker: he is a professor of oenology in Bordeaux, and his speciality is the fungus that the fog creates on the berries, the noble rot.

Wine review | SWEET BORDEAUX

Ageing barrels at Château d'Yquem
Grapes with the noble rot

“The scientific name is Botrytis cinerea,” Jean-Christophe explains. “The fungus punctures the grape’s skin, so the water evaporates and that raises the sugar concentration in the remaining juice.” The downside is that it means you get a lot less wine from your vines. “About one glass per vine,” he says, as I leave, “which explains the high cost of the finished product.”

I move to another kitchen, another ancient chateau, this time the home of Laure de Lambert Compeyrot, chatelaine of Château Sigalas Rabaud in Bommes. Here chef Olivier Straehli of La Maison des 5 Sens in Bordeaux is creating dishes to match Laure’s range of wonderful sweet wines.

The beautiful Château d'Yquem

Far from being heavy and cloying, the new breed of sweet Bordeaux from winemakers who, like Laure, are increasingly women, offers all kinds of expressions. The younger wines are a lighter golden colour: vivacious, clean and fruity, especially when served chilled. They haven’t yet the full honeyed complexity of age, but they bring a lot to the plate and the palate. This is amply demonstrated over seven courses and seven wines. Flavours are complemented, enhanced and balanced, from a dish of shepherd’s pie in a pumpkin with coconut milk, to one of cress risotto with parmesan and another of soba noodles with cream of kaffir lime and roasted sesame.
You’d not normally think to pair these dishes with sweet wines, but they work wonderfully. And for those looking for true nobility, there is Château d’Yquem. My pilgrimage along the sweet wine route had to end where some of the most expensive wines in the world are produced. This beautiful chateau on a hill has dominated the landscape since around 1477 and you’re aware of the wealth and power of the brand at every turn. Here Sandrine Garbay guided my tasting. She is the ultimate decision maker on each year’s production, and with bottles costing upward of £150 to £350, or even more, it’s a serious job. The more aged Yquems are complex and layered: tropical fruits with the marmalade finish that is the mark of the finest sweet Bordeaux. The balance is perfect, with a velvet mouthfeel and an acidity so gentle it’s almost invisible, but which clears the palate. I feel bathed in golden sunlight even sitting indoors. So the message is: don’t stop just at Sauternes, but instead explore the remarkable range of sweet Bordeaux wines and even try, as the Sweet Bordeaux website suggests, mixing one in a cocktail. A few aged sommeliers may turn purple at the very idea, but there’s a new golden age for sweet Bordeaux on the horizon and just a short plane ride will have you in paradise.

essence INFO
Nick Harman, writer
Website:
Maison des Vins de Cadillac, D10 Route de Langon, 33410 Cadillac, France
Website:
Château Laville, 6 Laville, 33210 Preignac, France
Website:
Château Sigalas Rabaud, Rabaud-Sigalas, 33210 Bommes, France
Website:
Château d’Yquem, 33210 Sauternes, France

Kale
A member of the brassica family, this leafy vegetable is so packed with nutrients it is now often labelled as a superfood, and rightly so.
Kale enjoyed a comeback following the wartime ‘Dig for Victory’ campaign, but it was one of the most common vegetables in all of Europe during the Middle Ages. Its origins stem right back to Ancient Greece and the Romans also enjoyed it, referring to it as Sabellian kale. Today, kale varies from plain-leaved through to curly kale, with colours from light green through to very dark green, such as cavolo nero – black cabbage. All kale is packed with vitamins A, C and K in addition to calcium, folic acid and even lutein, an antioxidant that helps keep eyes healthy. This vegetable also packs a punch in flavour and is best following the first frosts of the winter.

Mussels
The old saying of eating mussels only when there’s an ‘R’ in the month is not strictly true, but there is something in it as they really are at their best during the colder months. This is mainly due to them spawning in the spring, so they are usually of a far better size later in the year. Most mussels available today are rope-grown, resulting in plumper meat and less sand or grit than dredged mussels. Some people prefer wild mussels, but there is also a higher risk of them containing harmful toxins. It is not advisable to eat freshwater mussels as the quality of the water is likely to be very dubious. Of course there is a risk when eating any shellfish, but follow some simple rules and the risk is diminished, especially with farmed mussels. Discard any broken shells or ones already open before cooking and eat fresh mussels on the day they are bought, unless advised otherwise by a fishmonger who will know when they were harvested. The best way to cook mussels is to steam until they just open, as opposed to over boiling. The only difference between white and yellow mussels is the gender, the females being yellow.
Food | CRATES LOCAL PRODUCE

Kale & ginger juice
Two servings

Ingredients:
Six to eight curly kale leaves
Two apples
Three stalks celery
One cucumber
One lemon
Fresh grated ginger, small to medium piece
Honey, optional to taste

Method:
• Simply wash the kale leaves and chop the remaining fruit and vegetables.
• For a juicer, simply add the ingredients and serve or chill.
• Otherwise, blend all the ingredients and sieve, or ideally use an even finer mesh strainer.
• Add honey to taste, if required.

Cider mussels with bacon
Serves two

Ingredients:
500g live mussels
150ml still cider
100ml single cream
Three rashers of streaky bacon or pancetta
Two shallots
One tablespoon of rapeseed or olive oil
One tablespoon whole grain mustard
Two cloves garlic
Sprig parsley
Chives to garnish

Method:
• Remove any barnacles or beards from the mussel shells, especially if dredged rather than rope farmed. Rinse thoroughly under running cold water and discard any broken or open shells that won’t close if tapped.
• Heat the oil in a large saucepan and add chopped bacon and shallots together with sliced garlic cloves. Cook until soft and add in the mustard.
• To this, add the mussels, then cider and cover with a lid. Shake and keep the pan over the heat, shaking occasionally, for around three to four minutes or until the mussels open.
• Take off the heat to pour in the cream, stir well and add the parsley.
• Finally, serve in bowls with chopped chives sprinkled on top and a side of crusty bread or even french fries.
essence INFO
Crates Local Produce
24a Carfax, Horsham, West Sussex RH12 1EB
Telephone: 01403 256435
Website:
Follow on Twitter @crateslocal or Facebook page Crates Local

A gourmet Indian food explosion from Surrey Spice
Food writer Shirlee Posner of Eat Surrey introduces essence readers to Mandira Sarkar, the chef behind the innovative Surrey Spice, purveyor of Indian gourmet curries and fine dining.

A management consultant, Mandira Sarkar, the creative force behind Surrey Spice, worked in the public sector for many years helping organisations become more productive. After her last large project with Guildford Borough Council ended, Mandira felt it was time to try her hand at something creative and more hands-on. A love of her family’s traditional cuisine and treasured handed-down recipes inspired her to launch a calendar of pop-up supper clubs. I was invited to one of the first Mandira hosted with some other local food writers. Her supper clubs are all themed by festivals and ours for the evening was Holi, the festival of colours. We were treated to a fabulous evening of Indian food and storytelling with dishes that were pure bliss: no overpowering chilli hit, absolutely no puddles of oil, just fragrant, aromatic spices and complementary textures. Desserts were amazing too. However, whilst the food was as good as anything I have eaten in Singapore’s Little India (perhaps even better), it was really Mandira’s delightful narration during the meal that made the evening sparkle. A natural host, Mandira embellished the evening with background information on each dish: a family party, watching her mother in the kitchen or a snippet of information about the festival. Holi commemorates the victory of good over evil, which culminates in the burning and destruction of a female demon named Holika.
Holi got its name as the ‘Festival of Colours’ from Lord Krishna, a reincarnation of Lord Vishnu, who liked to play pranks on village girls by drenching them in water and coloured powdered paint. The festival is always held at the end of February or early March, which also marks the start of the summer season. By the end of the evening, as the food entwined with vivid descriptions, we almost felt we had been there ourselves. If Dev Patel had danced into the room, none of us would have been at all surprised!

This was in February 2015 and I have followed Mandira and her company Surrey Spice on social media as the business grew. Supper clubs, while great for guests, are hard work and difficult to make a living from, but they are great for having your expertise recognised. Mandira had also started to offer takeaway food for pick-up on Fridays from her home in Guildford. Surrey Spice supper clubs have popped up at local award-winning distillery Silent Pool with Bollywood-themed evenings and at Cellar Wines in Ripley, a boutique wine shop and deli with a full events calendar. Cookery courses and bespoke catering are also on offer. In fact, this entrepreneur has been so active she has also been a finalist at the Surrey Life Food & Drink Awards for Food Innovation. More recently, Mandira decided the time was right to sell Surrey Spice freezer-ready meals to farm shops and delicatessens. Making these fresh to order, she delivers either fresh or ready frozen and already has a keen following. There are so many Indian ready meals in supermarkets that Mandira fully supports her retailers by offering tasting events. These are a huge success, as once bitten it’s difficult to resist the charms of these authentic dishes. After trying them myself, I was delighted to have the opportunity to see them being made and hopefully learn some trade secrets. I arrived on a cold morning to watch the magic happen in Mandira’s Surrey kitchen, where she has managed to find a lady from Goa to help prepare her wonderful dishes, and another helper was on hand to pack.

Artisan food | EAT SURREY

Mandira’s chicken curry
This elegant curry is easy to replicate providing there is an electric food blender or food processor to hand. The blended onions, ginger and garlic form the base of this aromatic dish and naturally thicken the sauce. Serve with steamed Basmati rice and fresh coriander for a satisfying dinner.

Ingredients
Two to three tablespoons vegetable oil
One inch cinnamon bark
Four cardamom pods, cut down the middle to release flavour
Four cloves
Two mild onions
One inch fresh root ginger
One whole garlic, skin removed
One teaspoon ground cumin
One teaspoon ground coriander
One teaspoon red chilli powder; for a hotter curry, use more
One teaspoon turmeric powder
Eight boned chicken thighs, skin removed and each chopped into four to five pieces
Four to five potatoes, cut into quarters
One teaspoon salt
Half teaspoon sugar
Half teaspoon garam masala
Two fresh tomatoes, chopped
Half cup water
Fresh chopped coriander to garnish

Method
Heat two to three tablespoons of oil in a large, heavy-based pan and add the cinnamon bark, cardamom pods and cloves. This may sputter, but it’s important to release the flavour from these spices. Once an aroma starts to emerge, add the blended onion, ginger and garlic. Stir fry this mixture until it starts to brown slightly: if it starts to stick, add a teaspoon of water. Add the cumin, coriander, chilli and turmeric, stirring continuously. Cook for a minute and add the chicken pieces. Cook the chicken in the spice paste until it starts to brown and is coated all over. Now add the potatoes and pan fry for another two minutes. Add the rest of the ingredients and bring to the boil. Simmer with a lid on the pan for 15–20 minutes until the potato is cooked through and the gravy has thickened. Serve with rice and garnish with fresh coriander.

Shirlee Posner, eatsurrey.co.uk

On arrival, the kitchen was in full production: huge wooden spoons were used to stir giant pans of dhal and Dhania Kaju Murgh (chicken with cashew nuts and coriander). A curious machine was whirring on the work surface and from the aroma I could tell I was in curry nirvana. I was astonished at the amount of detail that goes into the dishes. No jar of Balti curry paste has ever been welcome in this kitchen. Instead the dishes are all authentic regional recipes made exactly as they would be in Indian homes. Mandira explained that dishes such as tandoori chicken masala don’t exist in India, but her dishes of Xacuti chicken and Meen Moilee do. I watched the Dhania Kaju Murgh created from chopped, skinless chicken thigh meat, fresh coconut and coriander. Thigh meat is a preferred cut for traditionalists as it’s more tender and juicy than chicken breast (a sentiment I found when I lived in Taiwan too). The curious whirring machine, it turned out, was a stone grinder for spices. Used in modern Indian kitchens and powered by electricity, the grinder was brought to the UK for Mandira by a friend in her suitcase. The only recognisable part of this machine is the name Prestige, but it is essential for the texture it creates when grinding ingredients. In the machine I witnessed fried onions being ground with fresh coconut, the resulting pulp seasoning and thickening the gravy. Using fresh coconut is essential, said this chef, whose attention to detail was apparent. After this dish was made, a second went into production – Chicken Xacuti – for which a whole bowl of Kashmiri red chilli had been steeped in water and ground with coconut.
A batch of Lehsuni Dal (yellow lentils cooked in caramelised garlic) was ready to portion up, but first we sampled a small bowl each. Satisfying, spicy, smooth and aromatic, it’s a delight to find such brilliant Indian food being made locally. Mandira sources her ingredients from a local Indian food retailer who also has a butcher’s counter, so Surrey Spice supports other local food businesses too. Currently there are ten dishes available in Surrey Spice’s ready meal range, one of which is a Paneer (Indian cottage cheese) cooked in spinach which is the best I have ever tried. I highly recommend these new freezer-ready meals. They are beautifully cooked in small batches in a spotlessly clean kitchen. The effort and expertise that goes into their production is hard to beat and the recipes are totally authentic. In addition, these Surrey Spice meals are all gluten free and contain no preservatives. It’s just really good food! Mandira’s amazing food is currently for sale in several farm shops in Surrey and a full list can be found on the Surrey Spice website.

Websites: and eatsurrey.co.uk
Telephone: 07876 135096
Email: info@surreyspice.com
Shirlee Posner is a food writer and blogger at and provides social media management, web copywriting and food photography. Member of the Guild of Food Writers

Baking | JEN’S CUPCAKERY

VALENTINE VANILLA CUPCAKES
A good vanilla cupcake, one that is soft and moist, with just the right touch of quality vanilla extract and topped with a creamy buttercream, is a must-have in anyone’s baking repertoire and a perfect, easy treat to make for a sweetheart this Valentine’s Day, or any other! Bake in a heart-shaped baking cup and top with mini hearts or sprinkles – both can be found in some supermarkets or online – and a vanilla buttercream tinted pink or a chocolate ganache.
After all, a way to a (wo)man’s heart and all that…

Makes around 12

Ingredients
120g plain flour
140g caster sugar
One and a half teaspoons baking powder
A pinch of salt
40g unsalted butter, at room temperature
120ml whole milk
One egg
Quarter teaspoon vanilla extract

Vanilla icing
250g icing sugar, sifted
80g unsalted butter, at room temperature
25ml whole milk
A couple of drops of vanilla extract

Chocolate ganache
250g dark chocolate
235ml double cream

Method
• Preheat the oven to 170°C (325°F).
• Cream the butter and sugar until soft and fluffy.
• Add the eggs one at a time, beating well between each, and then add the vanilla extract.
• Add the flour, baking powder and milk in equal batches until the mixture is smooth, but don’t overbeat.
• Spoon the mixture into the baking cup cases until two thirds full and bake in the preheated oven for 20–25 minutes, or until light golden and the sponge bounces back when touched.
• A skewer inserted in the centre should come out clean. Leave the cupcakes to cool slightly in the tray before turning out onto a wire cooling rack to cool completely.
• Whilst cooling, make the vanilla icing by creaming the butter until smooth, then adding the other ingredients and beating until light and fluffy. If chocolate ganache is preferred, simply break the chocolate into squares and then heat the double cream until just simmering, add the chocolate and stir briskly until melted into the cream. Take off the heat, leave to thicken for a few minutes and then spoon or pipe on.
• When the cupcakes are cold, pipe the vanilla frosting or chocolate ganache on top and decorate with whatever the heart desires!

TOP TIP: If the cupcake batter looks as if it’s curdling when adding the eggs, add a spoonful of flour after each egg.

essence INFO
Website:
Telephone: 07751 553106
Email: mail@jenscupcakery.com
Facebook:
Twitter: @jenscupcakery
Blog:

Literature | REVIEW

Venetian Chic
Venetian art connoisseur, interior designer and hotelier Francesca Bortolotto Possati knows the intricacies of Venice. To have her as a guide is to experience firsthand her passion for the mythic city whose daily visitors outnumber its population. Join her to visit artists’ studios and elegant Venetian friends, and to discover palaces’ secrets. Follow her on a gondola ride or through secret gardens, discover restaurants, markets and artisan shops. Everywhere one wanders, a sense of history saturates buildings and landscapes, harking back to the artists of the Renaissance and the chic masquerade balls of centuries past. The discerning eye of photographer Robyn Lea makes this book a revelation of the Venice of dreams. A sentimental foreword by Jeremy Irons perfectly complements this stunning volume.
Francesca Bortolotto Possati is the chief executive officer of Venice’s Bauer Hotel group. A native of the city, she is greatly involved in its culture as a patron of the arts. Australian-born Robyn Lea moved to Milan, Italy to work as a photographer’s assistant. Over the last two decades, Lea has become an internationally renowned photographer, TV commercial director and writer. She has worked around the globe for clients such as Peroni, Kodak, Time, Vogue and Harper’s Bazaar.
By Francesca Bortolotto Possati
Photography by Robyn Lea
RRP: $85.00
264 pages • Hardback
150 Illustrations
ISBN: 9781614285380
Published by Assouline Publishing

REVOLVER 50: birth of an icon
The Grammy Anniversary edition of REVOLVER 50 features rare new photographs and tipped-in illustrations, with written contributions by Paul McCartney and Ringo Starr. The Collector copies include a signed and numbered book – hand-bound in buckram with gold foil blocking and gilt page edges – as well as a 12-page commemorative booklet and a unique Klaus Voormann pencil drawing. Telling his stories behind the making of Revolver and its cover design, each copy in Voormann’s Grammy Anniversary Edition will come with a signed, one-of-a-kind pencil-drawn artwork. Limited to 500 sets, this is a valuable opportunity to acquire a piece of the Revolver narrative, as well as a beautiful piece of art. The original Klaus Voormann drawing is presented on a record-sized 29 x 29cm (approximately 12 x 12") acid-free mount, which is suitable for framing. Each book and unique drawing will be signed by the artist. As Paul McCartney says: “In the end, the Revolver cover was a classic and this book is another.”
By Klaus Voormann
RRP: collector edition (433 copies) £325.00; deluxe edition (67 copies) £845 (both prices pre-publication)
Published by Genesis Publications

A Spitfire Girl, Mary Ellis
Her story is one of the most remarkable and endearing of the war, as this young woman, serving as a ferry pilot with the Air Transport Auxiliary, transported aircraft for the RAF, including fast fighter planes and huge four-engine bombers. Writer Melody Foreman is a qualified journalist and graduate with experience in newspapers and television documentaries, and author of the bestselling ‘Bomber Girls’.
Author Mary Ellis as told to Melody Foreman
RRP: £25.00
240 pages • Hardback
16 illustrations
ISBN: 9781473895362
Published by Pen & Sword Books Limited

Dying For The Truth
The Concise History of Frontline War Reporting
The role of war correspondents is crucial to democracy and the public’s discovery of the truth. Without them, the temptation to manipulate events with propaganda would be irresistible to politicians of all hues. This book starts by examining how journalists have plied their trade over the years, most particularly from the Crimean War onwards.
Their impact on the conduct of war has been profound and the author, Professor Paul Moorcraft, explains how this influence has shaped the actions of politicians and military commanders. By the same token, the media is a potentially valuable tool to those in authority and this two-way relationship is examined. Technical developments and twenty-four hour news have inevitably changed the nature of war reporting, with political masters ignoring this at their peril, and the author examines key milestones on this road. Using his own and others’ experiences in recent conflicts including Korea, the Falkland Islands, the Balkans, Iraq and Afghanistan, the author opens readers’ eyes to an aspect of warfare that is all too often overlooked, but can be crucial to the outcome. The public’s attitude to the day-to-day conduct of war is becoming ever more significant and this fascinating book examines why.
By Paul Moorcraft
RRP: £25.00
376 pages • Hardback
100 black and white images
ISBN: 9781473879157
Published by Pen & Sword Books Limited

Cranmore School
Independent Preparatory School for girls and boys 2½–13
“The quality of the pupils’ academic achievements is excellent” – Independent Schools Inspectorate
OPEN MORNINGS 09.30–11.30, Saturday 4 March & Friday 28 April 2017
Assisted Places Available
01483 280340 | admissions@cranmoreprep.co.uk
West Horsley, Surrey KT24 6AT

Contemplating the inevitable
Mitchell Thompson is an Associate in the Private Wealth Department at Mundays and here discusses how best to prepare for one certain aspect of life: death.

Mitchell Thompson is an Associate in the Private Wealth Department at Mundays. He advises on a broad range of private client matters, including Wills, estate planning and the administration of estates. He is also experienced in dealing with lasting powers of attorney and deputyship applications to the Court of Protection. Mitchell can be contacted on 01932 590664 or at mitchell.thompson@mundays.co.uk.

2016 has passed and with it the deaths of many notables. It looks like a busy year for HMRC in terms of inheritance tax. I am told that last year’s list of lost stars is no longer or shorter than those of the years preceding it; however, because I am getting older, the loss resonates more with me as the celebrities have earned their place in my own life memories. Who could forget Alan Rickman in ‘Die Hard’, Prince with ‘Purple Rain’, Carrie Fisher and her legacy of ‘Star Wars’ or George Michael with ‘Careless Whisper’? But because I am getting older, these deaths, particularly the more unexpected ones, make me consider my own mortality and how abruptly life can come to an end. We all will die, that is as certain as night follows day, yet many of us do not think, or do not like to think, about what will happen once we are gone or what stress we may cause to the people we leave behind. Simple steps taken during our lifetime may temper or even remove the problems caused. We have all read news stories of families squabbling over estates. Indeed I recall a surge in claims on estates I was administering around the time of the 2008 crash. At the time, people faced with an uncertain financial future and austerity considered all their potential sources of income. Compromise soon flies out of the window where hardship may be a real possibility. Similarly we all know of families where a large amount of Inheritance Tax was paid, sometimes unnecessarily. Isn’t it therefore a logical step to spend some time talking to a solicitor to make sure you and your estate do not become an anecdote?

So have you made a Will?
Making a Will seems such an obvious thing to advise, but still much of the UK population does not have one in place because they think they don’t have anything worth leaving, they haven’t got round to it or they make assumptions as to how the estate will pass.
By taking the time to make a Will you can:
• Specify how you want your estate to be divided;
• Make sure bequests to cohabiting partners, friends and charities are included;
• Make sure, in the case of married couples, planning is undertaken to cap how much may be lost in paying care home fees; and
• Make sure you can minimise the amount of Inheritance Tax paid by your estate.
The last couple of points are not guaranteed just by making a Will, but by making a Will in conjunction with legal advice. Do you have business interests? Do you have children from a previous relationship? Have you adopted or do you have step-children? Why not take the time now to take appropriate steps so that everyone you want to benefit does so from your estate, the tax payable as a result of those wishes is minimised and, at the same time, the amount passing to your nearest and dearest is maximised.

Probate and beyond
When the time does come and a loved one dies, sometimes grief is rudely forced aside whilst the administration of the estate is dealt with. This is, of course, the red tape associated with a person dying: drawing a line under their paper and digital life, gathering in assets, settling debts and distributing the remaining estate in accordance with the Will or statutory rules of Intestacy (where there is no Will). An executor is appointed under the Will; an Administrator under Intestacy; together they are Personal Representatives or PRs. Normally these are family member(s) or close friend(s); occasionally it may be a bank, solicitor or other professional. In addition to dealing with asset providers, the PRs may also need to report to HMRC (Income Tax and Inheritance Tax) and apply to the Court for Probate (the Court order recognising who has the legal authority to deal with the estate). Dependent on the size and complexity of the estate, this can become an onerous task and comes at the worst possible time when trying to cope with loss. Turning to a solicitor for assistance can relieve the day-to-day burden, leaving you to make the important decisions and sign documents. Too often we have seen PRs try to rush the process. We, of course, recognise that they want to move on following the bereavement, but they often end up falling foul of HMRC and can risk potential litigation by disappointed beneficiaries as the result of financial loss to the estate. Binding obligations set out in the Will are ignored, and Nil Rate Band (NRB) or Life Interest trusts set up as part of tax or care home planning (usually on the death of a first spouse) are incorrectly dealt with, if at all, which can often lead to additional tax on the death of the surviving spouse, penalties, interest and massive delay. Everyone is different; not everyone wants solicitors to deal with the administration of their estates. However, recognising when professional advice is required is half the battle. If there is a legal document, such as a Will, which has plenty of jargon, why not take an hour to speak to a solicitor so you understand the implications rather than taking a chance? It could save more time, money and stress in the long run.

Best Private Wealth Lawyer UK 2016
Cobham-based law firm Mundays can announce Partner and Head of Private Wealth, Julie Man has been named Best Private Wealth Lawyer UK in the ‘2016 Women in Wealth Awards’, which showcases the very best women from across the financial environment. Julie joined Mundays as a solicitor in November 2006 and progressed to Associate, Partner and most recently Head of Private Wealth in 2013. The detailed knowledge and experience that Julie has acquired over the last ten years of continuous progression at Mundays has helped to establish her as a solid legal adviser in the private client arena, both internally within Mundays and externally. This has been complemented by an approachable and grounded style, which has earned her an enviable reputation for success amongst her loyal client base. Her ability to distil complex points of law and clearly explain them based on the nuances and practical objectives of each client has been highly praised. Valerie Toon, Managing Partner at Mundays, comments on this fantastic achievement: “Julie believes that there really is no ‘typical’ client. It is important to be flexible and adaptable to the needs of the client. Julie leads a hand-picked team that are chosen for their exceptional abilities. Julie deserves this award for the commitment she has shown to her career over the last 10 years at Mundays.” Commenting on the programme, Awards Coordinator Daisy Johnson stated: “Women’s contribution to the finance industry cannot be underestimated, and as such this awards’ programme is showcasing the most committed, successful and professional women from across the market. I am truly proud to be able to highlight the hard work of every one of my winners and would like to wish them an even more prosperous future.” To learn more about all the deserving award winners and to gain insight into the working practices of the ‘best of the best’, please visit the Wealth & Finance website where you can access the winners’ supplement.

essence INFO
Mundays LLP
Cedar House, 78 Portsmouth Road, Cobham KT11 1AN
Telephone: 01932 560500
Website:

Financial outlook for 2017
Simon Lewis believes that 2017 is likely to be an eventful year for both the global economy and global financial markets. The changes that are likely to take hold will have a profound effect on many investors.

I am usually reluctant to make predictions, and to do so might be considered a particularly precarious endeavour at a time of such uncertainty.
Nevertheless, it is important for us to have a view of the world in order to shape both our investment and financial planning policy, so I am happy to share with you our current thoughts.

Change is coming
In many ways, 2017 could be the year that has been too long in coming. It will mark the beginning of the end for a world that has become unduly dependent upon the monetary policy of central banks. We have now endured over 8 years of financial engineering in the aftermath of the global financial crisis and, whilst there is good evidence that the first 4 or 5 years of monetary policy stimulus was necessary, it has probably now gone too far. It is all too easy to blame central bankers, but the reason they have been required to continue to act is that governments have generally failed to take the necessary measures to reform economies, encourage productivity gains and generate genuine economic growth. The absurdity of the current situation is illustrated by the fact that, until recently, there was very little additional reward for investors prepared to lend money to both governments and business for the long term as opposed to the short term. This of course does not make sense. For example, Europe issued €242 billion worth of new bonds in 2016 that had an interest rate of zero. We are not just talking about government debt here; big companies such as Unilever and Sanofi have issued bonds with an interest rate of zero. Who would invest for a zero return? The answer of course is a central bank spending money that it has conjured up. But too many rabbits have now been pulled from the hat. For example, it has recently been forecast that by the end of 2017 the Bank of Japan will own 50% of all Japanese bonds issued and will be the largest shareholder in 55 blue-chip companies; paid for with ‘funny’ money.
The change in direction for the US economy that will follow Trump’s policy of fiscal expansion (he’s planning to spend a lot of money) is likely to put some of these unhealthy trends into reverse, altering the fundamental attractions and risk of most classes of investment.

Inflation
The impact of sterling’s post-referendum devaluation is inflationary and has not yet fully fed through. Furthermore, it is likely the oil price will recover a little because the OPEC countries seem likely to agree a cut in production to reduce the current glut of global oil and support prices. UK CPI could be 4% or more by the end of 2017.

Sterling
The pound fell heavily in the aftermath of the EU referendum as a consequence of the uncertainty that now exists regarding the terms of the UK’s trading relations with the rest of the world. Nevertheless, it appears that the fall was a little ‘overdone’, particularly in relation to the US dollar. I expect sterling to remain volatile as sentiment inevitably swings with each new revelation from the forthcoming Article 50 negotiations. However, overall I think it will strengthen and I predict that it will recover to $1.35 by the end of the year.

UK base rate
Although the Bank of England seems prepared to look through short-term inflationary spikes, it is likely to act by raising the base rate if it thinks there is a risk that higher inflation will otherwise prevail – for example, if sterling was to depreciate heavily. It is, however, likely to remain cautious about increasing the base rate too quickly whilst negotiations relating to the UK’s exit from the EU are in progress. Although I do not expect much movement in the first half of the year, I think UK base rate could be 0.75% by year-end.
Bond yields
As commented in my recent article, a Trump presidency is likely to lead to a short-term acceleration in US economic growth. This acceleration will result from an increase in government spending (on infrastructure), cuts to corporation tax and regulatory reform. This is likely to lead to higher US interest rates and higher US inflation, and the impact of this is likely to be felt in the UK. One of the consequences is that bond yields are likely to rise, which means that bond prices will fall. The projected increase in UK government borrowing to 90% of GDP is also likely to push bond yields higher if we take the view that the Bank of England will no longer resort to quantitative easing. I expect the benchmark redemption yield on 10 year UK gilts to increase from around 1.5% to 2.5%.

FTSE 100 share index
Although I think there is much to be optimistic about in terms of global equity markets, the FTSE 100 share index is somewhat peculiar in its construction. Something like 80% of the earnings of these companies are derived outside the UK and sterling’s devaluation has therefore acted to inflate the value of such earnings. This has accounted for much of the index’s performance since the EU referendum. However, I expect sterling to strengthen a little over the course of the year and this will have the opposite effect on the value of those overseas earnings. Furthermore, the predicted upward trend in bond yields is likely to put pressure on the share price of bond proxy stocks: those big companies (such as utilities) that have strong cash flow and pay a good dividend. On the positive side, the banking sector (see below) is likely to perform well, and oil and commodity stocks might have further to gain after a successful 2016. On balance, I expect the FTSE 100 share index to breach 7,500 by the year end, but it’s probably going to be a bumpy ride.
UK banks
High-street banks have had a tough time since the financial crisis, and shareholders have been waiting patiently for a meaningful recovery in value. Although they were rescued from financial failure (either by Government or investors), banks have since struggled to rebuild their profitability. There have been many headwinds, most notably significant increases in the amount of regulatory capital they must hold and both substantial fines and compensation payments for a multitude of misdemeanours. Such headwinds have been faced at a time when low interest rates have compressed margins and stifled operating profits. However, rising interest rates and bond yields will act both to improve margins and to reduce the cost of regulatory capital. This should lead to a substantial improvement in profitability and share price gains that exceed the FTSE 100 overall.

To conclude, investors will need to consider how to rebalance their financial strategies to best effect at this crucial time. If you would like some help with this, we can add value to the process; please give us a call to discuss your options and find out how our ideas could provide your long-term finances with a welcome boost.

FEBRUARY 2017 | essence-magazine.co.uk 57

A global outlook on education
Jeremy Lewis, Head of School at ACS Egham International School, discusses the importance of teaching children to be 'global citizens' from a young age. Learning to appreciate and absorb cultural differences is just part of everyday life in an international school.

Would you like your child to emerge from education as a confident, independent young adult, with a full and life-long appreciation and understanding of other cultures?
To possess invaluable, lifelong interpersonal skills, such as an ability to forge new friendships quickly and easily, network with confidence and communicate well with others, having developed into a caring, global citizen? Well, who wouldn't? I'm sure we all want this and more for our children, especially in today's challenging, changing and uncertain world. And there is one form of education that can help deliver these attributes more than perhaps any other: an international education in an international school.

Developing global citizens
Learning to appreciate and absorb cultural differences is just part of everyday life in an international school. From a young age, students integrate with many different cultures, equipping them with the life skills to be global citizens and instilling in them a broad outlook. In a classroom with peers and teachers representing numerous different cultural backgrounds and roots (over 60 nationalities make up the ACS Egham school community, for example), all comparing and sharing different perspectives and ideas, students cannot help but develop a deeper and broader understanding across complex subjects. This diversity enables common themes, such as war and conflict, to be analysed from different viewpoints and cultural experiences. And while many students are expatriates, it is local, UK families who are joining in ever-increasing numbers as awareness grows of the life-long benefits a truly international education can give. Anyone looking for truly different perspectives on an issue could not do better than to step into an international school classroom!

Different perspectives develop outward-looking students
Students also establish lasting friendships with peers from around the world, and for them nationalities are not a label or a defining characteristic. They readily share their experiences from their home culture or places they have lived and celebrate their diversity.
This naturally develops a global outlook which extends way beyond the classroom too – many ACS students have worked with international development projects in Nepal and India, for example, as well as getting involved with local community and environmental projects. Like all international schools, ACS is accustomed to looking beyond national boundaries to global horizons, and this unique, multi-cultural learning environment, beginning from a young age, benefits children later on in adult life. It provides students with the global perspective and social skills necessary to interact with a range of people in a variety of academic, social and – further down the line – professional environments.

Qualifications with a global outlook
Qualifications and learning programmes that extend beyond national boundaries have to be central to an international education. Such programmes have international recognition not for their name alone, but for their academic rigour and as a positive indicator of the personal development of an individual. One of the leading education programmes in this respect is the International Baccalaureate (IB), a programme often referred to as the global educational passport. ACS Egham, one of three ACS International Schools in the UK, is the first and only school in the UK fully authorised to provide all four International Baccalaureate programmes from age three through to 18. The programme's aims are outlined in the IB mission statement, to develop inquiring, knowledgeable and caring young people, and it is widely commended for its academic integrity, development of key skills, and the global awareness that it instils in students. Indicative of this, the IB is consistently cited by university admissions officers as the best preparation for higher education over other traditional UK curriculums.
Admissions officers believe that IB students cultivate vital aptitudes needed to thrive at university, including an ability for independent enquiry, self-management skills and open-mindedness. IB students can go on to university anywhere in the world, with ACS students going to the US, UK and elsewhere across the globe. International schools by definition have a unique advantage in helping students learn to see the world through others' eyes, and this grounding in their formative years, alongside gaining highly regarded qualifications, sets them up for successful futures anywhere in the world. To find out more about ACS Egham, or to register for an Open Morning, visit.

essence INFO
ACS Egham International School
London Road, Egham, Surrey TW20 0HS
Website:
Telephone: 01784 430800

Mystery, awe and enchantment
Rebecca Underwood samples the delights of a city seemingly miraculously built on water. Despite crowds in summer, visitors are never more than a bridge or alley away from a secluded square in Venice.

Venice was founded in the fifth century and is made up of 118 small islands on a lagoon. Isolated by canals and connected by bridges, the city remains one of the world's most popular tourist destinations. Parts of the city and lagoon were awarded UNESCO World Heritage status in 1987. Most visitors begin their explorations at San Marco, the magnificent central square, which Napoleon Bonaparte called 'the drawing room of Europe'. Here the atmosphere is lively: amid the hustle and bustle of the hoi polloi, street hawkers jostle for position and busy cafés serve frothy cappuccinos throughout the day. I headed for the Florian Café, located on San Marco under the canopy of the arcade. Established in 1720, it's said that many members of Venetian society, including playwright Carlo Goldoni and author Giacomo Casanova, were regular patrons.
Today, English afternoon tea is served on silver trays by sharply dressed waiters and includes a tasty selection of delicate sandwiches and delicious scones oozing with fruity jam and cream. As diners take in the grandeur of the neo-baroque surroundings, harmonious strains of classical music played by the resident orchestra drift across the café, adding to the ambience. After tea, a leisurely stroll around the square leads to St Mark's Basilica. The relics of St Mark were presented to Giustiniano Particiaco, the Doge of Venice; the original church that housed them was later set on fire. Rebuilt over the next two years, it was consecrated in 1094 following the rediscovery of the relics of St Mark, which were found secreted in a pillar.

TRAVEL TIP: For a selection of tours or tailor-made boat excursions, Dogaressa Tours operates a fleet of original Venetian boats, including the Bragòzzo, which can accommodate a large group. For more information visit.

Beside the Basilica and joined by a ceremonial entrance, the 'Porta della Carta' leads into the Doge's Palace, a Gothic architectural masterpiece and major landmark. The building, erected in 1340, served as the centre of government and the official residence of the Doge of Venice. Today, visitors wander around the courtyards, the Doge's living quarters, and the grand halls of the institutional chambers, which feature frescoed walls, gold-plated ceilings, elaborate murals and magnificent works of art, including Tintoretto's Il Paradiso, completed in 1577 and located in the Sala del Collegio. For a first-rate Venetian lunch, the Ristorante Al Giardinetto da Severino, located on Sestiere Castello, is only a five minute stroll from St Mark's Square. The property dates back to the fifteenth century and has been managed by the same family since 1949.
I opted for the scrumptious fegato alla veneziana (Venetian-style calves' liver), and the glass of Solaia 1988 complemented the flavours perfectly. Another famous landmark is the Bridge of Sighs, connecting the Doge's Palace with the prison. The graceful Baroque arch features two intricately carved stone grills, and it is believed prisoners on their way to the cells would sigh at their last glimpse of Venice. Casanova was moved to a shared cell and provided with better food, bedding and books. Finding a piece of marble on his daily exercise walk, he carved it into an implement for digging and, aided by a fellow prisoner, eventually made his escape.

…a selection of suites… …crumb crust with a Sorrento lemon sauce, accompanied by a glass of the delicious Orto Venezia 2011: the only wine produced on the Venetian island of St. Erasmus. For visitors on a limited budget, the Hotel Gabrielli, which has been in business since 1856 and is also located on the Riva Degli Schiavoni, offers a traditional Venetian experience. The hotel comprises four inter-connecting houses and there is a rooftop terrace which is the ideal spot for an afternoon tipple. The Restaurant Gabrielli offers a wide choice of dishes… …fifteenth century recipe.

After dining, take a gondola or boat ride along the Grand Canal: sail underneath the charming Rialto Bridge and along the meandering narrow waterways. It is a truly enchanting experience and provides an opportunity to fully appreciate the city's exceptional examples of Gothic and Renaissance architecture. It has recently been well reported that tourist numbers may have to be restricted, but visitors can avoid the crowds by travelling at this time of year, except, of course, during the carnival weeks in February.
"Venice is like eating an entire box of chocolate liqueurs in one go." TRUMAN CAPOTE

essence events

spotlight on... Colour! Print by John Hoyland and Glass by Paul Stopler
New Ashgate Gallery, Farnham
Until Saturday 25 February
Colour! brings together two artists: John Hoyland, a leading British painter, and Paul Stopler, an exciting name in glass art. John's latest works, Life and Love and Warrior Universe (pictured right), display the freedom of composition and powerful use of colour for which he is so famous. Each has a limited edition of only 100, all signed and numbered by the artist. Other New Ashgate Gallery exhibitions this month include: Trevor Price: New Print and Painting with Wire Sculpture by Jane Clift, David Mayne: The Sculptor and Craig Underhill: Maker in Focus, all ending on Saturday 25 February. And, finally, the Gallery's Spring Craft Collection is on view until Saturday 29 April.
Information: 01252 713208 or newashgate.org.uk

theatre

Richmond Theatre, Richmond
Monday 6 to Saturday 11 February: Million Dollar Quartet. Hit musical starring Jason Donovan.
Sunday 12 February: Al Murray. The Pub Landlord is back.
Monday 13 to Wednesday 15 February: Matthew Bourne's Early Adventures. A production celebrating the company's thirtieth anniversary.
Tuesday 21 to Saturday 25 February: The Miser. Moliere's classic comedy starring Griff Rhys Jones and Lee Mack.
Monday 6 to Saturday 11 March: Gaslight. Thriller starring Kara Tointon.
Tickets: 0844 871 7651 or ambassadortickets.com/richmond

New Victoria Theatre, Woking
Monday 6 to Saturday 11 February: Not Dead Enough. The premiere of Peter James' novel starring Laura Whitmore.
Tuesday 14 to Sunday 19 February: Lord of the Dance: Dangerous Games. Directed by Michael Flatley.
Tuesday 21 to Wednesday 22 February: Ellen Kent Opera. A presentation of Puccini's La Boheme and Verdi's Aida.
Saturday 25 February: Mr Bloom's Nursery Live. A family show full of songs, play and interaction.
Monday 27 February to Saturday 4 March: Ghost the Musical. A reimagining of the classic movie.
Tickets: 0844 871 7645 or atgtickets.com/woking

New Wimbledon Theatre, Wimbledon
Thursday 16 to Sunday 19 February: Cirque Berserk. Described as 'real circus made for theatre', showcasing traditional circus skills, Cirque Berserk includes over thirty jugglers, acrobats, aerialists, dancers, drummers and stuntmen.
Information: 0844 871 7646 or atgtickets.com/wimbledon

Cranleigh Arts Centre, Cranleigh
Thursday 23 February: Seann Walsh. Observational comedy.
Information: 01483 278000 or cranleighartscentre.org

Gag House Comedy, The Back Room of The Star Inn, Guildford
Saturday 18 February, 8pm: The best stand-up comedy.
Information: gaghousecomedy.com

Dorking Halls, Dorking
Sunday 26 February: Sean Lock: Keep It Light. Comedian with new stand-up show.
Friday 3 March: Rob Brydon: I Am Standing Up... Comic returns to stand-up.
Information: 01306 881717 or dorkinghalls.co.uk

Guildford Shakespeare Company, Holy Trinity Church, Guildford
Saturday 4 to Saturday 25 February: Julius Caesar. Shakespeare's compellingly tense thriller has resonance today...
Information: 01483 304384 or guildford-shakespeare-company.co.uk

Harlequin Theatre, Redhill
Saturday 11 February: Lily and the Little Snow Bear. Family show, perfect for half term.
Information: 01737 276500 or harlequintheatre.co.uk

Electric Theatre, Guildford
Saturday 11 February: Round the Horne: The 50th Anniversary Tour. Experience the comedy classic live.
Information: 01483 444789 or electrictheatre.co.uk

Epsom Playhouse, Epsom
Friday 17 February: Lady Chatterley's Lover. Passionate tale of a dramatic love triangle. Please note performance contains full frontal nudity.
Information: 01372 742555 or epsomplayhouse.co.uk

Rose Theatre, Kingston-upon-Thames
To Saturday 11 February: Silver Lining. A new comedy by Sandi Toksvig.
Tuesday 14 to Saturday 18 February: Room on the Broom. Another half term treat, a show based on the classic picture book.
Saturday 25 February to Sunday 2 April: My Brilliant Friend. A two-part dramatisation of Elena Ferrante's Neapolitan quartet of novels.
Information: 020 8174 0090 or rosetheatrekingston.org

Farnham Maltings, Farnham
Thursday 9 February: David Starkey – Henry VIII. Historian illuminates both the Tudor age and our own.
Tuesday 21 February: Andy Parsons. Great stand-up comedian.
Information: 01252 745444 or farnhammaltings.com

G Live, Guildford
Wednesday 15 to Thursday 16 February: Half term Animation Nation. Stop motion animation workshops.
Information: 01483 369350 or glive.co.uk

Yvonne Arnaud Theatre, Guildford
Saturday 18 to Saturday 25 February: Guys and Dolls. Romantic musical comedy containing the classic tune Luck Be A Lady.
Monday 27 February to Saturday 4 March: The Sound of Murder. Gripping thriller.
Information: 01483 440000 or yvonne-arnaud.co.uk

spotlight on... Butterflies in the Glasshouse
RHS Garden Wisley, Woking
Until Sunday 5 March
The butterflies return to RHS Wisley where visitors can see these exotic creatures take flight in the tropical atmosphere of the Glasshouse.
The experience is a little like walking into a jungle: tree ferns, lush creepers and fantastic flower displays provide the perfect backdrop to butterflies, including the striking Blue Morpho (pictured left), giant owl, king swallowtail and colourful Malay lacewing. This is a chance to see butterflies feeding from fruits and sweet liquids at special feeding stations and the opportunity to learn more about the life cycle of these stunning creatures. During half term week (11 to 19 February), varying special activities will run every day, including the chance to create butterfly-themed weather crafts. In addition, sculptor Alison Catchlove returns with her animal sculptures. Normal garden admission applies.
Information: 0845 260 9000 or rhs.org.uk/wisley

music

Guildford International Music Festival, various venues, Guildford
Friday 24 February to Sunday 5 March: A ten day celebration of live music exploring the interaction between music and the arts with science and technology. Just a few of the performers will include a cappella ensemble Apollo 5, National Youth Jazz Orchestra and a UK premiere of StopGap Dance Company's latest work 'The Enormous Room'. See website for details.
Information:

Southern Pro Musica, G Live, Guildford
Sunday 26 February, 1pm: Family Concert. A musical treat for all the family, with hands-on workshops.
Information: southernpromusica.org

Epworth Choir, Trinity Methodist Church, Brewery Road, Woking
Saturday 25 February, 11am–5pm: Come and sing Rutter's Requiem. All comers are invited to a day's workshop on Rutter's beautiful Requiem, culminating in an informal performance of the work.
Information: epworthchoir.org

Vivace Chorus and the Brandenburg Sinfonia, Guildford Cathedral
Saturday 4 March, 7.30pm: Performances of Mozart's 'Great' Mass in C Minor, K.427, Howells' Requiem and Barber's Adagio, conducted by Jeremy Backhouse. A free pre-concert talk takes place at 6.30pm in The Chapter House.
Information: 01483 444333 or vivacechorus.org

Harlequin Theatre, Redhill
Saturday 18 February, 7.30pm: Barbara Dickson. Multi-million selling singer performs.
Friday 3 March, 7.30pm: Gilbert O'Sullivan in concert. On his 50th anniversary tour.
Information: 01737 276500 or harlequintheatre.co.uk

G Live, Guildford
Friday 10 February, 7.30pm: The Ukulele Orchestra of Great Britain. A concert of all genres of music played exclusively on ukuleles.
Thursday 23 February, 8pm: Fairport Convention. Band celebrates fiftieth year.
Friday 24 February, 8pm: Judy Tzuke. A special acoustic evening.
Information: 01483 369350 or glive.co.uk

Occam Singers, St Nicolas' Church, Guildford
Saturday 25 February, 7.30pm: Arvo Pärt: Passio (St John Passion). With the New London Sinfonia.
Information: occamsingers.co.uk

festivals

The Electric Theatre Family Festival, The Electric Theatre, Guildford
Wednesday 15 to Monday 20 February: Including theatre, film, workshops, puppetry, dance, arts and crafts and storytelling.
Information: electrictheatre.co.uk

The Electric Film Festival, The Electric Theatre, Guildford
Tuesday 21 February to Saturday 4 March: An Oscar-themed film festival with showings of Gone With The Wind and The Lord Of The Rings.
Information: electrictheatre.co.uk

Unravel... a festival of knitting, 2017, Farnham Maltings, Farnham
Friday 17 to Sunday 19 February, various times: The ninth year of this independent festival sees it creating a unique 'Knit Aviary' and asking crafters to donate a handmade bird to add to the special display being created. These will then be sold off after the Festival, with all money raised going to Step by Step, a young people's charity.
Information: farnhammaltings.com

Cranleigh Literary Festival, Cranleigh Arts Centre, Cranleigh
Tuesday 28 February to Saturday 4 March: The Centre's second annual Festival, coinciding with World Book Day on 2 March, includes live theatre and creative workshops.
Information: 01483 278000 or cranleighartscentre.org

exhibitions

Dorking Museum, Dorking
Thursdays, Fridays and Saturdays throughout February: Medieval Betchworth. An exhibition which depicts life in a medieval village.
Information: dorkingmuseum.org.uk

Ramster Hall, Chiddingfold
Friday 10 to Sunday 26 March: Embroidery and textile exhibition. Over 200 pieces of embroidery and textile art will be on display and for sale. Visitors will also be able to explore the beautiful garden.
Information: ramsterevents.com

Watts Gallery, Compton, Guildford
To Sunday 19 February: Untold Stories: British Art from Private Collections. Great works of art usually kept behind closed doors.
To Sunday 19 February: Mary Wondrausch: A Return to Painting. New paintings and collages from leading ceramic artist.
Tuesday 28 February to Sunday 5 November: Watts 200: A Life in Art: G F Watts 1817–1904. Marking the great artist's life with a timeline highlighting key occasions in Watts' career.
Information: 01483 813593 or wattsgallery.org.uk

The Lightbox Gallery and Museum, Woking
To Sunday 7 May: Henry Moore: Sculpting from Nature. Featuring over 50 artworks from arguably the greatest British sculptor of the twentieth century. Pictured: Henry Moore (1898–1986), Head, 1984 © Reproduced by permission of The Henry Moore Foundation.
Information: 01483 737800 or thelightbox.org.uk

The Art Agency, Esher
Throughout February: Featured artist: Parastoo Ganjei. Talented painter's works on display.
Information: 01372 466740 or theartagency.co.uk

national trust
National Trust properties offer perfect venues to explore during any season. A few are shown here, but visit nationaltrust.org.uk for more.

Claremont Landscape Garden, Esher
Saturday 11 to Tuesday 14 February: Valentine Lovers' Walk. Enjoy a Valentine's walk along a specially decorated trail and discover a regal romance.
Information: 01372 467806

Hatchlands Park, East Clandon, Guildford
Saturday 11 to Sunday 19 February, 10am–4pm: February half term. Explore, build a den, plus lots more.
Information: 01483 222482

Polesden Lacey, Great Bookham, near Dorking
To Thursday 30 March, 10am–4pm: Rules and Rollerskates: the perks of life in service. Discover the work-life balance of Mrs Greville's servants.
To Tuesday 14 February, 10am–5pm: The Love Tree. Tie notes of love to the beech tree on the South Lawn.
Saturday 11 to Sunday 19 February, 10am–3pm: Secret servants: half term trail. Discover Polesden's secrets on this special half term trail.
Information: 01372 452048

Runnymede & Ankerwycke, near Egham and Wraysbury
Saturday 11 to Sunday 19 February, 10am–4pm: Historic meadows children's trail. Explore the history of the Runnymede meadows and popular pastimes across the ages.
Sunday 12 February, 11am–12.30pm: Snowdrop walk. Tour Ankerwycke's historic parkland and enjoy the spectacular show of snowdrops.
Information: 01784 432891

Winkworth Arboretum, Godalming
Saturday 11 to Sunday 19 February, 10am–4pm: Children's half term trail. Discover and explore this beautiful arboretum following a special children's trail.
Friday 24 February, 11am–12.30pm: Exploring the Arboretum. Meet at the kiosk for a guided walk around the wintry Winkworth landscape. Well behaved dogs on a short lead welcome.
Information: 01483 208936 or nationaltrust.org.uk

out & about

Albury Vineyard, Silent Pool, Albury
Saturday 25 February: Meet the vineyard manager. Visit Albury for a vine pruning demonstration and a fun, informative and hands-on session at the vineyard. Ideal for gardeners and those interested in viticulture.
Information: 01483 229159 or alburyvineyard.com

Birdworld, Farnham
Monday 13 to Friday 17 February and Monday 20 to Friday 24 February: Penguin activity half term. Penguin-themed arts and crafts and trail around Birdworld.
Information: 01420 22992 or birdworld.co.uk

Bocketts Farm, Leatherhead
Saturday 11 to Sunday 19 February: February half term fun. Enjoy a day on the farm at the start of spring lambing. Birds of Prey will be on display Monday to Friday. In addition, activities will include pony and tractor rides, animal handling and pig racing.
Information: 01372 363764 or bockettsfarm.co.uk

Godstone Farm, Godstone
Saturday 11 to Sunday 19 February: Woolly Week. All the regular fun at the farm plus a new craft and play activity: Sheep Herding Hustle.
Information: 01883 742546 or godstonefarm.co.uk

Brooklands Museum, Weybridge
Monday 13 to Friday 17 February, 10am–4pm: Half term family fun. Take part in car rides, see Bertie the Brooklands Bear, try out the aviation family workshop 'Build your own Race Car' and take part in the children's tours on Concorde.
Sunday 19 February, 8–9.45am: Brooklands Winter Classic Breakfast. Served in the Sunbeam Café and Napier Room.
Information: 01932 857381 or brooklandsmuseum.com

Hampton Court Palace, Hampton Court
Saturday 11 to Sunday 19 February, 11am–4pm: Love at the Tudor Court. A day out for the whole family with special activities for children including The Great Palace Quest. Collect a quest map, meet courtiers, solve puzzles and create a friendship gift.
Information: 0844 482 7777 or hrp.org.uk

Haslemere Museum, Haslemere
Tuesday 14 February, 10.30am–1.30pm: Baubles and bow ties. Make colourful jewellery and bow ties using recycled materials.
Information: haslemeremuseum.co.uk

Mane Chance Sanctuary, Godalming
Friday 24 February, 7.30pm: Mane Chance bumper table quiz. Fundraising table quiz in aid of this fantastic charity which provides sanctuary to horses.
Information: 01483 351526 or manechancesanctuary.org

Surrey Wildlife Trust, Furzefield Wood, Merstham
Tuesday 14 February, 10am–3.30pm: Wild explorers. For seven to eleven year olds with games, den building and more.
Information: 01483 795440 or surreywildlifetrust.org/events

Surrey Half 2017, Woking and Guildford
Sunday 12 March, from 9am: Take part in the half marathon race, kids' race or 5km race.
Information: surreyhalfmarathon.co.uk

farmers' markets
Camberley: Saturday 18 February, 10am–3pm
Cranleigh: Every Friday, 9.30–11am
Epsom: Sunday 5 February and 5 March, 9.30am–1.30pm
Farnham: Sunday 26 February, 10am–1.30pm
Guildford: Tuesday 7 February and 7 March, 10.30am–3.30pm
Haslemere: Sunday 5 February and 5 March, 10am–1.30pm
Milford: Sunday 19 February, 10am–1.30pm
Ripley: Saturday 11 February, 9am–1pm
Walton-on-Thames: Saturday 4 February and 4 March, 9.30am–2pm
Woking: Thursday 2 February and 2 March, 9am–2pm

Interiors | The Reclaim Workshop
Natural craft
The Reclaim Workshop team is dedicated to creating high quality, stunning kitchenware made solely by using reclaimed wood. Our team of craftsmen has refined their craft to offer the perfect piece for your kitchen. Walking into the workshop, your senses are struck by the sights, smells and sounds.
The sight of oak and mahogany in their natural state is a thing of beauty; the smell of different woods being sawn and planed is a delight. The quality of work is stunning, ranging from small chopping boards to large kitchen islands topped with end-grain butchers' blocks. It's obvious these handcrafted products are lovingly finished down to the last detail. The end-grain cutting blocks are arranged in random patterns, giving each piece a unique signature not found in pieces that use uniformly sized wood. The larger kitchen islands contain over 1,200 pieces and are priced from £1,900 to £4,900, reflecting the craftsmanship and quality.

essence INFO
The Reclaim Workshop
Unit 1e Oldknows Factory, Egerton Road, Nottingham NG3 4GQ
Website:
Telephone: 0115 837 6161
Email: jeanjacquesedward@gmail.com

The V&A Classic Paint Collection, developed in close collaboration between Master Paintmakers and the V&A, uses the finest pigments to achieve the highest quality paint product available in its category. Full range available Spring 2017.

Tradition and cutting edge
Internationally known for its excellence and creative spirit, it's the passion and skill of the hands of those who work the ceramics that is at the heart of Bitossi Ceramiche. Jane Pople takes a close look at the renowned brand.
Pictured: Aldo Londi; the Bitossi workshop; a Bitossi decorative vase; Rimini Blu Scimmia, Toro and Ippopotamo; Guadalupe vase by designer Bethan Laura Wood. All images courtesy Bitossi Ceramiche.

From first principles
The London based design studio 1508 is known for creating exceptional residences and interior spaces. With a vast portfolio of the most remarkable interiors and architecture, the company has created over 35 projects with a combined value in excess of £226 million in more than ten countries across the globe. This year sees 1508 celebrating its eighth anniversary. To commemorate this, Jane Pople caught up with talented Creative Director Louise Wicksteed, who shared her best advice and lessons learnt.

Q: What led to the inception of 1508 London and what was the hardest challenge you faced when starting the business?
A: 1508 was set up as a high-end design practice, working around the idea of a collaborative young design studio combining a multi-disciplinary team of architects and interior designers who could create and deliver great projects. We wanted to approach design from a set of principles rather than having a particular house style. Our hardest challenge is always to maintain a certain quality of design in our projects and learning to work with lots of different clients and different scales of projects.

Q: What was the biggest lesson learnt in your first years in the industry?
A: You are always learning so many things, be it about people – clients or working with other designers – about business and obviously about design. I think the biggest lesson is to retain a collaborative approach with both your client and within a studio environment. You can't be too dictatorial.

Pictured: Louise Wicksteed, Creative Director & Partner, 1508 London. All images courtesy 1508.

Q: Do you have a favourite or most memorable project?
A: I actually don't have a particular favourite as each project is such a different experience. They all excite me in a different way – every project is an opportunity!

Q: What's the best advice you have ever received?
A: Not to compare yourself to others.

Q: Do you think it is harder for women to get ahead in the design industry – and if so, how would you change it?
A: Within interior design, I don't think it's harder for women to get ahead when they first graduate. To give an example, we currently employ a 60:40 ratio of women to men, and I have consistently come across lots of successful women at all levels throughout the industry. I think as women progress through their careers and start to have families, this is when discrepancies start to creep in, as this is when men pull ahead. With maternity leave and then after having a family, lots of women cannot work the same hours they did previously, so they can be overlooked for promotion and development – which is wrong.
I think these are issues that affect lots of different sectors and they need to be addressed at a wider level by government and business owners to help prevent this from happening – by encouraging shared maternity/paternity leave, reducing child care costs and supporting women when they return to work.

Q: Can you tell us what a typical day looks like for you?

A: I try to get into the studio for around 8–8.30am as I like the quiet time to catch up before everyone starts arriving. Once everyone is here, I will spend my time either in design crits for the various projects we are working on, in design meetings with clients or working on presentations/designs myself. I like to set the concept and narrative for every project, so I will do a lot of work at the beginning helping to define the design direction.

Q: How would you describe your own interior style and what is your favourite room in your home and why?

A: My own style is relatively understated. I enjoy collecting furniture and artwork from antiques fairs and various travels so it is quite eclectic, but in a controlled way! My favourite room will be our large, sun-filled kitchen when it is finished (we are currently in construction) as it looks out over our garden and I like that connection to nature.

Q: If you could collaborate with any fashion designer on a fashion-inspired collection for the home, who would it be and why?

A: Probably Céline as they are very much design led, very understated, but the quality of craftsmanship is amazing and their products always have a quirky high fashion edge.

Q: If you didn’t work in the design industry, what would you be doing?

A: I would love to be a fine artist or a sculptor.

Q: Can you tell us about any current projects you are working on or what’s next for 1508?

A: We have some great private client projects in London, but our next exciting venture is moving into hospitality and we are currently working on our first full hotel project in the Middle East.
We are also looking forward to the launch of the Lanesborough Private Members Club and Spa next spring, for which we did all of the interiors.

essence INFO
1508 London, Howick Place, Westminster, London SW1P 1BB
Websites:
Telephone: 020 7802 3800
This article first appeared in The Lux Pad, Feb 1, 2017.
https://issuu.com/essence-magazine/docs/ess78_highres
That works with the exclude-result-prefixes avoiding the namespace, thanks. But now I face the following: originally I wanted to use the name of the root (or of the next child) to loop over in a for-each, as below

<xsl:for-each /* or this notation <xsl:for-each*/

which does not work. I get no elements from the XML tree because the XPath does not work. I can't work with the XML, for example say:

<xsl:variable
<xsl:value-of
</xsl:variable>
<xsl:for-each
<Version>
<xsl:value-of</Version>
</xsl:for-each>

The output is: <Version/>

but the following works:

<xsl:for-each
<Version><xsl:value-of</Version>
</xsl:for-each>

Output: <Version>1.26</Version>

brg

David Carlisle <davidc@xxxxxxxxx> wrote at 13:36 on Wednesday, 6 November 2013:

On 06/11/2013 12:24, henry human wrote:
> There is still one issue,
> because the root element is dynamic, sometimes I need to get the root name:
> <xsl:variable
> <xsl:value-of
> </xsl:variable>

Root is a bad name (since in XPath the root is / which is the parent of the element that you want); you want /*[1]. (It is much better to use * rather than node() here, otherwise a comment or processing instruction will break your code.) Don't use a variable with content as that generates a temporary tree; you just want a string, so use <xsl:variable

> I try it in output
> <tesTag>
> <xsl:value-of
> </tesTag>
>
> As a result I get the right root name, RootElement, but the namespace occurs in the output too:
> <tesTag xmlns:ns1="/>RootElement</tesTag>
>
> The namespace is not in the XML; it is actually defined in the XSL stylesheet header!!

That is the standard behaviour for literal result elements: the declared namespace is in scope. You can use exclude-result-prefixes on your xsl:stylesheet element to stop that.
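For comparison outside XSLT, the same lookup — "what is the name of a root element that isn't known in advance?" — can be sketched with Python's standard library; the element names below just mirror the example output from the thread:

```python
import xml.etree.ElementTree as ET

# A document whose root element name is not known in advance.
doc = "<RootElement><Version>1.26</Version></RootElement>"

root = ET.fromstring(doc)         # the element that /*[1] selects in XPath
print(root.tag)                   # -> RootElement
print(root.find("Version").text)  # -> 1.26
```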
http://www.oxygenxml.com/archives/xsl-list/201311/msg00054.html
Doug Cutting commented on HADOOP-1298:
--------------------------------------

> I don't see why you want to OR things. I don't see any possible use for that [...]

The OR use is when constructing a permission to pass to chmod(). The AND use of the constants is when testing permissions, e.g. (from above):

public boolean isOwnerWritable() {
  return (permissions & OWNER_WRITABLE) != 0;
}

> this separates the FileStatus from the Permissions

Why separate when we can share? DFSFileInfo should extend FileStatus.

> How? What does it have to do with file status?

FileStatus's javadoc should define the meaning of bits in the integer representation of file permissions. Chmod() takes an integer file permission, and should reference FileStatus's definition of that representation, no?

> I don't think that a change to the clientside representation of permissions should cause a change to the DFS representation of permissions.

These are both in the same project. Sharing's okay here. If we someday need to use a different representation inside HDFS then we can do that then. Until then, we should avoid replicating logic.

> Permissions is a structure internal to the INode. DFSFileInfo objects are just a way to return immutable state to the client.

Good point. So we keep Permissions, or turn it into fields of INode. Still, I don't see why DFSFileInfo shouldn't extend FileStatus.

> We could move the statics to another class that is instantiated on a per name node basis.

I'd prefer it to reside in FSDirectory as that is a more reasonable place to me.
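The OR-to-construct / AND-to-test convention discussed in the comment can be sketched generically; the constant names mirror the comment, but the bit values below are illustrative, not Hadoop's actual encoding:

```python
# Illustrative permission bits (values made up for the sketch).
OWNER_READABLE = 0o400
OWNER_WRITABLE = 0o200

def is_owner_writable(permissions):
    # AND use: test whether a single bit is set in the packed integer.
    return (permissions & OWNER_WRITABLE) != 0

# OR use: combine bits to build a permission value for a chmod()-style call.
perms = OWNER_READABLE | OWNER_WRITABLE

print(is_owner_writable(perms))           # True
print(is_owner_writable(OWNER_READABLE))  # False
```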
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200705.mbox/%3C13834615.1178748257566.JavaMail.jira@brutus%3E
Hi all, I'm a beginner when it comes to coding, and I've never used the TeamViewer API before. Could someone help me just to get started with the API in Python, and show me some code example on how to, let's say, create / list all sessions? I created an app, and got a client id and client secret. Here's a little sample:

import requests

client_id = 'XXXX-XXXXXXXX'
client_secret = 'XXXXXXXXXXXX'
url = ''
base_url = ''
headers = {"content-type": "application/json",
           "Authorization": "Bearer " + client_id}

test = requests.get(base_url + 'sessions', headers=headers)

All I get is <Response [401]>. If someone could provide a short example just to get me going, that would have been great.

Not sure if you're still looking, but I just came across this:
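A 401 here is an authorization failure: the sample sends the OAuth client id as the bearer token, but the Web API expects an access token (or a "script token" created in the Management Console). A hedged sketch of building the header — the token value below is a placeholder, not a real credential:

```python
def build_auth_headers(access_token):
    # The Authorization header must carry a bearer *access token*,
    # not the OAuth client id -- sending the client id is a common
    # cause of <Response [401]>.
    return {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + access_token,
    }

headers = build_auth_headers("YOUR-ACCESS-TOKEN")  # placeholder value
print(headers["Authorization"])  # -> Bearer YOUR-ACCESS-TOKEN
```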
https://community.teamviewer.com/t5/API-and-Scripting/Quick-help-on-how-to-get-going-with-Python/m-p/84551/highlight/true
Enlive is a selector-based (à la CSS) templating library for Clojure.

David Nolen wrote a nice tutorial. Another tutorial is by Brian Marick. There's a quickstart section in Clojure Cookbook.

Where do I get support?

On the Enlive Google Group

Artifact

All artifacts are published to clojars. Latest version is 1.1.6:

[enlive "1.1.6"]

What's new in Enlive?

(most recent first)

1.1.6:
- ADD: exception message when html-resource not found.
- FIX: auto-reload on windows (also works with chestnut).

Auto-reloading (1.1.2)

(net.cgrand.reload/auto-reload *ns*)

Each time a resource or file used by a template/snippet is updated the namespace is reloaded (as per (require ... :reload)).

Misc

- Perf improvements
- Fixes to ${vars} substitutions

Pluggable parsers! (1.1.1)

The *parser* dynamic var controls the parser to be used by html-resource at runtime (or you can pass {:parser XXX} as an additional arg). For templates and snippets whose sources are not read dynamically, you can opt for another parser either locally:

(deftemplate ugh {:parser xml-parser}
  (java.io.StringReader. "<a><div>hello</div></a>")
  [])

or globally for the declaring ns:

(set-ns-parser! xml-parser)

A parser is a function from InputStream to nodes. xml-parser, net.cgrand.tagsoup/parser and net.cgrand.jsoup/parser are the three builtin ones.

${vars} substitutions (1.1.1)

The following selector + function is going to replace any ${var} in text and attributes by the value found in the map (or any function).

[:#container any-node] (replace-vars {:name "world" :class "hello"})

hiccup-style helper (1.1.0)

(content (html [:h3#hello "Hello world"]))

older stuff

By default selector-transformation pairs are run sequentially. When you know that several transformations are independent, you can now specify (as an optimization) to process them in lockstep.
Note that this doesn't work with fragment selectors. Example:

[:a :selector] a-transformation
[:another :selector] another-transformation
[:a :dependent :selector] yet-another-transformation

If the first two transformations are independent you can rewrite this code as:

:lockstep
{[:a :selector] a-transformation
 [:another :selector] another-transformation}
[:a :dependent :selector] yet-another-transformation

Transformations are now slightly restricted in their return values: a node or a collection of nodes (instead of freely nested collections of nodes).

Dynamic selectors: selectors aren't compiled anymore. It means that you don't need to wrap them in (selector ...) forms anymore nor to eval them in the most dynamic cases.

Fragment selectors allow to select adjacent nodes. They are denoted by a map of two node selectors (eg {[:h1] [:p]}), bounds are inclusive and they select the smallest matching fragments.

Transformations (the right-hand parts of rules) are now plain old closures. These functions take one arg (the selected node) and return nil, another node or a collection of nodes.

Rules are applied top-down: the first rule transforms the whole tree and the resulting tree is passed to the next rules.

Nodes are transformed deep-first, that is: if a selector selects several nodes, descendants are transformed first. Hence, when the transformation is applied to an ancestor, you can "see" the transformed descendants (but you can not see your transformed siblings). If A and B are selected and transformed by T then the resulting tree is:

   /B            /(T B)
  A      =>  (T A)
   \C            \C

Concepts

snippet is a unit of your page. It may be a logical or visual entry, such as a header, footer or page element. A snippet is usually a part of a template, and may serve as a container for other snippets — for example, for navigation on the web page. For that, let's first define an html template for the navigation.
Snippets are created by using the net.cgrand.enlive-html/defsnippet function and, same as templates, they require a corresponding HTML template file to be available in the classpath. A snippet function returns a seq of nodes, so it can be used as a building block for more complex templates.

templates combine snippets together; they serve as a foundation for the snippets. In order to create a template, you can use the net.cgrand.enlive-html/deftemplate function. deftemplate is used as something you would call a layout in some other templating systems. In essence, it's either a self-contained page (rarely true in bigger applications) or a container for snippets. That said, a template returns a seq of strings — basically it's a snippet whose output is serialized. Templates return a seq of strings to avoid building the whole string.

Templates and snippets transform a source (specified as a path (to access resources on the classpath), a File, a Reader, an InputStream, a URI, a URL, an element or a seq of nodes).

The next concept is selectors, which are used within snippets and templates to identify the block of HTML code the transformation would be applied to. They're very similar to CSS selectors, but also allow more sophisticated, predicate-based selections — for example, you can select a tag based on some part of its content, or on an attribute.

Transformations are functions that are triggered on the elements found by selectors. They receive the content obtained by the selector, and modify it in some way.

Quickstart tutorial

Template

If you want to see the compiled version of the following steps all in one place, you can check out an example Ring application.

The first thing you need to do is define your first template:

(require '[net.cgrand.enlive-html :as html])

(html/deftemplate main-template "templates/application.html" [])

Now, you can start writing selectors and transformations for the given selectors. Let's add a title to the template.
Given that your template already has <head> and <title> tags, let's insert a title.

Content of templates/application.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>This is a title placeholder</title>
  </head>
  <body>
  </body>
</html>

(html/deftemplate main-template "templates/application.html" []
  [:head :title] (html/content "Enlive starter kit"))

Here, [:head :title] is a selector, pretty much like a css selector. If you're coming from jQuery, you can write the same selector as $("head title"). html/content is a transformation. It puts the given content into the element specified by your selector.

Snippet

Let's add several snippets. For example, navigation and some content. For that, let's first define a template for the navigation.

Content of templates/header.html:

<!DOCTYPE html>
<html lang="en">
  <body>
    <header>
      <h1>Header placeholder</h1>
      <ul id="navigation">
        <li><a href="#">Placeholder for navigation</a></li>
      </ul>
    </header>
  </body>
</html>

(html/defsnippet main-template "templates/header.html"
  [:header]
  [heading navigation-elements]
  [:h1] (html/content heading)
  [:ul [:li html/first-of-type]]
    (html/clone-for [[caption url] navigation-elements]
      [:li :a] (html/content caption)
      [:li :a] (html/set-attr :href url)))

Selectors

Enlive selectors can match either nodes or fragments (several adjacent nodes). At the core, every selector is a vector. The items of this vector are called steps. A step is a predicate, for example :h1, :p.some-class or even (attr? :lang).

To select elements which match several predicates, you need to group predicates into a vector: inside steps, vectors mean "and". This may seem confusing but the rule is simple: the outer-most vector hierarchically chains steps, all other vectors denote intersection (and) between steps.

So [:p (attr? :lang)] is going to match any element with a lang attribute inside a :p element. On the other hand, [[:p (attr? :lang)]] is going to match any p with a lang attribute.
Similarly, sets group predicates in a union. Hence inside steps, sets mean "or". So [#{:div.class1 :div.class2}] matches every div which has either class1 or class2. This can alternatively be written as [[:div #{:.class1 :.class2}]]. Indeed you can have nested "ors" and "ands", which means nested sets and vectors.

At the top level you can have a big "or" between selectors by wrapping several selectors in a set. #{[:td :em] [:th :em]} is going to match any em inside either a th or a td. This is equivalent to [#{:td :th} :em].

Selector Syntax

See syntax.html

Some examples:

Enlive                                        CSS
============================================================================
[:div]                                        div
[:body :script]                               body script
#{[:ul.outline :> :li] [:ol.outline :> :li]}  ul.outline > li, ol.outline > li
[#{:ul.outline :ol.outline} :> :li]           ul.outline > li, ol.outline > li
[[#{:ul :ol} :.outline] :> :li]               ul.outline > li, ol.outline > li
[:div :> :*]                                  div > *
[:div :> text-node]                           (text children of a div)
[:div :> any-node]                            (all children (including text nodes and comments) of a div)
{[:dt] [:dd]}                                 (fragments starting by DT and ending at the *next* DD)

The at form

The at form is the most important form in Enlive. There are implicit at forms in snippet and template.

(at a-node
  [:a :selector] a-transformation
  [:another :selector] another-transformation
  ;; ...
  )

The right-hand value of a rule can be nil. It's the idiomatic way to remove an element.

Transformations are closures which take one arg (the selected node) and return nil, another node or an arbitrarily nested collection of nodes.

Rules are applied top-down: the first rule transforms the whole tree and the resulting tree is passed to the next rules.

Transformations

A transformation is a function that returns either a node or a collection of nodes. Enlive defines several helper functions:

;; Replaces the content of the element. Values can be nodes or collections of nodes.
(content "xyz" a-node "abc")

;; Replaces the content of the element.
Values are strings containing html code.
(html-content "<blink>please no</blink>")

;; Wraps selected node into the given tag
(wrap :div)
;; or
(wrap :div {:class "foo"})

;; Opposite to wrap, returns the content of the selected node
unwrap

;; Sets given key value pairs as attributes for selected node
(set-attr :attr1 "val1" :attr2 "val2")

;; Removes attribute(s) from selected node
(remove-attr :attr1 :attr2)

;; Adds class(es) to the selected node
(add-class "foo" "bar")

;; Removes class(es) from the selected node
(remove-class "foo" "bar")

;; Chains (composes) several transformations. Applies functions from left to right.
(do-> transformation1 transformation2)

;; Clones the selected node, applying transformations to it.
(clone-for [item items] transformation)
;; or
(clone-for [item items]
  selector1 transformation1
  selector2 transformation2)

;; Appends the values to the content of the selected element.
(append "xyz" a-node "abc")

;; Prepends the values to the content of the selected element.
(prepend "xyz" a-node "abc")

;; Inserts the values after the current selection (node or fragment).
(after "xyz" a-node "abc")

;; Inserts the values before the current selection (node or fragment).
(before "xyz" a-node "abc")

;; Replaces the current selection (node or fragment).
(substitute "xyz" a-node "abc")

;; Takes all nodes (under the current element) matched by src-selector, removes
;; them and combines them with the elements matched by dest-selector.
(move [:.footnote] [:#footnotes] content)

Known limitations/problems

- No namespaces support (hence unsuitable for most XML)
https://libraries.io/clojars/enlive
Starting Ray

This page covers how to start Ray on your single machine or cluster of machines.

Tip: Be sure to have installed Ray before following the instructions on this page.

What is the Ray runtime?

Ray programs are able to parallelize and distribute by leveraging an underlying Ray runtime. The Ray runtime consists of multiple services/processes started in the background for communication, data transfer, scheduling, and more. The Ray runtime can be started on a laptop, a single server, or multiple servers.

There are three ways of starting the Ray runtime:

- Implicitly via ray.init() (Starting Ray on a single machine)
- Explicitly via CLI (Starting Ray via the CLI (ray start))
- Explicitly via the cluster launcher (Launching a Ray cluster (ray up))

Starting Ray on a single machine

Calling ray.init() (without any address args) starts a Ray runtime on your laptop/machine. This laptop/machine becomes the "head node".

Note: In recent versions of Ray (>=1.5), ray.init() will automatically be called on the first use of a Ray remote API.

Python:

import ray
# Other Ray APIs will not work until `ray.init()` is called.
ray.init()

Java:

import io.ray.api.Ray;

public class MyRayApp {
  public static void main(String[] args) {
    // Other Ray APIs will not work until `Ray.init()` is called.
    Ray.init();
    ...
  }
}

C++:

#include <ray/api.h>
// Other Ray APIs will not work until `ray::Init()` is called.
ray::Init()

When the process calling ray.init() terminates, the Ray runtime will also terminate. To explicitly stop or restart Ray, use the shutdown API.

Python:

import ray
ray.init()
...  # ray program
ray.shutdown()

Java:

import io.ray.api.Ray;

public class MyRayApp {
  public static void main(String[] args) {
    Ray.init();
    ...  // ray program
    Ray.shutdown();
  }
}

C++:

#include <ray/api.h>
ray::Init()
...
// ray program
ray::Shutdown()

To check if Ray is initialized, you can call ray.is_initialized():

import ray
ray.init()
assert ray.is_initialized() == True

ray.shutdown()
assert ray.is_initialized() == False

To check if Ray is initialized, you can call Ray.isInitialized():

import io.ray.api.Ray;

public class MyRayApp {
  public static void main(String[] args) {
    Ray.init();
    Assert.assertTrue(Ray.isInitialized());
    Ray.shutdown();
    Assert.assertFalse(Ray.isInitialized());
  }
}

To check if Ray is initialized, you can call ray::IsInitialized():

#include <ray/api.h>

int main(int argc, char **argv) {
  ray::Init();
  assert(ray::IsInitialized());
  ray::Shutdown();
  assert(!ray::IsInitialized());
}

See the Configuration documentation for the various ways to configure Ray.

Starting Ray via the CLI (ray start)

Use ray start from the CLI to start a 1 node ray runtime on a machine. This machine becomes the "head node".

$ ray start --head --port=6379
Local node IP: 192.123.1.123
2020-09-20 10:38:54,193 INFO services.py:1166 -- View the Ray dashboard at
--------------------
Ray runtime started.
--------------------
...

You can connect to this Ray runtime by starting a driver process on the same node as where you ran ray start:

# This must
import ray
ray.init(address='auto')

import io.ray.api.Ray;

public class MyRayApp {
  public static void main(String[] args) {
    Ray.init();
    ...
  }
}

java -classpath <classpath> \
  -Dray.address=<address> \
  <classname> <args>

#include <ray/api.h>

int main(int argc, char **argv) {
  ray::Init();
  ...
}

RAY_ADDRESS=<address> ./<binary> <args>

You can connect other nodes to the head node, creating a Ray cluster by also calling ray start on those nodes. See Manual Ray Cluster Setup for more details. Calling ray.init(address="auto") on any of the cluster machines will connect to the ray cluster.

Launching a Ray cluster (ray up)

Ray clusters can be launched with the Cluster Launcher.
The ray up command uses the Ray cluster launcher to start a cluster on the cloud, creating a designated "head node" and worker nodes. Underneath the hood, it automatically calls ray start to create a Ray cluster. Your code only needs to execute on one machine in the cluster (usually the head node). Read more about running programs on a Ray cluster.

To connect to the existing cluster, similar to the method outlined in Starting Ray via the CLI (ray start), you must call ray.init and specify the address of the Ray cluster when initializing Ray in your code. This allows Ray to connect to the cluster.

ray.init(address="auto")

Note that the machine calling ray up will not be considered as part of the Ray cluster, and therefore calling ray.init on that same machine will not attach to the cluster.

Local mode

Caution: This feature is maintained solely to help with debugging, so it's possible you may encounter some issues. If you do, please file an issue.

By default, Ray will parallelize its workload and run tasks on multiple processes and multiple nodes. However, if you need to debug your Ray program, it may be easier to do everything on a single process. You can force all Ray functions to occur on a single process by enabling local mode as follows:

ray.init(local_mode=True)

java -classpath <classpath> \
  -Dray.local-mode=true \
  <classname> <args>

Note: If you just want to run your Java code in local mode, you can run it without Ray or even Python installed.

RayConfig config;
config.local_mode = true;
ray::Init(config);

Note: If you just want to run your C++ code in local mode, you can run it without Ray or even Python installed.

Note that there are some known issues with local mode. Please read these tips for more information.

What's next?

Check out our Deployment section for more information on deploying Ray in different settings, including Kubernetes, YARN, and SLURM.
https://docs.ray.io/en/master/starting-ray.html
03 November 2010 14:01 [Source: ICIS news]

ALICANTE,

"I won't pull [€120/tonne] off the table if I am to keep my margin, but will the market accept it?" said the producer.

The proposed increase would take European prices up from the October contract settlements of €1,110-1,170/tonne FD (free delivered)

European raw material costs have been spiralling upwards following jumps in

In November, the price of the upstream product paraxylene (PX) could surpass €910/tonne, up from October's €805/tonne FD NWE (northwest Europe)

Monoethylene glycol (MEG), PET's other raw material, could reach €900/tonne, according to the producer. MEG sellers are looking for €930-970/tonne FD NWE for November contracts. An October settlement was outstanding, with only one agreement recorded at €806/tonne, up €15/tonne from September.

PET customers and resellers acknowledged upstream developments but said they could not absorb the targeted increases. One source said a maximum of €70/tonne would be the likely outcome.

Prices in

"There is enough competition in

The recent EU investigation into PET antidumping prompted plants in the

Spot PET prices hit €1,200/tonne FD Europe at the end of October, up from around €1,100/tonne at the beginning of the month.

"It will be difficult to go over €1,200/tonne," said a reseller. It added that in recent days customers had reviewed their requirements downwards on receipt of high offers.

"[Buyers] will try to live off fat for a while," the producer concluded.

($1 = €0.71)

For more on PET, PX
http://www.icis.com/Articles/2010/11/03/9407117/producer-targets-120tonne-increase-in-europe-pet.html
You need to determine which Kerberos principal Apache is trying to look up, and that will help you troubleshoot the problem.

We've seen this error when using virtual hosts. If you have the following service principal in your keytab:

HTTP/

and you are accessing the following URL:

the Kerberos module will attempt to get a service ticket for the service principal HTTP/not-

What we ended up doing was using mod_rewrite so that all of our urls mapped into the... namespace, and then we only had to set up a service principal for HTTP/, rather than one for every virtual host.

-- Tom

Thomas A. La Porte, DreamWorks Animation

On Thu, 16 Mar 2006, abbas.attarwala@gmail.com wrote:
> Hello,
> I am running the kerberos module with apache 1.3.34 on a ubuntu linux box.
>
> When i try to access the website hosted by apache, i get the username and
> password prompt box, but on entering the correct credentials, the box stays
> there and keeps on asking for username and password.
>
> On checking the error_log file in apache i found this:
>
> failed to verify krb5 credentials: Server not found in Kerberos database
>
> On entering some wrong username and password this is what i get
> krb5_get_init_creds_password() failed: Client not found in Kerberos database
>
> what am i doing wrong?
>
> keytab file? wrong realm?
>
> my kinit works fine.

________________________________________________
Kerberos mailing list Kerberos@mit.edu
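The failing lookup can be pictured as deriving a service principal from the hostname in the requested URL. The sketch below is illustrative only — the helper name and realm are made up, and this is not mod_auth_kerb's actual code:

```python
def expected_service_principal(url_host, realm):
    # Kerberos-enabled Apache modules request a ticket for
    # HTTP/<hostname>@<REALM>, where <hostname> comes from the URL the
    # browser used -- not necessarily the host's canonical name.
    return "HTTP/%s@%s" % (url_host.lower(), realm)

# A keytab holding only the canonical name fails for other virtual hosts:
keytab = {expected_service_principal("www.example.com", "EXAMPLE.COM")}
wanted = expected_service_principal("not-canonical.example.com", "EXAMPLE.COM")
print(wanted in keytab)  # False -> "Server not found in Kerberos database"
```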
http://fixunix.com/kerberos/60235-failed-verify-krb5-credentials.html
package require ral
namespace import ::ral::*
package require ralutil
namespace import ::ralutil::*

# crosstab relValue crossAttr ?attr1 attr2 ...?
#
# Generate a cross tabulation of "rel" for the "crossAttr" against the
# variable number of attributes given. The "crossAttr" argument is the name of
# an attribute of "relValue". The idea is to create a new relation that
# contains all the attributes in "args" plus a new attribute for each distinct
# value of "crossAttr". The value of the new attributes is the count of tuples
# that have the corresponding value of "crossAttr". Relationally, the
# "summarize" command is used when computations are required across groups of
# tuples.
proc crosstab {relValue crossAttr args} {
    # We start by projecting the attributes that will be retained
    # in the resulting relation.
    set subproj [relation project $relValue {*}$args]
    # The strategy is to build up a summarize command on the fly, adding new
    # attributes. So we start with the constant part of the command.
    set sumCmd [list relation summarize $relValue $subproj r]
    # By projecting on the "crossAttr" we get the unique set of values
    # for that attribute since there are no duplicates in relations.
    set crossproj [relation project $relValue $crossAttr]
    # For each distinct value of the "crossAttr" extend the relation with
    # a new attribute by the same name as the value and whose value is
    # the number of tuples which match the value.
    foreach val [lsort [relation list $crossproj]] {
        set sumexpr [format\
            {[relation cardinality [relation restrictwith $r {$%s == "%s"}]]}\
            $crossAttr $val]
        lappend sumCmd $val int $sumexpr
    }
    # Finally we want the total for all the "crossAttr" matches.
    lappend sumCmd Total int {[relation cardinality $r]}
    set ctab [eval $sumCmd]
    # At this point the relational algebra is over! The rest of this is just
    # to format some output. First we want to add totals across the bottom of
    # the tabular display.
    # Technically, these totals are not part of the cross
    # tabulated relation since they represent different facts than the other
    # tuples in the relation. So we put the relation into a matrix and add in
    # the totals there. The matrix also serves as a convenient means of
    # formatting the output. TclRAL has support for moving relations into
    # matrices.
    set m [relation2matrix $ctab $args]
    # Get rid of the "data type" row in the header. It's ugly here.
    $m delete row 1
    # Add the row where the totals will go
    $m add rows 1
    $m set cell 0 end Total
    set colIndex [expr {[llength $args] - 1}]
    # The totals are easy to come by. They are just the summarization
    # of the original relation value over the "crossAttr".
    # Add them into the matrix.
    set totals [relation summarize $relValue $crossproj s\
        Total int {[relation cardinality $s]}]
    relation foreach t $totals -ascending $crossAttr {
        $m set cell [incr colIndex] end [relation extract $t Total]
    }
    # The grand total is just the number of tuples we started with.
    $m set cell end end [relation cardinality $relValue]
    # The report package seems a little complicated to use, but TclRAL
    # includes some support here too, since "relformat" uses the
    # report package to do formatting. A pre-supplied style helps here.
    ::report::report r [relation degree $ctab]\
        style ::ral::relationAsTable {} 1
    # Put the totals line in the bottom caption of the report.
    r botdata set [r topdata get]
    r botcapsep set [r topcapsep get]
    r botcapsep enable
    r bcaption 1
    # Finally some text.
    set result [$m format 2string r]
    # Add a text caption to the output.
    set caption "Cross Tabulation of\
        $crossAttr Against [join $args {, }]"
    append result $caption \n [string repeat "-" [string length $caption]]
    r destroy
    $m destroy
    return $result
}

if 0 {
The original data set actually had two records labeled:
  - Jane;F;tennis
}

set sportsData [relation create\
    {Name string Sex string Sport string}\
    {{Name Sex Sport}}\
    {Name John Sex M Sport soccer}\
    {Name Jane Sex F Sport tennis}\
    {Name Tom Sex M Sport football}\
    {Name Dick Sex M Sport soccer}\
    {Name Harry Sex M Sport tennis}\
    {Name Mary Sex F Sport baseball}\
    {Name Jeff Sex M Sport baseball}\
    {Name Alice Sex F Sport tennis}\
]

puts [relformat $sportsData "Sports Data"]

if 0 {
+======+======+========+
|Name  |Sex   |Sport   |
|string|string|string  |
+======+======+========+
|John  |M     |soccer  |
|Jane  |F     |tennis  |
|Tom   |M     |football|
|Dick  |M     |soccer  |
|Harry |M     |tennis  |
|Mary  |F     |baseball|
|Jeff  |M     |baseball|
|Alice |F     |tennis  |
+======+======+========+
Sports Data
-----------
}

puts [crosstab $sportsData Sex Sport]

if 0 {
+--------+-+-+-----+
|Sport   |F|M|Total|
+--------+-+-+-----+
|baseball|1|1|2    |
|football|0|1|1    |
|soccer  |0|2|2    |
|tennis  |2|1|3    |
+--------+-+-+-----+
|Total   |3|5|8    |
+--------+-+-+-----+
Cross Tabulation of Sex Against Sport
-------------------------------------

It's also interesting to look at the transposition.
}

puts [crosstab $sportsData Sport Sex]

if 0 {
+-----+--------+--------+------+------+-----+
|Sex  |baseball|football|soccer|tennis|Total|
+-----+--------+--------+------+------+-----+
|F    |1       |0       |0     |2     |3    |
|M    |1       |1       |2     |1     |5    |
+-----+--------+--------+------+------+-----+
|Total|2       |1       |2     |3     |8    |
+-----+--------+--------+------+------+-----+
Cross Tabulation of Sport Against Sex
-------------------------------------

And just for comparison, this is just the straight summarization across the
Sport and Sex attributes. The major difference here is that the "0" rows are
missing.
}

puts [pipe {
    relation project $sportsData Sport Sex |
    relation summarize $sportsData ~ r Total int {[relation cardinality $r]} |
    relformat ~ "Totals over Sport and Sex" {Sport Sex}
}]

if 0 {
+========+======+-----+
|Sport   |Sex   |Total|
|string  |string|int  |
+========+======+-----+
|baseball|F     |1    |
|baseball|M     |1    |
|football|M     |1    |
|soccer  |M     |2    |
|tennis  |F     |2    |
|tennis  |M     |1    |
+========+======+-----+
Totals over Sport and Sex
-------------------------

See also crosstab and crosstab again. In Ratcl, there is an example called "pivot tables" [1
}

Category Statistics
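For readers who want to experiment without TclRAL, the same cross tabulation, with row totals, column totals and a grand total, can be sketched in Python over plain dictionaries. The function and variable names below are mine, not TclRAL's:

```python
from collections import Counter

def crosstab(rows, cross_attr, col_attr):
    """Cross-tabulate a list of dict 'tuples': count occurrences of each
    (cross_attr, col_attr) pair, then add row/column/grand totals, the
    same figures the TclRAL version places in its matrix."""
    col_values = sorted({r[col_attr] for r in rows})
    row_values = sorted({r[cross_attr] for r in rows})
    pairs = Counter((r[cross_attr], r[col_attr]) for r in rows)
    table = {}
    for rv in row_values:
        counts = {cv: pairs.get((rv, cv), 0) for cv in col_values}
        counts["Total"] = sum(counts.values())   # row total
        table[rv] = counts
    # Column totals, plus the grand total (the cardinality of the input).
    table["Total"] = {cv: sum(table[rv][cv] for rv in row_values)
                      for cv in col_values}
    table["Total"]["Total"] = len(rows)
    return table

sports = [
    {"Name": "John",  "Sex": "M", "Sport": "soccer"},
    {"Name": "Jane",  "Sex": "F", "Sport": "tennis"},
    {"Name": "Tom",   "Sex": "M", "Sport": "football"},
    {"Name": "Dick",  "Sex": "M", "Sport": "soccer"},
    {"Name": "Harry", "Sex": "M", "Sport": "tennis"},
    {"Name": "Mary",  "Sex": "F", "Sport": "baseball"},
    {"Name": "Jeff",  "Sex": "M", "Sport": "baseball"},
    {"Name": "Alice", "Sex": "F", "Sport": "tennis"},
]
ct = crosstab(sports, "Sport", "Sex")
```

Running this on the sports data reproduces the table shown above, including the zero cells that a plain summarization would omit.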
http://wiki.tcl.tk/17688
Created on 2010-06-20 08:49 by Oren_Held, last changed 2013-08-01 15:02 by tim.golden. This issue is now closed. On unices, ismount checks whether the given path is a mount point. On windows, it only checks whether it's a drive letter. Long story short, Python simply returns False when doing ismount(r"c:\mount1"), while c:\mount1 is a real mount point. This is relevant for all modern windows versions. -- I'm using win32file.GetVolumePathName() for overcoming this, but I'm not sure if the os python package should be importing win32file, maybe there is a better way to check whether a path is a mount point.. Switching to Python 3.2 as this essentially constitutes a behaviour change and 2.6 is in bugfix mode and 2.7 is about to enter rc2. It would certainly be possible to use one of the volume APIs under the covers. Would you be willing to offer a patch to, say, posixmodule.c? All we need to do is check the FILE_ATTRIBUTE_REPARSE_POINT in the file attributes. Frustratingly, we grab file attributes a dozen times in posixpath.c only to throw most of it away. Is there a case for adding an "attributes" function to os.path which exposes the full file attributes on Windows, and its posix equivalent if there is one? This could then be used in the ismount function currently implemented in ntpath.py. ... of course you still need to get the reparse tag to determine whether this is a mount point so the file attributes alone in this case are not enough.). I think we're saying the same thing :) The simplest thing to do here is to create a win_ismount function in posixmodule.c which does the attributes / reparse tag dance and returns True/False and use that wherever it's needed to support this concept under Windows. The current solution is correct for a subset of cases. Arguably a bug, although I doubt I'd get that past the release manager! The wider issue of exposing GetFileAttributesW, eg under one of the unused stat fields, should be explored elsewhere. 
I'd like to add the win_ismount function mentioned by Tim. Is anyone else working on this presently?

Sijin, please go ahead and submit a patch. No one is working on this at the moment.

I was looking at this - and see that (at least as far as GetFileAttributes is concerned) a mount and a linked directory are seen the same. Here are some tests using ctypes:

# mounted drive
>>> hex(windll.kernel32.GetFileAttributesW(ur"c:\temp\test_c_mount"))
'0x410'
# normal directory
>>> hex(windll.kernel32.GetFileAttributesW(ur"c:\temp\orig"))
'0x10'
# link (created via mklink /d c:\temp\orig c:\temp\here2)
>>> hex(windll.kernel32.GetFileAttributesW(ur"c:\temp\here2"))
'0x410'

On further searching - I found the following link: So the function ismount will need to do the following:
a) Get the file attributes
b) Check that it's a directory and is a reparse point
c) Use FindFirstFile (and FindNextFile? - I need to test more) to fill in WIN32_FIND_DATA.dwReserved0
d) Check that against IO_REPARSE_TAG_MOUNT_POINT (0xA0000003)

Anything wrong with the following simple approach? (e.g. is it bad to depend on win32file?)

def win_ismount(path):
    import win32file
    volume_path = win32file.GetVolumePathName(path)
    return volume_path == path  # May have to ignore a trailing backslash

We can't depend on stuff from pywin32, but we could expose GetVolumePathName ourselves.

Patch to expose GetVolumePathName() and implementation of ismount(). Tested on Windows 7/XP.

Unfortunately this missed the boat for 3.3; I'll target 3.4 when we've got a branch to commit to.

Hi Tim,

Yes, this would be great to get sorted out.
Then we could make watchdog.py automatically configure itself for network mounts. Right now this makes no sense because of Windows.

cheers
- chris

I put a bit of work in on this this morning, following Mark's suggestion (msg138197) since that's the "canonical" approach. Unfortunately, it completely fails to work for the most common case: the root folder of a drive! The documentation for FindFirstFile explicitly precludes that possibility. It looks as though GetVolumePathName is the way to go. I thought I'd previously found some instance where that failed but, ad hoc, I can't make it fail now. I'll try to rework Atsuo's patch against the current posixmodule.c.

issue9035.2.patch is an updated version of Atsuo's patch. Known issues:
* I haven't reworked it for the new memory-management API
* There's no test for a non-root mount point (which is really the basis for this issue). It's difficult to see how to do that in a robust way on an arbitrary machine without quite a bit of support machinery. I've done ad hoc tests which succeed.

issue9035.3.patch has switched to the new memory management API and has tweaked the tests slightly for robustness. This approach does introduce a behavioural change: the root of a SUBSTed drive (essentially a symlink into the DOS namespace) will raise an OSError because GetVolumePathName returns error 87: invalid parameter. So os.path.ismount("F:\\") will fail where F: is the result of running, e.g., "SUBST F: C:\temp". I think the simplest thing is to special-case drive roots (which are always mount points) and then to apply the new GetVolumePathName logic.

4th and hopefully final patch. Added tests for byte paths. Reworked the ismount so it uses the original detection approach first (which is wholly lexical) and drops back to the volume path technique only if the path doesn't appear to be a drive or a share root. This should minimise backwards-incompatibility while still solving the original problem.
New changeset f283589cb71e by Tim Golden in branch 'default':
Issue #9035: os.path.ismount now recognises volumes mounted below

New changeset 5258c4399f2e by Tim Golden in branch 'default':
issue9035: Prevent Windows-specific tests from running on non-Windows platforms

Fixed. Thanks for the patch.
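The "wholly lexical" first stage of the final patch can be sketched portably: drive roots and UNC share roots are always mount points and need no Windows API call, while anything deeper would fall back to the GetVolumePathName check (Windows-only, omitted here). The helper name below is mine, not the stdlib's:

```python
import ntpath

def looks_like_mount_root(path):
    """Purely lexical check, mirroring the fast path of the patched
    ntpath.ismount(): a drive root ('C:\\') or a UNC share root
    ('\\\\host\\share') is always a mount point.  Paths below a drive
    root, like 'c:\\mount1', need the volume-path fallback instead."""
    root, rest = ntpath.splitdrive(path)
    if root and root[0] in "\\/":              # UNC: \\host\share[\]
        return rest in ("", "\\", "/")
    return bool(root) and rest in ("\\", "/")  # drive root: C:\ or C:/
```

Because ntpath is importable on every platform, this part of the logic is easy to exercise without a Windows box.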
http://bugs.python.org/issue9035
Chainer to ONNX to MXNet Tutorial

Export a Chainer Model to ONNX, then Load the Model into MXNet

First, activate the Chainer environment:

$ source activate chainer_p36

Create a new file with your text editor, and use the following program in a script to fetch a model from Chainer's model zoo, then export it to the ONNX format.

import numpy as np
import chainer
import chainercv.links as L
import onnx_chainer

# Fetch a vgg16 model
model = L.VGG16(pretrained_model='imagenet')

# Prepare an input tensor
x = np.random.rand(1, 3, 224, 224).astype(np.float32) * 255

# Run the model on the data
with chainer.using_config('train', False):
    chainer_out = model(x).array

# Export the model to a .onnx file
out = onnx_chainer.export(model, x, filename='vgg16.onnx')

# Check that the newly created model is valid and meets ONNX specification.
import onnx
model_proto = onnx.load("vgg16.onnx")
onnx.checker.check_model(model_proto)

After you run this script, you will see the newly created .onnx file in the same directory. Now that you have an ONNX file, you can try running inference with it.
https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-onnx-chainer-mxnet.html
Learning Bootstrap

- Part 1: Installing and Using
- Part 2: Containers and Rows
- Part 3: Grids
- Part 4: Working with jQuery Plugins

Using Bootstrap

Bootstrap is a free and open-source front-end framework for designing HTML- and CSS-based websites and web applications.

Working with jQuery Plugins

Bootstrap expects jQuery to be loaded in order for its plugins to work correctly. When working with either the local or a CDN version, it is highly recommended to group the loading together and have the bootstrap.{min}.js file be loaded immediately after jQuery.

Most of the built-in plugins in Bootstrap simply work once loaded. Only a few need to be directly invoked; in most cases, a plugin is triggered using the CSS class or data attribute associated with it. The Components page in the Bootstrap documentation lists all of the possible uses with their names.

bs Namespace

Certain events that Bootstrap adds are under the "bs" namespace. When working with an Alert, for example, the "closed" event is called "closed.bs.alert". This is true of all of the Components and their associated events: the past participle form of the event is first, the "bs" namespace is next, and then the name of the component.

Popper.js

For tooltips, popovers, and some other user-interface elements, Bootstrap uses the JS library Popper.js. Like with jQuery, this should be loaded before any of the Bootstrap code. For most of the integrated Popper.js functionality, as with the jQuery usage, elements need only the correct attributes to set their values. Because of this, as the example for tooltips points out, all of the tooltips on a page can be enabled through one function call that acts on the data-toggle="tooltip" attribute value.
https://videlais.com/2018/11/22/learning-bootstrap-part-4-working-with-jquery-plugins/
public class Sphere

Mathematical representation of a sphere. Used to perform intersection and collision tests against spheres.

Public Constructors

public Sphere ()
    Create a sphere with a center of (0,0,0) and a radius of 1.

public Sphere (float radius)
    Create a sphere with a center of (0,0,0) and a specified radius.
    Parameters: radius - the radius of the sphere

Public Methods

public Vector3 getCenter ()
    Get a copy of the sphere's center.
    Returns: a new vector that represents the sphere's center

public float getRadius ()
    Get the radius of the sphere.
    Returns: the radius of the sphere

public void setCenter (Vector3 center)
    Set the center of this sphere.
    Parameters: center - the new center of the sphere

public void setRadius (float radius)
    Set the radius of the sphere.
    Parameters: radius - the new radius of the sphere

Inherited Methods

From class java.lang.Object
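The intersection and collision tests this class supports reduce to simple distance comparisons against the radius. As a language-neutral sketch of that math (plain Python, not Sceneform's actual Java implementation):

```python
def sphere_contains(center, radius, point):
    """Point-in-sphere test: true when the point's distance from the
    center does not exceed the radius (compared squared, no sqrt)."""
    dx, dy, dz = (p - c for p, c in zip(point, center))
    return dx * dx + dy * dy + dz * dz <= radius * radius

def spheres_intersect(c1, r1, c2, r2):
    """Sphere-sphere collision test: true when the distance between the
    centers is at most the sum of the two radii."""
    dx, dy, dz = (a - b for a, b in zip(c1, c2))
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2
```

Comparing squared distances avoids the square root, a common choice in collision code.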
https://developers.google.com/sceneform/reference/com/google/ar/sceneform/collision/Sphere
Opened 7 years ago
Closed 6 years ago

#17066 closed Bug (fixed)

Exception TypeError when using GeoIP

Description

When using GeoIP in my code I am getting this error at the end of the syncdb call:

Exception TypeError: "'NoneType' object is not callable" in <bound method GeoIP.__del__ of <django.contrib.gis.utils.geoip.GeoIP object at 0x103a35690>> ignored

It seems there is some race condition? I am using Python 2.7.2 on Mac OS X. It has threaded support enabled. Django 1.3, python-geoip 1.2.5, libgeoip 1.4.7.

Attachments (1)

Change History (10)

comment:1 Changed 7 years ago by

comment:2 Changed 7 years ago by

comment:3 Changed 7 years ago by

+1 I am getting this bug as well. Django 1.3.1

comment:4 Changed 6 years ago by

Same here after migrating to Django 1.4.1

comment:5 Changed 6 years ago by

It could be useful if anyone could provide us with a basic project where this error can be reproduced.

comment:6 Changed 6 years ago by

Changed 6 years ago by

GeoIP Test Project

comment:7 Changed 6 years ago by

I can confirm this on Django 1.4.3 with geoip 1.4.8. I attached a test project that should reproduce this error. Instructions: extract, install requirements (Django==1.4.3) and run runtest.sh, which downloads GeoIP country data (slightly too big to attach) and then does a syncdb (any Django command will do). The exception should be the last line of the output.

The culprit is obviously in django.contrib.gis.geoip.base.GeoIP.__del__, where GeoIP_delete is None when this method is called. Replacing __del__ with the following removes the exception, but may leave file handles lying around (I have no deeper knowledge about that).

def __del__(self):
    # Cleaning any GeoIP file handles lying around.
    if GeoIP_delete is None:
        return
    if self._country:
        GeoIP_delete(self._country)
    if self._city:
        GeoIP_delete(self._city)

comment:8 Changed 6 years ago by

Thanks for the research on that issue.
I guess that we might have difficulty coming up with a test for this, so I suggest simply adding the if GeoIP_delete is None safeguard and calling it a day! This looks similar to #13488 and #13843.
https://code.djangoproject.com/ticket/17066
How to use ~/.bashrc aliases on IPython 3.2.0?

I need to use my aliases from ~/.bashrc on IPython. First I've tried the following, but it didn't work:

%%bash
source ~/.bashrc

According to this post we should do:

%%bash
. ~/.bashrc
f2py3 -v

It takes 20 sec to run on Jupyter and I get:

bash: line 2: f2py3: command not found

My ~/.bashrc file looks like:

alias f2py3='$HOME/python/bin/f2py'

bash: line 2: type: f2py3: not found

Neither alias, source, nor %rehashx work:

%%bash
alias f2py3='$HOME/python/bin/f2py'

I actually found that the problem is Python, which can't execute the alias command with either sh or bash. Can I use alias with IPython magics?

Answers

You can parse your bashrc file in the IPython config and add any custom aliases you have defined:

import re
import os.path

c = get_config()

with open(os.path.expanduser('~/.bashrc')) as bashrc:
    aliases = []
    for line in bashrc:
        if line.startswith('alias'):
            parts = re.match(r"""^alias (\w+)=(['"]?)(.+)\2$""", line.strip())
            if parts:
                source, _, target = parts.groups()
                aliases.append((source, target))
    c.AliasManager.user_aliases = aliases

This should be placed in ~/.ipython/profile_default/ipython_config.py. %rehashx makes system commands available in the alias table, so it is also very useful if you want to use IPython as a shell.
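The heart of that config snippet is the alias-parsing regex. Pulled out into a standalone function (the names here are mine), it can be exercised outside IPython:

```python
import re

# Same pattern the ipython_config.py answer uses: alias name, an
# optional quote character, the command, and a matching closing quote.
ALIAS_RE = re.compile(r"""^alias (\w+)=(['"]?)(.+)\2$""")

def parse_aliases(lines):
    """Extract (name, command) pairs from bashrc-style alias lines,
    ignoring anything that is not an alias definition."""
    aliases = []
    for line in lines:
        if line.startswith('alias'):
            parts = ALIAS_RE.match(line.strip())
            if parts:
                name, _, command = parts.groups()
                aliases.append((name, command))
    return aliases
```

Note the backreference \2, which lets the same pattern accept single quotes, double quotes, or no quotes at all.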
http://www.brokencontrollers.com/faq/31410683.shtml
Technical Support On-Line Manuals
Compiler Reference Guide Version 6.13

The compiler enables you to use the register storage class specifier to store global variables in core registers. These variables are called global named register variables. The syntax is:

register Type VariableName __asm("Reg")

where Type is the type of the variable (for example char), VariableName is its name, and Reg is the core register to use, in the range R5 to R11.

This feature is only available for AArch32 state.

If you use -mpixolib, then you must not use the following registers as global named register variables:

If you use -fwrpi or -fwrpi-lowering, then you must not use register R9 as a global named register variable.

Arm recommends that you do not use the following registers as global named register variables, because the Arm ABI reserves them for use as a frame pointer if needed. You must carefully analyze your code, to avoid side effects, if you want to use these registers as global named register variables:

Declaring a core register as a global named register variable means that the register is not available to the compiler for other operations. If you declare too many global named register variables, code size increases significantly. In some cases, your program might not compile, for example if there are insufficient registers available to compute a particular expression.

Using global named register variables enables faster access to these variables than if they were stored in memory.

For correct runtime behavior, compile with the matching -ffixed-rN option.

For example, to use register R5 as a global named register for an integer foo, you must use:

register int foo __asm("R5")

For this example, you must compile with the command-line option -ffixed-r5. For more information, see B1.12 -ffixed-rN.

The Arm standard library has not been built with any -ffixed-rN option. If you want to link application code containing global named register variables with the Arm standard library, then:

The following example demonstrates the use of register variables and the relevant -ffixed-rN option.
Source file main.c contains the following code:

#include <stdio.h>

/* Function defined in another file that will be compiled with
   -ffixed-r5 -ffixed-r6. */
extern int add_ratio(int a, int b, int c, int d, int e, int f);

/* Helper variable */
int other_location = 0;

/* Named register variables */
register int foo __asm("r5");
register int *bar __asm("r6");

__attribute__((noinline)) int initialise_named_registers(void)
{
    /* Initialise pointer-based named register variable */
    bar = &other_location;
    /* Test using named register variables */
    foo = 1000;
    *bar = *bar + 1;
    return 0;
}

int main(void)
{
    initialise_named_registers();
    add_ratio(10, 2, 30, 4, 50, 6);
    printf("foo: %d\n", foo);  // should print 1000
    printf("bar: %d\n", *bar); // should print 1
}

Source file sum.c contains the following code:

/* Arbitrary function that could normally result in the compiler using
   R5 and R6. When compiling with -ffixed-r5 -ffixed-r6, the compiler
   should not use registers R5 or R6 for any function in this file. */
__attribute__((noinline)) int add_ratio(int a, int b, int c, int d, int e, int f)
{
    int sum;
    sum = a/b + c/d + e/f;
    if (a > b && c > d)
        return sum*e*f;
    else
        return (sum/e)/f;
}

Compile main.c and sum.c separately before linking them. This application uses global named register variables R5 and R6, and therefore both source files must be compiled with the relevant -ffixed-rN options:

armclang --target=arm-arm-none-eabi -march=armv8-a -O2 -ffixed-r5 -ffixed-r6 -c main.c -o main.o --save-temps
armclang --target=arm-arm-none-eabi -march=armv8-a -O2 -ffixed-r5 -ffixed-r6 -c sum.c -o sum.o --save-temps

Link the two object files using armlink:

armlink --cpu=8-a.32 main.o sum.o -o image.axf

The use of the armclang option --save-temps enables you to look at the generated assembly code.
The file sum.s is generated from sum.c, and does not use registers R5 and R6 in the add_ratio() function:

add_ratio:
	.fnstart
@ %bb.0:
	.save	{r4, r7, r11, lr}
	push	{r4, r7, r11, lr}
	ldr	r12, [sp, #20]
	sdiv	r4, r2, r3
	ldr	lr, [sp, #16]
	sdiv	r7, r0, r1
	add	r4, r4, r7
	cmp	r0, r1
	sdiv	r7, lr, r12
	cmpgt	r2, r3
	add	r4, r4, r7
	bgt	.LBB0_2
@ %bb.1:
	sdiv	r0, r4, lr
	sdiv	r0, r0, r12
	pop	{r4, r7, r11, pc}
.LBB0_2:
	mul	r0, r12, lr
	mul	r0, r0, r4
	pop	{r4, r7, r11, pc}

The file main.s has been generated from main.c, and uses registers R5 and R6 only for the code that directly uses these global named register variables:

initialise_named_registers:
	.fnstart
@ %bb.0:
	movw	r6, :lower16:other_location
	mov	r5, #1000
	movt	r6, :upper16:other_location
	ldr	r0, [r6]
	add	r0, r0, #1
	str	r0, [r6]
	mov	r0, #0
	bx	lr

main:
	.fnstart
@ %bb.0:
	.save	{r11, lr}
	push	{r11, lr}
	.pad	#8
	sub	sp, sp, #8
	bl	initialise_named_registers
	mov	r0, #6
	mov	r1, #50
	str	r1, [sp]
	mov	r1, #2
	str	r0, [sp, #4]
	mov	r0, #10
	mov	r2, #30
	mov	r3, #4
	bl	add_ratio
	adr	r0, .LCPI1_0
	mov	r1, r5
	bl	__2printf
	ldr	r1, [r6]
	adr	r0, .LCPI1_1
	bl	__2printf
	mov	r0, #0
	add	sp, sp, #8
	pop	{r11, pc}
	.p2align	2
http://www.keil.com/support/man/docs/armclang_ref/armclang_ref_kas1548353206992.htm
import "github.com/golang/groupcache" Package groupcache provides a data loading mechanism with caching and de-duplication that works across a set of peer processes. Each data Get first consults its local cache, otherwise delegates to the requested key's canonical owner, which then checks its cache or finally gets the data. In the common case, many concurrent cache misses across a set of peers for the same key result in just one cache fill. byteview.go groupcache.go http.go peers.go sinks.go RegisterNewGroupHook registers a hook that is run each time a group is created. func RegisterPeerPicker(fn func() PeerPicker) RegisterPeerPicker registers the peer initialization function. It is called once, when the first group is created. RegisterServerStart registers a hook that is run when the first group is created. An AtomicInt is an int64 to be accessed atomically. Add atomically adds n to i. Get atomically gets the value of i. A ByteView holds an immutable view of bytes. Internally it wraps either a []byte or a string, but that detail is invisible to callers. A ByteView is meant to be used as a value type, not a pointer (like a time.Time). At returns the byte at index i. ByteSlice returns a copy of the data as a byte slice. Copy copies b into dest and returns the number of bytes copied. Equal returns whether the bytes in b are the same as the bytes in b2. EqualBytes returns whether the bytes in b are the same as the bytes in b2. EqualString returns whether the bytes in b are the same as the bytes in s. Len returns the view's length. ReadAt implements io.ReaderAt on the bytes in v. func (v ByteView) Reader() io.ReadSeeker Reader returns an io.ReadSeeker for the bytes in v. Slice slices the view between the provided from and to indices. SliceFrom slices the view from the provided index until the end. String returns the data as a string, making a copy if necessary. CacheStats are returned by stats accessors on Group. CacheType represents a type of cache. 
const ( // The MainCache is the cache for items that this peer is the // owner for. MainCache CacheType = iota + 1 // The HotCache is the cache for items that seem popular // enough to replicate to this node, even though it's not the // owner. HotCache ) Context is an opaque value passed through calls to the ProtoGetter. It may be nil if your ProtoGetter implementation does not require a context. type Getter interface { // Get returns the value identified by key, populating dest. // // The returned data must be unversioned. That is, key must // uniquely describe the loaded data, without an implicit // current time, and without relying on cache expiration // mechanisms. Get(ctx Context, key string, dest Sink) error } A Getter loads data for a key. A GetterFunc implements Getter with a function. type Group struct { // Stats are statistics on the group. Stats Stats // contains filtered or unexported fields } A Group is a cache namespace and associated data loaded spread over a group of 1 or more machines. GetGroup returns the named group previously created with NewGroup, or nil if there's no such group. NewGroup creates a coordinated group-aware Getter from a Getter. The returned Getter tries (but does not guarantee) to run only one Get call at once for a given key across an entire set of peer processes. Concurrent callers both in the local process and in other processes receive copies of the answer once the original Get completes. The group name must be unique for each getter. func (g *Group) CacheStats(which CacheType) CacheStats CacheStats returns stats about the provided cache within the group. Name returns the name of the group. type HTTPPool struct { // Context optionally specifies a context for the server to use when it // receives a request. // If nil, the server uses a nil Context. Context func(*http.Request) Context // Transport optionally specifies an http.RoundTripper for the client // to use when it makes a request. 
// If nil, the client uses http.DefaultTransport. Transport func(Context) http.RoundTripper // contains filtered or unexported fields } HTTPPool implements PeerPicker for a pool of HTTP peers. NewHTTPPool initializes an HTTP pool of peers, and registers itself as a PeerPicker. For convenience, it also registers itself as an http.Handler with http.DefaultServeMux. The self argument be a valid base URL that points to the current server, for example "". func NewHTTPPoolOpts(self string, o *HTTPPoolOptions) *HTTPPool NewHTTPPoolOpts initializes an HTTP pool of peers with the given options. Unlike NewHTTPPool, this function does not register the created pool as an HTTP handler. The returned *HTTPPool implements http.Handler and must be registered using http.Handle. func (p *HTTPPool) PickPeer(key string) (ProtoGetter, bool) Set updates the pool's list of peers. Each peer value should be a valid base URL, for example "". type HTTPPoolOptions struct { // BasePath specifies the HTTP path that will serve groupcache requests. // If blank, it defaults to "/_groupcache/". BasePath string // Replicas specifies the number of key replicas on the consistent hash. // If blank, it defaults to 50. Replicas int // HashFn specifies the hash function of the consistent hash. // If blank, it defaults to crc32.ChecksumIEEE. HashFn consistenthash.Hash } HTTPPoolOptions are the configurations of a HTTPPool. NoPeers is an implementation of PeerPicker that never finds a peer. func (NoPeers) PickPeer(key string) (peer ProtoGetter, ok bool) type PeerPicker interface { // PickPeer returns the peer that owns the specific key // and true to indicate that a remote peer was nominated. // It returns nil, false if the key owner is the current peer. PickPeer(key string) (peer ProtoGetter, ok bool) } PeerPicker is the interface that must be implemented to locate the peer that owns a specific key. 
type ProtoGetter interface {
    Get(context Context, in *pb.GetRequest, out *pb.GetResponse) error
}

ProtoGetter is the interface that must be implemented by a peer.

type Sink interface {
    // SetString sets the value to s.
    SetString(s string) error
    // SetBytes sets the value to the contents of v.
    // The caller retains ownership of v.
    SetBytes(v []byte) error
    // SetProto sets the value to the encoded version of m.
    // The caller retains ownership of m.
    SetProto(m proto.Message) error
    // contains filtered or unexported methods
}

A Sink receives data from a Get call. Implementations of Getter must call exactly one of the Set methods on success.

AllocatingByteSliceSink returns a Sink that allocates a byte slice to hold the received value and assigns it to *dst. The memory is not retained by groupcache.

ByteViewSink returns a Sink that populates a ByteView.

ProtoSink returns a sink that unmarshals binary proto values into m.

StringSink returns a Sink that populates the provided string pointer.

TruncatingByteSliceSink returns a Sink that writes up to len(*dst) bytes to *dst. If more bytes are available, they're silently truncated. If fewer bytes are available than len(*dst), *dst is shrunk to fit the number of bytes available.

type Stats struct {
    Gets           AtomicInt // any Get request, including from peers
    CacheHits      AtomicInt // either cache was good
    PeerLoads      AtomicInt // either remote load or remote cache hit (not an error)
    PeerErrors     AtomicInt
    Loads          AtomicInt // (gets - cacheHits)
    LoadsDeduped   AtomicInt // after singleflight
    LocalLoads     AtomicInt // total good local loads
    LocalLoadErrs  AtomicInt // total bad local loads
    ServerRequests AtomicInt // gets that came over the network from peers
}

Stats are per-group statistics.

Package groupcache imports 16 packages and is imported by 20 packages. Updated 2015-06-16.
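The "just one cache fill" behaviour described in the package overview comes from de-duplicating concurrent loads for the same key (the "singleflight" counted in Stats.LoadsDeduped). A minimal Python sketch of that idea follows; it is an illustration of the concept, not groupcache's actual Go implementation, and it does not propagate load errors to waiters:

```python
import threading

class SingleFlight:
    """De-duplicates concurrent loads: the first caller for a key runs
    the load function, later concurrent callers for the same key block
    and share its result, so N simultaneous misses cause one fill."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> (done event, one-element result box)

    def do(self, key, fn):
        with self._lock:
            entry = self._inflight.get(key)
            leader = entry is None
            if leader:
                entry = (threading.Event(), [])
                self._inflight[key] = entry
        event, box = entry
        if leader:
            try:
                box.append(fn())  # only the leader actually loads
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()
        else:
            event.wait()          # followers wait for the leader
        return box[0]
```

Callers arriving after the leader finishes simply start a fresh flight, which matches the cache-fill semantics: de-duplication only applies to loads that overlap in time.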
http://godoc.org/github.com/golang/groupcache
LoRa remote control

Palmtop transmitter to couple with the LoRa Shield to execute longer-range radio commands.

Among the long-distance radio transmission technologies with low energy consumption, there is one in particular on which we have focused our attention: LoRa (Long Range), produced by Semtech (). Based on it, some time ago (issues #199, 200 and 201) we designed and introduced a shield capable of equipping the Arduino board with LoRa technology. However, the project at the time was based on the SX1278 integrated circuit by Semtech (or the equivalent RFM98), which can be used in various ways:

- as narrowband transceiver for standard data transmission with amplitude modulation (OOK);
- as a classic data transceiver with frequency modulation (FSK), still narrowband;
- as LoRa transceiver, i.e. with "Spread Spectrum" technology.

The radio module we employed (SX1278) is the version with a base frequency that can be set up to 525 MHz (typically used at 433-434 MHz), while the one used, for instance, in the LoRaWAN network (e.g. SX1276) works at 868 MHz. In the first article we verified the circuit's flexibility by using it as a transmitter in OOK mode to pilot radio-controlled sockets produced by Avidsen or Velleman. In the two following issues, we described its use as a LoRa transceiver. For more information, you can see the documents found on the web at the addresses ww.semtech.com/images/datasheet/sx1276_77_78_79.pdf and.
In fact, the transmitter we have used, better described by the circuit diagram here beside, is designed to integrate with the LoRa shield for Arduino and uses the same transmission characteristics; the shield becomes the receiving unit.

LoRa (Long Range) is a technology aimed at the IoT (Internet of Things), for use wherever existing radio links such as Wi-Fi and Bluetooth are not adequate. In fact, in order to bring data communication down to the level of mini/micro apparatuses with extremely low power consumption (for instance self-powered devices using batteries or solar panels), we need to use different radio transmission techniques than Wi-Fi (which is power-hungry) or Bluetooth (which has limited range). Two longer-range solutions have been proposed as of today:

- Ultra Narrow Band (UNB), i.e. transmission on a very narrow band in which you can concentrate the limited power of the transmitter (this is SigFox's approach);
- Wide Band with Spread Spectrum, i.e. transmission where the information is spread over a wide spectrum of frequencies using dedicated algorithms; this way, data can be reconstructed even if the signal is very low, or even below the noise threshold (this is LoRa's proposal).

In both cases, the price to pay is the low transmission speed (some hundreds of bps, i.e. some tens of bytes per second). However, this limitation is not very important, since usually IoT applications such as smart sensors only need the exchange of a few periodical data between peripherals and a data gathering and routing center on the Internet. What is more important is that we are able to contact numerous mini/micro peripherals spread over a radius of a few kilometers, and these apparatuses must have a very low power consumption.
Spread Spectrum and LoRa

Spreading the data signal over a wide spectrum of the carrier wave can be done in two ways:

- a) by making the carrier wave hop from one channel to the adjacent one using a prefixed algorithm; this is what's known as Frequency Hopping (FH);
- b) by matching each data bit to a string of many modulating bits based on a prefixed pattern (for instance pseudorandom); the receiver will then reconstruct the single bits by applying the inverted process.

It is possible to apply both techniques simultaneously, as is the case in LoRa. However, LoRa essentially uses the technique described in point b). This technique can be implemented using a digital super modulation, as is the case for DSSS (Direct Sequence Spread Spectrum), or by using other methods like those used in LoRa.

In order to simplify the hardware, especially in terms of reception, LoRa doesn't use DSSS but a sequencing of "chirps", that is, oscillations whose frequency varies linearly over time. These allow for a simpler coupling of receiver and transmitter, as opposed to DSSS, which needs precise timers.

The data bytes are divided into nibbles (4 bits), to which 1, 2, 3 or 4 bits are added for parity control. The parameter defining the robustness of the transmission is CR, which can have a value from 1 to 4. Therefore, if CR=4, the length of the payload (data to be transmitted) is doubled; however, the safety of integrity is very high.

In the time unit, as many chips are produced by the modulator (modulation unit) as there are Hertz in the bandwidth (BW). Therefore the chip rate is Rc=BW. However, each symbol can take one of 2^SF values, where SF is the parameter defining the "spreading factor", whose value can range from 6 to 12. Therefore, the symbol rate is Rs=Rc/2^SF. Since the 2^SF values are represented by SF bits, we can see that the bit rate (bps) is Rb=SF * Rs.
But we still haven't considered the extra bits due to the redundancy for the parity control defined by the CR parameter. This produces a surplus equal to the ratio (4+CR)/4 of the bits to be transmitted. Therefore, the transmission rate in bps is ultimately bound to the parameters BW, SF and CR through the following formula:

Rb = SF * (BW / 2^SF) * (4 / (4 + CR))

Therefore, the number of bits per second is directly proportional to BW and decreases rapidly with SF, roughly halving with each unit increase (Rb ∝ SF / 2^SF). Besides the bps, you also have to take into consideration the reception sensitivity, which improves as SF increases and as BW decreases. Therefore, maximum sensitivity is obtained with maximum SF and minimum BW. However, in this case we also get a very low bps, so we have to find a compromise, using a larger BW in order to increase the bit rate, especially if there are speed requirements to meet. Table 1 shows the bps values for different values of BW and SF, provided we use maximum redundancy (CR=4). Table 2 shows the estimated reception sensitivity. [Table 1] Basically, the most useful parameter values in typical situations are SF = 11/12 and BW codes 4–9, corresponding to values from 46 bps at -144 dBm sensitivity to 1,343 bps at -129 dBm sensitivity. In case of low bit rates (<1,000 bps), we have to update a module register by enabling the flag that optimizes slow reception. It's bit 3 of register 0x26 (setting it to 1); you can do that by using the library function "SX.setLoraLowDataRateOptimize(boolean on)", but this is turned on automatically by the library at initialization. [Table 2]

Our remote control

The purpose of the articles that appeared in the magazine introducing the shield was to show the complexity and flexibility of the SX1278 radio module, also when used with other types of modulation. However, in this article we will focus on using it under the LoRa protocol. For LoRa, the "LoRa Alliance" also developed a complex network protocol called LoRaWAN. 
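Before moving on, the bit-rate formula given earlier can be sanity-checked against the figures quoted from the tables. The sketch below is illustrative only: the function name is ours, and it assumes the standard SX1278 mapping of bandwidth codes 0–9 to bandwidths in Hz.

```python
# Sketch: compute the LoRa bit rate Rb = SF * (BW / 2**SF) * (4 / (4 + CR)).
# Assumption: bandwidth codes 0-9 map to the SX1278 bandwidths below (in Hz).
BW_HZ = [7800, 10400, 15600, 20800, 31250, 41700, 62500, 125000, 250000, 500000]

def lora_bit_rate(sf, bw_code, cr):
    """Payload bit rate in bps for spreading factor sf (6-12),
    bandwidth code bw_code (0-9) and redundancy cr (1-4)."""
    bw = BW_HZ[bw_code]
    return sf * (bw / 2 ** sf) * (4 / (4 + cr))

# The two extremes quoted in the text (maximum redundancy, CR = 4):
print(round(lora_bit_rate(12, 4, 4)))  # SF=12, BW code 4 (31.25 kHz) -> 46 bps
print(round(lora_bit_rate(11, 9, 4)))  # SF=11, BW code 9 (500 kHz) -> 1343 bps
```

Both results match the extremes cited above (46 bps at -144 dBm, 1,343 bps at -129 dBm), which supports the reading of the garbled "1.343" figure as 1,343 bps with a thousands separator.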
The Alliance also takes charge of certifying programs compatible with this protocol. In the in-depth box on LoRaWAN in these pages, you will find the main characteristics of the LoRa network. Because LoRaWAN's characteristics are complex and don't really match an amateur or semi-professional activity, we decided to provide a library for using the radio module and for implementing a bidirectional, multi-point LoRa communication. So, we leave to the user the task of building their own network according to their necessities, without the restrictions of a certified industrial protocol. Using the library, you can easily manage two communicating stations, or a data gathering center and a multitude of widespread sensors, or other types of more complicated network structures. The library has been updated (version 1.2) by adding some additional features (see the ReleaseNote file). The library can be downloaded from the webpage. The library can be used with basic functions, with no routing, by wirelessly sending out a proper payload to check coverage. We can find an example in the sketches "LoRaTxEcho" and "LoRaRxEcho" among the attached examples. In this case, we see the fundamental features:
- library inclusion: #include <LoRa.h>
- class instance: LORA LR;
- initialization: LR.begin();
- setting main parameters: LR.setConfig(SF,BW,CR); // typically 12,7,4
- sending the message: LR.sendMess(tbuffer);
- setting into reception: LR.receiveMessMode();
- receiving the message (loop n times): if (LR.dataRead(rbuffer,maxlength) > 0) break;
However, the library is organized with more sophisticated features in order to define and use the apparatus addresses. In particular, we can define the address of an apparatus within a network identified by a general address. Which means we can use an addressing like: network address – local address (of the device). The dimension of the network addressing is tied to the dimension of the local addressing. 
In fact, the overall address defined by the library is always based on 16 bits (an unsigned integer for Arduino). Therefore, if for instance we decide to have an addressable space of 2^8 = 256 devices, we have 2^(16-8) = 256 remaining possible values for the network. Actually, the dimension of the local address space is one unit lower, because address 0 identifies a broadcast transmission (i.e. towards all the local devices). In order to define this subdivision, we use the function: LR.defDevRange(cod); where "cod" is the exponent of 2; for instance 3 for range 1–7. The function returns the max number of network addresses, for a possible check. Here are some examples:
Code 3: local addresses 1–7, network addresses 0–8191;
Code 4: local addresses 1–15, network addresses 0–4095;
Code 5: local addresses 1–31, network addresses 0–2047;
Code 6: local addresses 1–63, network addresses 0–1023;
Code 7: local addresses 1–127, network addresses 0–511;
Code 13: local addresses 1–8191, network addresses 0–7.
Now, all we have to do is decide the network address, which must be within the limit established by the previous subdivision. This can be done using the function: LR.defNetAddress(addr); which returns an error if "addr" is not within the limits. Since the radio ranges are considerable, we have to protect the information; it is no accident that the LoRaWAN protocol provides AES-128 cryptography. The library we propose also implements AES cryptography (although not AES-256). It is therefore necessary to establish an identical key between the apparatuses. This can be done using the same initialization function, passing an integer as a key: LR.begin(key); Since the AES key is formed by 32 bytes, the "key" number actually serves as a "seed" to initialize the random number generator; the same function then takes care of generating the 32-byte key. This way, we can exchange a simple number as the communication key. This works only if you use the Arduino compiler for all the devices. 
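As an aside, the address-space arithmetic behind the code table above can be checked with a short sketch. This is illustrative only: `split_limits` is our name, not a function of the LoRa library; it assumes the 16-bit overall address described in the text.

```python
# Sketch of the 16-bit address split: "cod" is the exponent of 2 for the
# local (device) address space; the remaining bits address the network.
def split_limits(cod):
    dev_max = 2 ** cod - 1         # local addresses 1..dev_max (0 = broadcast)
    net_max = 2 ** (16 - cod) - 1  # network addresses 0..net_max
    return dev_max, net_max

print(split_limits(3))   # -> (7, 8191), matching "Code 3" in the table
print(split_limits(13))  # -> (8191, 7), matching "Code 13" in the table
```

The fact that every row of the table satisfies this relation is also what shows the overall address must be 16 bits, not 60 as the garbled original stated.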
In fact, starting from the "seed", the compiler's routine will produce the same pseudorandom sequence on all the Arduino devices. The main features to use in a proprietary LoRa network can therefore be listed as follows:
- library inclusion: #include <LoRa.h>
- class instance: LORA LR;
- initialization: LR.begin(key);
- setting of main parameters: LR.setConfig(SF,BW,CR); // typically 12,7,4
- definition of local address range: LR.defDevRange(cod);
- definition of network address: LR.defNetAddress(address);
- sending the message: LR.sendNetMess(local receiver, local sender, message, lung);
- entering reception mode: LR.receiveMessMode();
- receiving the message (loop n times): if (LR.receiveNetMess(local dest., local sender, buffer, maxlung) > 0) break;
- text extraction and decoding: LR.getMessage();
- extraction and decoding of the sender address: LR.getSender();
The reception function returns 0 if no messages arrived or no messages addressed to the receiver arrived. Besides, it returns -1 if the message hasn't been sent by the indicated sender. If, in the message, the receiver has value 0, the message is always accepted (broadcasting). If the sender parameter of the function has value 0, messages are accepted regardless of the sender. Among the examples included in the library, you will find two sketches, "LoraTxAddressing" and "LoraRxAddressing", which create a server communicating with several peripheral modules. However, keep in mind that direct management of the registers of the SX1278 can always be carried out through the base SX functions of the library.

Remote control and library

As we will see later in detail, the remote control uses the same LoRa module as the shields. Therefore, the same functions apply to the sketch running on the remote control. The remote contains an Arduino-compatible processor, with the same processor and clock speed as the "LilyPad with USB". However, the remote always sends out the same message when the key is pressed. 
As a result, the transmission is always identical (although encrypted). Anyone equipped with the right equipment could record and replay the transmission without any need for decoding. It is a problem common to all remote actuators, one that is usually solved, for instance, by making the transmission always different using the "rolling code" technique. In order to solve this problem, in the library we have provided a "marker", which is a byte with a random value. The marker of the received message can be read and compared with a list of the last markers that arrived. In case it is present in this list, the message is refused as suspect. The size of the list can be decided at will: not too long, in order not to have too high a probability of finding an already used marker, nor too short, in order to cover a sufficient number of past transmissions. As we will see later, the remote turns on any time the command key is pressed. In this situation, we cannot use the random generator of Arduino as-is, since it would be initialized every time with the same value. In order to have a random value at each start, we must make use of electronic noise. In fact, we can read the value of one or more analog ports of Arduino — unused ports left at high impedance. This value can be used as a "seed" to initialize the random (or better, pseudorandom) generator. To read the marker, we use the function: LR.getMarker(); Keep in mind that the message is encrypted as a whole, except for the receiver address. Therefore, also due to the operational mode of the AES cryptography, the content, marker included, is distributed as noise over the whole length of the message, making the protection really effective.

Radio range tests

On the web, there are many radio range figures, some exaggerated and some more realistic. A lot depends on the terrain, which can be more or less open, more or less difficult; it also depends on the position and on the antenna. 
Anyway, we can’t really talk about a few kilometers in an urban environment and 10 or more in open field. Data refers to LoRaWAN conditions where transmissions take place at 868 MHz. Therefore, we wanted to make some effective measurement in order to report some real data using our module at 433 MHz with a ¼ wavelength antenna. Measurements have been done using the LoRa Shield for Arduino which has the antenna on the SMA connector. For the remote, which has a wire internal antenna, ranges are of course lower. Anyway, it has no problems reaching over four floors and an underground parking lot. Back to shield with antenna, we have carried out two series of tests, one by placing the transmitter inside the house and one with the transmitter outside, on a balcony. Transmission data are the following: SF=12, BW=5, power 10 dBm and 20 dBm. The other receiving shields were inside a car. As you can see from the figure, with shields inside the house, the range is asymmetrical because the longest range is towards the window directly in front of the transmitter. Anyway, we can reach ranges of around 1 km. In the figure, the range is not only symmetrical but also much bigger, at around a few kilometers. This test has been carried out, as nation, using a ¼ wavelength small antenna; by using a more efficient antenna, for instance, a 70 cm full-length one, and place in its unit elevated and unencumbered position, we suppose you could reach a few kilometers in urban environments and a few tens of kilometers in the countryside. Report measurements must be construed as approximate although based on a real environment. LoRa remote The LoRa remote includes both the Arduino microprocessor and radio module. Everything has been miniaturized in order to be inserted in the case of a classic pocket-sized remote for a keyholder. 
Due to the size constraints and to unify voltages (in fact, the SX1278 works at 3.3 V), we have selected the ATmega32U4 processor with an 8 MHz quartz, the same as the LilyPad with USB. The 8 MHz quartz, instead of the 16 MHz of the Arduino standard, is a necessity due to the lower 3.3 V voltage. Anyway, since it is the same configuration as the LilyPad with USB, we can (and must) select this platform from the programming IDE list. Of course, as the aficionados of Arduino programming might already know, programming keeps the same characteristics as on the "Arduino Uno", since the IDE takes care of the different clock speed during compiling. For programming, we have provided a micro USB port on the remote. In the figure, you can see the block diagram. As you can notice, the transmitter is powered when the command key is pressed; this means that the processor must be quick to intervene on the electronic switch, in order to maintain power when the key is released. In fact, this is the first thing it does — with no problems, considering that human intervention times are a few tens of milliseconds. The processor uses the D6 pin to ground the gate of the Q1 (p-channel) power MOSFET through the T1 transistor. The T1 transistor works in parallel to the key and therefore keeps the Q1 power MOSFET in conduction even when the key is released. Once the power has stabilized, the processor initializes the radio module with the LoRa characteristics defined by the sketch, using the network and local addresses. Besides, it initializes the pseudorandom number generator of Arduino's C library with a couple of values read from the unused analog pins (A0 and A1) and multiplied together. This way, it uses electronic noise that varies with each start. This is necessary in order to produce a random marker for the message. Now, it sends out a message using the established pattern and waits for an answer. 
When it receives a positive response, it signals the execution with an agreed beep code through the buzzer. Otherwise, after a certain waiting period (a few seconds) or after receiving a negative response, it signals a different beep code. Then it turns off, i.e. it brings the D6 pin to logical low. List 1 shows the sketch prepared as an example for the LoRa remote. This way, both the processor and the radio module are powered for only a few seconds (with an average consumption of around 20 mA during activity). A simple A23 battery is therefore enough to make it work for a long time. The server receiving and handling the command can be implemented as reported in the sample sketch included in the library. Essentially, all it has to do is listen and wait for the transmission from the remote control, verify the agreed pattern, and verify that the marker is not present in the list of those already arrived. In case of a duplicate, it discards the message and responds negatively. On the other hand, in case the marker is original, it is inserted in a random position in the list and the confirmation is sent out after carrying out the action. Inserting the marker in the list at a random position, overwriting the one possibly already present, is done in order to avoid any kind of periodicity.

List 1

LORA LR; // instance of the class (for other global parameters used, see the complete code)

void setup() {
  pinMode(pinon, OUTPUT);            // activate the MOS to keep the power supply
  digitalWrite(pinon, 1);
  digitalWrite(pinf, 1);             // set up the buzzer
  digitalWrite(psound, 1);
  if (!LR.begin(KEY)) { sketchend(1); return; } // initialize the radio module and the cryptographic key
  LR.defDevRange(RANGECODE);         // initialize the network and transmission parameters
  LR.defNetAddress(NETADD);
  LR.setFrequency(FREQ);
  LR.setPower(PWR);
  LR.setConfig(SF, BW, CR);
  int seed = analogRead(0) * analogRead(1); // initialize the random generator with electr. noise
  randomSeed(seed);
  sendCommand();                     // sends the command
  if (!getCommandReply()) { sketchend(2); return; } // if the answer is OK, warns with a beep signal
                                     // otherwise warns with a different beep signal
  sketchend(0);                      // it turns off automatically
}

void loop() { // not used
}

Naturally, there is always the possibility of receiving a message whose marker is already present in the list although original. However, since the transmission is bidirectional, the server signals the non-activation and you can simply repeat the command.

Practical creation

Alright, now let's talk a bit about how to build the TX; we already took care of the shield in the corresponding article. All the components of the transmitting unit are SMD, except for the double pin-strip and the key, and they are placed on a double-sided printed circuit, whose tracks can be downloaded from our website along with the other files for the project. For the assembly, you need a soldering iron no more powerful than 20 W, with a very fine tip, plus soldering wire with a 0.5 mm diameter, a small pair of pliers to place components and flux gel to help the adhesion of the soldering alloy. A magnifier will help centering the smallest components on their respective pads and identifying, once all the soldering is done, possible short circuits caused by an excess of soldering alloy on adjacent terminals. Components must be placed following the mounting diagram you can find in these pages, where you can see the right orientation of the polarized components. All the elements, including pin-strip and key, must be mounted on the top side. The remote has been designed to be inserted in the remote box 5100-SPL10120, measuring 61 x 37 x 15 mm, with a battery box for a 23A battery and a button cover that perfectly overlaps the P1 key (the P2 key, the RESET, must not be touched when activating the remote from the outside). The transmitting unit, after assembly, is ready to be used, since it does not require any calibration. 
Alternative uses for the remote's hardware

If we connect a dry contact in parallel to the key, e.g. a reed contact like those used for windows and doors in anti-theft systems, we will create an efficient wireless sensor for an anti-theft system. For instance, by connecting it to a door, we will be able to receive on the server, even a distant one, a notification when the door is opened, to be used as an alarm or to register an event. In fact, we remind you that the LoRa Shield can be mounted on RandA (our shield for Raspberry Pi which integrates the core of Arduino UNO), where the Raspberry Pi processor can act as a sophisticated server, possibly connected to the Internet. The miniaturization of the remote control and its autonomous power supply make it fit to be placed anywhere we need a mechanically activated sensor. If we add a small circuit implementing a low-consumption timer (a few nanoamps), we could activate the key electronically, for instance using a MOS transistor acting as a switch. In this case, we could periodically send the server an analog value coming from the A2 pin or a digital value from the D8 pin. For this alternative use, we could use a container comparable in size to a G1013 (with some adaptations), in case you don't want the small key window. These alternative uses of the remote control take advantage of the momentary activation of the circuit, whose power is provided by a small, non-rechargeable battery. We are already working on designing a further enhancement of the remote circuit. In this new board, whose size is not that different from the remote's, we will add a battery charger for 3.7 V LiPo batteries. The result will be a board with processor and radio module, with the possibility of being self-powered and staying always on. This way, the system can listen for incoming messages or create its own based on various alarms. In fact, the analog and digital pins available on the micro board will be increased. 
Basically, we have a group of three types of devices:
- the shield for Arduino (and therefore also for RandA), already introduced and aimed at servers or more elaborate stations;
- a micro-board with the possibility of self-powering through a rechargeable battery, including an Arduino processor (LilyPad type) and the radio module, aimed at peripheral stations, possibly powered by solar panels;
- the remote/mechanical alarm introduced in this article.

Conclusions

Thanks to the miniaturized LoRa transmitter, and with the next self-powered ones, we complete the offer of programmable systems that can create a proprietary network for countless different uses. The use of the library proposed in this article makes these networks very flexible and far from the complex rigidity of LoRaWAN. LoRaWAN was born to standardize IoT from an industrial perspective. It is designed to bring to the Internet the information exchanges involving numerous satellite sensors and actuators within a very codified logic (see the in-depth box). Since we don't want to get into this more complex reality, where it would be more fitting to use an 868 MHz module, it would be a shame not to make use of the great potential of this wonderful radio module. Using these apparatuses and this library, we can create different customized solutions, for instance:
- a sophisticated car anti-theft system with alarm on the remote control itself;
- a sophisticated alarm system extended to houses or businesses;
- a management solution for sensors and electro-valves for the farming industry;
- home automation for apartments and villas on a vast scale;
- surveillance of critical environmental values.
There are also other applications of the system, which it would be too time-consuming to list, so we leave them to your imagination.
https://www.open-electronics.org/lora-remote-control/
Tools are the lifeblood of any programming language, and F# is no different. While you can be successful writing F# code in your favorite text editor and invoking the compiler exclusively from the command line, you'll likely be more productive using tools. Like C# and VB.NET, F# is a first-class citizen in Visual Studio. F# in Visual Studio has all the features you would expect, such as debugger support, IntelliSense, project templates, and so on. To create your first F# project, open up the Visual Studio IDE and select File→New Project from the menu bar to open the New Project dialog, as shown in Figure 1-1. Select Visual F# in the left pane, select F# Application in the right pane, and click OK. After you click OK in the New Project dialog, you'll see an empty code editor, a blank canvas ready for you to create your F# masterpiece. To start with, let's revisit our Hello, World application. Type the following code into the F# editor:

printfn "Hello, World"

Now press Control + F5 to run your application. When your application starts, a console window will appear and display the entirely unsurprising result shown in Figure 1-2. It may be startling to see a program work without an explicit Main method. You will see why this is admissible in the next chapter, but for now let's create a more meaningful Hello, World–type program to get a feel for basic F# syntax. The code in Example 1-1 will create a program that accepts two command-line parameters and prints them to the console. In addition, it displays the current time.

Example 1-1. Mega Hello World

(* Mega Hello World:
   Take two command line parameters and then print
   them along with the current time to the console. *)
open System

[<EntryPoint>]
let main (args : string[]) =

    if args.Length <> 2 then
        failwith "Error: Expected arguments <greeting> and <thing>"

    let greeting, thing = args.[0], args.[1]
    let timeOfDay = DateTime.Now.ToString("hh:mm tt")

    printfn "%s, %s at %s" greeting thing timeOfDay

    // Program exit code
    0

Now that you have actual F# code, hopefully you are curious about what is going on. Let's look at this program line by line to see how it works. 
Example 1-1 introduces three values named greeting, thing, and timeOfDay:

let greeting, thing = args.[0], args.[1]
let timeOfDay = DateTime.Now.ToString("hh:mm tt")

The key thing here is that the let keyword binds a name to a value. It is worth pointing out that unlike most other programming languages, in F# values are immutable by default, meaning they cannot be changed once initialized. We will cover why values are immutable in Chapter 3, but for now it is sufficient to say it has to do with functional programming. F# is also case-sensitive, so any two values with names that differ only by case are considered different:

let number = 1
let Number = 2
let NUMBER = 3

A value's name can be any combination of letters, numbers, an underscore _, or an apostrophe '. However, the name must begin with a letter or an underscore. You can enclose the value's name in a pair of tickmarks, in which case the name can contain any character except for tabs and newlines. This allows you to refer to values and functions exposed from other .NET languages that may conflict with F# keywords:

let ``this.Isn't %A% good value Name$!@# `` = 5

Other languages, like C#, use semicolons and curly braces to indicate when statements and blocks of code are complete. However, programmers typically indent their code to make it more readable anyway, so these extra symbols often just add syntactic clutter. In F#, whitespace is significant: indentation is used to delimit blocks of code (the editor's tab behavior can be adjusted under Tools→Options→Text Editor→F#). Reviewing Example 1-1, notice that the body of the main method was indented by four spaces, and the body of the if statement was indented by another four spaces:

[<EntryPoint>]
let main (args : string[]) =

    if args.Length <> 2 then
        failwith "Error: Expected arguments <greeting> and <thing>"

If the body of the if statement, the failwith, was dedented four spaces and therefore lined up with the if keyword, the F# compiler would yield a warning. 
This is because the compiler wouldn't be able to determine if the failwith was meant for the body of the if statement:

[<EntryPoint>]
let main (args : string[]) =

    if args.Length <> 2 then
    failwith "Error: Expected arguments <greeting> and <thing>"

Warning FS0058: possible incorrect indentation: this token is offside of context started at position (25:5). Try indenting this token further or using standard formatting conventions

The general rule is that anything belonging to a method or statement must be indented further than the keyword that began the method or statement. So in Example 1-1, everything in the main method was indented past the first let, and everything in the if statement was indented past the if keyword. As you see and write more F# code, you will quickly find that omitting semicolons and curly braces makes the code easier to write and also much easier to read. Example 1-1 also demonstrated how F# can interoperate with existing .NET libraries:

open System
// ...
let timeOfDay = DateTime.Now.ToString("hh:mm tt")

The .NET Framework contains a broad array of libraries for everything from graphics to databases to web services. F# can take advantage of any .NET library natively by calling directly into it. In Example 1-1, the DateTime.Now property was used from the System namespace in the mscorlib.dll assembly. Conversely, any code written in F# can be consumed by other .NET languages. For more information on .NET libraries, you can skip ahead to Appendix A for a quick tour of what's available. Like any language, F# allows you to comment your code. 
To declare a single-line comment, use two slashes, //; everything after them until the end of the line will be ignored by the compiler:

// Program exit code

For larger comments that span multiple lines, you can use multiline comments, which indicate to the compiler to ignore everything between the (* and *) characters:

(* Mega Hello World:
   Take two command line parameters and then print
   them along with the current time to the console. *)

For F# applications written in Visual Studio, there is a third type of comment: an XML documentation comment. If a comment starting with three slashes, ///, is placed above an identifier, Visual Studio will display the comment's text when you hover over it. Figure 1-3 shows applying an XML documentation comment and its associated tooltip.
https://www.safaribooksonline.com/library/view/programming-f/9780596802080/ch01s02.html
Filed under: Java, Programming, TDD, — Tags: @Rule, Expected exception, JUnit, checked exception — Thomas Sundberg — 2015-11-20

Sometimes you want to verify that an exception is thrown in your code. Let me show you three different ways to verify that the expected exception has been thrown. One solution that always works would be the one below:

@Test
public void should_throw_runtime_exception_naive() {
    try {
        throwExampleException();
        fail("Expected a RuntimeException");
    } catch (RuntimeException e) {
        assertThat(e.getMessage(), is("Oops!"));
    }
}

You execute the action the test should check and catch a specific exception. Then you check that the expected message actually is the message you see. What is the problem then? The only issue I can think of is that it is unnecessarily verbose. I am not comfortable with test code that isn't straight through. In this case, there is a catch clause. In other cases there may be loops or, even worse, conditions. So to me, this works but it isn't pretty. An approach that I often use is to annotate the test with the expected exception. It can look like this:

@Test(expected = RuntimeException.class)
public void should_throw_runtime_exception() {
    throwExampleException();
}

This is smaller. And smaller is good. But it is also a bit blunt. You will get feedback from the test when the expected exception isn't thrown. But you will not get feedback if the message is something different from what you expected. Given these two options, I usually prefer the last one. Even when I am not able to verify the message. The compactness of the code is appealing. If I were able to use an annotation like @Test(expected = RuntimeException.class, message = "Oops!") then that would have been a very nice solution. But JUnit doesn't support that. There is a third way to do this. That is to use a JUnit @Rule annotation with ExpectedException. Let me show you how. You need to define a JUnit rule. And use it. 
It can be done like this:

@Rule
public ExpectedException thrown = ExpectedException.none();

@Test
public void should_also_throw_runtime_exception() {
    thrown.expect(RuntimeException.class);
    thrown.expectMessage("Oops!");
    throwExampleException();
}

This rule says that I don't expect any exceptions to be thrown from my method. This means that the behaviour is consistent with the default behaviour of JUnit. I can, however, declare that an exception should be thrown in a test and define the class of the exception. This is what I do in should_also_throw_runtime_exception(). And all of a sudden, I am not only able to assert the class of the thrown exception, I am also able to assert the message that was thrown. And I am able to do it in a test method where I don't have any blocks. I think it would have been nice to extend the test annotation to handle both the exception and its message, but using this rule construction is almost as good as an extended annotation. I am able to verify more things using the field thrown in this example, so I guess it is a reasonable limitation. I don't like blog posts that hide stuff from me. Especially imports in example code. It makes the examples magic and I am not impressed by magic. The complete source code I used for this blog is therefore included below. Enjoy! 
package se.thinkcode;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.core.Is.is;
import static org.junit.Assert.fail;

public class CheckExceptionsTest {

    @Test
    public void should_throw_runtime_exception_naive() {
        try {
            throwExampleException();
            fail("Expected a RuntimeException");
        } catch (RuntimeException e) {
            assertThat(e.getMessage(), is("Oops!"));
        }
    }

    @Test(expected = RuntimeException.class)
    public void should_throw_runtime_exception() {
        throwExampleException();
    }

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void should_also_throw_runtime_exception() {
        thrown.expect(RuntimeException.class);
        thrown.expectMessage("Oops!");
        throwExampleException();
    }

    private void throwExampleException() {
        throw new RuntimeException("Oops!");
    }
}

I would like to thank Malin Ekholm for proof reading.
https://www.thinkcode.se/blog/2015/11/20/expected-exceptions
There are three supporting layers to s6-rc, which typically need to be upgraded together. The package, ports version and available current version are:

sysutils/s6-rc 0.5.2.2 to 0.5.2.3
sysutils/s6 2.10.0.3 to 2.11.0.0
lang/execline 2.8.0.1 to 2.8.1.0
devel/skalibs 2.10.0.3 to 2.11.0.0

The skalibs changes (from 2.10.0.3) are:

- libbiguint removed.
- Obsolete skalibs/environ.h and skalibs/getpeereid.h headers removed.
- rc4 and md5 functions removed.
- iobuffer removed.
- fd_cat() and fd_catn() changed signatures.
- All *_t types renamed without the _t suffix, in order to preserve the POSIX namespace.
- subgetopt() renamed to lgetopt().
- All signal functions entirely reworked; cruft removed.
- skalibs/cdb_make.h renamed to skalibs/cdbmake.h; cdbmake functions now return 1 on success and 0 on failure.
- skalibs/cdb.h redesigned to remove reader state from the cdb structure itself. The unsafe cdb_successor() API has been removed.
- New skalibs/posixplz.h function: munmap_void().

which flows upwards to the other packages. The changes to signal handling are noteworthy. execline 2.8.1.0 introduces a highly beneficial case "statement". I'll test those tomorrow. The reminders for outdated ports got stuck in my overaggressive rspamd.

Created attachment 230662 [details]
Patch updating the skalibs port to 2.11.1.0

Added an initial patch updating skalibs. Please commit the updates to all skaware ports together (or in very quick succession), because they have to be kept in sync.

Created attachment 230670 [details]
Patch updating lang/execline to version 2.8.2.0

Update execline to version 2.8.2.0 (s6 and s6-rc are still missing)

Created attachment 230674 [details]
Patch updating sysutils/s6 to version 2.11.0.1

Created attachment 230706 [details]
Patch updating sysutils/s6-rc to version 0.5.3.0

While at it, restore support for --livedir=/run/s6-rc in the form of the run flavor (the default flavor remains hier(7) compatible). 
Some setups require a writeable file system very early during boot which can only be provided by a tmpfs mountpoint like the /run mountpoint expected by s6-rc when --livedir is omitted from the configure arguments. The removal of _t typedefs from skalibs may have helped avoid name clashes with reserved names, but it broke the unmaintained sysutils/runwhen port.

(In reply to crest from comment #5)
Thank-you for updating the suite. I like the approach of providing different options for LIVEDIR which should placate BSD vs s6 folder conventions. Would it be possible to include a ? before =, as follows:

.if ${FLAVOR} == hier
# Follow hier(7)
LIVEDIR?=	/var/run/${PORTNAME}
.else
# Expect a dedicated /run mountpoint. Can be required if /v
LIVEDIR?=	/run/${PORTNAME}
.endif

as we define a different value for LIVEDIR?

(In reply to crest from comment #6)
Unfortunate that an update to a dependency (skalibs) breaks runwhen. Though this is a task for the runwhen author. We use sysutils/snooze to perform a similar function.
https://bugs.freebsd.org/bugzilla/show_bug.cgi?format=multiple&id=259178
Good day yet again ladies and gents. Today I'm having problems with strings and such. I've been tasked with creating fixed char arrays to contain first name, last name, and middle name. I've managed that, and have the calls in my main. I'm also tasked with using strlen (specifically) to find the length of the strings to create a dynamic array to hold all three names. This is where I'm running into problems. The first cpp file I'll post has the original functions for entering the names into the strings. The commented out portion was where I was getting the length individually (which works), but I am trying to write a separate function in another cpp file which keeps giving me linking errors. I'm not sure where they are coming from since I already used the extern keyword in the header. I'm sure it's possible to write a separate function in a separate cpp file, but am I making more trouble than it's worth? Here are the files:

Header

#ifndef NAMEINP_H
#define NAMEINP_H

const int maxin = 16;

extern char lastName[maxin];
extern char firstName[maxin];
extern char midName[maxin];

char getlast( char lastName[] , int maxin);
char getfirst( char firstName[] , int maxin);
char getmid( char midName[] , int maxin);
//void displayName (char * name);

#endif

Here are the functions

#include "nameinp.h"
#include <iostream>
#include <string.h>
using namespace std;

//Input last name
char getlast( char lastName[] , int maxin)
{
    cout << " Please enter your last name up to 15 characters " << endl;
    cin.getline(lastName, maxin, '\n' );
    //size_t lastLen;
    //lastLen = strlen(lastName);
    //cout << lastLen << endl;
    return 0;
}

//Input first name
char getfirst( char firstName[] , int maxin)
{
    cout << " Please enter your first name up to 15 characters " << endl;
    cin.getline(firstName, maxin, '\n' );
    //size_t firstLen;
    //firstLen = strlen(firstName);
    //cout << firstLen << endl;
    return 0;
}

//Input middle name
char getmid( char midName[] , int maxin)
{
    cout << " Please enter your middle name up to 15 characters " << endl;
    cin.getline(midName, maxin, '\n' );
    //size_t midLen;
    //midLen = strlen(midName);
    //cout << midLen << endl;
    return 0;
}

Here is the function I'm trying to write that's giving me the hassles

#include "nameinp.h"
#include <iostream>
#include <string.h>
using namespace std;

//Calculate the String Lengths
size_t totallength (size_t)
{
    size_t fullength = 0;
    size_t lastLen;
    lastLen = strlen(lastName);
    cout << lastLen << endl;
    size_t firstLen;
    firstLen = strlen(firstName);
    cout << firstLen << endl;
    size_t midLen;
    midLen = strlen(midName);
    cout << midLen << endl;
    fullength = lastLen + firstLen + midLen;
    return 0;
}

Okay, what am I screwing up? Thank you
https://www.daniweb.com/programming/software-development/threads/112732/strings-this-time
Interactive Smile using React + GSAP

Integrating GSAP with your React application

What you'll be able to do

Installing GSAP

import React from 'react';
import { TweenMax, Power3 } from "gsap";

Create the component to be viewed in the return statement. Import the css for the component:

import './App.css';

Now create a reference to each element that we have. Inside useEffect, we are going to define the initial transition of circles from left and opacity using TweenMax. Now, add the click handler on a circle with a useState to expand and shrink on toggle. Define the GSAP animations inside those functions. And run your React app. Voila!!! You will get the desired output :)

Github URL for the source code:
https://aarushilal.medium.com/interactive-smile-using-react-gsap-ed339f2ed3ce
I met a trouble. I want to receive any collection of type ObservableCollection<T> and use it. For example, there is a class like the following:

public class Car{
    int num;
    string str;
}

ObservableCollection<Car> carOC = new ObservableCollection<Car>();

void showingProperties(ObservableCollection<T> coll)
{
    foreach (T item in coll){
        // showing item's property list
    }
}

showingProperties(carOC);

carOC has properties num of type Int32 and str of type string.

In fact your problem hasn't anything to do with an ObservableCollection<T>. You could do the same with any generic collection. Just to clarify, ObservableCollection<T> observes your list and items, so that it will give you information about changes in the list. It won't observe your class structure. It doesn't make much sense to do this on a list, because your T will exist several times in your list, but the information will be the same for each object. So I would recommend a method which gives you such information for a type. See it type based, not object based.

public string ShowProperties<T>() where T : class
{
    var props = typeof (T).GetProperties(BindingFlags.Instance | BindingFlags.Public);
    string typeInfo = typeof (T).FullName + Environment.NewLine;
    foreach (var prop in props)
    {
        typeInfo += prop.Name + " " + prop.PropertyType.FullName + Environment.NewLine;
    }
    return typeInfo;
}

If you have different items in your list because of inheritance, call this several times in your foreach loop. But be careful: reflection is slow if you use it in loops. Think about caching then.
https://codedump.io/share/h09kLZH0kkTY/1/receive-and-use-any-type-of-observablecollectionlttgt
How do I copy a file in Python? I couldn't find anything under os.

To copy a file in Python, use shutil:

from shutil import copyfile
copyfile(src, dst)

where src is the file whose content is copied and dst is the destination location, which must be writable, otherwise we will get an error. File handling is an important concept in Python: with file handling we can do several operations such as creating a file, reading a file, updating a file, deleting a file, and copying file content.

Use the shutil module. copyfile(src, dst) copies the contents of the file named src to a file named dst.
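A runnable sketch of the accepted approach (the file names below are made up for the demonstration):

```python
# Copy a file with shutil.copyfile: dst must be a writable file path,
# not a directory, otherwise the call raises an error.
import os
import shutil
import tempfile

src = os.path.join(tempfile.gettempdir(), "copy_demo_src.txt")
dst = os.path.join(tempfile.gettempdir(), "copy_demo_dst.txt")

with open(src, "w") as f:
    f.write("hello")

shutil.copyfile(src, dst)

with open(dst) as f:
    print(f.read())  # hello
```

shutil.copy() is a close alternative that also accepts a directory as the destination and preserves the file's permission bits.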
https://www.edureka.co/community/3509/how-do-i-copy-a-file-in-python
Logging and log level

Out of the box, your Seldon deployments will be pre-configured to a sane set of defaults when it comes to logging. These settings involve both the logging level and the structure of the log messages. These settings can be changed on a per-component basis.

Log level

By default, all the components in your Seldon deployment will come out of the box with INFO as the default log level. To change the log level you can use the SELDON_LOG_LEVEL environment variable. In general, this variable can be set to the following log levels (from more to less verbose): DEBUG, INFO, WARNING, ERROR.

Python inference servers

Note: Setting SELDON_LOG_LEVEL to WARNING and above in the Python wrapper will disable the server's access logs, which are considered INFO-level logs.

When using the Python wrapper (including the MLflow, SKLearn and XGBoost pre-packaged servers), you can control the log level using the SELDON_LOG_LEVEL environment variable. Note that the SELDON_LOG_LEVEL variable has to be set in the respective container within your inference graph. For example, to set it in each container running with the Python wrapper, you would do it as follows by adding the environment variable SELDON_LOG_LEVEL to the containers running images wrapped by the Python wrapper:

"spec": {
  // ...
  "predictors": [
    {
      "componentSpecs": [
        {
          "spec": {
            "containers": [
              {
                "name": "mymodel",
                "image": "x.y:123",
                "env": [
                  {
                    "name": "SELDON_LOG_LEVEL",
                    "value": "DEBUG"
                  }
                ]
              }
            ]
          }
        }
      ]
    }
  ]
  // ...
}

Once this has been set, it's possible to use the log in your wrapper code as follows:

import logging
log = logging.getLogger()
log.debug(...)

Log level in the service orchestrator

To change the log level in the service orchestrator, you can set the SELDON_LOG_LEVEL environment variable on the svcOrchSpec section of the SeldonDeployment CRD:

"spec": {
  // ...
  "predictors": [
    {
      "svcOrchSpec": {
        "env": [
          {
            "name": "SELDON_LOG_LEVEL",
            "value": "DEBUG"
          }
        ]
      }
    }
  ]
  // ...
}

Log format and sampling

By default, Seldon's service orchestrator and operator will serialise the log messages as JSON and will enable log sampling. This behaviour can be disabled by setting the SELDON_DEBUG variable to true. Note that this will enable "debug mode", which can also have other side effects. For example, to change this on the service orchestrator, you would do:

"spec": {
  // ...
  "predictors": [
    {
      "svcOrchSpec": {
        "env": [
          {
            "name": "SELDON_DEBUG",
            "value": "true"
          }
        ]
      }
    }
  ]
  // ...
}
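For the wrapper-code snippet above, here is a slightly fuller sketch of honouring SELDON_LOG_LEVEL from Python. The mapping from the variable name to a logging level is an assumption shown for illustration, not the wrapper's actual implementation:

```python
# Configure the root logger from the SELDON_LOG_LEVEL environment
# variable, falling back to INFO when it is unset or unrecognised.
import logging
import os

level_name = os.environ.get("SELDON_LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)

logging.basicConfig(level=level)
log = logging.getLogger(__name__)

log.debug("visible only when SELDON_LOG_LEVEL=DEBUG")
log.info("visible at INFO verbosity and below")
```

Remember that in a real deployment the variable must be set on the container in the inference graph, as shown in the SeldonDeployment snippet above, rather than on your local shell.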
https://docs.seldon.io/projects/seldon-core/en/latest/analytics/log_level.html
This article explains how to code a text to speech application with SAPI and the Microsoft Agent control. When you finish reading this article, I hope you will know the following:

The sample given above was created in Visual Studio .NET 2003. In order to run the sample, we need a set of APIs installed on our system. Requirements are:

The sample application given above converts the selected text to speech regardless of where the text is, and converts it into speech when a HotKey is pressed. To achieve this task, we need to do the following set of things:

Setting a HotKey for an application means registering the key with the application in order to get the messages from the operating system. For this, we use DllImports of user32.dll.

//API Imports
[DllImport("user32.dll", SetLastError = true)]
public static extern bool RegisterHotKey(
    IntPtr hWnd,              // handle to window
    int id,                   // hot key identifier
    KeyModifiers fsModifiers, // key-modifier options
    Keys vk                   // virtual-key code
);

[DllImport("user32.dll", SetLastError = true)]
public static extern bool UnregisterHotKey(
    IntPtr hWnd, // handle to window
    int id       // hot key identifier
);

This sample application registers the F9 key as the HotKey in the constructor of the application and unregisters it when disposing.

//For registering
//HOTKEY_ID may be any number unique to that application
bool bcheck = RegisterHotKey(Handle, HOTKEY_ID, KeyModifiers.None, Keys.F9);

//For unregistering
bool bcheck = UnregisterHotKey(Handle, HOTKEY_ID);

We are done with registering the HotKey. Now we need to handle the message from the operating system. For that, we override the method WndProc and check for the corresponding message received:

const int WM_HOTKEY = 0x312;

protected override void WndProc(ref Message msg)
{
    // Listen for operating system messages.
    switch (msg.Msg)
    {
        case WM_HOTKEY:
            // this is the block the app turns in if the HotKey has been pressed
            // Here we do whatever we need
            break;
    }
    base.WndProc(ref msg);
}

When the HotKey is pressed, we copy the selection by sending key events using SendKeys.SendWait("^(c)"); and get the text from the clipboard. Now we have the text which needs to be converted into speech.

In order to use the Speech API, we need to reference it in our application. This is done as given below. First we make a reference to the Microsoft Speech Library 5.0 as follows:

Now, we can find the reference of SpeechLib:

We have finished referencing the Speech library. Now we have to reference Microsoft Agent Control. We can add Microsoft Agent Control directly to the ToolBar by selecting Add/Remove Items from the menu and selecting the COM Component tab in the Customise Toolbox dialog. From that dialog, we select the Microsoft Agent Control as follows:

Now, we can just drag and drop the control from the toolbox to our form:

Then we need to use the Speech library and Microsoft Agent in our code. First we go with the Speech library and import SpeechLib:

using SpeechLib;

Then we create a Voice object:

Voice voice = new SpVoice();

Now, we need to make it talk. This is done as follows:

voice.Speak("Whatever it is", SpeechVoiceSpeakFlags.SVSFlagsAsync);

We make use of SVSFlagsAsync because we are going to use visemes in the sample. This will be explained later.

If we have installed different voices, then we can make use of them in our sample application. In order to list out all the available voices in the system, we do this:

foreach (ISpeechObjectToken t in voice.GetVoices("", ""))
{
    Console.WriteLine(t.GetAttribute("Name")); // I add it in a Combo
}

We can set the voices according to our preference as follows:

voice.Voice = voice.GetVoices("Name=" + VoiceCombo.Items[0].ToString(), "Language=409").Item(0);

Now we can also make use of Microsoft Agent and make the Agent speak for us. This is done as follows:

//You can load whatever agent you wish as per the availability
axAgent2.Characters.Load("Genie", (object)"C:/Speaker/chars/GENIE.acs");
Character = axAgent2.Characters["Genie"];
//To set the language to US English.
Character.LanguageID = 0x409;
Character.Show(null);
Character.Speak(txt, null);
Character.Hide();

Visemes are nothing but images with expressions. We have different kinds of expressions related to phonetics. We can have 13 images with different expressions related to phonetics for achieving this. For setting visemes in a SAPI application, we need to have 13 images expressing Silence (ae, aa, ao, ey, er, y, w, ow, aw, oy, ay, h, r, l, s, sh, th, f, d, k, p). Then, we need to set a viseme handler for Voice as follows:

voice.Viseme += new _ISpeechVoiceEvents_VisemeEventHandler(VisemeEvent);

The VisemeEvent method sets the different images for the pronounced words:

private void VisemeEvent(int StreamNo, object StreamPos, int duration,
    SpeechLib.SpeechVisemeType nextVisemetype,
    SpeechLib.SpeechVisemeFeature visemeFeature,
    SpeechLib.SpeechVisemeType currentVisemetype)
{
    // we have 22 viseme types
    int i = int.Parse(currentVisemetype.ToString().Replace("SVP_", ""));
    pictureBox1.Image = selectedList.Images[i];
}

I was not able to find perfect images for visemes. So I tried to create my own visemes. I hope I have covered the basic things about SAPI and Microsoft Agent Control. Please feel free to email me at beniton@gmail.com if you find any problems or have suggestions for this article. Thank you! Please don't forget to rate this article.
https://www.codeproject.com/Articles/14044/SAPI-with-Microsoft-Agent-and-Visemes-to-Explain-T?msg=1722546
, and then give an explanation of what I did and why. The following image shows that the Arduino is capable of showing a letter on the display. I made this by writing a library in C that handles the LED matrix when it's connected to particular I/O ports, and together with the library I wrote a main program that displays the "B":

#include <util/delay.h>
#include "ledmatrix.h" // The header of my library.

#define FRAME_RATE_HZ 50 // This feels sufficient
#define SUBFRAME_RATE_HZ (FRAME_RATE_HZ * N_SUBFRAMES) // N_SUBFRAMES is in ledmatrix.h
#define SUBFRAME_DELAY_US (1000000 / SUBFRAME_RATE_HZ)

int main(void)
{
    struct ledmatrix_frame fB = // struct ledmatrix is a 5-bytes struct holding a 7x5 frame.
        LEDMATRIX_FRAME_INIT( // A helper macro to write frames easily
            11110, // This structure contains a "B"
            10001, // 1 means LED is on
            10001, // 0 means LED is off
            11110, // they are transformed into binary numbers by "magic" in LEDMATRIX_FRAME_INIT()
            10001,
            10001,
            11110);
    ledmatrix_setup(); // Initializes ports and stuff.
    while(1)
    {
        ledmatrix_draw_next_subframe(&fB); // Turns on the LEDs of the next subframe.
        _delay_us(SUBFRAME_DELAY_US); // This loop+delay might be replaced by a periodic interrupt.
    }
    return 0;
}

This is the internal diagram of the component:

The elemental principle for each dot of the display is that if I drive row 1 high and drive column 1 low, I will light up the dot at coordinates (1,1), and if I drive row 1 low and column 1 low I will keep it off. In the tutorials that can be found online on how to drive a LED matrix, they usually scan the matrix by row or by column. I decided to scan by column, so I hooked one resistor for each row pin. The LED matrix datasheet indicates that each LED has a forward voltage of 2.1V at the typical operating current of 20mA.
I wanted to connect the LED matrix pins as orderly as possible in terms of Arduino pin numbers, so I started from pin 0 that corresponds to PORTD0 of the ATmega chip, but I had problems with pins 0 and 1 probably caused by the connection to the USB-serial converter mounted on the Arduino Uno. This is the final wiring of the Arduino to the component:

Note that the colours of the wires match the photo at the beginning of this post. This is a table that explicitly indicates the connection:

LED matrix - Arduino
COL1(PIN1) - PB0(PIN8)
COL2(PIN3) - PB1(PIN9)
COL3(PIN10) - PB2(PIN10)
COL4(PIN7) - PB3(PIN11)
COL5(PIN8) - PB4(PIN12)
ROW1(PIN12) - PB5(PIN13)
ROW2(PIN11) - PD7(PIN7)
ROW3(PIN2) - PD2(PIN2)
ROW4(PIN9) - PD3(PIN3)
ROW5(PIN4) - PD4(PIN4)
ROW6(PIN5) - PD5(PIN5)
ROW7(PIN6) - PD6(PIN6)

In many examples, when performing a column scan, the active column is driven low by a pin, while the others are not driven: the I/O of the Arduino are configured as high-Z input. In this way one can light up each dot of the column by driving high or low the corresponding pins. The problem with driving 7 dots at once is that the Arduino I/O that keeps the column pin low is sinking too much current: the ATmega datasheet indicates a maximum current of 40mA per pin, but each dot needs roughly 20mA, so we should at most light 2 dots. Some people suggest using a current sink IC to solve this problem, but I don't have such an IC yet in my inventory, so I decided to consider 2 dots at a time instead of a full column; the dots considered at one time is what is called "subframe" in the code. The following GIF shows the scan of a "B" with a column scan that considers at most 2 dots at a time:

LED matrix frames animation.

I have uploaded the code on Github; this is the header of the library that I wrote: ledmatrix.h

Basically the information of which dots to light is encoded in a 5-bytes structure, and there are macros to "draw" a frame with ones and zeros.
Then there’s a function , “ ledmatrix_draw_next_subframe()“, that is used to draw two dots and contains in itself the state of which dots are being considered next. With this library is it possible, for example, to have an alphabet of 128 ASCII characters in 640 bytes of table and display them one by one to spell a word. The library implementation instead is here: ledmatrix.c The PORTB and PORTD are used to manipulate the dots; it’s a little tricky to link the pin of the LED matrix to the pin of the port because they are not ordered, but it’s not that messy. By compiling with avr-gcc and -Os optimization, the library is little more than 200 bytes of code and data. I hope that my approach can be useful or inspiring. The benefits are that it doesn’t require additional ICs besides the ATmega and the LED matrix, and that it’s very compact in terms of code size and data. See also: Posted on 2014/11/15 0
https://balau82.wordpress.com/2014/11/15/drawing-on-a-7x5-led-matrix-with-arduino-in-c/
The data-basic package

Readme for data-basic-0.2.0.3

This is a guide on how to get started with basic. Each commit to this repository represents a single step of the guide. We'll cover the motivation and the general description of the library and how to use it, which consists of declaring your schema, generating the model and manipulating the data.

About

Basic is a database library with 4 main objectives, roughly prioritized from first to last.

Ease of use for the most common use cases

While SQL allows for a large number of ways to manipulate your data, people use a small subset of them in the majority of cases. We aim to make these cases as painless as possible. It should never feel like you need to jump through hoops to get a list of entities that satisfy a simple condition.

Type safety

The type level constraints should reflect your data constraints as best as possible. The library should never allow you to execute a query that doesn't make sense. Expressivity must never come before safety.

Flexibility

While we provide an escape hatch for writing raw queries to ensure you're never "stuck", the cases where it's needed should be few and far between. If there's a way to provide an elegant and safe solution for a specific query, the library should allow it.
Ability to debug

If you mess up and/or something goes wrong we want to make it as easy as possible to fix it. Legible runtime errors are a must. Also, while Haskell libraries (especially the very "type safe" ones) are notorious for hard to understand compiler errors, we use custom type errors to try and cover some of the standard sources of mistakes. We try to provide useful descriptions of what went wrong and tips on how to fix it.

Getting started

Prerequisites that are not covered by the tutorial:

- A PostgreSQL installation
- A database with a desired role configuration
- A new stack project (simple template) with a dependency on basic that builds

As a database-first library, your model will come from an SQL schema. If you have an existing database, you can use tools like pg_dump to get the schema out. If you're developing a new database, you will declare your tables and relations in an SQL file and then use it to initialize the database and generate the basic model.

Start off by creating a ./model/schema.sql file. We'll be creating a rudimentary blog so we'll need an author and a post table.

CREATE TABLE author (
  id int not null primary key,
  name text not null,
  registration_date timestamp not null
);

CREATE TABLE post (
  id int not null,
  name text not null,
  content text not null,
  creation_date timestamp not null,
  author int not null references author(id)
);

Load the schema into the database you created before, for example like this:

psql -U postgres -d basic_guide -f model/schema.sql

Now it's time to generate the model. Create a new module in your src directory. Call it Model.

{-# LANGUAGE TemplateHaskell, DataKinds, FlexibleInstances, TypeFamilies, MultiParamTypeClasses #-}
{-# LANGUAGE DeriveGeneric, FlexibleContexts #-}
module Model where

import Data.Basic

mkFromFile "./model/schema.sql"

Add it to your cabal file. This will generate all the declarations that you'll need to use the library.
You will import this module from every module that requires access to the DB. To see the generated declarations, you can open the repl with stack repl and type :browse Model.

Manipulating data

In your main module, import the Model and Data.Basic so we can start playing with data. Let's start by inserting some data. All basic functions work in any monad with a MonadEffect Basic instance. Practically, this means it integrates well with an mtl-style codebase.

sandbox :: MonadEffect Basic m => m ()
sandbox = return ()

For each table in your schema (say, author), basic will generate a new value called newAuthor. This represents a fresh table entry with no data. The underlying value is a record with one field for each column. Before going further, add dependencies to the time, mtl, postgresql-simple and lens libraries. Basic uses the lens approach extensively and setting the fields is no exception. Here's how you do it:

{-# LANGUAGE FlexibleContexts, OverloadedStrings #-}
module Main where

import Prelude hiding (id)
import Model
import Data.Basic (MonadEffect, Basic)
import Data.Time (getCurrentTime, getCurrentTimeZone, utcToLocalTime)
import Control.Monad.IO.Class (MonadIO(..))
import Data.Function ((&))
import Control.Lens ((.~))

sandbox :: (MonadEffect Basic m, MonadIO m) => m ()
sandbox = do
  zone <- liftIO getCurrentTimeZone
  now <- liftIO getCurrentTime
  newAuthor & id .~ 0
            & name .~ "John"
            & registrationDate .~ utcToLocalTime zone now
            & print
            & liftIO

main :: IO ()
main = putStrLn "hello world"

Let's unpack this a bit. We first do a bit of time boilerplate to get the current LocalTime. Then we use the lens syntax to set fields. If you haven't seen it before, the pattern is record & field .~ value. This returns a new record so we can continue chaining. The & operator is actually the flipped version of $ so we use it also to print the final value and lift that IO operation into our monad.
Now, there's no instance MonadEffect Basic IO because the regular old IO monad doesn't know anything about your database. To handle that constraint we use the handleBasicPsql function. It takes a database connection, which we can get using the connectPostgreSQL function provided by postgresql-simple. So what we do is handleBasicPsql conn sandbox and it will provide the database functionality that sandbox needs, taking care of that constraint and leaving only the MonadIO constraint, of which the IO monad is an instance. Here's the final version of our Main module.

{-# LANGUAGE FlexibleContexts, OverloadedStrings #-}
module Main where

import Prelude hiding (id)
import Model
import Data.Basic (MonadEffect, Basic, handleBasicPsql)
import Data.Time (getCurrentTime, getCurrentTimeZone, utcToLocalTime)
import Control.Monad.IO.Class (MonadIO(..))
import Data.Function ((&))
import Control.Lens ((.~))
import Database.PostgreSQL.Simple

sandbox :: (MonadEffect Basic m, MonadIO m) => m ()
sandbox = do
  zone <- liftIO getCurrentTimeZone
  now <- liftIO getCurrentTime
  newAuthor & id .~ 0
            & name .~ "John"
            & registrationDate .~ utcToLocalTime zone now
            & print
            & liftIO

main :: IO ()
main = do
  conn <- connectPostgreSQL "host=localhost port=5432 user=postgres dbname=basic_guide password=admin"
  handleBasicPsql conn sandbox

Make sure to modify the connection string. And guess what, we can actually run the thing now! Compile the project and run it, and you should see something like:

Entity {_getEntity = Author {_author_id = 0, _author_name = "John", _author_registration_date = 2017-06-09 23:11:16.0531469}}

You might complain that we didn't actually do anything with the database, and you're right, so let's actually insert this author in. How?

newAuthor & id .~ 0
          & name .~ "John"
          & registrationDate .~ utcToLocalTime zone now
          & insert
          & void

Add insert to the import list of Data.Basic and import Control.Monad (void). Doesn't get much simpler than that. Here's something cool.
What if we forgot to set one of the required fields? Turns out, in basic, a partially filled in entity is a legitimate value. You can use the lens syntax to look at its fields; you can even serialize it to JSON and back. The cool part is that you can't mess up by accessing an undefined field. Basic enforces this at the type level. Go ahead and try removing the & name .~ "John" line. When you try to compile it you should see something like:

* Can't insert entity because the required field "name" is not set

Querying data

Let's start by inserting a bunch of rows so we have something to work with.

sandbox :: (MonadEffect Basic m, MonadIO m) => m ()
sandbox = do
  zone <- liftIO getCurrentTimeZone
  now <- liftIO getCurrentTime
  let localNow = utcToLocalTime zone now
  newAuthor & id .~ 0 & name .~ "John" & registrationDate .~ localNow & insert & void
  newAuthor & id .~ 1 & name .~ "Mark" & registrationDate .~ localNow & insert & void
  newAuthor & id .~ 2 & name .~ "Steve" & registrationDate .~ localNow & insert & void
  newPost & id .~ 0 & author .~ 0 & creationDate .~ localNow & content .~ "ABC" & name .~ "ABC" & insert & void
  newPost & id .~ 1 & author .~ 1 & creationDate .~ localNow & content .~ "DEF" & name .~ "DEF" & insert & void
  newPost & id .~ 2 & author .~ 2 & creationDate .~ localNow & content .~ "GHI" & name .~ "GHI" & insert & void

While, to ensure type safety, the types of basic functions are a bit hard to parse, you can follow a simplified mental model. What you do is pretend that every table is just a list of entries. To manipulate the data you use the analog of standard list functions (mapping, filtering, folding...). Let's get a list of all posts.

posts <- allPosts -- Pretend `allPosts` has the type `[Post]`

How about all posts with an id less than 2?

posts <- dfilter (\p -> (p ^. id) <. (2 :: Int)) allPosts

^. is from lens; dfilter and <. are from Data.Basic.

What about sorting them by name? And maybe getting only the first one.

posts <- dtake 1 $ dsortOn (^. name) $ dfilter (\p -> (p ^. id) <. (2 :: Int)) allPosts

Some operations you can't do on lists, like deleting their content. Doing it in basic is easy. Filter the rows you want to delete and call ddelete on the whole thing.

void $ ddelete $ dfilter (\p -> p ^. id ==. (1 :: Int)) allPosts

You can also update the values using the lens syntax like this:

void $ dupdate (\p -> p & id .~ (2 :: Int)) $ dfilter (\p -> p ^. id ==. (1 :: Int)) allPosts

Want to find the highest id of all posts?

liftIO . print =<< dfoldMap (\p -> Max (p ^. id)) allPosts

Things get more interesting with joins and groups. To join two tables you use the djoin function at the start and then pretend you're working with a list of all possible pairs. For grouping, you use the dgroupOn function to choose which field(s) to group on. Pretend the signature is something like dgroupOn :: (a -> b) -> [a] -> [(b, [a])]. You can then either map over that and fold the inner list (dmap and dfoldMap) or use the dfoldMapInner convenience function with the pretend type signature dfoldMapInner :: Monoid m => (a -> m) -> [(b, [a])] -> [(b, m)].

res <- dfoldMapInner (\(p, a) -> Max (p ^. id))
     $ dgroupOn (\(p, a) -> a ^. id)
     $ dfilter (\(p, a) -> p ^. author ==. a ^. id)
     $ djoin allPosts allAuthors

Join all posts and authors, then filter the list so you're left with pairs (post, author of the post). Then group them by the author and, for each author, get its highest post id.

Entities also know if they're fresh or they came from the database. This lets you conveniently update single rows like this:

[a] <- dfilter (\a -> a ^. id ==. (0 :: Int)) allAuthors
a & name .~ "New name" & save & void

What it does is it looks for the database entry with the same primary key and updates all the other fields. This works if the table actually has a primary key.
http://hackage.haskell.org/package/data-basic
Hi all, my application requires me to launch tasks from within other tasks, like the following

    def a():
        # ... some computation ...

    def b():
        # ... some computation ...

    def c():
        client = get_client()
        a = client.submit(a)
        b = client.submit(b)
        [a, b] = client.gather([a, b])
        return a + b

    client = get_client()
    res = client.submit(c)

However, I would like to have access to the intermediate results a and b, but only c shows up in client.futures. Is there a way to tell dask to keep the results for a and b? Thank you

    task_states[task] = executor.submit(
        self.run_task,
        task=task,
        state=task_state,
        ... snipped ...
    )

Hi guys, I have an issue pulling a large table (it can't fit in memory) from the Azure database or another server; I need to divide that table into multiple CSVs. So I basically have no transformation except for dividing it into equal parts. I think Dask is the right tool I'm looking for? I tried many ways to make a simple connection to the SQL server, but I just can't

    import dask.dataframe as dd
    import sqlalchemy as sa

    engine = sa.create_engine('mssql+pyodbc://VM/Data?driver=SQL+Server+Native+Client+11.0')
    metadata = sa.MetaData()
    posts = sa.Table('posts', metadata, schema='dbo', autoload=True, autoload_with=engine)
    query = sa.select([posts])
    sql_reader = dd.read_sql_table('posts', uri=engine, npartitions=16, index_col='userId')

Any help with this?
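The first question (keeping handles to the intermediate futures a and b) can be illustrated with the standard library's concurrent.futures as a stand-in for the distributed client: if you create the subtask futures at the outer level and pass them in, the caller keeps references to the intermediate results instead of having them created and gathered entirely inside c. This is only a sketch of the pattern, not dask-specific API.

```python
from concurrent.futures import ThreadPoolExecutor

def a():
    return 1

def b():
    return 2

def c(fut_a, fut_b):
    # c consumes the futures but did not create them, so the caller
    # still holds handles to the intermediate results.
    return fut_a.result() + fut_b.result()

with ThreadPoolExecutor(max_workers=3) as pool:
    fa = pool.submit(a)
    fb = pool.submit(b)
    fc = pool.submit(c, fa, fb)
    print(fa.result(), fb.result(), fc.result())  # 1 2 3
```

With dask.distributed the same idea should apply: futures submitted from (and referenced by) the top-level client stay visible, whereas futures created inside a task and gathered there are released when the task finishes.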
https://gitter.im/dask/dask?at=5d8a565d8521b34d918403cd
GarethJ's WebLog - Code generation and abstraction

By default, DSL Tools V1 uses domain properties of type "string" as names for its elements. If you try to use a property of some other type, for example Int32, you'll get the following validation message: "Error 1 Domain Property Name is marked IsName, but has type System.Int32. Unless it has an ElementNameProvider, the type of a Name property should be System.String. C:\dev\test\Language2\Dsl\DslDefinition.dsl 0 1 Dsl". Unfortunately, this is something of a misleading error message, as actually its type MUST be string, regardless of whether it has an ElementNameProvider specified. We don't support IsName=true properties of any other type.

You can, however, work around this limitation by having a second property of whatever type you like, from which your string Name property is calculated. Let's say you want to use an Int32 to identify elements by number. Here's a model fragment of an amended version of the MinimalLanguageSample:

I've added a second domain property "Number" to the ExampleElement class. I've also set IsBrowsable to false for the "Name" property, as we don't want end users to see that there is anything other than a "Number" for a given ExampleElement. Unfortunately, we can't mark the "Name" property Internal to really hide it from the API as well, due to a V1 limitation.

I want the XML to look nice, and also to hide the existence of "Name" from the XML; however, I also have to use a property of type string for the XML moniker key that is used to cross-reference elements inside the file. To fake this, I've changed the XmlName for "Name" to be "number", set the "Number" property to have an XmlName of "number2", and set its Representation to ignore so it is never serialized. I'm not able to choose to just serialize the "Number" property and ignore "Name" because, reasonably enough, the property identified as the moniker key must be serialized.

Next, I set the Kind of the Name property to be Calculated.
This means I'll have to write custom code to provide the value of the property. I'm just going to implement it by converting the Number property to a string:

    partial class ExampleElement
    {
        private string GetNameValue()
        {
            return this.Number.ToString();
        }
    }

Finally, I have to give any new instances of ExampleElement a sensible number when they are added to the diagram. To do this, I need to use a custom ElementNameProvider:

    internal class NumberProvider : Microsoft.VisualStudio.Modeling.ElementNameProvider
    {
        protected override void SetUniqueNameCore(Microsoft.VisualStudio.Modeling.ModelElement element, string baseName, IDictionary<string, Microsoft.VisualStudio.Modeling.ModelElement> siblingNames)
        {
            ExampleElement example = element as ExampleElement;
            if (example != null)
            {
                example.Number = siblingNames.Count + 1;
            }
        }
    }

This code is called when a name needs to be given to an element. I'm just using the number of existing siblings to give me a sensible initial number. To hook this class into the model, you have to tell your DSL about it by adding an external type with the matching name and namespace. Then you need to select the "Name" property on ExampleElement, and in its properties you should now be able to pick NumberProvider as its ElementNameProvider. You'll need to re-transform to regenerate your code, of course.

Now when a new ExampleElement is added, the NumberProvider should get called, which will set the new ExampleElement's "Number" property to a value one more than the count of siblings. The "Name" of the new ExampleElement will then be calculated from its Number and saved to the XML file with the tag "number".

There's one flaw in this. Because the property we're actually saving is the "Name" property (albeit with the tag "number"), which is a Calculated property, when the file is read back in the value is simply thrown away, on the assumption that it can be recalculated.
What we really want is for the number tag to cause the "Number" property to get populated. To do this, we'll need to switch the "Name" property from Kind=Calculated to Kind=CustomStorage. This means we have to have custom code for both getting and setting the value, and in the setter we can parse the number and store it in the "Number" property. Effectively, the "Number" property is providing backing store for the "Name" property. Here's a simplified version of that with no error checking or globalization:

    private void SetNameValue(string newValue)
    {
        if (!this.Store.InUndoRedoOrRollback)
        {
            this.Number = Int32.Parse(newValue);
        }
    }

The check for InUndoRedoOrRollback is necessary because an undo operation will perform a blanket overwrite of the values of both "Number" and "Name" with their previous values, making the parse unnecessary.

Phew! I think it's fair to say this is a little overcomplicated. We'll review the scenarios here for a future release.
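The Calculated versus CustomStorage distinction maps onto a familiar pattern in most languages: a derived property whose getter computes its value from a backing field, and whose setter parses the value back into that field. Here is a language-neutral sketch of that idea in Python (the DSL Tools machinery itself is C#; the names below are illustrative only, not part of the DSL Tools API):

```python
class ExampleElement:
    def __init__(self, number: int = 0):
        self.number = number  # the real backing store (the "Number" property)

    @property
    def name(self) -> str:
        # "Calculated": the getter derives Name from Number.
        return str(self.number)

    @name.setter
    def name(self, new_value: str) -> None:
        # "CustomStorage": the setter parses Name back into Number,
        # so a round-trip through serialization repopulates the backing field.
        self.number = int(new_value)

e = ExampleElement()
e.name = "42"    # setter stores 42 in e.number
print(e.name)    # "42", recomputed from the backing store
```

The round-trip is the whole point: with only a getter (Calculated), a deserialized value would be discarded, exactly the flaw described above.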
http://blogs.msdn.com/garethj/archive/2006/09/29/777204.aspx
11 November 2011 10:03 [Source: ICIS news]

Correction: In the ICIS story headlined "China’s polyester producers cut rates, shut units on weak demand" dated 11 November 2011, please read in the seventh paragraph ... Also at Jiangyin ... instead of ... Also at Wujiang.... A corrected story, with recast fourth paragraph, follows.

SINGAPORE (ICIS)--Some polyester producers in China have cut operating rates and shut units on weak demand. In addition, declining exports of textile products caused producers to shut or plan shutdowns at their units. The export of textile products decreased from 26bn in July to 20bn in October, according to China Customs data. Some producers lowered the operating rates at their plants to 75% of capacity in early November from above 80% in September.

Hengli Group shut its 200,000 tonne/year polyester filament yarn (PFY) line at Wujiang on 31 October for maintenance that will take 20 days, a company source said.

Also at Jiangyin, Jiangsu Sanfangxiang Group is planning to shut one of its 12 PSF lines in late November, a company source said. The line has a capacity of 200,000 tonnes/year and the company has not yet confirmed when it will restart the line, the source added.

The turnarounds will take 40 and 20 days respectively, they added.

In addition, Huaxin Group is planning to shut its 360,000 tonne/year PFY line at Xiaoshan on 15 December for maintenance that will take 45 days, a company
http://www.icis.com/Articles/2011/11/11/9507366/corrected-chinas-polyester-producers-cut-rates-shut-units-on-weak-demand.html
On Wed, Oct 20, 2010 at 01:44:56PM +0200, arnaud champion devatom fr wrote:
> Hi,
>
> here is a new patch. It proposes to separate types of function by classes. For example:
>
> all virConnect[*] (virConnectOpen, virConnectNumOfDomains, etc...) functions are in the virConnect class.
> all virDomain[*] (virDomainCreate, virDomainDestroy, etc...) functions are in the virDomain class.
>
> so we have these classes now:
>
> virConnect
> virDomain
> virEvent
> virInterface
> virLibrary
> virNetwork
> virNode
> virSecret
> virStoragePool
> virStorageVol
> virStream

IMHO

    namespace LibvirtBindings
    {
        public class virDomain
        {
            ....

is somewhat redundant, and better named

    namespace Libvirt
    {
        public class Domain
        {
            ....

The 'vir' prefix on C library APIs/structs is just a hack because C does not have any concept of namespaces. It shouldn't be copied into languages which do have proper namespaces :|
https://www.redhat.com/archives/libvir-list/2010-October/msg00688.html
Python writes web crawler scripts and implements APScheduler scheduling

2020-04-02 13:52:26 - OfStack

Some time ago I taught myself Python, and as a novice I thought I should write something as practice. I learned that it is very convenient to write crawler scripts in Python. The requirements of the program are as follows: the page crawled is the e-book page of jd.com, where some free e-books are updated every day. The crawler sends me the titles of the free books updated each day by email as soon as possible, notifying me to download them.

I. Design ideas

1. A crawler script obtains the free book information for the day.
2. The obtained book information is compared with the existing information in the database; if a book already exists nothing is done, otherwise it is inserted.
3. When performing the database insert operation, the updated data is sent out as an email.
4. The APScheduler scheduling framework is used to schedule the Python script.

II. Main knowledge used in the script

1. A simple Python crawler

The module used this time to grab pages is urllib2; the imports are as follows:

    import urllib2
    from sgmllib import SGMLParser

The urlopen() method gets the HTML source code of the web page, which is stored in content. The ListHref() class parses the HTML code, handling the semi-structured HTML document:

    content = urllib2.urlopen('...').read()
    listhref = ListHref()
    listhref.feed(content)

The ListHref() class code can be found in the complete code below; here are the key points.

The ListHref() class inherits from the SGMLParser class and overrides its internal methods. SGMLParser breaks HTML into useful pieces, such as the start and end tags. Once a piece of data has been successfully decomposed into a useful fragment, it calls an internal method based on the data it finds. To use this parser, you need to subclass the SGMLParser class and override the methods of the parent class.
SGMLParser parses HTML into different types of data and tags, then calls a separate method for each type:

Start tag (start_tag)

An HTML tag that starts a block, like <html>, <head>, <body> or <pre>, or a standalone tag like <br> or <img>. In this example, when a start tag <a> is found, SGMLParser looks for a method named start_a or do_a. If found, SGMLParser calls the method with the tag's attribute list; otherwise, it calls the unknown_starttag method with the name of the tag and the list of attributes.

End tag (end_tag)

An HTML tag that ends a block, like </html>, </head>, </body> or </pre>. In this case, when an end tag is found, SGMLParser looks for a method called end_a. If found, SGMLParser calls this method; otherwise it calls unknown_endtag with the name of the tag.

Text data

A block of text; handle_data is called with the text when no other kind of markup matches.

The following categories are not used in this article:

Character reference

An escape character represented by the decimal or hexadecimal equivalent of the character; when found, SGMLParser calls handle_charref with the character.

Entity reference

An HTML entity, like &ref;. When one is found, SGMLParser calls handle_entityref with the name of the entity.

Comment

An HTML comment, enclosed in <!-- ... -->. When found, SGMLParser calls handle_comment with the comment content.

Processing instruction

An HTML processing instruction, enclosed in <? ... >. When found, SGMLParser calls handle_pi with the instruction content.

Declaration

An HTML declaration, such as DOCTYPE, enclosed in <! ... >. When found, SGMLParser calls handle_decl with the declared content.
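Note that sgmllib was removed in Python 3; the same subclass-and-override pattern survives in the standard library's html.parser.HTMLParser. Here is a rough Python 3 analogue of the ListHref idea: collect the href of every <a> tag whose link text matches a marker string (the marker and sample HTML below are illustrative, not jd.com's actual markup):

```python
from html.parser import HTMLParser

class ListHref(HTMLParser):
    """Collect hrefs of <a> tags whose text equals a marker string."""
    def __init__(self, marker):
        super().__init__()
        self.marker = marker
        self.in_a = False
        self.current_href = None
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_a = True
            self.current_href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_a = False

    def handle_data(self, data):
        if self.in_a and data.strip() == self.marker and self.current_href:
            self.hrefs.append(self.current_href)

parser = ListHref("free")
parser.feed('<p><a href="/book/1">free</a> <a href="/book/2">paid</a></p>')
print(parser.hrefs)  # ['/book/1']
```

The callbacks play the same roles as start_a, end_a and handle_data in the sgmllib version below.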
2. Operating MongoDB from Python

First, install the Python driver for MongoDB, PyMongo.

Import the module:

    import pymongo

Connect to the database server 127.0.0.1 and switch to the database mydatabase:

    mongoCon = pymongo.Connection(host="127.0.0.1", port=27017)
    db = mongoCon.mydatabase

Find the related book information; book is the collection being searched:

    bookInfo = db.book.find_one({"href": bookItem.href})

Insert the book information into the database. Python supports Chinese, but encoding and decoding Chinese is still somewhat involved:

    b = {
        "bookname": bookItem.bookname.decode('gbk').encode('utf8'),
        "href": bookItem.href,
        "date": bookItem.date
    }
    db.book.insert(b, safe=True)

For more about PyMongo, please refer to its API documentation.

3. Sending mail from Python

Import the mail modules:

    # Import smtplib for the actual sending function
    import smtplib
    from email.mime.text import MIMEText

"localhost" is the mail server address.

    MSG = MIMEText(context)                    # text message content
    MSG['Subject'] = sub                       # subject
    MSG['From'] = "my@vmail.cn"                # sender
    MSG['To'] = commaspace.join(mailto_list)   # recipient list

4. The APScheduler scheduling framework

Installation method: after downloading, unzip, then run python setup.py install, and import the module:

    from apscheduler.scheduler import Scheduler

The APScheduler configuration is relatively simple. This example only uses the add_interval_job method, executing the task at an interval of 30 minutes:

    # Start the scheduler
    sched = Scheduler()
    sched.daemonic = False
    sched.add_interval_job(job, minutes=30)
    sched.start()

About the daemonic parameter: apscheduler creates a thread that is daemon=True by default, which means it is a daemon thread. In the above code, the script will not run on time without adding sched.daemonic = False.
If the script does not set sched.daemonic = False, it creates a daemon thread. An instance of the scheduler is created, but because the script runs so fast, the main thread ends immediately, and the thread timing the task ends before it has time to execute (this is determined by the relationship between daemon threads and the main thread). For the script to work properly, you must make the scheduler thread a non-daemon thread:

    sched.daemonic = False

Attachment: the complete script

    #-*- coding: UTF-8 -*-
    import urllib2
    from sgmllib import SGMLParser
    import pymongo
    import time
    # Import smtplib for the actual sending function
    import smtplib
    from email.mime.text import MIMEText
    from apscheduler.scheduler import Scheduler

    # get free book hrefs
    class ListHref(SGMLParser):
        def __init__(self):
            SGMLParser.__init__(self)
            self.is_a = ""
            self.name = []
            self.freehref = ""
            self.hrefs = []
        def start_a(self, attrs):
            self.is_a = 1
            href = [v for k, v in attrs if k == "href"]
            self.freehref = href[0]
        def end_a(self):
            self.is_a = ""
        def handle_data(self, text):
            if self.is_a == 1 and text.decode('utf8').encode('gbk') == " Limited-time free ":
                self.hrefs.append(self.freehref)

    # get free book info
    class FreeBook(SGMLParser):
        def __init__(self):
            SGMLParser.__init__(self)
            self.is_title = ""
            self.name = ""
        def start_title(self, attrs):
            self.is_title = 1
        def end_title(self):
            self.is_title = ""
        def handle_data(self, text):
            if self.is_title == 1:
                self.name = text

    # Mongo store module
    class freeBookMod:
        def __init__(self, date, bookname, href):
            self.date = date
            self.bookname = bookname
            self.href = href

    def get_book(bookList):
        content = urllib2.urlopen('...').read()
        listhref = ListHref()
        listhref.feed(content)
        for href in listhref.hrefs:
            content = urllib2.urlopen(str(href)).read()
            listbook = FreeBook()
            listbook.feed(content)
            name = listbook.name
            n = name.index(' " ')
            #print (name[0:n+2])
            freebook = freeBookMod(time.strftime('%Y-%m-%d', time.localtime(time.time())), name[0:n+2], href)
            bookList.append(freebook)
        return bookList

    def record_book(bookList, context, isSendMail):
        # database operations
        mongoCon = pymongo.Connection(host="127.0.0.1", port=27017)
        db = mongoCon.mydatabase
        for bookItem in bookList:
            bookInfo = db.book.find_one({"href": bookItem.href})
            if not bookInfo:
                b = {
                    "bookname": bookItem.bookname.decode('gbk').encode('utf8'),
                    "href": bookItem.href,
                    "date": bookItem.date
                }
                db.book.insert(b, safe=True)
                isSendMail = True
                context = context + bookItem.bookname.decode('gbk').encode('utf8') + ','
        return context, isSendMail

    # send message via the local mail server (see section 3 above)
    def send_mail(to_list, sub, context):
        msg = MIMEText(context)
        msg['Subject'] = sub
        msg['From'] = "my@vmail.cn"
        msg['To'] = ",".join(to_list)
        s = smtplib.SMTP('localhost')
        s.sendmail(msg['From'], to_list, msg.as_string())
        s.quit()

    # main job for the scheduler
    def job():
        bookList = []
        isSendMail = False
        context = "Today free books are"
        mailto_list = ["mailto@mail.cn"]
        bookList = get_book(bookList)
        context, isSendMail = record_book(bookList, context, isSendMail)
        if isSendMail == True:
            send_mail(mailto_list, "Free Book is Update", context)

    if __name__ == "__main__":
        # Start the scheduler
        sched = Scheduler()
        sched.daemonic = False
        sched.add_interval_job(job, minutes=30)
        sched.start()
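The daemon-thread pitfall described above is easy to demonstrate with the standard threading module, no APScheduler required: a daemon thread is killed when the main thread exits, while a non-daemon thread keeps the process alive until it finishes. A minimal sketch:

```python
import threading
import time

results = []

def task():
    time.sleep(0.1)  # pretend this is the scheduled job
    results.append("ran")

# A daemon thread (daemon=True) would be killed as soon as the main
# thread exited; a non-daemon thread keeps the process alive until done.
t = threading.Thread(target=task, daemon=False)
t.start()
t.join()  # explicit join for the demo; without it, daemon=False alone
          # would still let the thread finish before the process exits
print(results)  # ['ran']
```

sched.daemonic = False plays exactly this role for the scheduler's internal thread.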
https://ofstack.com/python/11050/python-writes-web-crawler-scripts-and-implements-apscheduler-scheduling.html
Lab 07: Smash, Smash, Smash … booo army!!!!!

Part 1: Don't Smash the Cafe, Babe! (2 points)

Description

- Use the shell code to exploit the vulnerable program.

Preamble

- The assignment can be analyzed on your local VM but must be exploited on the course VM to retrieve the secret message. It must be completed on the clone-1 VM for the course, accessible here:

    ssh -p 2201 saddleback.academy.usna.edu

Instructions

- Your task is to retrieve the secret message by exploiting the program below using the shell code provided. The vulnerable program can be found at this path on the clone-1 VM:

    ~aviv/labs/7.1/vulnerable

The source code for vulnerable is below:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void foo(char *s){
      int i = 0xcafebabe;
      char buf[0xbf];
      strcpy(buf,s);
      if(i != 0xcafebabe){
        printf("Danger! I'm out-of-here!\n");
        exit(1);
      }
    }

    int main(int argc, char * argv[]){
      if ( argc < 2){
        printf("I pitty the fool who doesn't give me at least one argument!\n");
        exit(2);
      }
      foo(argv[1]);
      printf("Go Army!\n");
      exit(0);
    }

The provided shell code will call a function called secret that will display the secret message.

    SECTION .text
    global _start
    _start:
            jmp callback
    dowork:
            pop esi
            xor ecx,ecx
            mul ecx
            mov al,0xb
            mov ebx, esi
            int 0x80
    callback:
            call dowork
            db "/home/aviv/las/7.1/secret",0x0

- The permissions on secret are such that only by exploiting the vulnerable program may you run secret and reveal the secret message.
- You MAY NOT change the shell code in any way, but you will need to compile and hexify it.

Submission

- You must submit secret.txt, which contains the secret message. The best way to store this is as follows:

    ~aviv/labs/7.1/vulnerable [EXPLOIT STRING] > secret.txt

Hints

- None. You got this one.

Part 2: Smashable Pointers are Dangerous! (4 points + 1) (REQUIRED!)

Description

- Launch a shell by exploiting the vulnerable program. This will give you access to an rsa private key, enabling you to ssh into another VM where a secret message is stored.
- Retrieve the secret message.
- This lab is required! Bonus point upon completion, i.e., get 5/4 points for doing it.

Preamble

The assignment must be completed on the clone-1 VM, accessible via:

    ssh -p 2201 saddleback.academy.usna.edu

Instructions

- Your task is to exploit the program below such that you can launch a shell on the clone-1 VM, ssh into clone-2 given access to a private-key file, and read the secret file on the clone-2 VM. The vulnerable program can be found here on the clone-1 VM:

    ~aviv/labs/7.2/vulnerable

There is an rsa public/private key pair that you will have permission to access once you launch a shell. It is found here:

    ~aviv/labs/7.2/vulnerbale/lab7-2_id_rsa

Using that key, you can then ssh into the clone-2 VM like so:

    ssh -p 2202 -i lab7-2_id_rsa lab7-2@saddleback.academy.usna.edu

On the clone-2 VM, as the lab7-2 user, you'll find the secret message at this path:

    ~lab-7.2/secret.txt

The exploitable program, vulnerable, has the following source code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void foo(char *s){
      char * p;
      char buf[300];
      for(p=buf; *s; p++,s++){
        *p = *s;
      }
      return;
    }

    int main(int argc, char * argv[]){
      if ( argc < 2){
        printf("I pitty the fool who doesn't give me at least one argument!\n");
        exit(2);
      }
      foo(argv[1]);
      printf("Go Army!\n");
      exit(0);
    }

Submission

- You must submit a secret.txt containing the secret message.

Hints

- Consider what happens when you smash the pointer p: once you overwrite that value, you can get it to write almost anywhere … such as the return address.
- But don't overwrite the address of your exploit string!
- You'll need to use a different shell code than the jmp-callback one.
- ssh likes its keys to only have user read permission.

Part 3: Smashing Standard Input (3 points)

Description

- Exploit the program to reveal the secret message.
Preamble

The assignment must be completed on the clone-1 VM, accessible here:

    ssh -p 2201 saddleback.academy.usna.edu

gitlab repository

Instructions

On the clone-1 VM, you will find a vulnerable program called vulnerable at the following path:

    ~aviv/labs/7.3/vulnerable

The secret file can be found here:

    ~aviv/labs/7.3/secret.txt

The source code for the vulnerable program is below:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void foo(){
      char buf[0x10];
      printf("%s", "Say something spirtiful!\n");
      scanf("%s",buf);
    }

    int main(int argc, char * argv[]){
      foo();
      printf("Go Army!\n");
      exit(0);
    }

- Develop an exploit to reveal the secret message.

Submission

- You must submit three files:
  - secret.txt: the secret message
  - shell.asm: asm-formatted version of the shell code that you used
  - description.txt: a text file describing how you accomplished this task

Hints

- You might find that scanf() doesn't properly read your shell code because of format reading, so come up with some creative ways to handle that.
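A common way to assemble exploit strings for labs like these is a small Python helper that concatenates a NOP sled, the shellcode, and a little-endian return address. The sketch below only builds bytes; the offset and address values are made-up placeholders for illustration, not the actual values for these binaries (those you would recover with gdb):

```python
import struct

def build_payload(shellcode: bytes, offset: int, ret_addr: int, nop=b"\x90") -> bytes:
    """Pad out to `offset` bytes (NOP sled + shellcode), then append
    the 32-bit return address in little-endian byte order."""
    if len(shellcode) > offset:
        raise ValueError("shellcode longer than the available buffer")
    sled = nop * (offset - len(shellcode))
    return sled + shellcode + struct.pack("<I", ret_addr)

# Hypothetical numbers for illustration only: a 0xbf-byte buffer plus a
# few bytes of padding before the saved return address.
payload = build_payload(b"\xcc" * 20, offset=0xbf + 9, ret_addr=0xbffff6c0)
print(len(payload))  # 0xbf + 9 + 4 = 204 bytes
```

From a shell you would then pass the payload as the argument (Part 1 and 2) or on standard input (Part 3), e.g. via python -c or a file.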
https://www.usna.edu/Users/cs/aviv/classes/si485h/s17/lab/07/lab.html
public class TestWeakHashMap {
    private String str1 = new String("newString1");
    private String str2 = "literalString2";
    private String str3 = "literalString3";
    private String str4 = new String("newString4");
    private Map map = new WeakHashMap();

    private void testGC() throws IOException {
        map.put(str1, new Object());
        map.put(str2, new Object());
        map.put(str3, new Object());
        map.put(str4, new Object());
        /**
         * Discard the strong reference to all the keys
         */
        str1 = null;
        str2 = null;
        str3 = null;
        str4 = null;
        while (true) {
            System.gc();
            /**
             * Verify Full GC with the -verbose:gc option
             * We expect the map to be emptied as the strong references to
             * all the keys are discarded.
             */
            System.out.println("map.size(); = " + map.size() + " " + map);
        }
    }
}

Look at the way the four Strings are initialized. Two of them are defined using the 'new' operator, whereas the other two are defined as literals. The Strings defined using the 'new' operator are allocated in the Java heap, but the Strings defined as literals live in the literal pool. Strings allocated in the literal pool (Perm Space) are never garbage collected. This means that the Strings 'str2' and 'str3' are always strongly referenced, and the corresponding entries are never removed from the WeakHashMap. So next time you create a 'new String()', ...
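Python's weakref.WeakKeyDictionary behaves like Java's WeakHashMap, and it has an analogous caveat: CPython keeps interned strings alive much like Java's literal pool, and built-in str cannot be weakly referenced at all, so the demo below uses a plain user-defined class for keys. A sketch of the "entry disappears when the key's last strong reference goes away" behavior:

```python
import gc
import weakref

class Key:
    """Plain object usable as a weakly-referenced key (built-in str is not)."""
    def __init__(self, name):
        self.name = name

cache = weakref.WeakKeyDictionary()
k1, k2 = Key("newString1"), Key("newString4")
cache[k1] = object()
cache[k2] = object()
print(len(cache))  # 2

del k1         # drop the only strong reference to the first key
gc.collect()   # in CPython, refcounting alone already removes the entry
print(len(cache))  # 1: the entry for k1 vanished with its key
```

Just as in the Java example, a key that stays strongly referenced elsewhere (here k2; there the literal-pool strings) keeps its entry alive indefinitely.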
http://thoughts.bharathganesh.com/2008/06/interesting-leak.html?showComment=1533980804807
Spotlight. Tiger, however, gives you full access to the technologies that power Spotlight: there are APIs to call, plugins to load, and data to mine in your applications. No restrictions. No limits. This article shows you how Spotlight works, how to programmatically query the Spotlight Store, and how to create your own file format importers. As you can see, there is quite a bit of ground to cover. First, however, let's start out by defining what meta-data is.

Some kinds of meta-data, such as file modification dates, ownership, and access permissions, are kept external to the file by the file system and have been accessible via a variety of mechanisms. But the most interesting kinds of metadata are found inside the file. For example, digital cameras embed all sorts of data, such as exposure information and whether a flash was used, into the image files that they produce. As well, files written by most applications, including Adobe Photoshop and Microsoft Word, contain quite a bit of meta-data. Until now, this data has been buried in individual files, which has made it hard to work with and to search against. Spotlight gathers all of this information into the Spotlight Store, allowing for quick, easy, and effective searches.

The Spotlight Store is a file system-level database that holds all of the meta-data attributes about the files, as well as an index of their contents, on a file system. As each file is created, copied, updated, or deleted, Spotlight will ensure that both the content index and the meta-data store entries for that file are updated. The content index is built using an evolved and optimized version of the Search Kit technologies that were introduced with Mac OS X 10.3 Panther. And by optimized, we don't mean that it's a little bit faster. No way. Search Kit in Tiger is three times faster at indexing content and up to 20 times faster at incremental searching than in Panther.

Notice that these keys are abstract rather than the name of a key in a particular format.
This is because different file formats might express the same meta-data using different terms. The normalization of terms into a single namespace simplifies creating constrained searches. Tiger ships with a large number of keys defined to handle a variety of meta-data types.

One more thing to note about the Spotlight Store: there is one content index and one meta-data store per file system. This keeps the content indexes and meta-data stores with the files they belong to, which is crucial when using external FireWire drives that travel from Mac to Mac.

Now that you know how Spotlight stores meta-data and content indexes for files, let's look at how to access that information programmatically. The easiest way to take a look at a file's meta-data is to simply create an MDItem object using a file's path. To do this in a program using the CoreServices framework, you could use the following code:

    CFStringRef path = CFSTR("/Users/erika/Pictures/vacation.jpg");
    MDItemRef item = MDItemCreate(kCFAllocatorDefault, path);

To get a list of the attribute names:

    CFArrayRef attributeNames = MDItemCopyAttributeNames(item);

Then, to get a particular attribute:

    CFTypeRef ref = MDItemCopyAttribute(item, attributeName);

As you can see, an MDItem is a simple wrapper around a file's meta-data attributes and is accessed much the same way as any dictionary. But if this were all there is to Spotlight, there wouldn't be that much to talk about. The magic is in being able to query Spotlight for all the files that match a set of conditions. The ability to create queries, and get a list of files in response to those queries, is what allows Spotlight to transcend the typical behavior of a file system and enables you to build a totally new category of applications.

When you build a query, there are three things you can base your search on: A query is built using a simple language that uses C-like expressions.
For example, a query to search all files with the keyword "Tiger" would be written as follows:

    kMDItemKeywords == "*Tiger*"

In a program, once again using the CoreServices framework, this query could be constructed using the following code:

    MDQueryRef query;
    query = MDQueryCreate(kCFAllocatorDefault,
                          CFSTR("kMDItemKeywords == '*Tiger*'"),
                          NULL, NULL);

Then, to start the query running:

    MDQueryExecute(query, kMDQueryWantsUpdates);

Once the query has been run, you can read the results:

    CFIndex count = MDQueryGetResultCount(query);
    for (i = 0; i < count; i++) {
        MDItemRef item = MDQueryGetResultAtIndex(query, i);
    }

Queries can be run either in one-shot mode (shown above) or as live queries that work with run loops. Live queries are useful when you have a need to monitor the file system over time. As new files are saved that match the query, your code can be called allowing you to act on the new information.

We showed a very simple query above. To give you an idea of the kinds of queries that you could build, here's a more complex query:

    ((kMDItemTextContent = "Tiger*"cd)) && (kMDItemLastUsedDate >= $time.yesterday) && (kMDItemContentType != com.apple.email.emlx) && (kMDItemContentType != public.vcard)

This query will match all files that have the word "Tiger" in their content and were used in the last day but which aren't an email message or a contact in the Address Book. And, if that's not enough, even more complex queries are possible that use grouping and sorting.

One of the best ways to find examples of complex queries is to use the Finder. Build a query using the Finder's Find feature and then save it. Then, navigate to the Saved Searches folder in your Home folder. You'll see the saved search as a Smart Folder. Get Info about the folder and you'll see the query nicely listed for you to examine.

Tiger ships with importers for a variety of common file formats as well as all the important file formats used by Apple's applications such as iTunes and the Address Book.
A partial list of file formats includes:

If your application, however, uses its own file format or an unsupported file format, Spotlight will need a little bit of help in order to understand it. To give Spotlight this help, you can provide a meta-data importer plug-in with your application that understands the ins and outs of your file formats.

There are three primary steps to creating a meta-data importer plug-in:

A GUUID is a 128-bit value guaranteed to be unique. Spotlight uses it to identify its various file system meta-data importer plug-ins. To define a GUUID, use the uuidgen command on the command line:

    $ uuidgen
    09B33E82-226B-11D9-9B1C-000D932ED97A

You will find a project for building meta-data plug-ins in Xcode's New Project dialog box under "Standard Apple Plugins". Once you've created the new project, you'll need to edit the following keys in the Info.plist:

Next, define the GUUID in your code with the following:

    #define PLUGIN_ID "09B33E82-226B-11D9-9B1C-000D932ED97A"

The last step is to actually write the code. The method prototype is:

    Boolean GetMetadataForFile(void *thisInterface,
                               CFMutableDictionaryRef attributes,
                               CFStringRef contentTypeUTI,
                               CFStringRef path)
    {
        /* do the actual work of pulling meta data from the file */
        return TRUE;
    }

In this method, you should open the file at the given path and extract the meta-data from it. Next, set the meta-data attribute values and keys into the given attributes dictionary. And then finally, return TRUE if successful or FALSE if no data was provided.

Once the meta-data plug-in is built and has been tested, you can make it available for Spotlight's use by putting it into one of the following directories:

    ~/Library/Spotlight
    /Library/Spotlight

You can also include importers in an application's bundle in the Contents/Library/Spotlight subdirectory. This allows you to provide a drag-and-drop installation for your application and still provide Spotlight functionality for the application's document types.
One last point about writing an importer: it's important to make an importer as efficient as possible. After all, it is going to be executed each and every time a file of the type it handles is created, updated, or destroyed. Be sure to be a good citizen with respect to both CPU and memory.

Tiger includes full support for working with Spotlight from Cocoa using the NSMetadataItem, NSMetadataQuery, NSMetadataResultGroup, and NSPredicate classes. The Cocoa API offers support for the same features as the CoreServices APIs discussed in this article. As well, the Cocoa meta-data APIs are fully key-value coding/observing compatible. This means that you can use the API along with Cocoa Bindings in your applications. As for the meta-data plug-in API, it's easy to use your existing Cocoa-based file handling code. Simply change the .c extension on the files to .m, import the Foundation framework, and link away.

There's one more thing about Spotlight that should be mentioned. Since the core of Spotlight lives at the very lowest levels of the operating system, it is only natural that there are some command-line tools for power users to work with file system meta-data and perform queries. The first command is mdls. Just as the traditional Unix ls command lists all of the files in a directory, mdls lists all of the meta-data attributes for a file.
Here's an example of running the command on an image:

$ mdls metadata.jpg
kMDItemAttributeChangeDate = 2004-10-20 01:00:15 -0700
kMDItemBitsPerSample = 24
kMDItemColorSpace = "RGB "
kMDItemContentType = "public.jpeg"
kMDItemContentTypeTree = ("public.jpeg", "public.image", "public.data", "public.item", "public.content")
kMDItemDisplayName = "metadata.jpg"
kMDItemFSContentChangeDate = 2004-10-19 00:13:04 -0700
kMDItemFSCreationDate = 2004-10-19 00:13:04 -0700
kMDItemFSCreatorCode = 0
kMDItemFSFinderFlags = 0
kMDItemFSInvisible = 0
kMDItemFSLabel = 0
kMDItemFSName = "metadata.jpg"
kMDItemFSNodeCount = 0
kMDItemFSOwnerGroupID = 501
kMDItemFSOwnerUserID = 501
kMDItemFSSize = 21917
kMDItemFSTypeCode = 0
kMDItemID = 246476
kMDItemKind = "JPEG Image"
kMDItemLastUsedDate = 2004-10-19 00:13:04 -0700
kMDItemPixelHeight = 213
kMDItemPixelWidth = 624
kMDItemResolutionHeightDPI = 72
kMDItemResolutionWidthDPI = 72
kMDItemUsedDates = (2004-10-19 00:13:04 -0700)

You can also run queries from the command line using the mdfind tool. For example:

$ mdfind "kMDItemAcquisitionModel == 'Canon PowerShot S45'"
/Users/erika/Documents/vacation1.jpg
/Users/erika/Documents/vacation2.jpg
/Users/erika/Documents/vacation3.jpg

Not only are these command-line tools useful for the power user, but they can also be put to good effect in a shell script. For example, you could create a backup of files that contain the keyword "Tiger" with the following script:

for i in `mdfind Tiger`
do
    cp $i /Volumes/Backup/$i
done

As you have seen, Spotlight is much more than just a cute gray search box in the upper-right corner of the screen. And it's even more than the advanced new search features in the Finder. It's an entirely new way of working with files. And Apple is the first to bring you this kind of functionality built into the operating system. Even better, it's all available to you to use in your applications via a set of easy-to-use APIs.
It's fast, efficient, and will change your user's experience of your application forever.
Great! I think it does help us on start-up time. But your patch breaks the serialization compatibility. I suggest making the icuSymbols transient and overriding writeObject to make sure the zoneStrings has been initialized before writing it out. And I think it is not necessary to maintain the local var with the same name anymore.

On Wed, Feb 18, 2009 at 1:43 PM, Deven You <devyoudw@gmail.com> wrote:
> Hi,
> I have raised a jira HARMONY-6095 for improving java.text.DateFormatSymbols
> performance. I found DateFormatSymbols(Locale) will invoke
> com.ibm.icu.text.DateFormatSymbols.getZoneStrings() which can take
> significant time but is rarely used in real world applications. So I delay this
> call until it is really used. It seems the performance of
> DateFormatSymbols(Locale) can be greatly improved.
> The testcase is:
>
> import java.text.DateFormatSymbols;
> import java.util.Locale;
>
> public class TestDateFormatSymbols {
>
> /**
> * @param args
> */
>
> Tested on an Intel(R) Core(TM)2 Duo CPU 2.4GHz, 2.98GB memory machine,
> the result is as below:
> Harmony before the patch: 1125 ms
> Harmony patched: 78 ms

--
Tony Wu
China Software Development Lab, IBM
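The optimization discussed in this thread is a classic lazy-initialization pattern: defer the expensive ICU call until the data is first requested, then cache the result. A rough sketch of the idea in Python (purely illustrative; the actual fix lives in Harmony's Java DateFormatSymbols):

```python
# Hypothetical sketch of the lazy-initialization pattern discussed above
# (illustrative only; the real change is in Harmony's Java code).

class DateFormatSymbols:
    def __init__(self, locale):
        self.locale = locale
        self._zone_strings = None  # expensive data, not loaded yet

    def _load_zone_strings(self):
        # stands in for the costly ICU getZoneStrings() call
        return [["GMT", "Greenwich Mean Time"]]

    @property
    def zone_strings(self):
        # initialize on first access, then reuse the cached value
        if self._zone_strings is None:
            self._zone_strings = self._load_zone_strings()
        return self._zone_strings

symbols = DateFormatSymbols("en_US")   # fast: nothing loaded yet
print(symbols._zone_strings is None)   # True
print(symbols.zone_strings[0][0])      # GMT
```

As the reply points out, deferring a field like this has a catch in Java: if the field takes part in serialization, writeObject must force the initialization first so that serialized instances stay compatible.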
Tutorial for Spline

Create a simple spline:

import ezdxf

dwg = ezdxf.new('AC1015')  # splines require the DXF R2000 format or later
fit_points = [(0, 0, 0), (750, 500, 0), (1750, 500, 0), (2250, 1250, 0)]
msp = dwg.modelspace()
msp.add_spline(fit_points)
dwg.saveas("simple_spline.dxf")

Add a fit point to a spline:

import ezdxf

dwg = ezdxf.readfile("simple_spline.dxf")
msp = dwg.modelspace()
spline = msp.query('SPLINE')[0]  # take the first spline

# use the context manager
with spline.edit_data() as data:  # data contains standard python lists
    data.fit_points.append((2250, 2500, 0))
    points = data.fit_points[:-1]  # pitfall: this creates a new list without a connection to the spline object
    points.append((3000, 3000, 0))  # has no effect on the spline object
    data.fit_points = points  # replace the fit points; this way it works
# the context manager automatically calls spline.set_fit_points(data.fit_points)

dwg.saveas("extended_spline.dxf")

You can set additional control points, but if they do not fit the auto-generated AutoCAD values, they will be ignored. Don't mess around with knot values.
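The pitfall flagged in the comments above, that slicing creates a new list with no connection to the original, is plain Python behavior and can be seen without ezdxf at all. A standalone illustration:

```python
# Standalone illustration of the slicing pitfall from the tutorial above:
# a slice is a new list, so appending to it does not touch the original.
fit_points = [(0, 0, 0), (750, 500, 0), (1750, 500, 0)]

points = fit_points[:-1]        # new list: a copy of all but the last point
points.append((3000, 3000, 0))  # modifies only the copy

print(len(fit_points))  # 3 -> the original list is unchanged
print(len(points))      # 3 -> copy lost one point, gained one

# to make the change stick, rebind the original name, just as the
# tutorial does with data.fit_points = points
fit_points = points
print(fit_points[-1])   # (3000, 3000, 0)
```

This is exactly why the tutorial assigns the modified list back to data.fit_points: the context manager can only hand the spline what the data object actually references.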
Solve problems of incorrect values after editing an AutoCAD generated file:

import ezdxf

dwg = ezdxf.readfile("AutoCAD_generated.dxf")
msp = dwg.modelspace()
spline = msp.query('SPLINE')[0]  # take the first spline

with spline.edit_data() as data:  # context manager
    data.fit_points.append((2250, 2500, 0))  # data.fit_points is a standard python list

    # As far as I tested this works without complaints from AutoCAD, but in case of problems:
    data.knot_values = []  # delete knot values, this could modify the geometry of the spline
    data.weights = []  # delete weights, this could modify the geometry of the spline
    data.control_points = []  # delete control points, this could modify the geometry of the spline

dwg.saveas("modified_spline.dxf")

Check if a spline is closed, or close/open a spline; for a closed spline the last fit point is connected with the first fit point:

if spline.closed:
    # this spline is closed
    pass

# close a spline
spline.closed = True

# open a spline
spline.closed = False

Set start/end tangent:

spline.dxf.start_tangent = (0, 1, 0)  # in y direction
spline.dxf.end_tangent = (1, 0, 0)  # in x direction

Get the count of fit points:

# as stored in the DXF file
count = spline.dxf.n_fit_points

# or count by yourself
count = len(spline.get_fit_points())
.NET Framework and Language Enhancements in 2005

IN THIS CHAPTER
- Shared .NET Language Additions
- VB Language Enhancements
- C# Language Enhancements
- .NET Framework 2.0 Enhancements
- Summary

This chapter covers the key advances made in the Framework. Our assumption is that a majority of readers have some base-level understanding of either VB or a C-based language prior to the current version, along with a decent grasp of the .NET Framework. Therefore, our approach should give you insight into those enhancements that make .NET 2.0 a big leap forward over prior versions.

Shared .NET Language Additions

The .NET languages pick up a number of enhancements as a result of updates made to the common language runtime (CLR). Although there are specific enhancements for both Visual Basic and C#, respectively, the big advancements made in 2005 apply to both languages. Therefore, we will cover them as a group and provide examples in both languages. This group of .NET language enhancements includes the following key additions:

- Generics
- Nullable types
- Partial types
- Properties with mixed access levels
- Ambiguous namespaces

We will cover each of these items in detail in the coming sections. Again, we provide examples in both C# and VB because these enhancements apply to both languages. We will cover the VB and C# language-specific enhancements later in the chapter.

Generics

Generics are undoubtedly the biggest addition to .NET in version 2.0. As such, no book would be complete without covering their ins and outs. Generics may seem daunting at first—especially if you start looking through code that contains strange angle brackets in the case of C# or the Of keyword for Visual Basic. The following sections define generics, explain their importance, and show you how to use them in your code.

Generics Defined

The concept of generics is relatively straightforward.
You need to develop an object (or define a parameter to a method), but you do not know the object's type when you write the code. Rather, you want to write the code generically and allow the caller to your code to determine the actual type of the object. You could simply use the System.Object class to accomplish this. That is what we did prior to 2.0. However, imagine you also want to eliminate the need for boxing, runtime type checking, and explicit casting everywhere in your code. Now you can start to see the vision for generics. The benefits of generics can best be seen through an example. The easiest example is that of creating a collection class that contains other objects. For our example, imagine you want to store a series of objects. You might do so by adding each object to an ArrayList. However, the compiler and runtime know only that you have some list of objects. The list could contain Order objects or Customer objects or both (or anything). The only way to know what is contained in the list is to write code to check for the type of the object in the list. Of course, to get around this issue, you might write your own strongly typed lists. Although this approach is viable, it results in tedious code written over and over for each type you want to work with as a collection. The only real difference in the code is the type allowed in the list. In addition, you still have to do all the casting because the underlying list still simply contains types as System.Object. Now imagine if you could write a single class that, when used, allows the user to define its type. You can then write one generic list class that, instead of containing types as System.Object, would contain objects as the type with which the class is defined. This allows a caller to the generic list to decide the list should be of type Orders or only contain Customers. This is precisely what generics afford us. Think of a generic class as a template for a class.
Generics come in two flavors: generic types and generic methods. Generic types are classes whose type is defined by the code that creates the class. A generic method is one that defines one or more generic type parameters. In this case, the generic parameter is used throughout the method but its type is defined only when the method is called. In addition, you can define constraints that control the creation of generics. In the coming sections, we'll look at all of these items.

The Benefits of Generics

Now you should plainly see some of the benefits that generics provide. Without them, any class that is written to manage different types must use System.Object. This presents a number of problems. First, there is no constraint or compiler checking on what goes into the object. The corollary is also true: You cannot know what you are getting out if you cannot constrain what goes in. Second, when you use the object, you must do type checking to verify its type and then do casting to cast it back to its original type. This, of course, comes with a performance penalty. Finally, if you use value types and store them in System.Object, then they get boxed. When you later retrieve this value type, it must be unboxed. Again, this adds unwanted code and unnecessary performance hits. Generics solve each of these issues. Let's look at how this is possible.

How .NET Manages Generics

When you compile a generic type, you generate Microsoft Intermediate Language (MSIL) code and metadata (just like all the rest of your .NET code). Of course, for the generic type or method, the compiler emits MSIL that defines your use of generic types. With all MSIL code, when it is first accessed, the just-in-time (JIT) compiler compiles the MSIL into native code. When the JIT compiler encounters a generic, it knows the actual type that is being used in place of the generic. Therefore, it can substitute the real type for the generic type. This process is called generic type instantiation.
The newly compiled, native type is now used by subsequent, similar requests. In fact, all reference types are able to share a single generic type instantiation because, natively, references are simply pointers with the same representation. Of course, if a new value type is used in the generic type instantiation, the runtime will jit a new copy of the generic type. This is how we get the benefits of generics both when we're writing our code and when it executes. Upon execution, all our code becomes native, strongly typed code. Now let's look at coding some generics.

Creating Generic Types

Generic types are classes that contain one or more elements whose type should be determined at instantiation (rather than during development). To define a generic type, you first declare a class and then define type parameters for the class. A type parameter is one that is passed to a class that defines the actual type for the generic. You can think of a type parameter as similar to method parameters. The big difference is that, instead of passing a value or a reference to an object, you are passing the type used by the generic.

As an example, suppose you are writing a class called Fields that works with name/value pairs similar to a Hashtable or Dictionary. You might declare the class as follows:

C#
public class Fields

VB
Public Class Fields

Let's also suppose that the class can work with a variety of types for its keys and a variety of types for its values. You want to write the class generically to support multiple types. However, after the class is instantiated, you want it to be constrained to the types used to create the class. To add the type parameters to the class declaration, you would then write the following:

C#
public class Fields<keyType, valueType>

VB
Public Class Fields(Of keyType, valueType)

In this case, keyType and valueType are type parameters that can be used in the rest of the class to reference the types that will be passed to the class.
For example, you might then have an Add method in your class whose signature looks like the following:

C#
public void Add(keyType key, valueType value)

VB
Public Sub Add(key as keyType, value as valueType)

This indicates to the compiler that whatever types are used to create the class should also be used in this method. In fact, to consume the class, your code would first create an instance and pass type arguments to the instance. Type arguments are the types passed to type parameters. The following is an example:

C#
Fields<int, Field> myFields = new Fields<int, Field>();

VB
Dim myFields As New Fields(Of Integer, Field)

In this case a new instance of the generic Fields class is created that must contain int (integer) values for its keys and Field instances for its values. Calling the Add method of the newly created Fields object would then look like this:

C#
myFields.Add(1, new Field());

VB
myFields.Add(1, New Field())

If you try to pass another type to either parameter, you will get a compiler error because the object becomes strongly typed at this point.

Creating Generic Methods

So far we've looked at generic type parameters. These type parameters end up defining variables with class-level scope. That is, the variable that defines the generic type is available throughout the entire class. As with any class you write, you may not need class-level scoping. Instead, it may be sufficient to define the elements passed to a given method. Generics are no different in this regard. You can define them at the class level (as we've shown) or at the method level (as we will see). Generic methods work well for common, utility-like functions that execute a common operation on a variety of similar types. You define a generic method by indicating the existence of one or more generic types following the method name. You can then refer to these generic types inside the method's parameter list, its return type, and of course, the method body.
The following shows the syntax for defining a generic method:

C#
public void Save<instanceType>(instanceType type)

VB
Public Sub Save(Of instanceType)(ByVal type As instanceType)

To call this generic method, you must define the type passed to the method as part of the call to the method. Suppose the Save method defined in the preceding example is contained in a class called Field. Now suppose you have created an instance of Field and have stored a reference to it in the variable named myField. The following code shows how you might call the Save method passing the type argument to the method:

C#
myField.Save<CustomerOrder>(new CustomerOrder());

VB
myField.Save(Of CustomerOrder)(New CustomerOrder())

We need to add a few notes on generic methods. First, you can often omit the type parameter when calling a generic method. The compiler can figure out the type based on the parameter passed to it. Therefore, the type parameter is optional when calling a generic method. However, it is generally preferable to pass the type because it makes your code more readable and saves the compiler from having to look it up. Second, generic methods can be declared as static (or shared). Finally, you can define constraints on generic methods (and classes), as we will see in the next section.

Getting Specific with Generics (Constraints)

When you first encounter generic methods, it can be easy to think of them as simple data storage devices. At first glance, they seem to have a huge flaw. This flaw can best be described with the question that might be gnawing at you, "Generics are great, but what if you want to call a method or property of a generic object whose type, by definition, you are unaware of?" This flaw seems to limit the use of generics. However, upon a closer look, you'll see that generic constraints allow you to overcome this perceived flaw.
Generic constraints are just what they sound like: They allow you to define restrictions on the types that a caller can use when creating an instance of your generic class or calling one of your generic methods. Generic constraints have the following three variations:

- Derivation constraint—Allows you to indicate that the generic type must implement one or more specific interfaces or derive from a base class.
- Default constructor constraint—Allows you to indicate that the generic type must expose a constructor without parameters.
- Reference/value constraint—Allows you to indicate that a generic type parameter must either be a reference or a value type.

Using a derivation constraint enables you to indicate one or more interfaces (or object types) that are allowed to be passed to the generic class. Doing so allows you to overcome the aforementioned flaw. For example, if in the Fields generic class defined previously you need to be able to call a method or property of the generic keyType (perhaps a property that aids in sorting the group of Fields), you can now do so, provided that method or property is defined on the interface or base class constraint. The following provides an example of defining a derivation constraint on a generic class:

C# Class Constraint
public class Fields<keyType, valueType> where keyType : ISort

VB Class Constraint
Public Class Fields(Of keyType As ISort, valueType)

In the preceding example, the class named Fields, which defines the two generic types valueType and keyType, contains a constraint on keyType. The constraint is that keyType must implement an interface called ISort. This now allows the generic class Fields to use methods of ISort without casting. You can indicate any number of interfaces that the generic type must implement. However, you can indicate only a single base class from which the generic type can derive. You can, of course, pass to the generic type an object that itself inherits from this constraining base class.
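For readers coming from other languages, the same idea, a type parameter with a derivation constraint, can be sketched with Python's typing module. This is an illustrative analogy only, not the .NET API: the Sortable class and sort_key method are hypothetical stand-ins for the article's ISort interface.

```python
# Illustrative analogy to the C#/VB generics above, using Python's typing
# module (not .NET). TypeVar's `bound=` plays the role of a derivation
# constraint: KT must be a subtype of Sortable.
from typing import Generic, TypeVar

class Sortable:
    """Stands in for the ISort interface in the article's example."""
    def sort_key(self) -> int:
        raise NotImplementedError

KT = TypeVar("KT", bound=Sortable)  # constrained, like `where keyType : ISort`
VT = TypeVar("VT")                  # unconstrained

class Fields(Generic[KT, VT]):
    def __init__(self):
        self._items = []  # list of (key, value) pairs

    def add(self, key: KT, value: VT) -> None:
        self._items.append((key, value))

    def sorted_values(self):
        # the constraint is what lets us call sort_key() on every key
        return [v for _, v in sorted(self._items, key=lambda kv: kv[0].sort_key())]

class IntKey(Sortable):
    def __init__(self, n: int):
        self.n = n
    def sort_key(self) -> int:
        return self.n

fields: Fields[IntKey, str] = Fields()
fields.add(IntKey(2), "second")
fields.add(IntKey(1), "first")
print(fields.sorted_values())  # ['first', 'second']
```

One difference worth noting: Python's constraint is enforced by a static type checker rather than at compile time, whereas the CLR enforces .NET constraints when the generic type is instantiated.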
Generic Collections Namespace

Now that you've seen how to create your own generic classes, it is important to note that the .NET Framework provides a number of generic classes for you to use in your applications. The namespace System.Collections.Generic defines a number of generic collection classes designed to allow you to work with groups of objects in a strongly typed manner. A generic collection is a collection class that allows a developer to specify the type that is contained in the collection when declaring the collection. The generic classes defined in this namespace are varied based on their usage. The classes include one called List designed for working with a simple list or array of objects. It also includes a SortedList, a LinkedList, a Queue, a Stack, and several Dictionary classes. These classes cover all the basics of working with strongly typed collection classes. In addition, the namespace also defines a number of interfaces that you can use when building your own generic collections.

Nullable Types

Most of us have written applications in which we were forced to declare a variable and choose a default value prior to knowing what value that variable should contain. For instance, imagine you have a class called Person with a Boolean property called IsFemale. If you do not implicitly know a person's sex at object instantiation, you are forced to pick a default, or you must implement the property as a tri-state enumeration (or similar) with values Male, Female, and Unknown. The latter can be cumbersome, especially if the value is stored as a Boolean in the database. There are similar examples. Imagine if you are writing a Test class with an integer value called Score. If you are unsure of the Score value, you end up initializing this variable to zero (0). This value, of course, does not represent a real score. You then must program around this fact by either tracking zero as a magic number or carrying another property like IsScoreSet.
These examples are further amplified by the fact that the databases we work with all understand that a value can be null (or not set). We are often unable to use this feature unless we write code to do translation during our insert and select transactions. Nullable types in .NET 2.0 are meant to free us from these issues. A nullable type is a special value type that can have a null assigned to it. This is unlike the value types we are accustomed to (int, bool, double, and so on); these are simply not initialized when declared. On the contrary, with nullable types, you can create integers, Booleans, doubles, and the like and assign them the value of null. You no longer have to guess (or code around) whether a variable has been set. This includes no longer having to provide a default value. Instead, you now can initialize or assign a variable to the value of null. You can now write code without default assumptions. In addition, nullable types also solve the issue of pushing and pulling nulls to and from the database. Let's look at how they work.

Declaring Nullable Types

Declaring a nullable type is very different between the C# and VB languages. However, both result in declaring the same nullable value type structure inside the .NET Framework (System.Nullable). This generic structure is defined by the type that is used in its declaration. For example, if you are defining a nullable integer, the generic structure returns an integer version. The following code snippets demonstrate how nullable types are declared in both C# and VB:

A C# Nullable Type Example
bool? hasChildren = null;

A VB Nullable Type Example
Dim hasChildren As Nullable(Of Boolean) = Nothing

Notice that in the C# example, you can use the ? type modifier to indicate that a base type should be treated as a nullable type. This is simply a shortcut. It allows developers to use the standard syntax for creating types but simply add a question mark to turn that type into a nullable version.
On the contrary, if you are coding in VB, you are required to be more explicit by defining the Nullable class as you would a similar generic. You can also use a similar syntax in C#, as in the following example:

System.Nullable<bool> hasChildren = null;

Working with Nullable Types

The generic System.Nullable structure contains two read-only properties: HasValue and Value. These properties allow you to work with nullable types efficiently. The HasValue property is a Boolean value that indicates whether a given nullable type has a value assigned to it. You can use this property in If statements to determine whether a given variable has been assigned. In addition, you can simply check the variable for null (C# only). The following provides an example of each:

C# HasValue Example
if (hasChildren.HasValue) {...}

VB HasValue Example
If hasChildren.HasValue Then

C# Checking the Variable for Null
if (hasChildren != null) {...}

VB Checking the Variable Value for Null
If hasChildren.Value <> Nothing Then

The Value property simply returns the value contained by the Nullable structure. You can also access the value of the variable by calling the variable directly (without using the Value property). The distinction lies in that when HasValue is false, calls to the Value property will result in an exception being thrown, whereas when you access the variable directly in this condition (HasValue = false), no exception is thrown. Therefore, it is important to know exactly the behavior you require and use these options correctly.
The following provides an example of using the Value property:

C# Value Property Example
System.Nullable<bool> hasChildren = null;
Console.WriteLine(hasChildren); // no exception is thrown
if (hasChildren != null)
{
    Console.WriteLine(hasChildren.Value.ToString());
}
Console.WriteLine(hasChildren.Value); // throws InvalidOperationException

VB Value Property Example
Dim hasChildren As Nullable(Of Boolean) = Nothing
Console.WriteLine(hasChildren) ' no exception is thrown
If hasChildren.HasValue Then
    Console.WriteLine(hasChildren.Value.ToString())
End If
Console.WriteLine(hasChildren.Value) ' throws InvalidOperationException

In the preceding example, the call directly to hasChildren will not throw an exception. However, when you try to check the Value property when the variable is null, the Framework throws the InvalidOperationException.

Partial Types (Classes)

Partial types are simply a mechanism for defining a single class, struct, or interface across multiple code files. In fact, when your code is compiled, there is no such thing as a partial type. Rather, partial types exist only during development. The files that define a partial type are merged together into a single class during compilation. Partial types are meant to solve two problems. First, they allow developers to split large classes across multiple files. This potentially allows multiple team members to work on the same class without working on the same file (thus avoiding the related code-merge headaches). The other problem partial types solve is to further partition tool-generated code from that of the developer's. This keeps your code file clean (with only your work in it) and allows a tool to generate portions of the class behind the scenes. Visual Studio 2005 developers will immediately notice this when working with Windows forms, Web Service wrappers, ASP code-behind pages, and the like.
If you've worked with these items in prior versions of .NET, you'll soon notice that when you're working in 2005, the generated code is now absent and the class that you write has been marked as partial.

Working with Partial Types

Partial types are declared as such using the keyword Partial (lowercase partial in C#). You can apply this keyword to classes, structures, and interfaces. If you do so, the keyword must appear in the declaration immediately before Class, Structure, or Interface. Indicating a partial type tells the compiler to merge these items together upon compilation into a single .dll or .exe. When defining partial types, you must follow a few simple guidelines. First, all types with the same name in the same namespace must use the Partial keyword. You cannot, for instance, declare a class as Partial Public Person in one file and then declare that same class as Public Person in another file under the same namespace. Of course, to do so, you would add the Partial keyword to the second declaration. Second, you must keep in mind that all modifiers of a partial type are merged together upon compilation. This includes class attributes, XML comments, and interface implementations. For example, if you use the attribute System.SerializableAttribute on a partial type, the attribute will be applied to all portions of the type when merged and compiled. Finally, it's important to note that all partial types must be compiled into the same assembly (.dll or .exe). You cannot compile a partial type across assemblies.

Properties with Mixed Access Levels

In prior versions of .NET, you were able to indicate the access level (public, private, protected, internal) only of an entire property. However, often you might need to make the property read (get) public but control the write (set) internally. The only real solution to this problem using prior .NET versions was not to implement the property set.
You would then create another internal method for setting the value of the property. It would make your coding easier to write and understand if you had fine-grained control over access modifiers of your properties. .NET 2.0 gives you control of the access modifiers at both the set and get methods of a property. Therefore, you are free to mark your property as public but make the set private, protected, or internal. The following code provides an example:

C# Mixed Property Access Levels
private string _userId;
public string UserId
{
    get { return _userId; }
    internal set { _userId = value; }
}

VB Mixed Property Access Levels
Private _userId As String
Public Property UserId() As String
    Get
        Return _userId
    End Get
    Friend Set(ByVal value As String)
        _userId = value
    End Set
End Property

Ambiguous Namespaces

On large projects, it is easy for namespaces to conflict with each other and with the .NET Framework (System namespace). Previously, these ambiguous references were not resolvable. Instead, you got an error at compile time. .NET 2.0 now allows developers to define a System namespace of their own without blocking access to the .NET version. For example, suppose you define a namespace called System and suddenly are unable to access the global version of System. In C# you would add the keyword global along with the namespace alias qualifier :: as in the following syntax:

global::System.Double myDouble;

In VB the syntax is similar but uses the keyword Global:

Dim myDouble As Global.System.Double

To further manage namespace conflicts, you can still define an alias when using (or importing) a namespace. This alias can then be used to reference types within the namespace. For example, suppose you had a conflict with the System.IO namespace. You could define an alias upon import as follows:

C#
using IoAlias = System.IO;

VB
Imports IoAlias = System.IO

You could then reference types by using the alias directly.
Of course, Visual Studio still gives you complete IntelliSense on these items. The following provides an example of using the alias defined in the preceding example. Notice the new syntax that is possible in C# with the double-colon operator:

C# new syntax
IoAlias::FileInfo file;

C# old syntax
IoAlias.FileInfo file;

VB
Dim file As IoAlias.FileInfo
http://www.informit.com/articles/article.aspx?p=667411&seqNum=4
Forum:About templates linking to talk pages

From Uncyclopedia, the content-free encyclopedia

I noticed that templates like {{POV}} use [[:{{NAMESPACE}} talk:{{PAGENAME}}|talk page]] to link to the article's talk page, which doesn't work correctly if the template is itself placed on a talk page (see User talk:Pentium5dot1/Sandbox). The correct solution AFAIK is [[:{{TALKSPACE}}:{{PAGENAME}}|talk page]], which creates a self-referential link instead of trying to link to "Talk talk:..." or something similar. (Admittedly, self-referential "links" are bad too, but my way avoids the creation of unnecessary red links that encourage the creation of garbage pages.) I am going to go ahead and fix this myself on {{POV}}, but anybody want to go around helping with this on all the other templates? Pentium5dot1 04:36, 3 February 2007 (UTC)
- Okay, I also went ahead and fixed {{disputed}}, {{theydisputed}} and {{TotallyDisputed}}. BTW, I'm not sure whether that colon at the beginning of the link target is really necessary (it was there originally in {{disputed}} and {{theydisputed}} but not in {{TotallyDisputed}}; can a MediaWiki expert explain more?) Pentium5dot1 04:47, 3 February 2007 (UTC)
- A colon at the start of a normal wikilink like [[:...]] ensures that it is just a link and not, say, a category or interwiki link. At the start of a template link like {{:...}} it tells it to use the main namespace for the template, as opposed to the template namespace. And {{TALKSPACE}}:{{PAGENAME}} is better, but {{TALKPAGENAME}} is best. And no, a colon isn't needed. • Spang • ☃ • talk • 05:00, 3 Feb 2007
Sorry, I didn't know of the existence of {{TALKPAGENAME}}. I hope you aren't too mad at me for this, but thank you for helping me to further my knowledge of MediaWiki. I will go re-fix the aforementioned templates ASAP. Pentium5dot1 00:04, 4 February 2007 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:About_templates_linking_to_talk_pages?t=20130426021134
This RayTracer is a hobby project of mine. It has some very nice features (such as reflection and anti-aliasing) but it is by no means finished. This demo application is meant for anyone interested in raytracing, image generation and 3D rendering. It shows in a straightforward manner how raytracing is done. Although this raytracer currently supports only a few shapes (plane, box and sphere), I was aiming at allowing developers to easily add their own objects/shapes to the scene, which I will explain later in this article. In this article I will briefly explain the basics of raytracing, the commonly used terminology, and a few of the many raytracing effects that can be used to create a realistic scene. I will explain the algorithms and techniques I used, and how this project can be extended with your own shapes and possibly your own effects. Note that there are a number of raytracers out there (e.g. the well-known POV-Ray), including open source raytracers that have more features than this one. I find that the problem with these raytracers is often that they are hard to understand, either because they are poorly documented, or because the algorithms used are optimized for speed, which makes them practically unreadable - unless you know what you are doing. One of the good aspects of this particular raytracer is that it is fully implemented in C# 2.0, and the code is well documented! The source project contains all sources of the RayTracer library and of the demo project. Feel free to use and adapt this code for your own purposes. The demo project contains the RayTracer library and an example program that has several predefined scenes. You can select a scene under Tools\Scenes in the menu. From the Settings menu you can turn some of the effects on or off. In the Edit menu, select the Copy function to copy the raytraced image onto the clipboard. I won't go too much into raytracing background and history here.
There is a lot to find about raytracing simply by looking on the web. I would also like to point out "A raytracer for the Compact Framework" by gregs here on Code Project, which explains some of the raytracing basics. For now I will explain some common concepts used in raytracing.

Figure 1. An example raytracing Scene.

In a typical raytracing setting, a Ray is cast through each pixel in the Viewport into the Scene; in this example, the black arrow. The raytracer will try to find out if the Ray intersects with any object/shape in the scene. In this example it will intersect with the Sphere. Otherwise it will simply display the Background color. To determine the Color to display for the pixel, a number of techniques can be used and mixed. I call them shading effects. Because raytracing scenes usually require high-precision calculations, the Color as we know it from the Drawing namespace has been replaced by our own RayTracer.Color definition. Here the R, G and B components are scaled down to floating point numbers between 0 and 1. Also, some of the common arithmetic operators have been overloaded, so it is easier to add, multiply and blend Colors. The most basic technique is simply displaying the intrinsic Color of the Sphere itself. This is called Ambient lighting. Ambient light is the so-called background light that lights up all objects in the scene slightly (see figure 2a). The color is also influenced by the amount of light emitted by surrounding light sources. In this case the light bulb will light up the surface of the sphere depending on how well the surface is exposed to the light. The yellow arrow shows the direction in which the light is traced back to its source. Based on this direction, and the direction the surface of the sphere is facing, the amount of light is calculated. This is called Diffuse light. It gives a nice shading effect (see figure 2b).
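The diffuse term described here is just the dot product between the unit surface normal and the unit direction toward the light, clamped at zero. A minimal sketch of that calculation (shown in Java rather than the article's C#; the function name is mine, not part of the library):

```java
public class Main {
    // Lambertian diffuse term: max(0, N . L).
    // (nx,ny,nz) is the unit surface normal; (lx,ly,lz) is the unit
    // direction from the hit point toward the light source.
    static double diffuse(double nx, double ny, double nz,
                          double lx, double ly, double lz) {
        double dot = nx * lx + ny * ly + nz * lz;
        return Math.max(0.0, dot); // faces turned away from the light get nothing
    }

    public static void main(String[] args) {
        System.out.println(diffuse(0, 1, 0, 0, 1, 0));  // light overhead: 1.0
        System.out.println(diffuse(0, 1, 0, 1, 0, 0));  // grazing light: 0.0
        System.out.println(diffuse(0, 1, 0, 0, -1, 0)); // light below: clamped to 0.0
    }
}
```

Scaling the light's color by this factor and adding the ambient term gives exactly the shading gradient seen in figure 2b.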
Additionally, the effects can be enhanced by introducing Highlights: if the surface is somewhat reflective and the rays from the light source are reflected on the shape's surface straight into the camera, a highlight appears - usually a very shiny and bright color. Now for even more effects we can add Reflection and Refraction. In the case of Reflection, the Ray cast from the Camera is reflected on the surface of the sphere onto the green box, denoted by the red arrow. This means the particular pixel the Ray travels through will also light up with a somewhat greenish color: the box is reflected in the sphere. Refraction is somewhat more complicated. Refraction is the effect of a ray bending when traveling through a different Material. This applies to transparent objects/shapes. An example of this is a glass ball, where the light rays are bent when traveling through the ball. Another type of effect we can add to the scene is Shadows. Shadows do not add Color to a pixel, but instead reduce the amount of Color. To find out if an intersection with an object is in the shadow of another object, simply trace the path back to the light source (yellow arrow) and find out if any object is blocking it (does it intersect with any other object than the light source?). If it is blocked, simply reduce the amount of light by a factor.

Figure 2. Shading effects: a) Ambient, b) Diffuse, c) Highlights, d) Shadows and e) Reflection (notice the reflection on the floor also)

When rendering a scene containing these basic features (even with just ambient, diffuse, highlights and shadows), you would already get quite an amazing raytraced image, even more so if you built the raytracer from scratch! But of course we are far from finished. One important additional feature is texture. To make any scene look even more realistic, you must be able to add textures to shapes. So how is it done? Basically a texture can be compared to a piece of gift wrapper, which is wrapped around the object.
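For a sphere, the "gift wrapping" amounts to converting the 3D intersection point into two angles. A common convention (sketched here in Java; the article's library maps onto a (-1,-1)-(1,1) range instead, and this helper name is my own) maps a point on a unit sphere to (u,v) in [0,1]:

```java
public class Main {
    // Map a point on a unit sphere (centered at the origin) to flat (u,v)
    // texture coordinates in [0,1]: longitude drives u, latitude drives v.
    static double[] sphereUV(double x, double y, double z) {
        double u = 0.5 + Math.atan2(z, x) / (2 * Math.PI);
        double v = 0.5 - Math.asin(y) / Math.PI;
        return new double[] { u, v };
    }

    public static void main(String[] args) {
        double[] uv = sphereUV(1, 0, 0);  // a point on the "equator"
        System.out.println(uv[0] + " " + uv[1]); // 0.5 0.5
        double[] top = sphereUV(0, 1, 0); // the "north pole"
        System.out.println(top[1]);       // 0.0
    }
}
```

Once (u,v) is known, the texture color is simply looked up in the colormap (or computed, for procedural materials like the chessboard).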
There are two types of texture materials: a texture material based on a colormap or image (e.g. see the marble effect in the top image), and a texture material that is calculated (e.g. the chessboard effect). Textures are flat and therefore require two coordinates to determine the color to display; often the u and v notation is used. The (u,v) coordinates are mapped onto (-1,-1)-(1,1), and from there the color is either read from the colormap or calculated, respectively. The difficulty lies in calculating the (u,v) coordinates from an intersection point with the shape. Depending on the shape, the (u,v) coordinates need to be calculated in different ways, but this is up to the programmer to implement. One other important feature to have in a raytracer is the ability to cope with aliasing. Anti-aliasing is a technique to soften large color differences between neighbouring pixels, so the image looks more soothing to the eye. Several techniques can be used to counter this aliasing effect. A quick but dirty technique is to simply apply a 'mean filter': the pixel gets the mean color value of its neighbouring pixels. This is implemented as the 'Quick' anti-aliasing method in this raytracer app. This results in a smoothed image; however, the image may also appear a bit vague/blurry. A much nicer way of anti-aliasing is the 'Monte Carlo' method. The idea here is that instead of casting a single ray into the scene through a pixel on the viewport, we cast multiple rays through a single pixel, scanning the neighbourhood and taking the average color of those. Although the method is slower, since we are now casting multiple rays for a single pixel, the accuracy is much better, resulting in much smoother but sharp anti-aliased images as shown in the figure below. Figure 3.
AntiAliasing methods: a) None, b) Mean filter, c) Monte Carlo sampling (using a Very High sampling rate of 64 rays for a single pixel)

Apart from cool shading effects, it is even more important to have well-defined objects that make up your scene. Because the term 'object' is a bit overused, I prefer to use the term Shape when referring to an object in a Scene. Have you ever wondered why in every raytraced image you always see a lot of spheres? Well, apart from the nice shading effects on a sphere, more importantly, the intersection of a ray with a sphere can be calculated very quickly. This is probably the most important aspect of a shape definition: how easy is it to calculate the intersection of a ray with the shape? Secondary to that: how easy is it to calculate its surface normal vector? Calculating the intersection of a ray with arbitrary shapes turns out to be rather difficult. Instead, different methods have been invented, such as voxel techniques or marching cubes, to determine the intersection points. The most successful approach so far is to create a so-called Mesh to describe the shape. A mesh is created by sampling the shape into small linked triangles; this process is also known as tessellation. The advantage of using triangles in this case is that the intersection of a ray with a triangle is not hard to calculate and can be done rather efficiently as well. A disadvantage is that in order to create a smooth mesh, you need to sample a whole lot of triangles. This means that the intersection calculation will also need to be executed more often, potentially killing the performance of the raytracer. This raytracer, however, has not been optimized much for performance, and therefore only supports a limited set of shapes: Plane, Sphere (of course) and Box. A small side note I would like to make here is that the algorithm used to calculate the intersection of a ray with the sphere is the fastest one I could find on the web. Figure 4.
Scene with Box and Sphere.

In this topic I will explain the basics of how to use the RayTracer library, and how to extend it with your own additional shapes and materials. Before we can actually start the raytracing process, we first need to set up a scene. Right now a scene can only be set up programmatically. Of course you are invited to change the code in such a way that the scene can be loaded, for instance, from a file. So how does one set up a scene programmatically? As stated in the previous section, we need a Camera, a Background, some Shapes and possibly one or more Lights to light the scene. Setting up a Shape has one catch though: we need to supply a Material for the Shape. Currently there are three types of materials: Solid, Texture and Chessboard. Each material can have additional parameters: gloss (also known as shininess, or how strongly the shape is highlighted), reflection (how reflective the shape is), transparency (how transparent the shape is) and refraction (how strongly light is bent when traveling through the shape, in the case of transparent shapes). To give an example, see the code below. It will create the scene shown in the first image on this page (Scene1 in the code).

// first of all, create a new scene object
Scene scene = new Scene();

// then set its Camera position in the scene
scene.Camera = new Camera(new Vector(0, 0, -15), new Vector(-.2, 0, 5), new Vector(0, 1, 0));

// optionally set the scene's background color
scene.Background = new Background(new Color(0, 0, .5), 0.2);

// setup a solid reflecting sphere and add it to the list of shapes in the scene
scene.Shapes.Add(new SphereShape(new Vector(-1.5, 0.5, 0), .5,
    new SolidMaterial(new Color(0, .5, .5), 0.2, 0.0, 2.0)));

// now lets add a sphere with a marble texture
// first load the marble texture from an image file
Texture marbleTexture = Texture.FromFile(path + @"\marble1.png");

// next setup the marble material, supplying the marble texture.
TextureMaterial marbleMaterial = new TextureMaterial(marbleTexture, 0.0, 0.0, 2, .5);

// now create the marble sphere and add it to the list of shapes in the scene
scene.Shapes.Add(new SphereShape(new Vector(0, 0, 0), 1, marbleMaterial));

// setup the chessboard floor
scene.Shapes.Add(new PlaneShape(new Vector(0.1, 0.9, -0.5).Normalize(), 1.2,
    new ChessboardMaterial(new Color(1, 1, 1), new Color(0, 0, 0), 0.2, 0, 1, 0.7)));

// add two lights for better lighting effects (will cast shadows)
scene.Lights.Add(new Light(new Vector(5, 10, -1), new Color(0.8, 0.8, 0.8)));
scene.Lights.Add(new Light(new Vector(-3, 5, -15), new Color(0.8, 0.8, 0.8)));

Now that we have created a scene, we can start raytracing it! The following code shows how it is done:

// create a new RayTracer object. This object will be responsible for
// executing the actual raytracing process.
RayTracer.RayTracer tracer = new RayTracer.RayTracer();

// define the viewport to scan using a rectangle
Drawing.Rectangle rect = new Drawing.Rectangle(0, 0, 300, 300);

// setup a graphics device that the raytracer can paint on. this can be the
// graphics device available in the Paint event (e.Graphics),
// or you can create your own graphics device from a bitmap. In that case
// the raytraced image is rendered on the bitmap.
// in this example create a bitmap
Bitmap bitmap = new Drawing.Bitmap(rect.Width, rect.Height);

// create a new graphics device from the bitmap
Drawing.Graphics g = Drawing.Graphics.FromImage(bitmap);

// happy raytracing!
tracer.RayTraceScene(g, rect, scene);

After executing the previous two blocks of code, you should be able to get the same nicely rendered image as at the top of this page. Another available scene in RayTracer.Net is shown in the following figure:

Figure 5: Another example.

Basically this library has two main extension points available: one for Shapes and one for Materials. Each Shape must implement the IShape interface. If you plan to add your own shape (e.g.
a triangle, or mesh), you must implement the IShape interface. However, I made it easy for you: you can derive your new shape class from the BaseShape class, which implements the default tedious properties and methods of IShape for you. There is one method, however, that you must always implement: the Intersect method. This is probably the hardest to implement, but well, there you have it. The Intersect method expects a Ray and returns an IntersectionInfo object. This IntersectionInfo object contains everything the raytracer needs to know about the intersection and how to render the color. If the Ray intersects with your shape, you must set the following properties of the IntersectionInfo object:

- IsHit
- Distance
- Position
- Normal
- Color

This is all the information the RayTracer needs to successfully render the shape. If you are not happy with the currently available materials, e.g. if you want to create a material that supports display of text, you can create your own material. Each material must implement the IMaterial interface. However, again, to make life easy I have implemented a BaseMaterial class that implements most of the properties and methods for you. So all you have to do is create a new material class and derive it from BaseMaterial. There is one property and one method you will need to implement for your own material: HasTexture and GetColor, respectively.

WrapUp

When you have implemented the HasTexture property and the GetColor method, you are ready to use the material in a scene. So far we have ambient, diffuse, highlights, shadows, reflection, refraction and textures. So, are we done yet? The answer to this question is both yes and no. This particular raytracer implementation will not go beyond these effects, although there are many more possible effects for adding realism to a scene. But the most important feature to consider is Performance.
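Before turning to performance, here is a compact sketch of the shape-extension contract described above, shown in Java rather than the library's C# (the interface and field names below are simplified stand-ins for IShape and IntersectionInfo, not the library's exact API). The sphere's intersect solves the usual quadratic |O + tD - C|^2 = r^2 for the nearest positive t:

```java
public class Main {
    // Simplified stand-in for the article's IShape contract.
    interface Shape {
        Hit intersect(double[] origin, double[] dir); // dir assumed normalized
    }

    // Simplified stand-in for IntersectionInfo (Position/Normal/Color omitted).
    static class Hit {
        boolean isHit;
        double distance; // t along the ray
    }

    static class Sphere implements Shape {
        final double cx, cy, cz, r;
        Sphere(double cx, double cy, double cz, double r) {
            this.cx = cx; this.cy = cy; this.cz = cz; this.r = r;
        }

        // Solve |O + tD - C|^2 = r^2 for the nearest positive t.
        public Hit intersect(double[] o, double[] d) {
            Hit hit = new Hit();
            double ox = o[0] - cx, oy = o[1] - cy, oz = o[2] - cz;
            double b = 2 * (ox * d[0] + oy * d[1] + oz * d[2]);
            double c = ox * ox + oy * oy + oz * oz - r * r;
            double disc = b * b - 4 * c;               // a == 1 for a unit direction
            if (disc < 0) return hit;                  // ray misses the sphere
            double t = (-b - Math.sqrt(disc)) / 2;     // nearer root first
            if (t < 0) t = (-b + Math.sqrt(disc)) / 2; // origin inside the sphere
            if (t < 0) return hit;                     // sphere entirely behind the ray
            hit.isHit = true;
            hit.distance = t;
            return hit;
        }
    }

    public static void main(String[] args) {
        Shape s = new Sphere(0, 0, 5, 1);
        Hit h = s.intersect(new double[] {0, 0, 0}, new double[] {0, 0, 1});
        System.out.println(h.isHit + " " + h.distance); // true 4.0
    }
}
```

A real implementation would additionally fill in the hit position (O + tD), the surface normal ((P - C) / r for a sphere), and the material color at that point.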
Speed is and has always been the biggest issue for raytracers. Rendering realistic scenes within a limited timeframe requires a number of optimizations. If you are interested in reading up more about raytracing, it may be worth your while to check out the reference sites from which I got most of the information to build this raytracer. Version 1.0 of RayTracer.Net was published on the 11th of October 2006. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

The code below (note the SharpRay namespace; it appears to be optimization code posted alongside the article rather than part of the RayTracer.Net library itself) demonstrates some such optimizations: a pinned, unsafe pixel Surface, a ThreadPool-based Parallel.For helper, and a supersampled RayTraceScene.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

namespace SharpRay
{
    /// <summary>
    /// A 32bpp pixel surface backed by a pinned byte buffer.
    /// </summary>
    public unsafe class Surface : ISurface
    {
        // Constructor
        public Surface(int width, int height)
        {
            mWidth = width;
            mHeight = height;
            mStride = mWidth << 2;
            byte[] buffer = new byte[mHeight * mStride];
            mHandle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
            mScan0 = Marshal.UnsafeAddrOfPinnedArrayElement(buffer, 0);
            mBuffer = (uint*)mScan0.ToPointer();
            mBitmap = new Bitmap(mWidth, mHeight, mStride, PixelFormat.Format32bppRgb, mScan0);
            buffer = null;
        }

        // Fields
        private Bitmap mBitmap;
        private uint* mBuffer;
        private GCHandle mHandle;
        private IntPtr mScan0;
        private int mWidth;
        private int mHeight;
        private int mStride;

        // Methods
        public void Clear(uint color)
        {
            int count = mWidth * mHeight;
            if (count % 4 == 0)
            {
                for (int i = 0; i < count; i += 4)
                {
                    mBuffer[i] = color;
                    mBuffer[i + 1] = color;
                    mBuffer[i + 2] = color;
                    mBuffer[i + 3] = color;
                }
            }
            else
            {
                for (int i = 0; i < count; i++)
                    mBuffer[i] = color;
            }
        }

        public void Dispose()
        {
            mBitmap.Dispose();
            mBuffer = null;
            mHandle.Free();
        }

        public uint GetPixel(int x, int y)
        {
            if (x >= mWidth) throw new ArgumentOutOfRangeException("x");
            if (y >= mHeight) throw new ArgumentOutOfRangeException("y");
            return mBuffer[y * mStride + x];
        }

        public uint GetPixelUnchecked(int x, int y)
        {
            return mBuffer[y * mStride + x];
        }

        public void SetPixel(int x, int y, uint color)
        {
            if (x >= mWidth) throw new ArgumentOutOfRangeException("x");
            if (y >= mHeight) throw new ArgumentOutOfRangeException("y");
            mBuffer[y * mStride + x] = color;
        }

        public void SetPixelUnchecked(int x, int y, uint color)
        {
            mBuffer[y * mStride + x] = color;
        }

        public Bitmap GetBitmap() { return mBitmap; }
        public uint* GetBuffer() { return mBuffer; }
        public int GetWidth() { return mWidth; }
        public int GetHeight() { return mHeight; }

        // Properties
        public Bitmap Bitmap { get { return mBitmap; } }
        public uint* Buffer { get { return mBuffer; } }
        public int Width { get { return mWidth; } }
        public int Height { get { return mHeight; } }
    }
}

using System;
using System.Threading;

namespace SharpRay
{
    /// <summary>
    /// A minimal parallel-for helper built on the ThreadPool.
    /// </summary>
    public static class Parallel
    {
        public static void For(int start, int end, Action<int> action)
        {
            int count = end - start;
            if (count > 64)
            {
                For(start + 64, end, action);
                count = 64;
                end = start + 64;
            }
            ManualResetEvent[] resetEvents = new ManualResetEvent[count];
            for (int i = start, index = 0; i < end; i++, index++)
            {
                ManualResetEvent resetEvent = new ManualResetEvent(false);
                resetEvents[index] = resetEvent;
                ThreadPool.QueueUserWorkItem(
                    data =>
                    {
                        object[] dataArray = (object[])data;
                        action((int)dataArray[0]);
                        ((ManualResetEvent)dataArray[1]).Set();
                    }, new object[] { i, resetEvent });
            }
            WaitHandle.WaitAll(resetEvents);
        }
    }
}

// Color.ToPixel: clamp each component and pack into a 32-bit ARGB pixel
public uint ToPixel()
{
    uint pR = (uint)(Red * 255F);
    if (pR > 255) pR = 255;
    uint pG = (uint)(Green * 255F);
    if (pG > 255) pG = 255;
    uint pB = (uint)(Blue * 255F);
    if (pB > 255) pB = 255;
    return 0xFF000000 | (pR << 16) | (pG << 8) | pB;
}

/// <summary>
/// This is the main entry point for rendering a scene. This method is responsible for correctly
/// rendering to the graphics device (in this case a bitmap).
/// Note that apart from the raytracing, painting on a graphics device is rather slow.
/// </summary>
/// <param name="surface">the surface to render on</param>
/// <param name="viewport">basically determines the size of the bitmap to render on</param>
/// <param name="scene">the scene to render</param>
public unsafe void RayTraceScene(Surface surface, Rectangle viewport, Scene scene)
{
    int maxsamples = (int)AntiAliasing;
    DateTime timestart = DateTime.Now;
    surface.Clear(0);
    Parallel.For(0, viewport.Height, y =>
    {
        for (int x = 0; x < viewport.Width; x++)
        {
            double yp = y * 1.0f / viewport.Height * 2 - 1;
            double xp = x * 1.0f / viewport.Width * 2 - 1;
            Ray ray = scene.Camera.GetRay(xp, yp);

            // this will trigger the raytracing algorithm
            surface.Buffer[y * surface.Width + x] = CalculateColor(ray, scene).ToPixel();
        }
    });
    TimeSpan duration = DateTime.Now - timestart;
    RenderUpdate.Invoke(0, duration.TotalMilliseconds, 0, 0);
}

// Supersampled variant of the inner loop: cast SamplesX * SamplesY rays
// per pixel and average the result
Color final = new Color();
for (int tx = 0; tx < SamplesX; tx++)
{
    for (int ty = 0; ty < SamplesY; ty++)
    {
        double yp = (y + ty * InvSamplesY) * 1.0f / viewport.Height * 2 - 1;
        double xp = (x + tx * InvSamplesX) * 1.0f / viewport.Width * 2 - 1;
        Ray ray = scene.Camera.GetRay(xp, yp);
        final += CalculateColor(ray, scene);
    }
}
// this will trigger the raytracing algorithm
surface.Buffer[y * surface.Width + x] = (final * InvSamplesTotal).ToPixel();

private const int SamplesX = 2;
private const int SamplesY = 3;
private const float InvSamplesX = 1F / SamplesX;
private const float InvSamplesY = 1F / SamplesY;
private const float InvSamplesTotal = 1F / (SamplesX * SamplesY);
http://www.codeproject.com/Articles/15935/Yet-Another-RayTracer-for-NET?PageFlow=FixedWidth
GameFromScratch.com

In this part we are going to look at how to add animations to a TileMap in LibGDX. Along the way we are going to look at using properties, a very important part of working with tile maps, as properties contain your game's "data". First let's take a look at setting properties in Tiled. I am adding a second tileset to my existing map called "Water", using the following graphic I downloaded from Google Images. Tiled doesn't support animations, so we are going to hack support in using properties. You can get more details on working with Tiled here. Properties are set on the Tile, not the Cell, in Tiled. Load the above image as a new tileset in Tiled named Water. We are working with just 3 of the water tiles. Right-click on a tile and select Tile Properties... Now we want to set a property "WaterFrame" (the exact spelling matters, since the code looks it up by name) and give it the value of 1. Now repeat for the next two water tiles, with the values 2 and 3 respectively. In addition to Tile properties, you can also set properties at the Layer and Map level. Properties are just name/value pairs of strings when imported into LibGDX. Let's take a look at that process now. Save your tiled map and add it to the assets folder of your project. Also add all the various texture maps used for tiles.
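Since properties arrive in LibGDX as plain strings, the animation bookkeeping we are about to build boils down to parse, increment, and wrap. As a LibGDX-free sketch of that frame arithmetic (the helper name is mine, not part of LibGDX):

```java
public class Main {
    // Given the current "WaterFrame" property value ("1".."frameCount"),
    // return the next frame, wrapping back to "1" after the last one.
    static String nextFrame(String current, int frameCount) {
        int frame = Integer.parseInt(current) + 1;
        if (frame > frameCount)
            frame = 1; // frames are 1-based in this scheme
        return Integer.toString(frame);
    }

    public static void main(String[] args) {
        System.out.println(nextFrame("1", 3)); // 2
        System.out.println(nextFrame("3", 3)); // 1 (wraps around)
    }
}
```

The full listing below applies exactly this step to every water cell on a timer, swapping each cell's tile for the one registered under the next frame number.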
Let's look at some (heavily commented) code that builds on our earlier tile map example:

package com.gamefromscratch;

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input;
import com.badlogic.gdx.InputProcessor;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.maps.tiled.*;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;
import java.util.*;

public class TiledTest extends ApplicationAdapter implements InputProcessor {
    Texture img;
    TiledMap tiledMap;
    OrthographicCamera camera;
    TiledMapRenderer tiledMapRenderer;
    SpriteBatch sb;
    Texture texture;
    Sprite sprite;
    ArrayList<TiledMapTileLayer.Cell> waterCellsInScene;
    Map<String,TiledMapTile> waterTiles;
    float elapsedSinceAnimation = 0.0f;

    @Override
    public void create () {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera();
        camera.setToOrtho(false,w,h);
        // Position the camera over 100 pixels and up 400 to capture a more interesting part of the map
        camera.translate(100,400);
        camera.update();

        // Load our tile map
        tiledMap = new TmxMapLoader().load("MyCrappyMap.tmx");
        tiledMapRenderer = new OrthogonalTiledMapRenderer(tiledMap);
        Gdx.input.setInputProcessor(this);

        // We created a second set of tiles for Water animations
        // For the record, this is bad for performance, use a single tileset if you can help it
        // Get a reference to the tileset named "Water"
        TiledMapTileSet tileset = tiledMap.getTileSets().getTileSet("Water");

        // Now we are going to loop through all of the tiles in the Water tileset
        // and get any TiledMapTile with the property "WaterFrame" set
        // We then store it in a map with the frame as the key and the Tile as the value
        waterTiles = new HashMap<String,TiledMapTile>();
        for(TiledMapTile tile:tileset){
            Object property = tile.getProperties().get("WaterFrame");
            if(property != null)
                waterTiles.put((String)property,tile);
        }

        // Now we want to get a reference to every single cell ( Tile instance ) in the map
        // that refers to a water cell. Loop through the entire world, checking if a cell's tile
        // contains the WaterFrame property. If it does, add to the waterCellsInScene array
        // Note, this only pays attention to the very first layer of tiles.
        // If you want to support animation across multiple layers you will have to loop through each
        waterCellsInScene = new ArrayList<TiledMapTileLayer.Cell>();
        TiledMapTileLayer layer = (TiledMapTileLayer) tiledMap.getLayers().get(0);
        for(int x = 0; x < layer.getWidth();x++){
            for(int y = 0; y < layer.getHeight();y++){
                TiledMapTileLayer.Cell cell = layer.getCell(x,y);
                Object property = cell.getTile().getProperties().get("WaterFrame");
                if(property != null){
                    waterCellsInScene.add(cell);
                }
            }
        }
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        tiledMapRenderer.setView(camera);
        tiledMapRenderer.render();

        // Wait for half a second to elapse then call updateWaterAnimations
        // This could certainly be handled using an Action if you are using Scene2D
        elapsedSinceAnimation += Gdx.graphics.getDeltaTime();
        if(elapsedSinceAnimation > 0.5f){
            updateWaterAnimations();
            elapsedSinceAnimation = 0.0f;
        }
    }

    // This is the function called every half a second to update the animated water tiles
    // Loop through all of the cells containing water. Find the current frame and increment it
    // then update the cell's tile accordingly
    // NOTE! This code depends on WaterFrame values being sequential in Tiled
    private void updateWaterAnimations(){
        for(TiledMapTileLayer.Cell cell : waterCellsInScene){
            String property = (String) cell.getTile().getProperties().get("WaterFrame");
            Integer currentAnimationFrame = Integer.parseInt(property);

            currentAnimationFrame++;
            if(currentAnimationFrame > waterTiles.size())
                currentAnimationFrame = 1;

            TiledMapTile newTile = waterTiles.get(currentAnimationFrame.toString());
            cell.setTile(newTile);
        }
    }

    @Override
    public boolean keyDown(int keycode) {
        return false;
    }

    @Override
    public boolean keyUp(int keycode) {
        if(keycode == Input.Keys.LEFT)
            camera.translate(-32,0);
        if(keycode == Input.Keys.RIGHT)
            camera.translate(32,0);
        if(keycode == Input.Keys.UP)
            camera.translate(0,-32);
        if(keycode == Input.Keys.DOWN)
            camera.translate(0,32);
        if(keycode == Input.Keys.NUM_1)
            tiledMap.getLayers().get(0).setVisible(!tiledMap.getLayers().get(0).isVisible());
        if(keycode == Input.Keys.NUM_2)
            tiledMap.getLayers().get(1).setVisible(!tiledMap.getLayers().get(1).isVisible());
        return false;
    }

    @Override
    public boolean keyTyped(char character) {
        return false;
    }

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        return false;
    }

    @Override
    public boolean touchDragged(int screenX, int screenY, int pointer) {
        return false;
    }

    @Override
    public boolean mouseMoved(int screenX, int screenY) {
        return false;
    }

    @Override
    public boolean scrolled(int amount) {
        return false;
    }
}

Basically what we do is load our map, then loop through the "Water" tileset and grab a reference to any tile marked with the WaterFrame property. We then perform the same action for the map itself, looping through all of the cells (on the first layer!) and keeping a reference to every cell whose tile has the WaterFrame property defined. Then every half a second we update each cell to the next available frame of animation, or loop back to the first frame if none are available. Now if you run this code, voila!
Animated water!

In this case we manually updated tiles in the map. LibGDX does, however, present another option: they have recently added an animated tile class. That said, this functionality is very new, so warning, there be dragons. In fact, I didn't find a single implementation online, so this may in fact be the first! Here is a code example using AnimatedTiledMapTile to achieve the same effect (only the lines that changed from the previous listing are shown):

import com.badlogic.gdx.maps.tiled.tiles.AnimatedTiledMapTile;
import com.badlogic.gdx.maps.tiled.tiles.StaticTiledMapTile;
import com.badlogic.gdx.utils.Array;

public class TiledTest extends ApplicationAdapter {
    Array<AnimatedTiledMapTile> waterTilesInScene;
    Array<StaticTiledMapTile> waterTiles;

    // ... when looping through the Water tileset, collect the static frames ...
    waterTiles = new Array<StaticTiledMapTile>();
    Object property = tile.getProperties().get("WaterFrame");
    if(property != null) {
        waterTiles.add(new StaticTiledMapTile(tile.getTextureRegion()));
    }

    // ... and when looping through the cells, swap in an animated tile ...
    waterTilesInScene = new Array<AnimatedTiledMapTile>();
    cell.setTile(new AnimatedTiledMapTile(0.5f,waterTiles));

Ultimately the logic is very similar. Here, however, we replace the tile of each water instance we find in our map with an AnimatedTiledMapTile. It is passed an update interval (0.5 seconds again) as well as an array of tiles to use as part of the animation. The logic is basically identical; you just have slightly less control and no longer have to handle the updating on your own!

As you can imagine by the name "SpriteKit", Sprites are a pretty core part of creating a game using SpriteKit. We are going to continue building on the minimal application we created in the previous part. I want to point out, this isn't the recommended way of working with SpriteKit; it is instead the simplest way. In a proper application we would be more data driven and store our data in SKS files instead of simply adding them to the project. This is something we will cover later on. First let's jump right in with code.
We are going to replace the class GameScene we created in the last tutorial. In SpriteKit, the fundamentals of your game are organized into SKScene objects. For now we only have one. Let's look:

import SpriteKit

class GameScene: SKScene {
    let sprite = SKSpriteNode(imageNamed: "sprite1.png")

    override func didMoveToView(view: SKView) {
        sprite.anchorPoint = CGPoint(x:0.5, y:0.5)
        sprite.xScale = 4
        sprite.yScale = 4
        self.addChild(sprite)
    }

    override func mouseDown(theEvent: NSEvent!) {
        self.sprite.position = CGPoint(x:theEvent.locationInWindow.x, y:theEvent.locationInWindow.y)
    }
}

We add the sprite "sprite1.png" to our project directory; simply drag and drop it from Finder. The sprites I'll be using are from the zip file available here. When you run this code, click anywhere and you should see:

Wherever you click the mouse, the sprite will be drawn. One immediate change you will notice in this code is that sprite was moved out of didMoveToView and made a member variable. This allows us to access sprite in different functions (although we could retrieve the sprite from the scene, something we will see later).

In Swift there are only two main ways of declaring a variable, let and var. var is a variable, meaning its value can change. Using let, on the other hand, you are declaring that the value cannot change; this is the same as a const in other languages. As we touched on briefly in the last part, a let-declared value can be assigned later using the ? postfix operator. In this case, it will have the value of nil at initialization, unless one is specifically given, like in the code we just did. One thing you may notice is that, unlike C++, C# and Java, Swift currently has no access modifiers. In other words, all variables are publicly available (there are no private, internal, protected, etc. modifiers available). Apparently this is only temporary and will be changed in the language later. This personally seems like a very odd thing not to have in a language from day one.

Since we set the sprite's anchor to the middle (0.5, 0.5), the sprite will be centred on your mouse cursor. As you can see, we added a mouseDown event handler.
This handler is available because SKScene ultimately inherits from NSResponder on the Mac (UIResponder on iOS); this is how you handle I/O events in your scene. The only other new aspect to this code is:

sprite.xScale = 4
sprite.yScale = 4

This code causes the sprite to be scaled by a factor of 4x. We do this simply because our source sprite is only 64x64 pixels, making it really, really tiny in an empty scene! As you can see, scaling sprites in SpriteKit is extremely easy.

The structure of a SpriteKit game is actually quite simple. Your SKScene contains a graph of SKNodes, of which SKSpriteNode is one. There are others too, including SKVideoNode, SKLabelNode, SKShapeNode, SKEmitterNode and SKEffectNode. Even SKScene itself is an SKNode, which is how all the magic happens. Let's take a quick look at an SKLabelNode in action.

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        var label = SKLabelNode()
        label.text = "Hello World"
        label.fontSize = 128
        label.position = CGPoint(x:0, y:0)
        view.scene!.anchorPoint = CGPoint(x:0.5, y:0.5)
        self.addChild(label)
    }
}

Which predictably enough gives you:

These nodes, however, can be parented to make hierarchies of nodes. Take for example a combination of the two we've seen: our sprite node with a text label parented to it.

var sprite = SKSpriteNode(imageNamed:"sprite1.png")
sprite.position = CGPoint(x:100, y:0)
sprite.xScale = 4.0
sprite.yScale = 4.0

var label = SKLabelNode()
label.text = "Jet Sprite"
label.fontSize = 12
label.position = CGPoint(x:0, y:15)
label.fontColor = NSColor.redColor()
label.alpha = 0.5

sprite.addChild(label)
self.addChild(sprite)

And when you run it:

There are a few things to notice here. Each node gets its default coordinates from its parent. Since the jet sprite is parented to the scene and the scene's anchor is set to the middle of the screen, when we position the sprite 100 pixels to the right, that's 100 pixels to the right of the centre of the screen.
Additionally, the text label is parented to the sprite, so its position is relative to the sprite. Another thing you might notice is that the text is blurry as hell. That is because the label is inheriting the scaling from its parent, the sprite. As you can see, you compose your scene by creating a hierarchy of various types of nodes.

Now if we were to transform the parent sprite, all the transformations would apply to the child nodes. The following example shows how transforming a parent node affects all child nodes. Spoiler: it also shows you how to update a scene... we will cover this in more detail later, so don't pay too much attention to the man behind the curtain.

var sprite = SKSpriteNode(imageNamed:"sprite1.png")

// positioned and scaled as before, with the label parented to it:
sprite.position = CGPoint(x:0, y:0)
sprite.xScale = 8.0
sprite.yScale = 8.0

override func update(currentTime: NSTimeInterval) {
    if(sprite.yScale > 0) {
        sprite.yScale -= 0.1
        sprite.xScale -= 0.1
    }
    else {
        sprite.xScale = 8.0
        sprite.yScale = 8.0
    }
}

Now if we run this code:

Each time update() is called, the sprite is reduced in scaling until it disappears, at which point it's zoomed back to 8x scaling. As you can see, the child label node is scaled as well, automatically.

Notice how, until this point, if we wanted to access our sprite across functions we had to make it a member variable? As I said earlier, there is another option here: you name your nodes and retrieve them later using that name. Like so:

sprite.name = "MyJetSprite"

// ... then, in update():
var sprite = self.childNodeWithName("MyJetSprite")
if(sprite != nil){
    if(sprite!.yScale > 0) {
        sprite!.yScale -= 0.1
        sprite!.xScale -= 0.1
    }
    else {
        sprite!.xScale = 8.0
        sprite!.yScale = 8.0
    }
}

You can perform some pretty advanced searches, such as searching recursively through the tree by prefixing the name with "//". You can also search for patterns and receive multiple results. We will look at this in more detail later. This part is starting to get a bit long, so I am going to stop now.
The next part will look at more efficient ways of using sprites, such as using an atlas, as well as basic animation and whatever else I think to cover!

CocoonJS just released a new version. CocoonJS is a technology that allows you to bundle HTML applications into a native application for mobile app store deployment. This release contains:

Canvas+
- Canvas+ now handles HTML5/Web exports from Construct 2 correctly. Get more info here.
- Added an implementation of the FontFace CSS style. Fonts can now be downloaded from a remote URL defined in the FontFace attribute. Get more info here.
- Improvements in DOM support:
  - Made window.document readOnly.
  - Added the window.pageXOffset and window.pageYOffset properties.
  - Added the document.defaultView property.
- XHR improvements:
  - Made "text/plain;charset=UTF-8" the default content-type.
  - Allow XHR responses to be saved to disk via a new cocoonSetOutputFile extension. Get more info here.
- pageLoaded / pageFailed are correctly called now.
- Fixed an audio system deadlock when alcOpenDevice fails.
- New (and correct) device orientation handling. Get more info here.
- Rendering:
  - Ensure the renderer is always resumed after loading a URL.
  - Improved WebGL compatibility with renderTargets.
  - Fixed some problems in renderbufferStorage arguments, which caused an incomplete framebuffer status.

Extensions
- App: added new methods:
  - onSuspended: notification when the application is suspended.
  - existsPath: check whether a file exists in the filesystem.
  - setTextCacheSize: text rendering is cached inside CocoonJS; this method controls the size of that cache.
- Ads: fixed a problem with iOS 6 and iAd interstitials.

WebView+
- Added a check to avoid null pointer exceptions when destroying the webview.
- Implemented the webView setNetworkAvailable method, which triggers the 'online' event on the JavaScript side.
- Fixed a viewport size problem which happened on some devices.
- Fixed some missing-resources bugs.

Cloud Compiler
- Added support for Cordova 3.3 and 3.4.
Launcher
- Some small improvements to the Android launcher UI.
- Improved download/unzip times in the Android launcher.
- Made orientation handling behave the same as in the browser. Some existing games that depend on gyroscope/orientation might need some tweaking because of this.
- Fixed a crash if the device orientation changes during game launch.

Known Bugs
- Ads with AdMob and MoPub aren't handled correctly, and may not appear. If more ad networks are configured in MoPub, those will be served.
- Samsung Galaxy Tab devices crash due to SIGILL. We have already found the issue related to V8 and this specific hardware/processor and are working hard to try to solve it asap.

We are working on all of these bugs in order to solve them and create a new release asap. Sorry for the inconvenience. Important: you can always revert the version of the launcher and the compiler you're using. Get more info here.

If you think Cocoon sounds a heck of a lot like PhoneGAP/Cordova, well, it is. Cocoon have put together this comparison guide. Basically, the biggest difference is speed.

When I did the first (and currently only) part of my Swift with SpriteKit tutorial series, I ran into a crash problem with the default game template. In Googling around and looking at the password-protected Mac developer forums, I noticed I am by no means alone. Even a comment in the earlier tutorial part mentioned encountering the problem and said it was tied to Mac OS 10.9. I can confirm that on my 10.10 install everything works correctly; for me it is only on 10.9 that I have a problem.
It's pretty simple to re-create: when running Mac OS 10.9, create a simple project using the game template with Swift as your language (the Objective-C template works just fine), like this one:

Then when you run it, you get: EXC_BAD_ACCESS Code=1 address=0x10.

Fortunately I have discovered a very simple fix. In Xcode, open up the generated sks file, GameScene.sks:

Make sure the Utilities panel is visible:

Now from the Object library, drag an Empty Node to the editor window, like so:

And now, if you run the game, it should work:

My guess is that the scene file included in the Swift SpriteKit template is corrupted. By adding a node and saving it, it seems to fix the corruption. By the way, you can delete the empty node now and it will continue to work. Hope that helps some of you, at least till Apple fix the bug.
pcf8574.c File Reference

PCF8574 i2c port expander driver. More...

#include "pcf8574.h"
#include "cfg/cfg_i2c.h"
#include <cfg/compiler.h>
#include <drv/i2c.h>
#include <cfg/module.h>

Go to the source code of this file.

Detailed Description

PCF8574 i2c port expander driver. This driver controls the PCF8574, an 8-bit i2c port expander: you can read/write 8 pins through an i2c bus. The pins are quasi-bidirectional, which means that without the need of a direction register you can use each pin as input or output; see the datasheet for how this is achieved.

Definition in file pcf8574.c.
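The "quasi-bidirectional" behaviour mentioned above can be illustrated without hardware. On the PCF8574 there is no direction register: to use a pin as an input, you write a 1 to it, after which the chip only drives the pin through a weak current source, so an external device can safely pull it low. A small sketch of that port-byte manipulation, in Python for illustration (the function names are mine, not part of the BeRTOS driver):

```python
def pin_as_input(port_byte, pin):
    # Write 1 to the pin: the PCF8574 then drives it only via a weak
    # pull-up, so an external device can pull it low and be read back.
    return port_byte | (1 << pin)

def pin_low(port_byte, pin):
    # Drive the pin low (the only strongly driven state on this chip),
    # masking to 8 bits since the expander has exactly one port byte.
    return port_byte & ~(1 << pin) & 0xFF

port = 0x00                   # all pins driven low
port = pin_as_input(port, 3)  # release pin 3 so it can be used as an input
```

The same read-modify-write pattern applies whether the byte is then sent over i2c by this driver or by any other i2c library.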
etcd was created as the primary building-block on which CoreOS is built. It uses the Raft algorithm to keep changes consistent throughout a cluster by electing a leader and distributing a log of operations ("commands") from the leader to the other systems. Due to these features and others, etcd can be used for robust service-discovery and cluster configuration, replacing ZooKeeper. Entries are referred to as "nodes".

Distributed Locks

Every update automatically increments the "index", which is a global, monotonically-increasing value, incremented for every operation:

c.set('/a/b/c', 5).index # 66
c.set('/a/b/c', 5).index # 67

The index increases for every operation, not just those with side-effects. Per the mailing list (2013-11-29), the reason for this is:

That's a side effect of how Raft works. When new commands come in they get sent to Raft immediately which increments the index. We're not able to check the current value of the key before insert because Raft batches commands so there may be uncommitted changes between the current state and the state at the time when the command is being committed. That's also why changes that cause errors can increment the index even though no change was made.

etcd also gives us a "CAS" ("compare and swap") call ("test_and_set" in the Python client). This allows us to assign a value to a key, but only when the existing value meets one or more conditions:

- The existing value is set to something specific (a "previous value" condition).
- The existing index is set to something specific (a "previous index" condition).
- The key either currently exists or doesn't (a "previously exists" condition).

The existence of a monotonic, atomic counter and a CAS function happen to be the exact dependencies required to establish distributed locking. The process might be the following:

- Initialize a node for the specific lock ("lock node"). Use CAS with a "prevExists" of "false" and a value of "0".
- Assign some value to some dummy key for the purpose of incrementing and grabbing the index. This index will be used as a unique ID for the current thread/instance ("instance ID").
- Do a CAS on the lock node with a "prevValue" of "0", a value of the instance-ID, and a TTL of whatever maximum lock time we should allow.
- On error, watch the lock node. Give the HTTP client a timeout. Try again after the long-poll returns or the timeout hits.
- If no error, do whatever logic is required and, to release, use a CAS to set the lock node to "0" with a "prevValue" of the instance-ID. If this fails (ValueError), then the lock has been re-owned by another instance after having timed out.

It's important to mention that the "test_and_set" operation in the Python client currently supports only the "prevValue" condition. With the "prevValue" condition, you'll get a KeyError if the key doesn't exist. If the real existing value does not match the stated existing value, you'll get a ValueError (which is a standard consideration when using this call).

Additional Features

Aside from being so consistent and having easy access to the operations via REST, there are two non-traditional operations that you'll see with etcd but not with [most] other KV solutions:

- Entries can be stored in a hierarchy.
- Long-polling to wait on a change to a key or folder ("watch").

With (2), you can monitor a key that doesn't yet exist, or even a folder (in which case, it'll block until any value inside the folder changes, recursively). You can use this to achieve event-driven scripts (a neat usage mentioned on the mailing list).

Lastly, before moving on to the example, note that the cluster should be kept small (see the FAQ entry "what size cluster should I use"). etcd is based on Google's Chubby (which uses Paxos rather than Raft).

Quick Start

For this example, we're going to establish and interact with etcd using three different terminals on the same system. etcd requires Go 1.1+.
You'll probably have to build it (via a "Git" clone call, and a build), as it's not yet available via many package managers (Ubuntu, specifically). Run etcd:

$ etcd
[etcd] Nov 28 13:02:20.849 INFO | Wrote node configuration to 'info'
[etcd] Nov 28 13:02:20.849 INFO | etcd server [name default-name, listen on 127.0.0.1:4001, advertised url]
[etcd] Nov 28 13:02:20.850 INFO | raft server [name default-name, listen on 127.0.0.1:7001, advertised url]

Creating a cluster is as easy as simply launching additional instances of the daemon on new hosts. Now, install Python's python-etcd:

sudo pip install python-etcd

Connect the client:

from etcd import Client
c = Client(host='127.0.0.1')

Set a value (notice that we have to specify a folder, even if it's only the root):

c.set('/test_entry', 'abc')
EtcdResult(action=u'SET', index=9, key=u'/test_entry', prevValue=None, value=u'abc', expiration=None, ttl=None, newKey=True)

# Actions available on EtcdResult: action, count, expiration, index, key, newKey, prevValue, ttl, value

Get the value:

r = c.get('/test_entry')
print(r.value) # Prints "abc"

In a second terminal, connect the client and run the following to block for a change to the given folder (it doesn't currently exist):

r = c.watch('/test_folder')

Back in the first terminal, run:

c.set('/test_folder/test_inner_folder/deep_test', 'abc')

The command waiting in the second terminal has now returned. Examine "r":

print(r)
EtcdResult(action=u'SET', index=15, key=u'/test_folder/test_inner_folder/deep_test', prevValue=None, value=u'abc', expiration=None, ttl=None, newKey=True)

Get a listing of children.
This may or may not work on "/", depending on your python-etcd version:

from pprint import pprint
c.set('/test_folder/entry_1', 'test_value_1')
c.set('/test_folder/entry_2', 'test_value_2')
list_ = c.get('/test_folder')
pprint(list_)
#[EtcdResult(action=u'GET', index=4, key=u'/test_folder/entry_1', prevValue=None, value=u'test_value_1', expiration=None, ttl=None, newKey=None),
# EtcdResult(action=u'GET', index=4, key=u'/test_folder/entry_2', prevValue=None, value=u'test_value_2', expiration=None, ttl=None, newKey=None)]

etcd also allows for TTLs (in seconds) on "put" operations:

from time import sleep
c.set('/disappearing_entry', 'inconsequential_value', ttl=5)
sleep(5)
c.get('/disappearing_entry')

You'll get the following error (a proper KeyError):

Traceback (most recent call last):
  File "", line 1, in
  File "/Library/Python/2.7/site-packages/etcd/client.py", line 284, in get
    response = self.api_execute(self.key_endpoint + key, self._MGET)
  File "/Library/Python/2.7/site-packages/etcd/client.py", line 357, in api_execute
    raise error_exception(message)
KeyError: u'Key Not Found : get: /disappearing_entry'

Miscellaneous functions:

c.machines # ['']
c.leader # ''

As a final note, you don't have to choose between cURL requests and the API. Rather, there's also etcdctl for command-line control:

$ etcdctl set /foo/bar "Hello world"
Hello world

Leaders are elected using elections. However, there's a chance that a leader won't be elected, and the elections will have to be reattempted. From the mailing list (2013-11-29):

Q: What would cause a leader candidate to not receive a majority of votes from nodes, during elections?
A: The common case election failure would be due to either a network partition causing less than a quorum to vote, or another candidate being elected first.

Q: Is there any decision-making involved during elections, such as the consideration of the CPU utilizations of individual machines?
A: Not at this time.
It might make sense to add some sort of fitness to the leader proposal decision later.
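Stepping back to the distributed-lock recipe from earlier: its shape can be sketched without a running cluster. The stand-in class below mimics just the test_and_set semantics of python-etcd (ValueError on a "prevValue" mismatch, KeyError on a missing key) purely to demonstrate the acquire/release protocol; it is an in-memory fake, not the real client, and it ignores TTLs and watches:

```python
class FakeEtcd:
    """In-memory stand-in for the CAS behaviour of python-etcd."""
    def __init__(self):
        self.store = {}

    def test_and_set(self, key, value, prev_value):
        current = self.store.get(key)
        if current is None:
            raise KeyError(key)
        if current != prev_value:
            raise ValueError("prevValue mismatch: %r" % current)
        self.store[key] = value

def acquire(client, lock_node, instance_id):
    try:
        # CAS the lock node from "0" (free) to our instance-ID.
        client.test_and_set(lock_node, instance_id, prev_value='0')
        return True
    except ValueError:
        return False   # someone else holds it; in real code, watch and retry

def release(client, lock_node, instance_id):
    try:
        # CAS back to "0", but only if we still own it.
        client.test_and_set(lock_node, '0', prev_value=instance_id)
        return True
    except ValueError:
        return False   # the lock timed out and was re-owned by another instance

c = FakeEtcd()
c.store['/locks/demo'] = '0'                # step 1: initialized lock node
ok = acquire(c, '/locks/demo', '42')        # acquired
blocked = acquire(c, '/locks/demo', '43')   # refused: already held
released = release(c, '/locks/demo', '42')  # cleanly released
```

Against a real cluster, the acquire call would also pass a TTL so a crashed holder cannot hold the lock forever, and the failure branch would long-poll with watch() instead of returning immediately.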
Subject: Re: [boost] painless currying From: Daniel James (dnljms_at_[hidden]) Date: 2011-09-01 17:43:22 On 1 September 2011 05:07, Paul Mensonides <pmenso57_at_[hidden]> wrote: > > 1) GCC performs macro expansion and parsing on the expressions of all #if/ > #elif/#endif blocks that it encounters when not already skipping because > of an outer #if/#endif. It is questionable whether this is what was > intended, but it is arguable either way. I think it was made due to: Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
Objectives:
1. Manipulate dynamic pointers and arrays of pointers.
2. Use sorts, strings, searches and string functions.

Procedure:
1. Your new software engineering group is hired for its first paying project: to develop a console application demonstrating a possible user interface for a next-generation MP3 player.
2. Your C++ program will prompt the user to input up to 20 (controlled with a global constant int) names of artists whose songs will be loaded into the MP3 player. No artist name will be more than 80 characters (including the null terminator). Each entry needs to be loaded (strcpy would be nice) into a dynamic string created with the new keyword.
3. Each of the pointers to the strings of the artists' names needs to be stored in an array – yielding an "array of pointers".
4. The array needs to be sorted in a separate void function such that the first alphabetical artist name is stored in the [0] entry of the array, the next in the [1] entry, etc.
5. After the array is sorted, call a void function, passing the array and the number of entries in the array, and print out the artists. The list should be sorted correctly.
6. After printing out the artists, prompt the user for a search string in main(). Call a void function to search through the strings pointed to by your array and return (through a call by reference) an array of references which contain the search string. The search should be case-insensitive. If you searched, for example, on "bob", you would return references to "Bob Dylan" and "The bobs".
7. If there were valid references returned from your search, call the same print function as in step 5 above and print out the artists that had a match in the search. If there were no valid references, cout a message to the user. You should allow for multiple searches via re-prompting.
8. At the end of your program, be sure to iterate through your newly allocated strings and return the allocated memory to the operating system.
9.
Here's an example of input and output:

Enter artist name: Grateful Dead
Another artist? (y,n): y
Enter artist name: Lucinda Williams
Another artist? (y,n): y
Enter artist name: Third Eye Blind
Another artist? (y,n): y
Enter artist name: Bob Marley
Another artist? (y,n): y
Enter artist name: Metallica
Another artist? (y,n): n

Sorted Artist List:
Bob Marley
Grateful Dead
Lucinda Williams
Metallica
Third Eye Blind

Enter search string: ll
References:
Lucinda Williams
Metallica
Search Again? (y,n): y

Enter search string: a
References:
Bob Marley
Grateful Dead
Lucinda Williams
Metallica
Search Again? (y,n): y

Enter search string: x
Sorry. No references in database for search on: "x"
Search Again? (y,n): n

Press any key to continue
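The assignment itself is C++, but the matching behaviour expected in step 6 (case-insensitive substring search) can be prototyped in a few lines of Python before wrestling with strcpy and arrays of pointers. This is only an illustration of the expected behaviour against the sample data above, not a solution to the assignment:

```python
def search_artists(artists, needle):
    # Case-insensitive substring match, as in the "bob" example:
    # "Bob Dylan" and "The bobs" should both match.
    needle = needle.lower()
    return [name for name in artists if needle in name.lower()]

# The sample session's five artists, sorted alphabetically as in step 4.
artists = sorted(["Grateful Dead", "Lucinda Williams", "Third Eye Blind",
                  "Bob Marley", "Metallica"])

matches = search_artists(artists, "ll")  # expect Lucinda Williams, Metallica
no_hits = search_artists(artists, "x")   # expect an empty result
```

In the C++ version, the same effect is usually achieved by lower-casing each character with tolower() while scanning, since the C string library has no case-insensitive substring search in the standard.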
I tried to call a servlet from a JSP page, but it fails with the following status 500 runtime error:

'.' expected
[javac] import GenerateHTML ;

<%@page import = "GenerateHTML " %>
<%
GenerateHTML obj = new GenerateHTML ();
obj.service(request, response);
%>

public class GenerateHTML extends HttpServlet {
    public void service(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
        //etc...
    }
}

any ideas??

status 500 error to call a servlet from a JSP page (2 messages)
- Posted by: Matt Louden
- Posted on: June 08 2004 20:27 EDT

Threaded Messages (2)
- status 500 error to call a servlet from a JSP page, by Thomas Jachmann on June 09 2004 05:23 EDT
- There's a space in your import statement..., by Rene Zanner on June 09 2004 07:47 EDT

status 500 error to call a servlet from a JSP page [ Go to top ]
- Posted by: Thomas Jachmann
- Posted on: June 09 2004 05:23 EDT
- in response to Matt Louden

The import might expect a '.' in the import statement, since you usually put your java classes in packages. Of course, you can put the class in the default package as well, and this should be supported, but it obviously isn't. Anyway, I think you're violating the servlet's lifecycle by instantiating it yourself. You should let the servlet container do that. Just define your servlet in web.xml, map it to a request URI, say /GenerateHTML, and then use <jsp:include> to call the servlet.

HTH, Thomas

There's a space in your import statement... [ Go to top ]
- Posted by: Rene Zanner
- Posted on: June 09 2004 07:47 EDT
- in response to Matt Louden

...which is criticized correctly by the javac compiler.

Cheers, René
Java Puzzle 5: Static Variables and Object Instantiation

Catch up with the answer to the last Java quiz about passing parameters to constructors and dive into a more open-ended puzzle about static variables and instantiation.

Before we start with this week's quiz, here is the answer to Java Quiz 4: Passing a Parameter to a Constructor.

The statement new MyClass(5).method(2) creates a new object by calling the one-argument constructor of the class MyClass. By passing 5 to the constructor, the statement y += i; increments the value of y by 5. So y = 3 + 5 = 8. By invoking the method and passing the value 2 to it, the statement y += i; increments the value of y by 2. So y = 8 + 2 = 10.

The statement new MyClass(new MyClass(5).method(2), 4); creates a new object by calling the two-argument constructor. The method returns the value 10, so the statement new MyClass(new MyClass(5).method(2), 4); is equivalent to new MyClass(10, 4); The statement y += (i + i2); increments the value of y by 10 + 4 = 14, so y = 3 + 10 + 4 = 17.

The correct answer is: d.

Onto the Puzzle!

Usually we offer a Java quiz, but today, as shown by the title, we have a Java puzzle. Let's start it!

If the following code is compiled and run, it writes nothing to the standard output. Write only one statement at line 29. The statement should apply the following: it creates the object mc from the class MyClass, and it should assign the value "y" to the object str, the value 9 to the variable i, and the value 8 to the variable i2. As a result of adding the statement, the output of the code should become y98. What is that statement?
public class MyClass {
    static String str = "x";
    static int i = 2;
    static int i2;

    MyClass(String str, int i, int i2) {
        MyClass.i2 = i2;
        if(!str.equals("") || i != 0 || i2 != 3) {
            System.out.print(i2);
        }
    }

    String strMethod(String str) {
        MyClass.str = str;
        System.out.print(str);
        return str;
    }

    int intMethod(int i) {
        MyClass.i = i;
        System.out.print(i);
        return i;
    }

    public static void main(String[] args) {
        // add your statement here!
    }
}

The correct answer and its explanation will be included in the next quiz in two weeks! For more Java quizzes, puzzles, and assignments, take a look at my earlier posts.
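As a side note, the quiz-4 walkthrough at the top of this article can be double-checked with a quick trace. The sketch below paraphrases the described behaviour in Python (y starting at 3 per object, a one-argument constructor, the chained method call, then the two-argument constructor); it is not the original Java code, and it is not the answer to puzzle 5:

```python
class MyClass:
    def __init__(self, i, i2=None):
        # i2 defaulting to None stands in for Java's constructor overloading.
        self.y = 3                  # per the walkthrough, y starts at 3
        if i2 is None:
            self.y += i             # one-arg constructor: y = 3 + 5 = 8
        else:
            self.y += i + i2        # two-arg constructor: y = 3 + 10 + 4 = 17

    def method(self, i):
        self.y += i                 # y = 8 + 2 = 10
        return self.y

inner = MyClass(5)
returned = inner.method(2)          # mirrors new MyClass(5).method(2)
outer = MyClass(returned, 4)        # mirrors new MyClass(10, 4)
```

Running it reproduces the values in the explanation: the inner object ends at 10 and the outer object at 17.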
Abstract

Both in formal situations (as school teachers, football trainers, etc.) and in many, often unpredictable informal situations (both inside and outside institutions), adults come close to children. Whether we intend it or not, we continually give them examples of what it is to live as a human being, and thereby we have a pedagogical responsibility. I sketch what it could mean to let ourselves "be built up", in a Kierkegaardian sense, on the foundation of unconditional love, presupposing that this love is possible for all human beings. Kierkegaard's Upbuilding discourses invite each reader to engage in a dialogue with the possibilities in the text. Thereby the reader may become aware of his or her present situation in life and see possible alternatives. These discourses or "talks" (taler in Danish) exemplify a manner of indirect communication which perhaps may be transferred to encounters with works of art in general: how could I let examples in literature, pictures, films and music invite and challenge me, to ask myself who I am right now and who I ought to be? My aim is to present an alternative to the instrumental advice that adults are given today. I attempt to clarify the leading concept "upbuilding examples", sketch the difference between upbuilding, education and Bildung, refer to works of art that seem to have upbuilding possibilities, and consider why upbuilding examples should be studied and how they could be studied in small self-governed groups of adults.

Vision and Sources

Dostoevsky has a thought-provoking comment to adults in The Brothers Karamazov (Dostoevsky 2000, VI 3g, p. 298). Adults come close to children in formal as well as informal situations. If we, like the adult in the example above, do not see how we influence children, we might harm them.
That may demand “long labour” and patience. What can we do? Perhaps the possibilities are very restricted? If something is possible, even small steps should be considered. We may be moved both by principles and by people. Good examples give us models to emulate. Bad examples may warn us and help us to avoid becoming people who unintentionally are “sowing an evil seed” in the child. There are several ways which might open possibilities for small corrections and improvements: direct experiences, research, discussion forums on the internet, courses in parenting, courses in leadership, studies in ethics and education. A way that I see as a promising supplement to these established ways, would be to encourage adults to form pedagogical groups, where examples in literature, pictures, films and music could be experienced and talked about. My vision is that the participants in such groups could try and help each other to become “better human beings” (Wivestad 2012)—persons who seek and do what is good and avoid evil. I do not claim that persons in this way actually will become “better”, but I argue for some good possibilities that can be actualised when informal small groups of adults engage in serious encounters with works of art. The saying above is from Father Zossimah, the mentor of the youngest of the brothers Karamazov. He gives a direct challenge to adults in the novel, and indirectly we, the readers, can let ourselves be challenged as well. Perhaps some would label the challenge as “narrow and moralistic”. This label may be questioned when we consider how Father Zossimah understands the challenge: “Brothers, have no fear of men’s sin. Love a man even in his sin, for that is the semblance of Divine Love and is the highest love on earth” (Dostoevsky 2000, p. 297). The love that the adult in the example above had not fostered in himself, is a love for all, not a love that is reserved only for those who are lovable. 
This love is a possible “teacher” for all who are willing to be taught by it and to give it authority in their lives (Biesta 2012). Such teaching is probably most difficult to receive for those who are proud of their virtues and competences. If we let love be our teacher and struggle with ourselves to acquire “careful, actively benevolent love”, the struggle in itself is a good model for the children—even when we fail. My interest in personal and political improvement is linked to a concern with the practical “upbringing” (Wivestad 2012) of children. Upbringing is a universal human phenomenon and therefore more fundamental than schooling. As the way of young people is dependent on their way as children, the youth’s formal education is dependent on their upbringing from the very beginning; and as our good intentions in the upbringing of children can be contradicted by the example we give them, both formal and informal educators ought to be more aware of the fact that they are models. The upbringing of children challenges us to live what we learn, and when failing, to struggle with ourselves. The struggling adult is a better exemplar than the seemingly perfect one, because the latter loses the child’s trust when the adult’s bad sides are exposed. In the attempt to flesh out the vision, firstly I found help in Aristotle’s reference to examples (like Pericles) when he defines moral virtue (arete) and moral wisdom (phronesis) (Aristotle 2002, 1107a1, 1140a24, 1140b8), and in Thomas Aquinas’s ideas of a fruitful relation between moral wisdom and unconditional love (Wivestad 2008). Other impulses have been: Paulo Freire’s well-formed aim of creating “a world in which it is easier to love” (Freire 1972, p. 
19), the plans of Comenius (1986) for a pan-paideia, a universal paideia (upbringing, cultural heritage and Bildung) inviting all human beings (pantes) to be taught about the universe (panta) in a thorough (pantos) way, Pestalozzi's (1825) and MacIntyre's (1999) ideas of a grass-root movement in the local community, Kierkegaard's (1990, 1995) challenging texts on upbuilding in love, Hans-Georg Gadamer's understanding of art experience as self-understanding (Gadamer 1986) and insistence on experience as insight in one's own limitation, one's finitude (Gadamer 1965); Eisner's (1991) ideas for an artistic pedagogical inquiry, and Mollenhauer's (1994) sketch of a critical "universal pedagogic" (Allgemeine Pädagogik). Mollenhauer helps us to see that important documents (pictures and literary texts) in the Western cultural tradition remind us of connections which are easily forgotten in the modern scientific specialization, connections which nevertheless form a necessary basis for reasonable choices today. Artistic and scientific sources may supplement each other, and views derived from the context of the local community, with its particular cultural traditions, should be coordinated with a global perspective. As Adam's descendants, we are able to give names to all the animals (Dylan and Arnosky 2010) and to all beings. We are probably the only species that can view every small element in the universe as part of the totality, and therefore we ought to care for all beings. But it is the children close to us who are our primary responsibility.

What is an "Upbuilding Example"?

The adjective or adverb "upbuilding" I found via Tubbs (2005, chapter 8) and a translation of Søren Kierkegaard's Opbyggelige taler. Something is opbyggelig or "upbuilding" when it builds up our "house", i.e. our life (Wivestad 2011, p. 616). The upbuilding discourses have been assessed as "the keystone" of Kierkegaard's authorship (Perkins 2003, p.
2), but it seems that they are not well known by philosophers. Perhaps some have thought them relevant only for people who understand themselves as Christians? Kierkegaard displays “a kind of writing that repeatedly attempts to turn readers back to themselves and to their individual situation vis-à-vis God” (Pattison 2002, p. 88). He seems thus to actualise the Socratic “know yourself” in relation to “the eternal”, and he searches for a kind of truth that is “concerned” (Kierkegaard 1990, p. 233)—somewhat like the Nicomachean ethics, where the intention is to disclose truth that can help us “become good, since otherwise the inquiry would be of no benefit to us” (Aristotle 1985, 1103b29 and 1140b5). What can build up my life? Can we who are temporal beings understand ourselves and live in truth if we do not wonder how the temporal is related to what is not temporal? In Being and time Heidegger has remarked: “there is more to be learned philosophically from his [Kierkegaard’s] ‘edifying’ writings, than from his theoretical ones—with the exception of his treatise on the concept of anxiety” (Heidegger, quoted in Pattison 2002, p. 1). The concept ‘upbuilding’ in these writings comes from Christian traditions. It is seldom used in today’s dominating post-Christian culture, and if it is used, it is sometimes used ironically. By choosing upbuilding as a leading concept I express Kierkegaard’s and my own connection to Christian traditions. This may prompt the question: Can upbuilding be relevant to agnostics and atheists? In his introduction to a selection of Kierkegaard’s Spiritual writings, George Pattison contends that Kierkegaard’s interpretations of the Bible “restates Christian teaching in a way that can speak to those of all faiths and none” (Kierkegaard 2010, p. xxvii). Kierkegaard assumed only that his readers were “concerned about the meaning of their life in the world”, and that they were “ready to take seriously the possibility of religion” (p. xv). 
Following this line of thought, I will search for upbuilding examples not only in Christian traditions. To clarify what I mean by an "upbuilding example", I will start with an example from an artist who explicitly wants to give "a global idea-image of the human" (Vasarely 1982, p. 9). Before reading on, please study the picture below. What do you see in this picture? To me, Vasarely's picture Catch from 1945 represents two flat human figures, one black and one white; or is it rather two sides of the same person? The figures complement each other. Perhaps they confront each other as well? The hands are in focus. How do they "catch"? Do the hands embrace or push? What are the differences between the legs of the two figures? How do the figures move? The composition can give associations to a kind of dynamic balance, in the same way as Yin and Yang. Different interpretations of Catch might be possible. Perhaps the work represents human life as a life of inevitable contradictions and tensions, a life that challenges us to self-reflection? Perhaps it tells about a life where we have to struggle with identity and integrity problems? Perhaps it challenges us to embrace ourselves as split in two, and if so, to understand what characterizes the difference between our two parts? An exploration of this picture could be a starting point for a dialogue about different possible basic realities of human life. Facing the realities of our life I see as a necessary starting point for upbuilding, and therefore I understand this picture as an upbuilding example. However, why talk about "upbuilding"? Could not this exploration and dialogue be understood in more traditional ways, for instance as a process of Hegelian Bildung, where one tries to "seek one's own in the alien, to become at home in it" (Gadamer 1979, p. 15), or as an "educative experience", where we in the continuum with our previous experiences form an attitude of "desire to go on learning" (Dewey 1988, p.
29)? Both these traditions attend to important points that could build up both the person and the fellowship. The human consciousness cannot have or undergo new experiences unless it is turned around—away from the illusions (Plato 2000, 7.515c–d)—unless it "recognizes itself in what is alien and different" (Gadamer 1979, p. 318). Are Bildung and education sufficient? "Bildung, as being raised to the universal" (p. 13) or to Deweyan endless growth, might miss out what both Dostoevsky and Kierkegaard see as the highest, i.e. the love of the neighbour. Kierkegaard's Danish contemporaries talked about Dannelse (a word that can be translated as formation, Bildung and liberal education) as a cry for "the highest", and there seems to be a similar cry today. Kierkegaard argues why it is not enough to be dannet or "cultured":

Are you, my listener, perhaps what is called a cultured person? Well, I, too, am cultured. But if you think you will come closer to this highest with the help of "culture" ["Dannelse"], you make a great mistake. … has anyone's zeal for becoming cultured [vinde Dannelse] taught him to love the neighbor? Alas, have not this culture [Dannelsen] and the zeal with which it is coveted rather developed a new kind of distinction, the distinction between the cultured and the uncultured? … Of course, a certain social courtesy, a politeness toward all people, a friendly condescension toward inferiors, a boldly confident attitude before the mighty, a beautifully controlled freedom of spirit, yes, this is culture—do you believe that it is also loving the neighbor? The neighbor is the one who is equal. The neighbor is neither the beloved, for whom you have passion's preference, nor your friend, for whom you have passion's preference. Nor is your neighbor, if you are a cultured person, the cultured individual with whom you have a similarity of culture—since with your neighbor you have the equality of a human being before God. (Kierkegaard 1995, pp. 59–60).
The Bildung that persons may win (vinde) in the competition on the school arena, and the profit that nations may win on the global arena, may create rigid positions of inferiority and superiority, especially if one believes that the positions are deserved. Callous and rigid attitudes hamper the continuity of experience. Such attitudes will be "mis-educative" (Dewey 1988, p. 17) because they hinder the possibility of further growth. Kierkegaard also differentiates between Dannelse and Misdannelse, but his focus is the avoidance of hatred, strife and revenge between human beings, who, in his perspective, in reality are equal, in spite of all outward distinctions. Therefore he maintains that upbuilding—as a work of love—is a necessary condition for education: "education without the upbuilding is, eternally understood, miseducation. … But like love, upbuilding, if possible, will unite those, who are most different from each other, in the essential truth" (Kierkegaard Papirer 1918, VIII 2 B, lines 11–12 and 16–17, translated in Søltoft 2000, p. 22). What is an "upbuilding example"? Kierkegaard has presented several good examples in his Eighteen upbuilding discourses (1990). These beautiful texts may help the reader in an indirect way to see what builds us up as human beings. A summary of them may be misleading. Form and content are intertwined in Kierkegaard's writings: "it is precisely the stylistics of the upbuilding discourses that will provide some of the most important clues to their philosophical significance" (Pattison 2002, p. 7). It is the encounter between possibilities in the texts and the reader's own life experiences that can make them important. However, because I want to select other works of art that have an upbuilding potential, I need some short direct guidelines, even if they only convey "shadow pictures" of the educational possibilities in the discourses.
I derive these guidelines from Nigel Tubbs's interpretation of Kierkegaard (Tubbs 2005, chapter 8) and my reply to him (Wivestad 2011), especially the outline of how Kierkegaard's thinking can be educationally relevant (pp. 614–615), and the interpretation of stages in upbuilding (pp. 617–619). These may be compared with Pattison's (2002, pp. 37–38) description of the stages. Three summary points might give some understanding of upbuilding. Something is upbuilding if

- 1. it helps us to a realistic understanding of our own limitations, helps us to acknowledge that we, in our striving to possess the world and become masters, will also be possessed by the world and thereby lose ourselves,
- 2. it helps us to see that our life as a whole is a good gift that we have been given,
- 3. it helps us to view others without hatred, envy and egoistic calculation, helps us to share with others the good gifts that we have been given.

I will give some comments to and concretizations of these three points.

Point 1 This follows the insight that Socrates derives from the saying at Delphi, that no one is wiser than Socrates: "the god is really wise … 'Human wisdom is of little or no value'" (Plato 1966, 21a and 23a–b). It is dangerous if hubris leads us to forget our finitude and imperfection. We and those close to us may suffer if our own knowledge and wisdom becomes our god, something that we trust absolutely. A classical play that reminds us of this is Sophocles' Antigone (Wivestad 2008, p. 311). In the film Casino royale (Campbell 2006), which is based on the first Ian Fleming novel about James Bond, the hero is vulnerable. He is nearly tortured to death, and he enters a real love relationship which makes him ready to leave his "business" as agent 007. He loses his love, however, and perhaps therefore also his vulnerability. A series of other Bond films follows a pattern, which ends with outward success—in business as well as in bed.
Books and films with protagonists who are always successful can move us away from a true understanding of ourselves, but could perhaps have upbuilding potential if read or seen in a critical way.

Point 2 This challenges the centre of a common modern understanding of human beings: We do not want to accept our life just as it is given to us; instead we try to be autonomous architects of our life. We ourselves want to mend the split that we feel between how we are and how we ought to be, and we want to do it "on our own terms" (Kierkegaard 2010, p. xxi). Works of art like Munch's (1893) painting The scream, Pink Floyd's (1979/1994) concept album The wall and Kieslowski's (1988) film series Dekalog may help us to reflect realistically on our possibilities here. Films and games like Star Wars could be used for reflection, but can function as noise, a way to escape from encounters with our uncertainty, anxiety and doubt. If Kierkegaard is right, our best abilities and efforts are always imperfect, and the only possible foundation for upbuilding would be the love that has been given unconditionally to all.

Point 3 In Works of love (1995, pp. 212–219) Kierkegaard describes the foundation of upbuilding as unconditional love, a love that can bind people together in spite of the differences between them. This gives an important demarcation. All kinds of communication, formation and education that separate people from each other and lead to haughtiness and envy, are not building us up. Knowledge may be important, and the love that builds up is not without knowledge, but knowledge without love only "puffs up" (Kierkegaard 1995, p. 215). Love builds up and it builds up love: "Love is the ground, love is the building, love builds up" (p. 216). The foundation is given. "Love builds up by presupposing that love is present in the ground; therefore love also builds up where, in the human sense, love seems to be lacking" (p. 219).
For Kierkegaard the foundation is a love that "has been present in every human being ever since creation"; and everyone has got this foundation, because we are created by God, who is love (Søltoft 2000, p. 25). A successful teacher points to the success of her student; politicians point to their country's success on PISA rankings and are proud of what has been built up. This is not the way love builds up. "A person can be tempted to be a builder, a teacher, a disciplinarian because this seems to be ruling over others; but to build up the way love does cannot tempt, because this means to be the one who serves … love that builds up has nothing to point to, since its work consists only of presupposing" (Kierkegaard 1995, p. 217). Nobody is tempted to become a servant. Serving the other in love means that we forget ourselves. Presupposing a foundation of love in the other means that the building process goes on beyond our knowledge and control. One of Kierkegaard's discourses that could be read individually and discussed in a group of adults has the title Love will hide a multitude of sins (Kierkegaard 1990, pp. 55–68; 2010, chapter 11). When we focus on sin, sin is "fruitful". One sin gives birth to many more and becomes a "multitude of sins". But when we presuppose love in the other, we can ignore (hide) the bad sides of the other. This has obvious consequences for how we look at the other, how we listen to the other and how we include the other in our fellowship and care. As a positive example Kierkegaard refers to the parable of the father who waits for his youngest son to return (Luke 15:11–32). Similar works of art can have upbuilding possibilities: the Largo movement from Bach's (1731) Concerto for 2 violins in d-minor, where the two main voices "wait" for each other and "embrace" each other, Rembrandt's (1668) painting The return of the prodigal son, Kieslowski's (1993) film Blue, and the animation film Tokyo godfathers (Kon 2003).
The latter example presupposes love in a rebellious teenager, an alcoholic and a drag queen. Some examples unite many in a celebration of self-love that can lead away from fellowship. I can’t get no satisfaction (Jagger and Richards 1965) has a self-centered text connected to a repeated riff that moves around itself. The song expresses a feeling of alienation towards the dominant culture, but bows to the consumption of pleasures: easy information, special cigarettes (hash?), and girls who easily can be “made”. The restless consumer and capitalist, who never gets satisfaction, seems to adore an unholy trinity of consumption as Life-giver, competition as Saviour and culture industry as Comforter. Films, books and games in the Walt Disney franchised series Pirates of the Caribbean move their characters and participants to egoistic calculation. The principle is expressed in this pregnant mantra: “Take what you can, give nothing back!” In the Norwegian short novel A happy boy, Eyvind has his mother and his schoolmaster as good models. “Eyvind grew and became an active boy: at school he was amongst the first, and he was capable at his work at home. That was because at home he was fond of his mother and at school he was fond of his master” (Bjørnson 1860/1896, ch. 3, p. 20). Before Baard became a schoolmaster, he made some fatal mistakes, and he had experienced great sorrows. But he was met with love and wanted to “pay it forward” to the school children. What influenced Eyvind most during his school years was the life story of his schoolmaster, which “his mother told him one evening as they sat by the fire. It ran through all his books, underlay every word the schoolmaster said; he felt it in the air of the schoolroom when all was quiet. It filled him with obedience and respect, and gave him a quicker apprehension, as it were, of all that was taught him” (p. 20). The situation changed when Eyvind approached the time of confirmation and the passage to adult status. 
Then he isolated himself and studied for his own prestige and power. It increased his knowledge, but decreased his love and joy. In the light of Vasarely's Catch and Kierkegaard (1990, pp. 314–319), Eyvind's situation can be described as a struggle between the first self and the deeper self. The first self wants to eat the fruits of knowledge, possess the world and become master; while the deeper self shows him that this world is dubious, inconstant and deceitful. This creates conflict, and the result may be that the first self "kills" the deeper self by drowning it in oblivion or noise. The schoolmaster had to take a risk when he stopped Eyvind on his lonely way to "success", appealing to Eyvind's deeper self. And Eyvind let himself be stopped; he acknowledged his greed for power and his lack of gratitude towards his parents, his teacher and his God. So in the end Eyvind stood for confirmation as number one—without vanity. Some examples remind us of the vulnerability of the children and how that challenges our lifestyle. In the film L'Enfant, directed by the brothers Jean-Pierre and Luc Dardenne (2005), we observe how Sonia, a girl in her late teens with a newborn baby in her arms, crosses streets with dangerous motor traffic. Sonia has a flat, but her boyfriend Bruno has let it out for some days to another couple while she was in the hospital. She searches for Bruno, and gets the help of a boy who lets her sit on the backseat of his moped with the baby in her arms. When she finds Bruno, he is more interested in his own petty criminal activities than in the baby. The film does not let us know anything of what has happened before and during Sonia's pregnancy, but it is quite obvious that Bruno is not prepared to be a father.
He shows "cleverness", an ability to attain his goals (Aristotle 1985, 1144a25 Irwin), but this force is all the more alarming because he is at the same time immature—choosing sometimes to act justly (in his dealings with some schoolboys who steal for him) and choosing sometimes to lie. He seems to be "guided in his life and in each of his pursuits by his feelings" (1095a7). We do not get insight into his upbringing, but he is not welcomed in his mother's house, when he wants his mother to give him a false alibi. However, Sonia loves Bruno. She is childlike in a positive way—lively, playful and humorous, and is at the same time caring and responsible, both in her relation with the child and with Bruno. She leads Bruno to be registered as the child's father and proposes that he should apply for a job. She asks Bruno to walk the child, Jimmy, with the pram. During the walk he gets an idea: The person who receives his stolen goods had mentioned to him the possibility of adoption for money. He follows this impulse, and like in a documentary, we witness how an illegal adoption may be brought about. In his first direct contact with the child, however, we notice Bruno's care when he puts the child down, and sense that he perhaps has mixed feelings. But he does not stop the process. They need the money, and to him it seems easy to "produce" a new baby. When Sonia is confronted with the facts, she faints and has to be brought to hospital. When she wakes up, she tells the truth. The police investigate the case, but Bruno lies as usual in problematic situations, even accusing Sonia of telling lies about him. Bruno realizes that he has done wrong. He manages to get the child back again from the criminals, but Sonia will not talk with him, and throws him out of the flat. The film ends, however, with a positive possibility. When one of the young boys who steal for him is caught by the police, Bruno admits responsibility and guilt. In the last scene Sonia visits him in prison.
The end gives hope: His feeling of shame and Sonia's love can perhaps move him to turn around from his previous lifestyle. L'Enfant is an engaging and beautifully filmed story about the risks that a newborn child (and older children as well) may be exposed to by an immature adult. It also shows a possible way out of this. Children are dependent on adults who love them, adults who give without conditions, who see the real needs of each unique child there and then, and who make wise long-term decisions on behalf of the child. What do I mean by an "upbuilding example"? Three points are important: First, an upbuilding example is concrete and perceptible and emotionally engaging—it may be a person, a picture, a story, a text, a song, a film—it is an artistically crafted work that is easy to remember, easy to learn by heart. Second, an upbuilding example may help us to see truths about ourselves and our world—often very unpleasant truths. Third, an upbuilding example may contribute to our upbuilding in love. The foundation for this upbuilding is a gift of love to all people, whether they see it as God's gift or not. It is a gift which is given without conditions and which we ought to share with each other without conditions.

Why Should Upbuilding Examples be Studied?

It has been argued by a Norwegian philosopher, Peter Wessel Zapffe, that the best future for the world as a whole would be a planned voluntary and gradual extinction of the human species; each couple having no more than one child (Zapffe 1983/1941, §59, p. 240). However, even in this extreme view, each child who is conceived ought to be welcomed to this life. Then the first educational question could be: Why do we want this child? The question can be addressed to the adult generation as a whole. Klaus Mollenhauer asks: "Warum wollen wir Kinder?"/"Why do we want children?" (Mollenhauer 1994, p. 17).
When I stand with a newborn child in my arms, the salient moral questions are: What example of human life do I want to give this child, and will the example that I want to give also be good for this unique child? Why can examples be important? Why should adults study works of art? And what are the possibilities in the study of works of art for those who want to struggle with themselves as exemplars for children? In the Kantian tradition actions are justified by universal principles. In the Aristotelian tradition those who are virtuous will intuitively seek the right thing to do and choose their actions through engaged deliberations and judgments in each particular situation. Examples are important in both these traditions. We need examples—stories, images and metaphors to live by, and we need principles when we evaluate these examples (Louden 2009, pp. 77–78). Kant "regularly recommends the use of examples in education" (Løvlie 1997). They have a didactic function and help us to see that "fulfillment of the moral law is a real possibility… not just a logical possibility" (Guyer 2012, p. 124). In the Aristotelian tradition examples are even more important. The starting point for Aristotle is not an abstract theory of perfect morality, but concrete human exemplars that we respect and admire because they are relatively good persons. Experiences of good and bad human qualities do not depend on our ability to give verbal justifications of those qualities. From the very beginning children will experience and feel whether the adult persons around them in general act well or badly. So it is also when we as adults experience a convincing work of art. In the grip of a good work of art, "we are fully present to the work, and 'get' all its features as a whole" (Arcilla 2010, p. 51). "We are thereby convinced by it" (p. 52)—it engages our emotions directly. According to the Aristotelian tradition, you can act wisely without being wise yourself.
The decisive moral action is to listen to and to emulate persons who are wise. Virtue, human excellence, the way we should hold ourselves between too much and too little, is determined rationally and "in the way which the wise person would determine it" (Aristotle 2002, 1107a1 Rowe). Pericles is a typical exemplar, because persons like him "are capable of forming a clear view of what is good for themselves and what is good for human beings in general" (1140b9). By studying critically those who are wise and by emulating their examples in a creative way, we may do the right thing and gradually acquire virtuous ways of holding ourselves, even before we are able to formulate verbally the reasons for what we do. Works of art cannot replace such living exemplars, but may give an important supplement, especially because we, through works of art, may become aware of many possibilities that we can learn not to emulate. This is why Aristotle thought that the study of tragedies like Antigone was important (Wivestad 2008, p. 311). His aim was not to "know what virtue is, but to become good" (Aristotle 1985, 1103b29). Abstract principles and recipes may be helpful, but they have to be adjusted to the particular circumstances in new situations. Even instrumental Supernanny principles for retaining self-control when you get angry (Samuel 2007) can have some strength when they are supplemented with the presentation of cases. Some cases just illustrate and confirm the principles. Through the study of a unique case, however, we may have a new experience. Works of art may be seen as an enormous collection of unique cases. Such cases open possibilities for exploring personal development and relationships in more depth than usual. Each work of art can modify our previous experiences: "it is not… as we thought" (Gadamer 1979, p. 318). The study of unique cases makes possible a kind of spiritual exercise whereby we learn to live and to die.
We can give attention to particulars—looking for important nuances. We can meditate on negative possibilities and become open for dialogue on important questions (Hadot 1995, pp. 84–89). "But everything that touches the domain of the existential… is not directly communicable…. That is why it often happens that a poem or a biography are more philosophical than a philosophical treatise, simply because they allow us to glimpse this unsayable in an indirect way" (p. 285). Arts can be seen as metaphors for basic attitudes and "otherwise unspoken and unexamined assumptions" in any culture, and also as "a way of transcending" these assumptions (Small 1996, p. 2). Each different art may convey something special about human possibilities that can neither be expressed in ordinary language nor in the other arts. Therefore all art forms could be relevant, and examples in literature, pictures, music and films could be supplemented by examples in other art forms. Some educational writers have shown how educators can learn from pictures (Mollenhauer 1994), others have supplemented this with fiction and feature films (Arcilla 2010; Friesen and Sævi 2010), and there is a long tradition of fruitful contact between literature and education. Thomas Mann's narrator in the novel Doctor Faustus postulates an inner connection between good reading (bonae litterae) and the upbringing (Erziehung) of the young. The study of languages or humaniora, combined with the passion for the humanior (the more human) as a "living and loving sense of the beauty and rational dignity of human beings" (Mann 1980, p. 16, my transl.), destines the scholar in philology almost naturally to become an educator of the youth (Jugendbildner). Music, however, seems for him to be separated from this rational sphere of "unconditional trust in things of reason and human dignity" and to represent a dangerous but possibly seminal influence of the underworld (unteren Gewalten).
Warnings like this may perhaps be relevant if our destination is simply to become just and rational beings. But if we are emotional beings and if the emotions are "essential elements of human intelligence" (Nussbaum 2001, p. 3), if the wisdom of the head is not "all-sufficient" (Dickens 1989, p. 297), but needs to be complemented with a wisdom of the heart, we should search for upbuilding examples in music as well as in literature. In his Politics Aristotle (1997, 1340a23) refers to a common experience: we are moved and "undergo changes in our soul" through rhythms and melodies. If the movement of musical structures has a likeness to specific emotions, then it is possible that music may build up and strengthen emotional habits. Thereby music, which is entertaining and pleasurable in itself, may be helpful in character formation. Nussbaum (2001, p. 254) thinks that music is connected to "the perception of urgent needs and vulnerabilities that are often masked from view in daily life". She quotes from a letter that the composer Gustav Mahler wrote in 1896: "As long as my experience can be summed up in words, I write no music about it; my need to express myself musically—symphonically—begins at the point where the dark feelings hold sway, at the door which leads into the 'other world'—the world in which things are no longer separated by space and time" (p. 255). This may be the side of the self that is hidden to others and partly to our conscious self as well. And it is perhaps just because music "is not really translatable into words" that it "digs into our depths and expresses hidden movements of love and fear and joy that are inside us" (p. 254). Even the founder of modern rationalism, Descartes, must have experienced something like this. When he was 22 years old he wrote a small "Compendium of music", which has these opening lines: "Its object is sound.
Its end is to delight and to elicit different affects in us" (Descartes 1978/1656, my transl.). After a discussion of different views on music and emotions, Nussbaum concludes that though music is different from language, it is a symbolic structure, which contains emotional material "embodied in peculiarly musical forms" (Nussbaum 2001, p. 265) that may be understandable for the listener who is familiar with the tradition that the music belongs to. Therefore a musical artwork can function in the same way to the listener as a tragedy to the spectator.

The spectator's emotions are … real emotions, of a complex sort. They include emotions such as fear and pity and grief assumed through empathy with a perspective or perspectives embodied on the work; sympathetic emotions responding to the presence of those structures in the work; closely connected emotions about human life in general and about her own possibilities; and, finally, emotions of wonder and delight that take the artwork itself as their object. (Nussbaum 2001, p. 278)

Her main example is the first and fifth of Gustav Mahler's Kindertotenlieder. The poems were written in 1833 by Friedrich Rückert, a professor of Oriental languages, shortly after two of his children had died of scarlet fever. The songs exemplify that "the expressive power of the work does not reside in the text alone" (p. 280). The text itself may be interpreted as a consolation of the parent that the children are resting in God's hands. Mahler's music, however, seems not to give assurance to this interpretation. And Nussbaum interprets the songs in this way: "For the children it is a sleep not of comfort but of nothingness. For the parent, it is the knowledge of the impossibility of any loving, any reparative effort." (p. 293) Face to face with death there is no hope. Can this be an upbuilding example? I think so. It opens for empathy with the feelings of grieving parents.
And it challenges us to converse about how we understand death, how we should show compassion with others and how we should be prepared ourselves. Why should those who struggle with themselves as exemplars for children study works of art? Both Kantians and Aristotelians underline the importance of examples. Especially in the latter tradition works of art may be seen as unique cases, metaphors of basic attitudes to life and death—also unsayable aspects of our existence, cases which engage both our head and our heart. We may find some possible examples to emulate and many examples to reflect on and learn from. The experiences of artists can modify our own previous experiences, and each interpretation of a work of art is in itself a practice in the judgement of a unique situation. However, the promising possibilities in the study of art works are only possibilities. There is no guarantee that the upbuilding possibility is actualized.

How Could Upbuilding Examples be Studied?

I consider the study of upbuilding examples as a difficult and challenging process, an artistic process, a process where conversations in a group may be helpful, a process which in itself should express the aims of wisdom and love. With Kierkegaard’s indirect communication as a model, I propose upbuilding studies as a bottom-up approach initiated by enthusiasts in informal self-governed pedagogical groups, who can get help from a database containing selected works of art. “All you need is love … It’s easy” was the message on the 25th of June 1967, in the first TV program sent around the whole world by satellite (Beatles 1967). But is it really easy to love, is it easy “to learn how to be you in time”, or always to “be … where you’re meant to be”, as the Beatles proclaimed in their song? Upbuilding presupposes that “the love which we (for instance as educators) give to others (the children) is a gift that we ourselves have been given” (Tubbs 2005; Wivestad 2011, p. 620).
This humiliating condition implies a Socratic doubt in oneself and in one’s own doubt. The wisest person acknowledges that only the eternal wisdom can be perfect, as Socrates may have said in his Apology (23a). Kierkegaard maintains that “the world can be possessed only by its possessing me” (Kierkegaard 1990, p. 164), and understands our life as human beings in this way: Because the temporal, which we are possessed by, contradicts the eternal, which is our foundation, one’s soul or self becomes “the contradiction of the temporal and the eternal” (p. 163). This means that we have to struggle, not only with the temporal side, but with the eternal side of our self as well. This struggle is not easy. The more we are able to control our temporal side, the more we are tempted to forget our limitations—forget to doubt our own doubt and forget that our life is a gift. In all struggles with ourselves the task is to imagine different possibilities, deliberate alternatives in detail, choose the seemingly best ones and try them out in practice. It is not just a process of practical application of theoretical principles. It may rather be seen as an artistic process: “a halting and exploratory effort to give form to a vision” (Eisner 1979, p. 135). Reflections at each moment of halting are important, firstly in order to see the details of the “picture” as means to realise the vision, and secondly in order to see each new detail as a unique contribution that can clarify and modify the vision (the whole picture). Upbuilding examples can of course be studied individually, but conversation with others may be helpful. We can be taught moral and intellectual virtues “by having our reasoning put to the question by others, by being called to account for ourselves and our actions by others” (MacIntyre 1999, p. 148).
If the starting point for a dialogue is the reading of an academic text or the listening to a lecture, there is a danger that the dialogue never starts, or becomes abstract and uncommitted. And if a group directly confronts the personal experiences of the participants, there is a danger that the conversation becomes too personal for some in the group. However, when all have experienced the same picture or film and discuss the examples in the work, it may be possible for the participants to find connections to their personal experiences and at the same time feel free to choose how much of their own experiences they want to share with the others. As adults we are responsible for what we transfer to the next generation both when we do have pedagogical intentions and when we do not have such intentions. And the examples we give the children will probably have greater effect than the principles that we intentionally try to inculcate. “In human actions and emotions, where experience is most important, examples move us more than words” (Aquinas 2005, I–II 34,1 co., my transl.).Footnote 7 The main themes in a pedagogical study of works of art could therefore be how we actually are and how we ought to be ourselves.Footnote 8 If upbuilding examples in works of art are studied in a group, the participants may become exemplars for each other. In his essay “Learning and teaching” Oakeshott (1967, p. 176) uses this metaphor: “Not the cry, but the rising of the wild duck impels the flock to follow him in flight.” The process in such pedagogical groups should itself express the aims of moral wisdom and unconditional love. This implies openness, humility, and willingness to use necessary time in search for consensus and for solving conflicts. Conflicts are inevitable, but here they should not be seen as “disturbances”, but as “actual possibilities for development of social attitudes and abilities” (Klafki 1996, p. 265, my transl.) 
.Footnote 9 Moreover, as Kierkegaard lets us imagine, when love inhabits the heart, one closes the ears to mockery of oneself, gives hasty words of others a good meaning, has patience in listening to others, translates evil words to good words, does not understand the speech of anger, because one waits for a word that will make the speech meaningful; gives without looking for rewards, looks for the good sides of others and loves forth the good even in the other who hurts one (Kierkegaard 1990, pp. 60–61). Many types of groups could be possible. The most important factor is probably that one or a few persons believe in the idea, take initiatives and develop a suitable structure. Reading groups have functioned like this for many years. And though it is unlikely that such groups will attract the great majority, it can have great positive consequences if only a few persons in a local community start struggling with themselves. In the beginning of the novel Lienhard und Gertrud, Pestalozzi retells a story from a Jewish Rabbi: “There were amongst the heathen nations who dwelt round about the inheritance of Abraham, men full of wisdom, whose equals were not to be found far or near. These said: ‘Let us go to the kings and to their great men, and teach them how to make the people happy upon the earth.’ And the wise men went out, and learned the languages of the houses of the kings and of their great men, and spoke to the kings and to their great men, in their own language. And the kings and the great men praised the wise men, and gave them gold, and silk, and frankincense; but treated the people as before. And the wise men were blinded by the gold, and the silk, and the frankincense, and no longer saw that the kings and the great men behaved ill and foolishly to all the people who lived upon the earth. 
But a man of our nation reproved the wise men of the heathens, and was kind to the beggar upon the highway; and took the children of the thief, of the sinner, and of the exile, into his house; and saluted the tax-gatherers, and the soldiers, and the Samaritans, as if they had been brethren of his own tribe. And his deeds, and his poverty, and the longsuffering of his love towards all men, won him the hearts of the people, so that they trusted him as a father. And when the man of Israel saw that all the people trusted him as a father, he taught the people wherein their true happiness lay; and the people heard his voice, and the princes heard the voice of the people” (Pestalozzi 1825, pp. vi–viii). The story can help us to see our own situation as “wise” academics who start our work with good intentions. We concentrate on mastering theoretical perspectives and the power structures that support them and benefit from them; we master the “languages” of the powerful. But thereby we risk forgetting our intention: to contribute to “make the people happy”. A better way is to start in practice, in direct relation with people, acting in a way that is trustworthy. Then some people may listen and start to seek what gives true happiness. And those who have power must in the long run listen to the people, if they want to retain their power. Persons like Pestalozzi and Gandhi (Attenborough 1982) have followed the example of “the man of Israel”, and the stories about them still have a great effect. There are at least four reasons why pedagogical groups ought to be informal, even when the groups function within formal educational structures:
- 1. The process of upbuilding and the examples we give the children are related to all situations of life. Many of the situations where adults are close to children are informal. Even formal education has informal sides, for instance the way the teacher and the students look upon each other and talk with each other both in and after the class.
- 2. The groups should be led by informal leaders, not professionals. In “communities of giving and receiving” (MacIntyre 1999, p. 147) all participants are dependent on each other, and roles ought to change.
- 3. Using the rhetoric of Kierkegaard’s discourses as a model, no one has a formal authority, and each participant should engage directly with the challenges in the works they study: enter a dialogue with the work, identify with different possibilities (characters and figures) in the work, recognise his or her own questions, deliberate the possibilities, and decide and actualize “a particular insight or value” (Pattison 2002, pp. 153–155).
- 4. Meeting informally will solve the practical problem of using films on video and DVD published only for use in homes.
As pedagogical groups, the studies are connected to paideia, i.e. upbringing, cultural inheritance and Bildung. Studies in a pedagogical group will be most meaningful when the content in the meetings hangs together, and when each meeting is integrated in the life of the participants. Therefore the content should be organised as themes with a progressive differentiation. The ideal is that later studies should only be a particular evolution of what has been previously studied, like a tree with a permanent stem and main branches, which rambles in always new shoots (Comenius 1968, chap. 16, 45; Footnote 10). Each theme should be related to the experiences of the participants. Therefore a group could start with elementary existential questions, like for instance “Why do we want children?” and with elementary themes, like for instance “The good life”, and choose works that may be interesting for the actual group. Imagine a group of adults, who study a picture like Catch, a text like A happy boy or a film like L’Enfant. They discuss their experiences and try to understand themselves—both how they are and how they want to become.
It could be a student group in education or ethics, it could be a group of ice-hockey coaches or it could be a group of adults who live in the same neighbourhood, or have children in the same kindergarten or school or congregation. The group is planning its own studies, but in order to get ideas and help, they use an Internet database called Upbuilding examples: Pictures, films, music and texts for adults close to children. To give an inkling of what this could look like, I will mention a database which has been used in medical education since 1993: Literature, Arts and Medicine database. It contains annotations of works of art, films and literature; each work presented by searchable keywords, a summary and a commentary. See for instance the list of works connected by the keyword “Parenthood”. A pedagogical database could include some of the same works, but the perspective would be different. In my vision of a future database named Upbuilding examples, the main criteria for the selection of works of art would be the work’s possible upbuilding qualities, its possible relevance for the participants in the group and its degree of difficulty. What is upbuilding has been discussed in the second part of this article. An example may be a novel like Dickens’ Hard times, where the utilitarian and cognitivist character Thomas Gradgrind is contrasted with the loving and serving Sissy Jupe. As mentioned, we have to consider whether this work is unique, appeals to our senses and feelings and contains something general that makes it possible for many people to recognise themselves in the work. Thereby individual personal experiences may be actualised. Can the work help us to recollect experiences we have “forgotten”, experiences that we need to reflect upon so that we can make “amends” (Dickens 1989, p. 314) for previous wrongdoings? This applies both to the life story of the individual and to the history of the generations in our culture.
One’s own story is a part of a bigger story. Through a work with upbuilding examples, one may become more conscious of one’s own experiences and have a share in the experiences of others. One may see persons and situations in a new way, understand the feelings and the life of others, explore special situations that may come in the future and be prepared for them.

Notes

- 1. Kierkegaard's (1990) Eighteen upbuilding discourses are translated by E. H. Hong & H. V. Hong. Kierkegaard discusses the concept “upbuilding” in his book Works of love (1995), which is also translated by the Hongs. A good selection of Kierkegaard's upbuilding discourses and Christian discourses is made by George Pattison (Kierkegaard 2010). He has wanted to make these writings accessible to contemporary readers, contending that Kierkegaard's vision of life, with Gift, Creation and Love as keywords, invites all to a dialogue about what can "build us up in gratitude for the gift of being, in joy at being who we are, and in love for love itself" (p. xxvii).
- 2. Je me suis efforcé de dépersonnaliser le contenu, de tenir compte de toutes les vérités, de donner une idée-image globale de l'humanité.
- 3.
- 4. inneren und fast geheimnisvollen Zusammenhang des altphilologischen Interesses mit einem lebendig-liebevollen Sinn für die Schönheit und Venunftwürde des Menschen …
- 5. unbedingte Zuverlässigkeit in Dingen der Vernunft und Menschenwürde … (p. 17).
- 6. Compendium musicae Renati Cartesii. Hujus objectum est sonus. Finis ut delectet, variosque in nobis moveat affectus …
- 7. In operationibus enim et passionibus humanis, in quibus experientia plurimum valet, magis movent exempla quam verba.
- 8. The themes that may be relevant for a particular group cannot be decided on beforehand, and the themes below (derived from Aristotle, Kierkegaard, Freire and Aquinas) sketch only some possibilities: The human condition (lack of freedom, oppression, ignorance, …). The good life.
Lack of character, the breaking of elementary laws, rules and promises (Bruno in L’Enfant). Outward punishments and rewards, consequences for the person (Huxley’s Brave new world). Feelings of shame (inward punishment). Self-directed search for better actions (Baard in A happy boy). Incontinence, weak character, with need of support from others in difficult situations (Louisa in Hard times). Continence (strong character), with dangers of hypocrisy, pedantry and bad feelings (Eyvind in A happy boy, the eldest brother in Rembrandt’s The return of the prodigal son). Excellence or virtue, where thinking and emotions play together (Sissy Jupe in Hard times). The interdependence of all the virtues—lack of justice makes moderation and courage dangerous, and lack of moral wisdom makes inventiveness and cleverness dangerous (Bruno in L'Enfant). Friendship (conditional love) with willingness to do good things to loveable persons. Struggle with one’s self to avoid being obsessed with passion for mastery and power (Eyvind in A happy boy). Struggle between the first self and the deeper self (Vasarely’s Catch). Gaining and losing one's soul. Human action, roles, choice, the final end. The human good. Transformation of the person by God’s grace and gifts (Kierkegaard’s discourses). Unconditional love, agape, and its fruits: compassion, kindness, caring, inward and outward peace, joy (Alyosha in The brothers Karamazov). Injustice, solidarity and justice. Structures, rules and habits in society and culture that impede wisdom and love (Hard times, Pink Floyd’s The wall). Ordering of a disordered character: Love of self disperses the person’s emotions (Baard). Moderation and courage order us in ourselves. Justice orders us to others. Faith, hope and agape order us to God (Aquinas 1989, pp. 252–253, 2005, I–II 72,4 co.).
- 9. Anlaß zur Entwicklung von sozialen Einstellungen und Fähigkeiten …
- 10. … tantum sint priorum particularior quaedam evolutio.
Ita enim arbori… nulli novi rami enascuntur, sed primo enati in novos semper ramusculos diffunduntur.

References

Aquinas, T. (1989). Summa theologiae: A concise translation (T. McDermott, Trans.). Allen, Texas: Christian Classics.
Aquinas, T. (2005). Summa theologiae. Corpus Thomisticum S. Thomae de Aquino opera omnia. Accessed 10 August 2012.
Arcilla, R. V. (2010). Mediumism: A philosophical reconstruction of modernism for existential learning. Albany, NY: SUNY Press.
Aristotle. (1985). Nicomachean ethics (T. Irwin, Trans.). Indianapolis, Ind.: Hackett.
Aristotle. (1997). The politics of Aristotle (P. L. P. Simpson, Trans.). Chapel Hill: University of North Carolina Press.
Aristotle. (2002). Nicomachean ethics (C. J. Rowe, Trans.). Oxford: Oxford University Press.
Attenborough, R. (1982). Gandhi [DVD, 183 min.]. USA: Columbia Tristar, 2001.
Bach, J. S. (1731). Concerto for 2 violins, strings and continuo in d minor, BWV 1043. Accessed 10 August 2012.
Beatles. (1967). All you need is love. Accessed 10 August 2012.
Biesta, G. (2012). Receiving the gift of teaching: From ‘learning from’ to ‘being taught by’. Studies in Philosophy and Education. doi:10.1007/s11217-012-9312-9.
Bjørnson, B. (1860/1896). A happy boy (W. Archer, Trans.). London: William Heinemann.
Campbell, M. (2006). Casino Royale [DVD, 139 min.]. Sony Pictures Home Entertainment.
Comenius, J. A. (1968). Magna didactica: Ex editione Amstelodamensi anni 1657 omnes libros didacticos complectente, nunc primum separatim editit Fridericus Carolus Hultgren, Lipsiae [Leipzig] 1894. Farnborough, UK: Gregg.
Comenius, J. A. (1986). Comenius’s Pampaedia or universal education (A. M. O. Dobbie, Trans.). Dover: Buckland Publications.
Dardenne, J.-P., & Dardenne, L. (Directors). (2005). L’enfant [DVD, 100 min.]. Belgium/France: Sony Pictures Classics (USA).
Descartes, R. (1978/1656). Leitfaden der Musik/Musicae compendium (J. Brocht, Trans.). Darmstadt: Wissenschaftliche Buchgesellschaft.
Dewey, J. (1988).
Experience and education. In J. A. Boydston & B. Levine (Eds.), The later works, 1925–1953 (Vol. 13: 1938–1939, pp. 1–62). Carbondale and Edwardsville, IL: Southern Illinois University Press.
Dickens, C. (1989). Hard times. Oxford: Oxford University Press.
Dostoevsky, F. M. (2000). The brothers Karamazov. In J. Manis (Ed.). Accessed 10 August 2012.
Dylan, B., & Arnosky, J. (2010). Man gave names to all the animals [Picture book with CD]. New York: Sterling.
Eisner, E. W. (1979). The educational imagination: On the design and evaluation of school programs (1st ed.). New York: Macmillan.
Eisner, E. W. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York, NY: Macmillan Publ. Co.
Freire, P. (1972). Pedagogy of the oppressed. Harmondsworth: Penguin.
Friesen, N., & Sævi, T. (2010). Reviving forgotten connections in North American teacher education: Klaus Mollenhauer and the pedagogical relation. Journal of Curriculum Studies, 42(1), 123–147.
Gadamer, H.-G. (1965). Wahrheit und Methode: Grundzüge einer Philosophischen Hermeneutik (2nd ed.). Tübingen: Mohr.
Gadamer, H.-G. (1979). Truth and method (W. Glen-Doepel, J. Cumming & G. Barden, Trans. 2nd ed.). London: Sheed and Ward.
Gadamer, H.-G. (1986). The relevance of the beautiful: Art as play, symbol and festival (N. Walker, Trans.). In R. Bernasconi (Ed.), The relevance of the beautiful and other essays [by] H.-G. Gadamer (pp. 3–53). Cambridge: Cambridge University Press.
Guyer, P. (2012). Examples of moral possibility. In K. Roth & C. W. Surprenant (Eds.), Kant and education: Interpretations and commentary (pp. 124–138). New York: Routledge.
Hadot, P. (1995). Philosophy as a way of life: Spiritual exercises from Socrates to Foucault. Oxford: Blackwell.
Jagger, M., & Richards, K. (1965). I can’t get no satisfaction [song]. In the album Out of our heads. USA.
Kierkegaard, S. (1990). Eighteen upbuilding discourses (E. H. Hong & H. V. Hong, Trans.).
Princeton, New Jersey: Princeton University Press.
Kierkegaard, S. (1995). Works of love (H. V. Hong & E. H. Hong, Trans.). Princeton, N.J.: Princeton University Press.
Kierkegaard, S. (2010). Spiritual writings: Gift, creation, love: Selections from the upbuilding discourses (G. Pattison, Trans.). New York, NY: Harper Perennial.
Kieslowski, K. (1988). Dekalog: The ten commandments [DVD, four discs]. Poland: World Cinema.
Kieslowski, K. (1993). Trois couleurs: Bleu [DVD, 94 min.]. Scanbox Entertainment Norway.
Klafki, W. (1996). Neue Studien zur Bildungstheorie und Didaktik: Zeitgemäße Allgemeinbildung und kritisch-konstruktive Didaktik (5th ed.). Weinheim: Beltz.
Kon, S. (2003). Tokyo godfathers [DVD, 90 min.]. Japan: Sony Pictures Entertainment.
Louden, R. B. (2009). Making the law visible: The role of examples in Kant’s ethics. In J. Timmermann (Ed.), Kant’s Groundwork of the metaphysics of morals: A critical guide (pp. 63–81). Cambridge: Cambridge University Press.
Løvlie, L. (1997). The uses of example in moral education. Journal of Philosophy of Education, 31(3), 409–425.
MacIntyre, A. (1999). Dependent rational animals: Why human beings need the virtues. London: Duckworth.
Mann, T. (1980). Doktor Faustus: Das Leben des deutschen Tonsetzers Adrian Leverkühn erzählt von einem Freunde. Frankfurt am Main: S. Fischer.
Mollenhauer, K. (1994). Vergessene Zusammenhänge: Über Kultur und Erziehung (4th ed.). München: Juventa Verlag.
Munch, E. (1893). Skrik [The scream] (Casein/waxed crayon and tempera on cardboard, 91 cm x 73,95 cm). National Gallery, Oslo.
Nussbaum, M. C. (2001). Upheavals of thought: The intelligence of emotions. Cambridge: Cambridge University Press.
Oakeshott, M. (1967). Learning and teaching. In R. S. Peters (Ed.), The concept of education (pp. 156–176). London: Routledge & K. Paul.
Pattison, G. (2002). Kierkegaard’s upbuilding discourses: Philosophy, literature, theology. London, New York: Routledge.
Perkins, R. L. (2003).
Introduction to Eighteen upbuilding discourses (Vol. 5, pp. 1–14). Macon, Ga.: Mercer University Press.
Pestalozzi, J. H. (1825). Leonard and Gertrude or a book for the people: Translated from the German of Pestalozzi (Vol. 1–2). London: J. Mawman, Ludgate Street.
Pink Floyd. (1979/1994). The wall [Concept album]. United Kingdom: EMI.
Plato. (1966). Euthyphro, Apology, Crito, Phaedo (H. N. Fowler, Trans.). Plato in twelve volumes (Perseus 4.0 ed., Vol. 1). London: Heinemann. Accessed 10 August 2012.
Plato. (2000). The republic (T. Griffith, Trans.). Cambridge: Cambridge University Press.
Rembrandt, H. v. R. (1668). The return of the prodigal son (Oil on canvas, 262 × 205 cm). St. Petersburg: State Hermitage Museum.
Samuel, V. (2007). Staying calm with your kids. Accessed 10 August 2012.
Small, C. (1996). Music, society, education. Hanover, NH: Wesleyan University Press.
Søltoft, P. (2000). To let oneself be upbuilt. In N. J. E. Cappelørn, H. E. Deuser, & J. E. Stewart (Eds.), Kierkegaard studies: Yearbook 2000 (pp. 19–39). Berlin: Walter de Gruyter.
Tubbs, N. (2005). Special Issue—Philosophy of the teacher. Journal of Philosophy of Education, 39(2), 183–420. (I refer to the paper version. The electronic version covers the same content on pp. 183–414).
Vasarely, V. (1982). Gea. Paris: Hervas.
Wivestad, S. M. (2008). The educational challenges of “agape” and “phronesis”. Journal of Philosophy of Education, 42(2), 307–324.
Wivestad, S. M. (2011). Conditions for ‘upbuilding’: A reply to Nigel Tubbs’ reading of Kierkegaard. Journal of Philosophy of Education, 45(4), 613–625.
Wivestad, S. M. (2012). On becoming better human beings: Six stories to live by. Studies in Philosophy and Education. doi:10.1007/s11217-012-9321-8. Online first. Open access.
Zapffe, P. W. (1983/1941). Om det tragiske. Oslo: Aventura.

Acknowledgments

I would like to thank Kristján Kristjánsson, Gert Biesta, Andrew Krivak, Peter Robins, Trygve Bergem, Nigel Tubbs, René V.
Arcilla, Tone Sævi, Svein Rise, Herner Sæverot and Solveig M. Reindal for encouragement and constructive criticisms. I am also thankful to Michele Vasarely for permission to use Victor Vasarely’s work Catch.

Wivestad, S.M. “Upbuilding Examples” for Adults Close to Children. Stud Philos Educ 32, 515–532 (2013).

Keywords
- Close Adults
- Kierkegaard
- Upbuilding Discourses
- Deep Self
- Dannelse
What Is AWS Cloud Map?

AWS Cloud Map is a fully managed service that you can use to create and maintain a map of the backend services and resources that your applications depend on. Here's how AWS Cloud Map works:
- You create a namespace that identifies the name that you want to use to locate your resources and also specifies how you want to locate resources: using AWS Cloud Map DiscoverInstances API calls, DNS queries in a VPC, or public DNS queries. Typically, a namespace contains all the services for an application, such as a billing application.
- You create an AWS Cloud Map service for each type of resource for which you want to use AWS Cloud Map to locate endpoints. For example, you might create services for web servers and database servers. A service is a template that AWS Cloud Map uses when your application adds another resource, such as another web server. If you chose to locate resources using DNS when you created the namespace, a service contains information about the types of records that you want to use to locate the web server. A service also indicates whether you want to check the health of the resource and, if so, whether you want to use Amazon Route 53 health checks or a third-party health checker.
- When your application adds a resource, it can call the AWS Cloud Map RegisterInstance API action, which creates a service instance. The service instance contains information about how your application can locate the resource, whether using DNS or using the AWS Cloud Map DiscoverInstances API action.
- When your application needs to connect to a resource, it calls DiscoverInstances and specifies the namespace and service that are associated with the resource. AWS Cloud Map returns information about how to locate one or more resources. If you specified health checking when you created the service, AWS Cloud Map returns only healthy instances.

AWS Cloud Map is tightly integrated with Amazon Elastic Container Service (Amazon ECS).
As new container tasks spin up or down, they automatically register with AWS Cloud Map. You can use the Kubernetes ExternalDNS connector to integrate Amazon Elastic Container Service for Kubernetes with AWS Cloud Map. You can also use AWS Cloud Map to register and locate any cloud resources, such as Amazon EC2 instances, Amazon DynamoDB tables, Amazon S3 buckets, Amazon Simple Queue Service (Amazon SQS) queues, or APIs deployed on top of Amazon API Gateway, among others. You can specify attribute values for service instances, and clients can use these attributes to filter the resources that AWS Cloud Map returns. For example, an application can request resources in a particular deployment stage, like BETA or PROD.
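To give a concrete sense of the DiscoverInstances round trip, here is a rough Python sketch of the client-side selection step. The response dict below only mimics the shape of a DiscoverInstances result; the instance IDs and the stage attribute are made-up illustrations, and in a real application the response would come from an SDK call (for example boto3's servicediscovery client) rather than a literal. Note also that DiscoverInstances can filter by attributes server-side; the local filter here just illustrates the matching idea.

```python
# Hypothetical DiscoverInstances-style response; in real code this would
# come from an SDK call such as:
#   boto3.client("servicediscovery").discover_instances(
#       NamespaceName="billing", ServiceName="web")
response = {
    "Instances": [
        {"InstanceId": "web-1",
         "Attributes": {"AWS_INSTANCE_IPV4": "10.0.0.10", "stage": "PROD"}},
        {"InstanceId": "web-2",
         "Attributes": {"AWS_INSTANCE_IPV4": "10.0.0.11", "stage": "BETA"}},
    ]
}

def filter_instances(response, **attrs):
    """Keep only instances whose Attributes match every key/value given."""
    return [
        inst for inst in response.get("Instances", [])
        if all(inst.get("Attributes", {}).get(k) == v for k, v in attrs.items())
    ]

# Pick only the PROD instances, as in the BETA/PROD example above.
prod = filter_instances(response, stage="PROD")
```

The same idea extends to any attribute you registered on the instances, such as version labels or availability zones.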
Al Viro wrote:
> ... is the same as for the same question with "set of mounts" replaced
> with "environment variables".

Not quite.

After changing environment variables in .profile, you can copy them to other shells using ". ~/.profile". There is no analogous mechanism to copy namespaces.

I agree with you that Miklos' patch is not the right way to do it. Much better is the proposal to make namespaces first-class objects, that can be switched to. Then users can choose to have themselves a namespace containing their private mounts, if they want it, with login/libpam or even a program run from .profile switching into it.

While users can be allowed to create their own namespaces which affect the path traversal of their _own_ directories, it's important that the existence of such namespaces cannot affect path traversal of other directories such as /etc, or /autofs/whatever - and that creation of namespaces by a user cannot prevent the unmounting of a non-user filesystem either.

The way to do that is shared subtrees, or something along those lines. Here is one possible implementation:

As far as I can tell, namespaces are equivalent to predicates attached to every mount - the predicate being "this mount intercepts path traversal at this point if current namespace == X".

It makes sense, when users can create namespaces for themselves, that the predicate be changed to "this mount valid if [list of current namespace and all parent namespaces] contains X". Parent namespace means the namespace from which a CLONE_NS namespace inherits.

Then it would be safe (i.e. secure) to allow ordinary users to use CLONE_NS for the purpose of establishing private namespace(s), within which they can mount things on directories they own. But those users would continue to see mounts & unmounts done by the system in other directories such as /mnt and /autofs.
Effectively this confines the new namespace to only affecting directories owned by the user.

That would work properly with suid programs, properly with autofs and also manual system-wide administration, and it is general enough that it doesn't force any particular policy. Also, it would be usable for partial sharing of resources in virtual server and chroot scenarios.

What's not to like? :)

-- Jamie
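Jamie's inherited-predicate idea is easy to model outside the kernel. The toy Python below is my sketch, not kernel code, and the class and attribute names are invented; it checks a mount against the current namespace and all of its ancestors, which is exactly why a user's private namespace would still see system-wide mounts while the reverse is not true.

```python
class Namespace:
    """A namespace that remembers which namespace it was cloned from."""
    def __init__(self, parent=None):
        self.parent = parent

    def lineage(self):
        # Yield the namespace itself plus all parents, back to the root.
        ns = self
        while ns is not None:
            yield ns
            ns = ns.parent

class Mount:
    def __init__(self, mountpoint, owner_ns):
        self.mountpoint = mountpoint
        self.owner_ns = owner_ns  # namespace in which the mount was made

    def visible_in(self, ns):
        # "this mount valid if [list of current namespace and all parent
        # namespaces] contains X"
        return self.owner_ns in ns.lineage()

root_ns = Namespace()                # system-wide namespace
user_ns = Namespace(parent=root_ns)  # user's private CLONE_NS namespace

system_mount = Mount("/mnt/cdrom", root_ns)
private_mount = Mount("/home/jamie/private", user_ns)

# The user still sees system-wide mounts and unmounts...
assert system_mount.visible_in(user_ns)
# ...but the user's private mount never leaks into the root namespace.
assert not private_mount.visible_in(root_ns)
```

The asymmetry in the last two assertions is the whole security argument: a user namespace inherits visibility downward but cannot inject mounts upward.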
import sys, re, random

while True:
    x = input("Please ask a question: ")
    n = random.randint(1, 7)
    if re.match("^exit$|^close$", x):
        print("GoodBye!")
        sys.exit()
    elif n == 1:
        print("The answer lies in your heart")
    elif n == 2:
        print("I do not know")
    elif n == 3:
        print("Almost certainly")
    elif n == 4:
        print("No")
    elif n == 5:
        print("Why do you need to ask?")
    elif n == 6:
        print("Go away. I do not wish to answer at this time.")
    elif n == 7:
        print("Time will only tell")

@Jethro_ Thanks for your help its nice.. thank you

This simple script is to be added to a mirc bot to pm the owner of the bot when the owners name is mentioned in a channel.
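As a small aside on the script above: the numbered elif chain can be collapsed by keeping the answers in a list and letting random.choice pick one. A sketch of the same behaviour (the answer strings are copied from the script; reply is just a name I chose):

```python
import random
import re

# Answers copied verbatim from the script above.
ANSWERS = [
    "The answer lies in your heart",
    "I do not know",
    "Almost certainly",
    "No",
    "Why do you need to ask?",
    "Go away. I do not wish to answer at this time.",
    "Time will only tell",
]

def reply(question):
    """Return "GoodBye!" for exit/close, otherwise a random answer."""
    if re.match(r"^(exit|close)$", question):
        return "GoodBye!"
    return random.choice(ANSWERS)

# The interactive loop then shrinks to:
#   while True:
#       answer = reply(input("Please ask a question: "))
#       print(answer)
#       if answer == "GoodBye!":
#           break
```

Adding or removing answers is then a one-line change instead of editing the elif ladder.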
Li Chen's Blog

Porting a C# Windows application to Linux

I own a Windows application. To expand our customer base, we need to create a Linux edition. Anticipating the demand, we previously decided to place the majority of the logic in a few .net standard libraries, and this has been a big pay-off. However, there are still a few things we need to do so that the same code works on both Windows and Linux.
- Path separator is different between Windows and Linux. Windows uses “\” as separator while Linux uses “/” as separator. The solution is to always use Path.Combine to concatenate paths. Similarly, use Path.GetDirectoryName and Path.GetFileName to split the paths.
- Linux file system is case sensitive. The solution is to be very consistent with path names and always use constants when a path is used in multiple places.
- In text files, Windows uses \r\n to end lines while Linux uses \n. The solution is to use TextReader.ReadLine and TextWriter.WriteLine. TextReader.ReadLine reads Windows text files correctly on Linux and vice versa. If we have to face line-ending characters explicitly, use Environment.NewLine.
- Different locations for program files and program data. Windows by default stores programs in the “c:\Program Files” folder and stores program data in “c:\ProgramData”. The exact location can be determined from the %ProgramFiles% and %ProgramData% environment variables. Linux, in contrast, has a different convention and one often installs programs under /opt and writes program data under /var. For complete reference, see:. This is an area where we have to branch the code and detect the operating system using RuntimeInformation.IsOSPlatform.
- Lack of registry in Linux. The solution is to just use configuration files.
- Windows has services while Linux has daemons. The solution is to create a Windows Service application on Windows and create a console application on Linux. RedHat has a good article on creating a Linux daemon in C#:. For additional information on Systemd, also see:.
- Packaging and distribution. Windows applications are usually packaged as an MSI or a Chocolatey package. Linux applications are usually packaged as an RPM. This will be the subject of another blog post.

Building .NET Core on an unsupported Linux platform

Introduction

I need to port a product that I own from Windows to Amazon Linux. However, Amazon Linux is not a platform on which Microsoft supports .NET Core. Although there is an Amazon Linux 2 image with .NET Core 2.1 preinstalled, and it is possible to install the CentOS build of .NET Core on Amazon Linux 1, I went on a journey to build and test .NET Core on Amazon Linux to have confidence that my product will not hit a wall.

.NET Core requires LLVM 3.9 to build, but we can only get LLVM 3.6.3 from the yum repository, so we have to build LLVM 3.9 ourselves. LLVM 3.9 in turn requires CMake 3.11 or later, but we can only get CMake 2.8.12 from the yum repository. So we have to start by building CMake.

Building CMake

The procedure to build CMake is documented upstream. Here is what I did:

sudo yum groupinstall "Development Tools"
sudo yum install swig python27-devel libedit-devel
version=3.11
build=1
mkdir ~/temp
cd ~/temp
wget <cmake source tarball URL>
tar -xzvf cmake-$version.$build.tar.gz
cd cmake-$version.$build/
./bootstrap
make -j4
sudo make install

Building Clang and LLVM

With CMake installed, we can build LLVM. My procedure for building Clang and LLVM is similar to the usual upstream procedure:

cd $HOME
git clone <llvm repository URL>
cd $HOME/llvm
git checkout release_39
cd $HOME/llvm/tools
git clone <clang repository URL>
git clone <lldb repository URL>
cd $HOME/llvm/tools/clang
git checkout release_39
cd $HOME/llvm/tools/lldb
git checkout release_39

Before we start building, we need to patch the LLVM source code for the Amazon Linux triplet. Otherwise LLVM cannot find the C++ compiler on Amazon Linux.
To patch, open the file ./tools/clang/lib/Driver/ToolChains.cpp and find an array that looks like:

"x86_64-linux-gnu", "x86_64-unknown-linux-gnu", "x86_64-pc-linux-gnu",
"x86_64-redhat-linux6E", "x86_64-redhat-linux", "x86_64-suse-linux",
"x86_64-manbo-linux-gnu", "x86_64-linux-gnu", "x86_64-slackware-linux",
"x86_64-linux-android", "x86_64-unknown-linux"

Append "x86_64-amazon-linux" to the last line. Similarly, append "i686-amazon-linux" to:

"i686-montavista-linux", "i686-linux-android", "i586-linux-gnu"

Now we can build:

mkdir -p $HOME/build/release
cd $HOME/build/release
cmake -DCMAKE_BUILD_TYPE=release $HOME/llvm
make -j4
sudo make install

Building CoreCLR and CoreFX

With Clang/LLVM 3.9 installed, we can now build CoreCLR and CoreFX. We need to install the prerequisites first:

sudo yum install lttng-ust-devel libunwind-devel gettext libicu-devel libcurl-devel openssl-devel krb5-devel libuuid-devel libcxx
sudo yum install redhat-lsb-core cppcheck sloccount
mkdir ~/git
git clone <coreclr repository URL>
git clone <corefx repository URL>

Go to each directory and check out a version, for example:

git checkout tags/v2.0.7

Now just run the build:

./clean.sh -all
./build.sh -RuntimeOS=linux
./build-tests.sh

Conclusions

With the steps above, I was able to build and test .NET Core on Amazon Linux 1 and 2. Note that .NET Core requires GLIBC_2.14 to run. To find the versions of GLIBC on your version of Amazon Linux, run:

strings /lib64/libc.so.6 | grep GLIBC

If you don't see 2.14 on the list, .NET Core will not run; try "sudo yum update" to see if you can update to a later version of GLIBC. Additionally, since many newer programming languages are built on LLVM, this exercise also allows us to build other languages that require a newer version of LLVM than the one in the yum repository.

Configure Open Live Writer for weblogs.asp.net

I have not blogged for a while. When I opened my Open Live Writer, I got an error. I searched the web; most blogs were still referencing the xmlrpc URL, which no longer exists.
Fortunately, the fix is easy. Just choose Add Account and select "Other services". On the next screen, enter the URL of the blog (without xmlrpc). Open Live Writer and Orchard are smart enough to figure out the rest. This is certainly an improvement over the earlier versions. If you are curious how Open Live Writer figured out the post API endpoint, view the source of your web page and you will see the following lines in the header:

<link href="" rel="wlwmanifest" type="application/wlwmanifest+xml" />
<link href="" rel="EditURI" title="RSD" type="application/rsd+xml" />

Top k algorithm revisited

Three years ago, I implemented a top-k operator in LINQ. I was asked recently why I chose a min-heap when there are faster algorithms. To recap, we try to select the top k elements from a sequence of n elements. A min-heap has the following properties:

- find-min takes O(1) time.
- extract-min takes O(log k) time, where k is the size of the heap.
- insert takes O(log k) time.

For each number in the sequence, I first compare the number to the minimum. If the number is smaller, it is tossed away. If the number is bigger, we do an extract-min followed by an insert. So in the worst case, the algorithm runs with a time complexity of O(n log k) and a space complexity of O(k).

If we use a max-heap instead, we can heapify n elements in O(n) time and then do k extract-max operations, for a total time complexity of O(n + k log n) and a space complexity of O(n).

We could also use Quick Select. It is very similar to Quick Sort in that we randomly select a pivot and move it to the right position. Unlike Quick Sort, we can discard the left side of the pivot whenever we have more than k elements on the right side. This algorithm converges fairly quickly, with an average time complexity of O(n) and a space complexity of O(n). In the average case, the space required by Quick Select is less than that of the max-heap approach. So both the max-heap and Quick Select are likely faster than the min-heap approach.
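The original operator was written in C#/LINQ; as a language-neutral illustration (not the author's code), the min-heap bookkeeping described above can be sketched in Python with heapq:

```python
import heapq

def top_k(numbers, k):
    """Return the k largest values using an O(n log k) min-heap."""
    heap = []  # min-heap holding the current top-k candidates
    for x in numbers:
        if len(heap) < k:
            heapq.heappush(heap, x)     # insert: O(log k)
        elif x > heap[0]:               # compare against find-min: O(1)
            heapq.heapreplace(heap, x)  # extract-min + insert: O(log k)
    return sorted(heap, reverse=True)

print(top_k([5, 1, 9, 3, 7, 2, 8], 3))  # -> [9, 8, 7]
```

Because the heap never grows beyond k elements, this version also gives the running top k over a stream, as noted below.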
Why did I use a min-heap then? The reason is that the min-heap approach uses the minimum amount of memory, and I assume that I will be working with large data sets, so memory matters. Also, if we work with a stream, the min-heap provides a running top k.

Ever wonder which platform AWS Lambda in C# runs on?

Last December, AWS announced C# support for AWS Lambda using the .NET Core 1.0 runtime. Ever wonder which platform it runs on? I was curious too, and I did not see it in any official documentation. So I decided to write a small AWS Lambda function to detect the platform:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Runtime.InteropServices;
using Amazon.Lambda.Core;
using Amazon.Lambda.Serialization;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializerAttribute(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace SysInfoLambda
{
    public class Function
    {
        /// <summary>
        /// Returns information about the runtime platform.
        /// </summary>
        /// <param name="context"></param>
        /// <returns></returns>
        public RuntimeInfo FunctionHandler(ILambdaContext context)
        {
            return new RuntimeInfo()
            {
                FrameworkDescription = RuntimeInformation.FrameworkDescription,
                OSArchitecture = RuntimeInformation.OSArchitecture,
                ProcessArchitecture = RuntimeInformation.ProcessArchitecture,
                OSDescription = RuntimeInformation.OSDescription,
                OSPlatform = RuntimeInformation.IsOSPlatform(OSPlatform.Linux) ? OS.Linux :
                             RuntimeInformation.IsOSPlatform(OSPlatform.OSX) ? OS.OSX : OS.Windows
            };
        }
    }

    public class RuntimeInfo
    {
        public string FrameworkDescription { get; set; }
        public Architecture OSArchitecture { get; set; }
        public string OSDescription { get; set; }
        public Architecture ProcessArchitecture { get; set; }
        public OS OSPlatform { get; set; }
    }

    public enum OS { Linux, OSX, Windows }
}

The result?
The AWS C# Lambda runs on 64-bit Linux. The exact OS description is: Linux 4.4.35-33.55.amzn1.x86_64 #1 SMP Tue Dec 6 20:30:04 UTC 2016.

First look at the Visual Studio Tools for Apache Cordova CTP 3.1

The company that I worked for had an old cross-platform mobile app developed by an outside contractor using PhoneGap 1.0. When I was asked to look at the app a few months ago, I had great difficulty collecting the large number of moving pieces: PhoneGap, the Android SDK, and the emulator. When I saw Visual Studio Tools for Apache Cordova (I will call it VSTAC in the rest of this post), I decided to give it a try, since it attempts to install the large amount of third-party software for me. The journey was not exactly easy, but with the excellent installation document from MSDN, it was certainly far easier than collecting all the pieces myself. The result is more than pleasant. Here are some of my findings:

1) After the installation, I could not even get a hello-world app to work. It turned out that I had an older version of Node.js, so VSTAC skipped the Node.js installation. After I uninstalled the old Node.js and reinstalled the one linked from the VSTAC installation page, I was able to get hello world to work.

2) I was surprised to see the Ripple emulator, which I was not aware of previously. The Ripple emulator is very fast, and VSTAC provides an excellent debugging experience.

3) I had to clear my Apache Cordova cache a few times. This and some other useful items are documented in the FAQ. Also visit the known-issues page.

4) The application connects to an old SOAP web service developed with WCF. It does not support CORS, so I had to use the Ripple proxy to connect to it, but I kept getting a 400 error. Fortunately, I was able to hack the Ripple proxy to make it work.

5) I then tried to run the app in the Google Android emulator. VSTAC supports this scenario as well. I had to uninstall and reinstall some Android SDK components following the linked directions.
Then I had to run the AVD Manager to create and start a device, and update my display driver to make sure I had a compatible OpenGL ES driver installed. After that, the Google emulator ran beautifully. It was not as fast as Ripple, but it is acceptable. So at the end, I want to give a big thank you to the Microsoft VSTAC team. I know this is not easy, but the excellent documentation got me through. It certainly saved me lots of time.

Missing methods in LINQ: MaxWithIndex and MinWithIndex

The LINQ library has Max methods and Min methods. However, sometimes we are interested in the index location in the IEnumerable<T> rather than the actual value. Hence the MaxWithIndex and MinWithIndex methods. These methods return a Tuple: the first item of the Tuple is the maximum or minimum value, just like the Max and Min methods, and the second item of the Tuple is the index location. As usual, you can get my LINQ extensions from NuGet:

PM> Install-Package SkyLinq.Linq

Usage examples are in the unit tests.

ASP Classic Compiler is now available on NuGet

I know this is very, very late, but I hope it is better than never. To make it easy to experiment with ASP Classic Compiler, I made the .NET 4.x binaries available on NuGet. So it is now extremely easy to try it:

- From the package console of any .NET 4.x web project, run "Install-Package Dlrsoft.Asp".
- To switch from ASP Classic to ASP Classic Compiler in the project, add the following section to the system.webServer handlers section:

<system.webServer>
  <handlers>
    <remove name="ASPClassic"/>
    <add name="ASPClassic" verb="*" path="*.asp" type="Dlrsoft.Asp.AspHandler, Dlrsoft.Asp"/>
  </handlers>
</system.webServer>

Comment out the section to switch back.
- Add a test page StringBuilder.asp:

<%
imports system
dim s = new system.text.stringbuilder()
dim i
s = s + "<table>"
for i = 1 to 12
    s = s + "<tr>"
    s = s + "<td>" + i + "</td>"
    s = s + "<td>" + MonthName(i) + "</td>"
    s = s + "</tr>"
next
s = s + "</table>"
response.Write(s)
%>

This code uses the .NET extensions, so it will only work with ASP Classic Compiler. Happy experimenting!

SkyLinq binaries are available on NuGet

After much hesitation, I finally published my SkyLinq binaries on NuGet. My main hesitation was that this is my playground, so I change things at will. The main reason to publish is that I want to use these works myself, so I need an easy way to get the latest binaries into my projects. NuGet is the easiest way to distribute and get updates, including for my own projects. There are three packages:

- SkyLinq.Linq is a portable library that contains some LINQ extensions.
- SkyLinq.Composition contains my duck-typing implementation. It is similar to Impromptu-Interface, but it is much simpler and it uses IL emit instead of LINQ expressions to generate code. It also contains a LINQ query-rewriting example.
- LINQPadHost is a simple hosting and executing environment for LINQPad queries.

A live demo is available online.
https://weblogs.asp.net/lichen
Boost Libraries

The Boost libraries are a set of C++ libraries that significantly expand the language using template metaprogramming. Subsets of Boost Version 1.39 and Version 1.50.0 are included that have been fully tested and preconfigured specifically for C++Builder XE3.

- Binaries have been built for functions that require them.
- Include paths have been set for the Boost libraries, and any necessary libraries should be automatically linked because of #pragma link lines in the Boost code.

Installing and Uninstalling the Boost Libraries

- Note about 64-bit Windows: for a 64-bit Windows C++Builder installation, you need at least 3 gigabytes of free disk space in order to ensure that the Boost libraries are correctly installed, and the product installation might require 18 gigabytes of disk space.
- For a 64-bit Windows install, you get both Boost 1.50.0 (for 64-bit Windows applications) and Boost 1.39 (for 32-bit Windows applications).
- For Boost Version 1.39, separate libraries are installed for 32-bit Windows and for OS X.

The RAD Studio product installer gives you a choice of whether to install the Boost libraries. Boost is one of the items listed in the feature-selection tree of the product installer. If you leave Boost enabled, the separate Boost installer is started by the product installer and installs the Boost libraries. To uninstall only the Boost libraries (not the entire product), run the Boost installer and choose the Remove option, as described in the following steps.

To uninstall only the Boost libraries (not the product):

- Open the Windows Control Panel.
- Choose Uninstall a program.
- Double-click Boost Libraries for C++Builder <version>. The Boost installer starts.
- On the Welcome page of the Boost installer, choose the Remove option and click Next.
- On the Ready to Uninstall page, click Next.
Boost Libraries Installation Locations

The following table shows the Boost Libraries versions that are used on specific target platforms and installed on specific development systems with RAD Studio.

Boost Version 1.39

Include Directories

For Boost Version 1.39, the typical include directory is as follows:

- 32-bit development system: C:\Program Files\Embarcadero\RAD Studio\<n.n>\include\boost_1_39\boost
- 64-bit development system: C:\Program Files (x86)\RAD Studio\<n.n>\include\boost_1_39\boost

Run-Time Libraries

For the 32-bit Windows target platform, the Boost Version 1.39 libraries are typically installed here:

C:\Program Files\Embarcadero\RAD Studio\<n.n>\lib\Win32\release

For the Mac OS X target platform, the Boost Version 1.39 libraries are installed in the following directory:

C:\Program Files (x86)\Embarcadero\RAD Studio\<n.n>\lib\osx32\release

For the 64-bit Windows target platform, the Boost Version 1.39 libraries are installed here:

C:\Program Files (x86)\Embarcadero\RAD Studio\<n.n>\lib\Win32\release

Example File Names

For Boost Version 1.39 on Windows, the files themselves are too numerous to list, but these are the names of the Boost libraries for the Mac OS X target platform:

libboost_date_time-bcb-mt-1_39.a
libboost_math_c99-bcb-mt-1_39.a
libboost_math_c99f-bcb-mt-1_39.a
libboost_regex-bcb-mt-1_39.a
libboost_signals-bcb-mt-1_39.a
libboost_system-bcb-mt-1_39.a

Libraries for the Win32 target platform have the file extension .lib.

Boost Version 1.50.0

Boost Version 1.50.0 is used only for the 64-bit Windows target platform.
Include Directory

For Boost Version 1.50.0, the typical include directory is:

C:\Program Files (x86)\Embarcadero\RAD Studio\<n.n>\include\boost_1_50\boost

Run-Time Libraries

For the 64-bit target platform, the Boost Version 1.50.0 files are typically installed here:

C:\Program Files (x86)\Embarcadero\RAD Studio\<n.n>\lib\win64\release

Example File Names

The files themselves for Version 1.50.0 are too numerous to list, but these are examples:

libboost_chrono-bcb-1_50.a
libboost_date_time-bcb-1_50.a
libboost_math_c99-bcb-1_50.a
libboost_prg_exec_monitor-bcb-1_50.a
libboost_random-bcb-1_50.a
libboost_unit_test_framework-bcb-1_50.a

(The term target platform means the current setting of the Target Platforms node in the Project Manager.)

In a Boost 1.39 installation, the Boost minmax extensions are installed in the algorithm directory. For example, to use minmax in an application that targets 32-bit Windows, your code should specify:

#include <boost/algorithm/minmax.hpp>

This directive includes the minmax library that is part of the algorithm directory. The Boost 1.50.0 directory structure might be different.

The path to the Boost libraries is specified in the following environment variables:

- CG_BOOST_ROOT for 32-bit Windows systems
- CG_64_BOOST_ROOT for 64-bit Windows systems

These variables are set on the Tools > Options > Environment Options > Environment Variables dialog box.

Including the Boost Libraries for Mac OS X Applications

For Mac OS X application development, you can use the Boost libraries for OS X, which are installed by default in the program files on the development computer at:

Embarcadero\RAD Studio\<n.n>\lib\osx32\release\

With the OS X target platform, .a files are library files. For example, the $(BIN)\lib\osx32\release directory contains .a files such as date_time, math_c99/math_c99f, regex, signals, and system.
For exact information about the OS X Boost headers available to you, please explore the directories subordinate to your Boost installation directory, $(BIN)\lib\osx32\release. Using the same #include directive shown in the example above (for Windows) also includes the Boost libraries for Mac OS X in your project.

Boost Documentation

To view the help for the Boost libraries, go to the Boost documentation site for the corresponding version (1.39 or 1.50.0).
http://docwiki.embarcadero.com/RADStudio/XE3/en/Boost_Libraries
Emotions are important when expressing oneself, as 60% of communication is expressed through the emotion found in one's face. This guide will show how to implement an emotion display on a screen with ROS.

The first step is to install the pygame module from the terminal:

sudo apt-get install python-pygame

The next step is to open a text editor, import pygame, and create a loop that keeps the screen refreshing continuously:

#! /usr/bin/env python
import pygame

screen = pygame.display.set_mode((640, 480))
running = 1
while running:
    event = pygame.event.poll()
    if event.type == pygame.QUIT:
        running = 0
    screen.fill((0, 0, 0))
    pygame.display.flip()

This displays a continuous, but so far empty, screen. Now is the time to add the 7 faces: neutral, happy, surprise, anger, disgust, fear, and sad. Each face has a gradual factor, based on the confidence of that emotion, that lets it revert toward the neutral face. An example of the anger face is provided below to show how the gradual factor is included. Another important aspect of displaying the face is an eye-coordinate location; since sympathy also relies on the location of the eyes, it is built into the eye drawing. When drawing the eyes and mouths, you can consult the pygame drawing API to see how to draw each face. The program uses an array to store the values (probabilities) of each emotion.
# draws the eyes
pygame.draw.circle(screen, (0, 0, 0), (255 + eyecoordx, 250 + eyecoordy), 15)
pygame.draw.circle(screen, (0, 0, 0), (355 + eyecoordx, 250 + eyecoordy), 15)
pygame.draw.polygon(screen, (255, 255, 255),
                    [(270 + eyecoordx, 235 + (30 * emotionamts[0]) + eyecoordy),
                     (270 + eyecoordx, 235 + eyecoordy),
                     (270 - (30 * emotionamts[0]) + eyecoordx, 235 + eyecoordy)], 0)
pygame.draw.polygon(screen, (255, 255, 255),
                    [(340 + eyecoordx, 235 + eyecoordy),
                     (340 + eyecoordx, 235 + (30 * emotionamts[0]) + eyecoordy),
                     (340 + (30 * emotionamts[0]) + eyecoordx, 235 + eyecoordy)], 0)

# draws the mouth
pygame.draw.lines(screen, (0, 0, 0), False,
                  [(240, 355 + int(20 * emotionamts[0])), (305, 355),
                   (370, 355 + int(20 * emotionamts[0]))], 5)

The next step is to integrate ROS (Robot Operating System) into the program. Since there is not much information to be published, there is no need for a publisher; a subscriber is enough. The subscriber collects which emotion is currently expressed and the probability of that emotion:

import rospy
from std_msgs.msg import String

def callback(data):
    # parse the emotion name and probability from the message
    pass

def listener():
    rospy.init_node('emotiondisplay', anonymous=True)
    rospy.Subscriber("emotiondisplay2", String, callback)
    rospy.spin()

After integrating ROS into the program, multithreading is needed so that the node can receive messages as a subscriber while continuously drawing to the screen. To multithread, you need to import the threading module and the time module. With these modules, we can sleep a thread while checking for possible messages from the publisher. To ensure that the threads don't interfere with each other mid-update, a lock is necessary.
import thread
import time
import threading

lock = threading.Lock()

def callback(data):
    # update the shared emotion state under the lock
    with lock:
        pass

def func(delay):
    done = False
    while not done:
        with lock:
            display()
        time.sleep(delay)

def main():
    try:
        thread.start_new_thread(func, (.01,))
        listener()
    except Exception, e:
        print str(e)

This code allows both threads, one displaying the screen and one collecting data from the publisher, to run simultaneously. The func method, started by thread.start_new_thread, is given a parameter of .01 seconds, which is the delay between each redraw of the display unless the lock held by callback delays it.
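The snippet above is Python 2 / ROS-specific; as a hedged illustration (all names here are invented stand-ins), the same lock-guarded producer/consumer pattern in modern Python 3, with a fake "callback" updating shared state while a "display" loop reads it:

```python
import threading
import time

lock = threading.Lock()
state = {"emotion": "neutral", "confidence": 0.0}
frames = []  # records what each "draw" would have shown

def callback(emotion, confidence):
    # Stand-in for the ROS subscriber callback: update shared state.
    with lock:
        state["emotion"] = emotion
        state["confidence"] = confidence

def display_loop(delay, ticks):
    # Stand-in for the pygame draw loop: read shared state each frame.
    for _ in range(ticks):
        with lock:
            frames.append((state["emotion"], state["confidence"]))
        time.sleep(delay)

t = threading.Thread(target=display_loop, args=(0.01, 5))
t.start()
callback("happy", 0.9)  # simulate a message arriving mid-loop
t.join()
```

The lock guarantees that the display thread never reads the emotion and its confidence in a half-updated state.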
https://emotionrobots.com/2016/02/28/emotion-display/
A: WINE, an MS Windows emulator for Linux, is still not ready for general distribution. If you want to contribute to its development, look for the status reports in the comp.emulators.ms-windows.wine newsgroup. There is also a FAQ, compiled by P. David Gardner.

A: In the meantime, if you need to run MS Windows programs, the safest bet is to dual-boot. LILO, the Linux boot loader, can boot one of several operating systems from a menu. See the LILO documentation for details.

A: A proprietary program called VMWare is also available to let you run Windows under a Linux "host" operating system. See the company's website.
http://www.faqs.org/contrib/linux/Linux-FAQ/compatibility.html
The key to this problem is how to identify strings that are in the same shifting sequence. There are different ways to encode this. The following code adopts this scheme: for a string s of length n, we encode its shifting feature as "s[1] - s[0], s[2] - s[1], ..., s[n - 1] - s[n - 2],". Then we build an unordered_map, using the above shifting-feature string as the key and the strings that share that shifting feature as the value. We store all the strings that share the same shifting feature in a vector. Remember to sort the vector, since the problem requires the groups to be in lexicographic order :-)

A final note: since the problem statement says that "az" and "ba" belong to the same shifting sequence, if s[i] - s[i - 1] is negative, we need to add 26 to it to make it positive and give the correct result. BTW, taking the suggestion of @StefanPochmann, we change the difference from numbers to lower-case alphabets using 'a' + diff. The code is as follows.

class Solution {
public:
    vector<vector<string>> groupStrings(vector<string>& strings) {
        unordered_map<string, vector<string>> mp;
        for (string s : strings)
            mp[shift(s)].push_back(s);
        vector<vector<string>> groups;
        for (auto m : mp) {
            vector<string> group = m.second;
            sort(group.begin(), group.end());
            groups.push_back(group);
        }
        return groups;
    }
private:
    string shift(string& s) {
        string t;
        int n = s.length();
        for (int i = 1; i < n; i++) {
            int diff = s[i] - s[i - 1];
            if (diff < 0) diff += 26;
            t += 'a' + diff + ',';
        }
        return t;
    }
};
So I use t += 'a' + diff + ','.

"If not using the ',' for separation, both 'az' and 'ach' will have the same shifting feature '25'."

No, "az" becomes "z" and "ach" becomes "cf". Not "25".

Oh, yeah. I see now. Since 'a' is added and the result becomes a character, the problem is automatically fixed :-)

I'm not sure that adding 'a' is necessary. It does get accepted without it. But I'm a little afraid at least of the null character, as that at least in C denotes the end of a string and I don't know how C++ handles it. So I added 'a' to be on the safe side and because it's just nicer in case anyone ever looks at the strings (like for debugging).

Hi, Stefan. I guess C++ will handle it as we desire. I tested the code without adding 'a' using the test case strings = ["a", "aa", "ac", "acc"], and C++ groups each of them into its own group, as it should.

Thanks. Good test case. Though you didn't necessarily test C++ but only the C++ implementation that you or LeetCode use. I tried finding it documented but failed. The closest thing I found is the claim "allowing the NUL byte to be in the string", but there's no reference.

Hi, thanks for your great solution. Here is a Java version based on your idea. Thanks for your discussion with Stefan too; I learned a lot.
public class Solution {
    public List<List<String>> groupStrings(String[] strings) {
        Map<String, List<String>> map = new HashMap<>();
        for (int i = 0; i < strings.length; i++) {
            String key = shiftPattern(strings[i]);
            if (map.containsKey(key)) {
                map.get(key).add(strings[i]);
            } else {
                map.put(key, new ArrayList<String>());
                map.get(key).add(strings[i]);
            }
        }
        List<List<String>> result = new ArrayList<>();
        for (String key : map.keySet()) {
            List<String> g = map.get(key);
            Collections.sort(g);
            result.add(g);
        }
        return result;
    }

    public String shiftPattern(String s) {
        String key = "";
        for (int i = 1; i < s.length(); i++) {
            int diff = (int) (s.charAt(i) - s.charAt(i - 1));
            if (diff < 0) {
                diff += 26;
            }
            key += 'a' + diff; // alternatively, key += 'a' + diff + ',';
        }
        return key;
    }
}

Hi, thanks for your clear Java code. Well, I am very new to Java, but the following part

if (map.containsKey(key)) {
    map.get(key).add(strings[i]);
} else {
    map.put(key, new ArrayList<String>());
    map.get(key).add(strings[i]);
}

may be simplified to

if (!map.containsKey(key)) map.put(key, new ArrayList<String>());
map.get(key).add(strings[i]);

Well, I'm not subscribed, so I cannot use the OJ, but I will verify it on my local machine. Thanks anyway.

@GO That line is necessary; try this case: ["az","yx"]. BTW, I am sorry — I forgot this problem is not free to everyone...

It is not necessary to encode the sequence into a human-readable string, though; just take the ASCII differences and stack them into a string, whatever it looks like, as a key.

I think you should write t += 'a' + diff; t += ','; instead of the single line t += 'a' + diff + ','; because the sum 'a' + diff + ',' is computed as one int and appended as a single character, not two.
https://discuss.leetcode.com/topic/20823/4ms-easy-c-solution-with-explanations
Hi Guys,

Having an issue with the rip movie range and slate script running together in RVIO. It seems that if you're trying to rip the movie length with the "-t" flag and are trying to generate a slate at the same time, the slate is never generated. The ripping of the movie should be done independently of slate creation. I have tried different rip configurations with no luck. Attached is some sample Python showing the issue.

import os

inPath = "P:/Desktop/testFootage/testFootage.#.tga"
outPath = "P:/Desktop/testFootage/testFootage.mov"

cmd = "rvio"
cmd = cmd + " -t " + "1-30"
cmd = cmd + " " + inPath
cmd = cmd + " -o " + outPath
cmd = cmd + " -leader simpleslate"
cmd = cmd + " " + "\"Zoic Studios\""
cmd = cmd + " " + "\"Show=Test Show\""
cmd = cmd + " " + "\"Episode=Test Episode\""
cmd = cmd + " " + "\"Version=Test Version\""
os.system(cmd)

-Romey
https://support.shotgunsoftware.com/hc/en-us/community/posts/209494438-RVIO-and-simpleslate-
How can I get rid of the CC3250.DLL (single-threaded) or CC3250MT.DLL (multi-threaded) dependency in an executable built under Borland 5 Professional? I just noticed this by chance. It's a simple console project. No VCL, no nothing. Just the iostream and iomanip includes, and using the std namespace. I tried to build it again as a release version and still nothing. I looked at the BCB 5 documentation, and if I read it correctly, I need to include the corresponding .bpl package in the compiler options. But I couldn't find it under the lib folder (or I don't know which package it is). Please don't tell me this DLL will have to ship with every final release application. It's a ~1.4 MB file and completely defeats the idea of a small-sized app.
http://cboard.cprogramming.com/cplusplus-programming/18266-executable-unwanted-dll-dependency-borland-printable-thread.html
Implementation of Klee’s Algorithm in C++

In this tutorial, we are going to see an implementation of Klee’s algorithm in C++. First we will learn what Klee’s algorithm is and how it works, and then we will see the C++ program for it.

Given a set of line segments, each described by a pair of starting and ending points, we have to find the total length covered after taking the union of all the line segments. For example:

arr = {{1,6},{4,5},{3,8},{7,9}}
Output: 8

arr = {{1,3},{2,5},{5,6}}
Output: 5

Klee’s Algorithm

Klee’s algorithm was introduced by the mathematician Victor Klee in 1977. Because it sorts the endpoints, it has a time complexity of O(n log n). Here is the algorithm:

1. Create a vector of pairs where each pair contains an endpoint value and a flag: false if it is a starting point, true if it is an ending point. For example, {2,3} becomes {{2,false},{3,true}}.
2. Sort the vector in ascending order using the built-in function sort(). If two values are equal, the starting point comes first in the order (false sorts before true).
3. Traverse the whole vector using a for loop.
4. Keep two variables, 'answer' and 'counter'; 'answer' stores the result, and whenever 'counter' is non-zero we add the difference between the current and previous points to 'answer'.
5. Increment 'counter' by 1 if the point is a starting point (flag false) and decrement it by 1 if it is an ending point (flag true).
6. Display the output.
C++ implementation of Klee’s Algorithm

So, here is the C++ implementation of the above algorithm:

#include <bits/stdc++.h>
using namespace std;

/*=========================================================
  FUNCTION TO FIND THE LENGTH AFTER TAKING THE UNION OF ALL
  =========================================================*/
int find_union(vector<pair<int, int>> vect)
{
    int size = vect.size();

    // Store starting and ending points separately: a starting
    // point is paired with false, an ending point with true.
    vector<pair<int, bool>> endpoints(size * 2);
    for (int i = 0; i < size; i++) {
        endpoints[i * 2] = make_pair(vect[i].first, false);
        endpoints[i * 2 + 1] = make_pair(vect[i].second, true);
    }

    // Sorting all endpoints
    sort(endpoints.begin(), endpoints.end());

    // answer stores the final length;
    // counter keeps track of the opening and closing of segments
    int answer = 0;
    int counter = 0;

    // Traverse through all endpoints
    for (int i = 0; i < size * 2; i++) {
        // If the counter is non-zero, add the difference between
        // the current and previous points to the answer
        if (counter)
            answer += (endpoints[i].first - endpoints[i - 1].first);

        // If the endpoint is an ending point, decrement the
        // counter; otherwise increment the counter
        if (endpoints[i].second)
            counter--;
        else
            counter++;
    }
    return answer;
}

/*======================================
  MAIN FUNCTION
  ======================================*/
int main()
{
    // Initialising a vector of pairs
    vector<pair<int, int>> vect;

    // Inserting endpoints of line segments
    vect.push_back(make_pair(1, 6));
    vect.push_back(make_pair(4, 5));
    vect.push_back(make_pair(3, 8));
    vect.push_back(make_pair(7, 9));

    // Calling the find_union() function
    int ans = find_union(vect);

    // Displaying output
    cout << "Final length after taking union of all segments = " << ans << endl;
    return 0;
}

Output:

Final length after taking union of all segments = 8

Time complexity: O(n*logn)

Thanks for reading this tutorial. I hope it helps you!
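As a quick cross-check (not part of the original tutorial), the same endpoint sweep can be written compactly in Python; it reproduces both of the example answers above:

```python
def union_length(segments):
    """Total length covered by the union of 1-D segments (l, r)."""
    points = []
    for l, r in segments:
        points.append((l, False))  # False marks a starting point
        points.append((r, True))   # True marks an ending point
    points.sort()                  # at equal values, starts sort before ends

    answer, counter, prev = 0, 0, None
    for value, is_end in points:
        if counter:                # inside at least one open segment
            answer += value - prev
        counter += -1 if is_end else 1
        prev = value
    return answer

print(union_length([(1, 6), (4, 5), (3, 8), (7, 9)]))  # -> 8
print(union_length([(1, 3), (2, 5), (5, 6)]))          # -> 5
```

Sorting tuples puts (x, False) before (x, True), which matches step 2 of the algorithm: at equal values the starting point is processed first.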
https://www.codespeedy.com/implementation-of-klees-algorithm-in-c/
CC-MAIN-2021-10
refinedweb
579
62.58
Recall that the extends clause declares that your class is a subclass of another. You can specify only one superclass for your class (Java does not support multiple class inheritance), and even though you can omit the extends clause from your class declaration, your class has a superclass. So, every class in Java has one and only one immediate superclass. This statement leads to the question, "Where does it all begin?" As depicted in the following figure, the top-most class, the class from which all other classes are derived, is the Object class defined in java.lang. The Object class defines and implements behaviour that every class in the Java system needs. It is the most general of all classes. Its immediate subclasses, and other classes near the top of the hierarchy, implement general behaviour; classes near the bottom of the hierarchy provide for more specialised behaviour.

Definition 1. A subclass is a class that extends another class. A subclass inherits state and behaviour from all of its ancestors. The term "superclass" refers to a class's direct ancestor as well as to all of its ascendant classes.

A subclass inherits all of the members in its superclass that are accessible to that subclass unless the subclass explicitly hides a member variable or overrides a method. Note that constructors are not members and are not inherited by subclasses. The following list itemises the members that are inherited by a subclass: members declared public or protected, and members declared with no access specifier, provided the subclass is in the same package as its superclass. Members declared private are not inherited.

Creating a subclass can be as simple as including the extends clause in your class declaration. However, you usually have to make other provisions in your code when subclassing a class, such as overriding methods or providing implementations for abstract methods. As mentioned before, member variables defined in the subclass hide member variables that have the same name in the superclass. One interesting feature of Java member variables is that a class can access a hidden member variable through its superclass.
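As a quick cross-language aside (not from the original tutorial), Python's object model has the same single-root shape, which makes the "where does it all begin?" idea concrete:

```python
class Base:            # plays the role of a Java superclass
    pass

class Derived(Base):   # single inheritance, like `extends` in Java
    pass

# Every class ultimately descends from one root class, just as every
# Java class descends from Object.
print(Derived.__mro__[-1] is object)  # True
print(issubclass(Derived, Base))      # True
```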
Consider the following superclass and subclass pair:

class Super {
    Number aNumber;
}

class Subbie extends Super {
    Float aNumber;
}

The aNumber variable in Subbie hides aNumber in Super. But you can access Super's aNumber from Subbie with super.aNumber. Here, super is a Java language keyword that allows a method to refer to hidden variables and overridden methods of the superclass.

The ability of a subclass to override a method in its superclass allows a class to inherit from a superclass whose behaviour is "close enough" and then override methods as needed. For example, all classes are descendants of the Object class. Object contains the toString method, which returns a String object containing the name of the object's class and its hash code. Most, if not all, classes will want to override this method and print out something meaningful for that class. Let's resurrect the Stack class example and override the toString method. The output of toString should be a textual representation of the object. For the Stack class, a list of the items in the stack would be appropriate.

public class Stack {
    private Vector items;

    // code for Stack's methods and constructor not shown

    // overrides Object's toString method
    public String toString() {
        int n = items.size();
        StringBuffer result = new StringBuffer();
        result.append("[");
        for (int i = 0; i < n; i++) {
            result.append(items.elementAt(i).toString());
            if (i < n-1)
                result.append(",");
        }
        result.append("]");
        return result.toString();
    }
}

The return type, method name, and number and type of the parameters for the overriding method must match those in the overridden method. The overriding method can have a different throws clause as long as it doesn't declare any types not declared by the throws clause in the overridden method. Also, the access specifier for the overriding method can allow more access than the overridden method, but not less. For example, a protected method in the superclass can be made public but not private.
Sometimes, you don't want to completely override a method. Rather, you want to add more functionality to it. To do this, simply call the overridden method using the super keyword. For example:

super.overriddenMethodName();

A subclass cannot override methods that are declared final in the superclass (by definition, final methods cannot be overridden). If you attempt to override a final method, the compiler displays an error message similar to the following and refuses to compile the program:

FinalTest.java:7: Final methods can't be overridden.
Method void iamfinal() is final in class ClassWithFinalMethod.
    void iamfinal() {
         ^
1 error.

The Object class sits at the top of the class hierarchy tree in the Java platform. Every class in the Java system is a descendant, direct or indirect, of the Object class. This class defines the basic state and behaviour that all objects must have, such as the ability to compare oneself to another object, to convert to a string, to wait on a condition variable, to notify other objects that a condition variable has changed, and to return the class of the object.

Your classes may want to override the following Object methods: clone, equals and hashCode, finalize, and toString. The equals/hashCode pair are listed together as they must be overridden together.

Your class cannot override these Object methods (they are final): getClass, notify, notifyAll, and the wait methods.

You use the clone method to create an object from an existing object. To create a clone, you write:

aCloneableObject.clone();

Object's implementation of this method checks to see if the object on which clone was invoked implements the Cloneable interface, and throws a CloneNotSupportedException if it does not. Note that Object itself does not implement Cloneable, so subclasses of Object that don't explicitly implement the interface are not cloneable.
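The same extend-rather-than-replace pattern exists in Python via super() (a cross-language sketch, not from the original text; the class names are made up for illustration):

```python
class Logger:
    def log(self, msg):
        return "LOG: " + msg

class TimestampLogger(Logger):
    def log(self, msg):
        # Call the overridden method first, then add functionality,
        # just like super.overriddenMethodName() in Java.
        base = super().log(msg)
        return "[ts] " + base

print(TimestampLogger().log("hello"))  # [ts] LOG: hello
```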
If the object on which clone was invoked does implement the Cloneable interface, Object's implementation of the clone method creates an object of the same type as the original object and initialises the new object's member variables to have the same values as the original object's corresponding member variables. The simplest way to make your class cloneable, then, is to add implements Cloneable to your class's declaration. For some classes the default behaviour of Object's clone method works just fine. Other classes need to override clone to get correct behaviour.

Consider the Stack class, which contains a member variable that refers to a Vector. If Stack relies on Object's implementation of clone, then the original stack and its clone will refer to the same vector. Changing one stack will change the other, which is undesirable behaviour. Here then is an appropriate implementation of clone for our Stack class, which clones the vector to ensure that the original stack and its clone do not refer to the same vector:

public class Stack implements Cloneable {
    private Vector items;

    // code for Stack's methods and constructor not shown

    protected Object clone() {
        try {
            Stack s = (Stack)super.clone();   // clone the stack
            s.items = (Vector)items.clone();  // clone the vector
            return s;                         // return the clone
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen because Stack is Cloneable
            throw new InternalError();
        }
    }
}

The implementation for Stack's clone method is relatively simple: It calls super.clone, which Stack inherits from Object and which creates and initialises an instance of the correct type. At this point, the original stack and its clone refer to the same vector. Next the method clones the vector. Be careful: clone should never use new to create the clone and should not call constructors.
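The same shallow-copy pitfall exists outside Java. Here is a rough Python parallel (a hypothetical Stack, not the article's Java code) showing why the contained collection must be copied too:

```python
import copy

class Stack:
    def __init__(self):
        self.items = []

    def clone(self):
        # A plain shallow copy would share self.items between the original
        # and the clone; copying the list as well mirrors the article's
        # Stack.clone, which clones its Vector.
        s = copy.copy(self)
        s.items = list(self.items)
        return s

a = Stack()
a.items.append("x")
b = a.clone()
b.items.append("y")
assert a.items == ["x"]        # the clone's changes don't leak back
assert b.items == ["x", "y"]
```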
Instead, the method should call super.clone, which creates an object of the correct type and allows the hierarchy of superclasses to perform the copying necessary to get a proper clone.

You must override the equals and hashCode methods together. The equals method compares two objects for equality and returns true if they are equal. The equals method provided in the Object class uses the identity function to determine if objects are equal (if the objects compared are the exact same object the method returns true). However, for some classes, two distinct objects of that type might be considered equal if they contain the same information. Consider this code that tests two Integers, one and anotherOne, for equality:

Integer one = new Integer(1);
Integer anotherOne = new Integer(1);
if (one.equals(anotherOne))
    System.out.println("objects are equal");

This program displays objects are equal even though one and anotherOne reference two distinct objects. They are considered equal because the objects compared contain the same integer value. Your classes should only override the equals method if the identity function is not appropriate for your class. If you override equals, then override hashCode as well.

The value returned by hashCode is an int that maps an object into a bucket in a hash table. An object must always produce the same hash code. However, objects can share hash codes (they aren't necessarily unique). Writing a "correct" hashing function is easy -- always return the same hash code for the same object. Writing an "efficient" hashing function, one that provides a sufficient distribution of objects over the buckets, is difficult. Even so, the hashing function for some classes is relatively obvious. For example, an obvious hash code for an Integer object is its integer value.

The Object class provides a method, finalize, that cleans up an object before it is garbage collected. This method's role during garbage collection was discussed previously.
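Python draws exactly the same contract between equality and hashing; a small sketch (not from the article, class name invented) of overriding the two together:

```python
class Point:
    """Value object: equality by contents, so __eq__ and __hash__ are
    overridden together, mirroring Java's equals/hashCode contract."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # Equal objects must produce equal hash codes.
        return hash((self.x, self.y))

one = Point(1, 2)
another = Point(1, 2)
assert one == another                  # distinct objects, equal contents
assert hash(one) == hash(another)      # so their hash codes agree too
```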
The finalize method is called automatically by the system and most classes you write do not need to override it. So you can generally ignore this method.

Object's toString method returns a String representation of the object. You can use toString along with System.out.println to display a text representation of an object, such as the current thread:

System.out.println(Thread.currentThread().toString());

The String representation for an object depends entirely on the object. The String representation of an Integer object is the integer value displayed as text. The String representation of a Thread object contains various attributes about the thread, such as its name and priority. For example, the previous line of code displays the following output:

Thread[main,5,main]

The toString method is very useful for debugging. It behooves you to override this method in all your classes.

The getClass method is a final method that returns a runtime representation of the class of an object. This method returns a Class object. Once you have a Class object you can query it for various information about the class, such as its name, its superclass, and the names of the interfaces that it implements. The following method gets and displays the class name of an object:

void PrintClassName(Object obj) {
    System.out.println("The Object's class is " + obj.getClass().getName());
}

One handy use of a Class object is to create a new instance of a class without knowing what the class is at compile time. The following sample method creates a new instance of the same class as obj, which can be any class that inherits from Object (which means that it could be any class):

Object createNewInstanceOf(Object obj)
        throws InstantiationException, IllegalAccessException {
    return obj.getClass().newInstance();
}

Part II continues here.
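Python draws the same distinction between a printable representation and a runtime class object; a small sketch (not from the article, using a toy Stack):

```python
class Stack:
    def __init__(self, items=None):
        self.items = list(items or [])

    def __str__(self):
        # Analogue of overriding toString(): a textual representation.
        return "[" + ",".join(str(i) for i in self.items) + "]"

s = Stack([1, 2, 3])
print(str(s))            # [1,2,3]

# type(s) plays the role of getClass(): a runtime class object you can
# query for the class name, bases, and so on.
print(type(s).__name__)  # Stack
```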
http://javafaq.nu/java-article1083.html
I'm leaving this code here but I'm recommending against its use. The Jakarta Commons Email library () is a better choice. It's just as easy to use but it also has support for more features of sending email and it will handle one thing in particular that is difficult to get right on your own. If you want to send an email with embedded graphics and include all those graphics in the email (i.e. they aren't just links to graphics on some remote server) then the Commons Email library will let you do that easily. Trust me, it beats having to figure out how to do it yourself in a way that works across email clients.

// This example of sending mail is a little different from the typical one you
// see in Java. For one thing, when you call the function to send the email
// you send in both a plain text version of the email and an HTML version. The
// recipient's email reader will pick the version to display (usually favoring
// the HTML version if it can display both).
//
// The other thing to note is that there is some commented out code in the
// method for dealing with SMTP servers which require authentication. As best
// I can remember this code worked fine but it's not in the current version.

// Copyright (c) 2002, John Muns. To learn more about open source licenses,
// please visit:

package com.johnmunsch.util;

import java.util.Properties;

import javax.mail.*;
import javax.mail.internet.*;

import org.apache.log4j.*;

/**
 * Handles sending email to a user. Slightly different from some of the examples
 * you see in that it will send a multi-format email with both a HTML "pretty"
 * version of the email and a straight text version.
 */
public class Mail {
    private static Logger log = Logger.getLogger(Mail.class.getName());

    /**
     * Send an email from one user to another user with a given subject using a
     * given SMTP host.
     * You can send both text and HTML versions of the same email, and it
     * should in fact be the same email content in both cases, because the end
     * user's email program will be the one to pick the version to display to
     * the user.
     *
     * @param from
     * @param to
     * @param subject
     * @param textBody
     * @param htmlBody
     * @param host
     * @throws AddressException
     * @throws MessagingException
     */
    public static void sendMail(String from, String to, String subject,
            String textBody, String htmlBody, String host)
            throws AddressException, MessagingException {
        // Get system properties.
        Properties props = System.getProperties();

        // Setup the mail server.
        props.put("mail.smtp.host", host);

        // Get a session.
        Session session = Session.getInstance(props, null);

        // The following is required for SMTP servers that require
        // authentication in order to send an email.
        // Transport transport = session.getTransport("smtp");
        // transport.connect(host, username, password);
        // props.put("mail.smtp.auth", "true");

        // Define the message.
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(from));
        message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);

        // Build a multipart/alternative body so that both the plain text and
        // the HTML versions are actually sent and the recipient's mail reader
        // picks the one to display. (Calling setText() and then setContent()
        // directly on the message would simply replace the text version with
        // the HTML one.)
        Multipart multipart = new MimeMultipart("alternative");
        BodyPart textPart = new MimeBodyPart();
        textPart.setText(textBody);
        BodyPart htmlPart = new MimeBodyPart();
        htmlPart.setContent(htmlBody, "text/html");
        multipart.addBodyPart(textPart);
        multipart.addBodyPart(htmlPart);
        message.setContent(multipart);

        // Send message
        Transport.send(message);
    }
}

Opinions expressed by DZone contributors are their own.
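For comparison, the same multipart/alternative idea in Python's standard library (a sketch unrelated to the DZone article's Java code; no mail is actually sent here, the message is only constructed):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_mail(from_addr, to_addr, subject, text_body, html_body):
    # multipart/alternative: the recipient's client picks the richest
    # part it can display (usually the HTML one).
    msg = MIMEMultipart("alternative")
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.attach(MIMEText(text_body, "plain"))  # fallback version first
    msg.attach(MIMEText(html_body, "html"))   # preferred version last
    return msg

msg = build_mail("a@example.com", "b@example.com", "Hi",
                 "plain text", "<p>html</p>")
print(msg.get_content_type())  # multipart/alternative
```

To actually send it you would hand msg.as_string() to smtplib.SMTP, which is the stdlib counterpart of Transport.send above.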
https://dzone.com/articles/email-101
At its most basic level, a Podcast or Vodcast is simply an RSS feed. Users can subscribe to your feed through any of the standard RSS methods including news aggregators and readers and browser plug-ins such as Live Bookmarks.

To begin, you first need to determine what media you wish to present to your subscribers. Your audio and video clips can be original music and videos, presentations, instructional videos and tutorials, or any other form of media in which your subscribers would be interested. Keep in mind that Podcasting and Vodcasting were originally designed for use on portable media devices such as Apple’s iPod. You should format your media accordingly and I’ll discuss that later in this article. But that doesn’t mean that you are limited to those devices; it simply means that you need to make sure your subscribers know what they’re getting.

The first step in this process is to get your media into the correct format. Many news aggregators such as Apple’s free iTunes and the open-source Democracy Player support many different playback formats. However, if you want to be compatible with the highest number of readers and devices you should stick to the standards. The most popular format for audio distribution is MP3 and the most popular video format is MP4. If you have a direct audience, you could easily distribute AVI, Flash, or any other file type using the same methods I will be presenting to you.

Preparing your media files
If you’re not interested in a commercial solution I suggest taking a look at Audacity, a free, open-source cross-platform sound editor. In either case, these programs are able to record live audio from different MIDI sources. Simply record your audio, make any necessary edits, and save. Both Audition and Audacity support saving directly to MP3 format. For video creation you’ll most likely want to do some sort of video recording. There are plenty of software solutions to choose from like TechSmith’s Camtasia Studio. If you are creating instructional videos or tutorials I highly recommend using Adobe Captivate. Of course, if free is your preferred price then you might want to take a look at BoByte’s AviTricks and AviScreen. Nearly every video creator/editor that I’m aware of exports to AVI format which can lead to very large file sizes. Don’t worry about that. Sticking to our model we’ll be converting it to a compressed MP4 format anyway. So you have your audio or video file and you’re ready to distribute it. Most likely your audio file will already be in the MP3 format. If it isn’t, there are many free MP3 encoders available. I recommend using the open-source Lame MP3 Encoder. Due to licensing, the Lame MP3 Encoder must be installed separately in order to add MP3 functionality to Audacity. MP3 is a compressed format, but it still doesn’t hurt to tinker with the settings. Remember that most mobile devices still have a limited amount of storage space. You also don’t want to kill your subscribers in download time or yourself in bandwidth. An audio rate of 128kbps is plenty. In most cases you can use a lower bit rate without a noticeable loss to the end user. Especially for videos, slight changes can make drastic improvements in file sizes. For example, most portable media players have very small displays so you can safely reduce the resolution on your videos that are designed for mobile playback. Converting video from AVI to MP4 is a largely similar process. 
I personally use Free iPod Video Converter by Jodix. It has a nice wizard style interface and is capable of performing batch conversions. You can also save commonly used settings and edit ID3 tags on the fly.

Publishing your Podcast
In order to publish your Podcast you will need to create an RSS feed enclosure. I’m not going to go into great detail about creating the feed. You can find more information about creating RSS feeds in my article “Simple Web Syndication with RSS 2.0.” The important point here is the use of a media enclosure. Enclosures are only available in RSS 2.0. The RSS 2.0 namespace includes an Enclosure tag for adding enclosures to your feed. Simply put, an enclosure is a way of supplying a media file with a particular feed item. However, the use of an enclosure tag isn’t enough for a Podcast. We also want to include another feature specifically for mobile devices, or more appropriately, for the software used to manage your mobile device. I’m talking about the iTunes namespace. There are a series of iTunes tags used specifically to mark an item for use on a mobile device. To implement this namespace we need to declare it in our RSS tag by linking to its DTD.

<?xml version="1.0"?>
<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">

Here we’ve opened our file by defining it as an XML document. Next, we’ve linked to the iTunes DTD to allow use of the iTunes namespace. Now we have to construct our channel and add items.

<?xml version="1.0"?>
<rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
  <channel>
    <title>"News You Can’t Use" by Developer Shed – Audio</title>
    <link></link>
    <category>News</category>
    <language>en-us</language>
    <copyright>Developer Shed, Inc.</copyright>
    <description>Fresh every Wednesday, "News You Can’t Use" by Developer Shed brings you the latest offbeat tech news stories from around the world–stories so crazy it’s News You Can’t Use!</description>
    <item>
      <title>Developer Shed’s News You Can’t Use for 2-28-2007</title>
      <enclosure url="technews_02282007.mp3" type="audio/mp3" />
      <guid>technews_02282007.mp3</guid>
      <pubDate>Wed, 28 Feb 2007 17:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>

This is what our weekly “News You Can’t Use” feed looks like after adding the iTunes DTD. While this feed is fully functional at this point, changing the document type hasn’t done any good because we haven’t made use of the iTunes namespace. Let’s begin with the channel section. There are several tags available that help us better define our channel’s content. After my channel’s description tag I add the following piece of code.

<itunes:subtitle>Weekly Audio Tech News Segment</itunes:subtitle>
<itunes:author>Developer Shed, Inc.</itunes:author>
<itunes:summary>Fresh every Wednesday, "News You Can’t Use" by Developer Shed brings you the latest offbeat tech news stories from around the world–stories so crazy it’s News You Can’t Use!</itunes:summary>
<itunes:owner>
  <itunes:name>Developer Shed, Inc.</itunes:name>
  <itunes:email>technews@developershed.com</itunes:email>
</itunes:owner>
<itunes:category text="...">
  <itunes:category text="..." />
</itunes:category>
<itunes:explicit>no</itunes:explicit>

Okay, the subtitle, author, and summary tag pairs are pretty self-explanatory. Next we add the owner tag pair. This contains tag pairs that hold information about the channel owner. The email address provided should be an email where concerns about the channel can be received. The category tags get a little tricky.
There are predefined categories to choose from listed on Apple’s website. You can find these along with the complete tag listing by skimming through the iTunes namespace technical specification. The first category tag is a surrounding pair with a text attribute that defines the main category listing. Multiple sub-categories can be listed inside of this pair within self-closing tags. You can list as many combinations as you like or need. The explicit tag is used to rate the content of the media being offered. iTunes will not list a feed in its directory without an explicit tag. Possible values are “yes” if it contains explicit material, “no” if it doesn’t, or “clean” if it’s the edited version of an explicit recording.

Once you have created a channel you need to add your items. This is a basic example:

<item>
  <itunes:author>Developer Shed, Inc.</itunes:author>
  <itunes:subtitle>Video news segment for the week of February 28, 2007</itunes:subtitle>
  <itunes:summary>Fresh every Wednesday, "News You Can’t Use" by Developer Shed brings you the latest offbeat tech news stories from around the world–stories so crazy it’s News You Can’t Use!</itunes:summary>
  <enclosure url="technews_02282007.mp3" type="audio/mpeg" />
  <guid>technews_02282007.mp3</guid>
  <pubDate>Wed, 28 Feb 2007 17:00:00 GMT</pubDate>
  <itunes:duration>5:31</itunes:duration>
  <itunes:keywords>devshed, tech, news, developer, shed</itunes:keywords>
</item>

Again we have our iTunes tags that are specific to the item itself. I’ve also added my enclosure. The enclosure tag is a self-closing tag. The first attribute is the URL to the media file. The second attribute defines the MIME type. The third attribute, which I’ve left out of this example, is the length in bytes of the file. Once you’ve created your feed you can save it with either an XML or RSS extension and upload it to your website. Just provide a URL directly to the file.
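Since the feed is plain XML, it can also be generated programmatically; here is a minimal Python sketch (the title, URL, and values are hypothetical, and this is not how the article builds its feed):

```python
import xml.etree.ElementTree as ET

# The iTunes namespace used in the feed examples above.
ITUNES = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES)

rss = ET.Element("rss", {"version": "2.0"})
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "News You Can't Use"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "enclosure", {
    "url": "http://example.com/technews.mp3",  # hypothetical URL
    "type": "audio/mpeg",
    "length": "123456",                        # size in bytes
})
ET.SubElement(item, "{%s}duration" % ITUNES).text = "5:31"

xml = ET.tostring(rss, encoding="unicode")
print("itunes:duration" in xml)  # True
```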
You can provide a URL specifically for iTunes users that will open the file directly in the iTunes software. Just replace the http:// in the URL with the itpc:// protocol instead.

Distribution, licensing, and legal concerns

There are several ways to distribute your Podcast or Vodcast. You can provide a direct URL on your website, in emails and newsletters, or on printed material. There are several online listing services available as well. Before distributing your Podcast there are a couple of things that you need to take into legal consideration.

Any media that you provide in a Podcast or Vodcast falls under copyright laws. If you are distributing your own original works, you have little to worry about because you inherently own the copyright, but if you are redistributing others’ works or works based on someone else’s work then you should be careful that you are not violating any applicable laws. Copyright laws are far beyond the scope of this article. Stay safe by making sure that you have written permission to redistribute any material that is not your own original work. For a more detailed explanation of how copyright laws affect podcasting, take a look at the Podcasting Legal Guide or consult a professional. If you would like to research this further on your own, take a look at the Digital Millennium Copyright Act (DMCA) and International Copyright Laws. The University of Washington’s “Copyright Connection” website is an excellent resource and a great place to start.

It’s also important that you provide your subscribers with a usage license. Most commonly Podcasts and Vodcasts are released to the public under a Creative Commons license. There are a couple of CC licenses to choose from based on what you wish to allow your subscribers to do with your content. For more information visit the Creative Commons website. The final thing you should take into consideration before releasing yourself to the public is your content.
Make sure that you target only your intended audience. You should also take care that your feed is not available to those who shouldn’t see it—especially if it contains explicit material. While there is no content ratings system in place for Podcasts at the time of this writing, there is an initiative to start one. You can take part in the beta Content Self-Ratings System for Podcasts being developed by Podtrac to help establish guidelines for a ratings system similar to those of the MPAA for movies and the ESRB for video games.
http://www.devshed.com/c/a/xml/how-to-set-up-podcasting-and-vodcasting/2/
FineTimer
frfu_1819646 Jul 1, 2014 6:33 AM

At the moment the lowest fine timer setting is 12.5 msec. Can I adjust this to a lower value like 1 msec? Maybe by patching the Cortex-M3 registers directly?

1. Re: FineTimer
ArvindS_76 Jun 30, 2014 11:40 AM (in response to frfu_1819646) 1 of 1 people found this helpful

The 1s app timer and the fine timer are software timers, and the resolution of the fine timer is 12.5 mS. This cannot be changed. With SDK 2.x, there is a different timer that can run at a resolution of 1.25 mS as an optional library you can add to the app. See <SDK>/Wiced-Smart/tier2/brcm/libraries/inc/bt_clock_based_timer.h

To include this library in your application, add the following line to your application's makefile.mk:

# Include this library to the application.
APP_PATCHES_AND_LIBS += bt_clock_based_periodic_timer.a

Then initialize the library in your application_create function before starting the timer:

#include "bleappevent.h"
#include "bt_clock_based_timer.h"

void application_create(void)
{
    //// All other application initialization here.

    // Initialize the BT clock based periodic timer library
    bt_clock_based_periodic_timer_Init();
}

Then to start the timer at, say, a 50 mS interval:

void application_start_50ms_timer(void)
{
    // See header for more details
    bt_clock_based_periodic_timer_Enable(application_timer_expired_callback, NULL, 50000/625);
}

int application_timer_expired_callback(void* context)
{
    // 50 ms timer callback, do something.

    // Context was not allocated and so does not need to be freed. So return no action.
    return BLE_APP_EVENT_NO_ACTION;
}

To stop the timer, use bt_clock_based_periodic_timer_Disable().

2. Re: FineTimer
frfu_1819646 Jun 30, 2014 11:03 PM (in response to frfu_1819646)

Can I use SDK 2.x for a 20732S design? Is there a solution for SDK 1.1?

3. Re: FineTimer
MichaelF_56 Jul 1, 2014 6:29 AM (in response to frfu_1819646)

Unfortunately, you will need to use SDK 1.1 for BCM20732S designs.
I will let the development team respond with a workaround if one exists.

4. Re: FineTimer
userc_8140 Sep 19, 2014 4:44 AM (in response to ArvindS_76)

I am using the BCM920737TAG board and the timer works with a 2.5 msec configuration:

bt_clock_based_periodic_timer_Enable(application_timer_expired_callback, NULL, 2500/625); -- works

But it doesn't work with a 1.25 msec configuration:

bt_clock_based_periodic_timer_Enable(application_timer_expired_callback, NULL, 1250/625);

I am toggling one of the port pins to monitor the timing:

int application_timer_expired_callback(void* context)
{
    // 50 ms timer callback, do something.
    if (gpio_getPinOutput(APPLICATION_LED_GREEN_PORT, APPLICATION_LED_GREEN_PIN) == APPLICATION_LED_GREEN_ON)
    {
        application_turn_off_green_led();
    }
    else
    {
        application_turn_on_green_led();
    }

    // Context was not allocated and so does not need to be freed. So return no action.
    return BLE_APP_EVENT_NO_ACTION;
}

Is there any limitation on using 1.25 msec as the timer period?

5. Re: FineTimer
ArvindS_76 Sep 19, 2014 8:56 AM (in response to userc_8140) 1 of 1 people found this helpful

Periods of less than ~5 mS should not be used because this will severely affect connections/advertisements/scans. This timer uses the BT scheduler, which runs off BT slots (625 uS). Since most things BT are scheduled in periods of slots or frames (1.25 mS), setting this timer to 1.25 mS won't work because there are higher priority tasks that the BT scheduler has to perform and these will always preempt this timer.

6. Re: FineTimer
userc_8140 Sep 21, 2014 1:16 PM (in response to ArvindS_76)

Is it possible to read the slot count (625 usec) from the application?

7. Re: FineTimer
ArvindS_76 Sep 21, 2014 3:03 PM (in response to userc_8140) 1 of 1 people found this helpful

No, this is not possible.

8. Re: FineTimer
userc_8140 Sep 23, 2014 5:17 AM (in response to ArvindS_76)

Thanks for the information; I need more information about this timer.
I want to configure this timer for 5 msec and execute my application code in the callback function. Is there any limitation on how much code we can execute without affecting the performance?

9. Re: FineTimer
ArvindS_76 Sep 23, 2014 9:58 AM (in response to userc_8140) 1 of 1 people found this helpful

The callback is serialized to the application thread. Since all time critical activity happens in interrupt context, you should be OK to use, say, 3-4 mS of the processing time. Just remember that it is the idle thread that pets the watchdog (WD). If you don't let the idle thread run at least once in 2s, you will trip the watchdog and the chip will reset (you can pet the WD using wdog_restart() in your app code, but you should be a bit careful with this approach). Also remember that there are a number of commands and events the BT stack will need to handle (the stack runs in the same thread context). Your app will also need to handle other callbacks and interrupt handlers, and these are also serialized to the application thread. So you cannot take up all the processing time in this timer callback (and you have to return from this function).
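To make the slot arithmetic in this thread concrete: the period argument to bt_clock_based_periodic_timer_Enable counts 625 µs Bluetooth slots, which is why the examples above divide a period in microseconds by 625. A trivial helper (illustration only, not part of the SDK):

```python
SLOT_US = 625  # duration of one Bluetooth slot in microseconds

def us_to_slots(period_us):
    # Convert a period in microseconds into whole BT slots.
    return period_us // SLOT_US

print(us_to_slots(50000))  # 80 slots for a 50 ms period
print(us_to_slots(2500))   # 4 slots for a 2.5 ms period
```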
https://community.cypress.com/thread/2175
I am using Python 2.5. And using the standard classes from Python, I want to determine the image size of a file. I've heard of PIL (Python Imaging Library), but it requires installation to work. How might I obtain an image's size without using any external library, just using Python 2.5's own modules? Note I want to support common image formats, particularly JPG and PNG.

Here's a python 3 script that returns a tuple containing an image height and width for .png, .gif and .jpeg without using any external libraries (ie what Kurt McKee referenced above). Should be relatively easy to transfer it to Python 2.

import struct
import imghdr

def get_image_size(fname):
    '''Determine the image type of fname and return its size.
    from draco'''
    with open(fname, 'rb') as fhandle:
        head = fhandle.read(24)
        if len(head) != 24:
            return
        if imghdr.what(fname) == 'png':
            check = struct.unpack('>i', head[4:8])[0]
            if check != 0x0d0a1a0a:
                return
            width, height = struct.unpack('>ii', head[16:24])
        elif imghdr.what(fname) == 'gif':
            width, height = struct.unpack('<HH', head[6:10])
        elif imghdr.what(fname) == 'jpeg':
            try:
                fhandle.seek(0)  # Read 0xff next
                size = 2
                ftype = 0
                while not 0xc0 <= ftype <= 0xcf:
                    fhandle.seek(size, 1)
                    byte = fhandle.read(1)
                    while ord(byte) == 0xff:
                        byte = fhandle.read(1)
                    ftype = ord(byte)
                    size = struct.unpack('>H', fhandle.read(2))[0] - 2
                # We are at a SOFn block
                fhandle.seek(1, 1)  # Skip the precision byte
                height, width = struct.unpack('>HH', fhandle.read(4))
            except Exception:
                return
        else:
            return
        return width, height

Here's a way to get dimensions of a png file without needing a third-party module (the PNG header stores the width and height as big-endian unsigned longs at offset 16):

import struct

def is_png(data):
    return data[:8] == '\x89PNG\r\n\x1a\n' and data[12:16] == 'IHDR'

def get_image_info(data):
    if is_png(data):
        w, h = struct.unpack('>LL', data[16:24])
        return int(w), int(h)
    raise Exception('not a png image')

When you run this on a PNG, it reports that the data is a PNG and returns its (width, height) tuple. And another example that includes handling of JPEGs as well:

While it's possible to call open(filename, 'rb') and check through the binary image headers for the dimensions, it seems much more useful to install PIL and spend your time writing great new software! You gain greater file format support and the reliability that comes from widespread usage.
From the PIL documentation, it appears that the code you would need to complete your task would be:

    from PIL import Image
    im = Image.open('filename.png')
    print 'width: %d - height: %d' % im.size  # returns (width, height) tuple

As for writing code yourself, I'm not aware of a module in the Python standard library that will do what you want. You'll have to open() the image in binary mode and start decoding it yourself. You can read about the formats at:

Regarding Fred the Fantastic's answer: Not every JPEG marker between C0 and CF is a SOF marker; I excluded DHT (C4), DNL (C8) and DAC (CC). Note that I haven't looked into whether it is even possible to parse any frames other than C0 and C2 in this manner. However, the other ones seem to be fairly rare (I personally haven't encountered any other than C0 and C2). Either way, this solves the problem mentioned in the comments by Malandy with Bangles.jpg (DHT erroneously parsed as SOF). The other problem mentioned with 1431588037-WgsI3vK.jpg is due to imghdr only being able to detect the APP0 (JFIF) and APP1 (EXIF) headers. This can be fixed by adding a more lax test to imghdr (e.g. simply FFD8, or maybe FFD8FF?) or something much more complex (possibly even data validation). With a more complex approach I've only found issues with: APP14 (FFEE) (Adobe); the first marker being DQT (FFDB); and APP2 and issues with embedded ICC_PROFILEs. Revised code below; I also altered the call to imghdr.what() slightly:

    import struct
    import imghdr

    def test_jpeg(h, f):
        # SOI APP2 + ICC_PROFILE
        if h[0:4] == b'\xff\xd8\xff\xe2' and h[6:17] == b'ICC_PROFILE':
            return 'jpeg'
        # SOI APP14 + Adobe
        if h[0:4] == b'\xff\xd8\xff\xee' and h[6:11] == b'Adobe':
            return 'jpeg'
        # SOI DQT
        if h[0:4] == b'\xff\xd8\xff\xdb':
            return 'jpeg'

    imghdr.tests.append(test_jpeg)

    def get_image_size(fname):
        '''Determine the image type of fhandle and return its size.
        from draco'''
        with open(fname, 'rb') as fhandle:
            head = fhandle.read(24)
            if len(head) != 24:
                return
            what = imghdr.what(None, head)
            if what == 'png':
                check = struct.unpack('>i', head[4:8])[0]
                if check != 0x0d0a1a0a:
                    return
                width, height = struct.unpack('>ii', head[16:24])
            elif what == 'gif':
                width, height = struct.unpack('<HH', head[6:10])
            elif what == 'jpeg':
                try:
                    fhandle.seek(0)  # Read 0xff next
                    size = 2
                    ftype = 0
                    while not 0xc0 <= ftype <= 0xcf or ftype in (0xc4, 0xc8, 0xcc):
                        fhandle.seek(size, 1)
                        byte = fhandle.read(1)
                        while ord(byte) == 0xff:
                            byte = fhandle.read(1)
                        ftype = ord(byte)
                        size = struct.unpack('>H', fhandle.read(2))[0] - 2
                    # We are at a SOFn block
                    fhandle.seek(1, 1)  # Skip the precision byte
                    height, width = struct.unpack('>HH', fhandle.read(4))
                except Exception:
                    return
            else:
                return
            return width, height

Note: Created a full answer instead of a comment, since I'm not yet allowed to.

If you happen to have ImageMagick installed, then you can use 'identify'. For example, you can call it like this:

    path = "//folder/image.jpg"
    dim = subprocess.Popen(["identify", "-format", "\"%w,%h\"", path],
                           stdout=subprocess.PIPE).communicate()[0]
    (width, height) = [int(x) for x in re.sub('[\t\r\n"]', '', dim).split(',')]

Found a nice solution in another Stack Overflow post (using only standard libraries + dealing with jpg as well): JohnTESlade's answer. And another solution (the quick way) for those who can afford to run the 'file' command within python:

    import os
    info = os.popen("file foo.jpg").read()
    print info

Output:

    foo.jpg: JPEG image data...density 28x28, segment length 16, baseline, precision 8, 352x198, frames 3

All you gotta do now is format the output to capture the dimensions. 352x198 in my case.

That code accomplishes 2 things:

- getting the image dimensions
- finding the real EOF of a jpg file

Well, when googling I was more interested in the latter one. The task was to cut a jpg file out of a data stream. Since I didn't find any way to use Python's 'image' to get the EOF of a jpg file, I made up this. Interesting things/changes/notes in this sample:

- extending the normal Python file class with the method uInt16, making the source code better readable and maintainable.
- Messing around with struct.unpack() quickly makes code look ugly
- replaced reads over 'uninteresting' areas/chunks with seek

In case you just like to get the dimensions, you may remove the line:

    hasChunk = ord(byte) not in range(0xD0, 0xDA) + [0x00]

since that only gets important when reading over the image data chunk, and comment in the #break to stop reading as soon as the dimensions were found. …but smile at what I'm telling you – you're the Coder 😉

    import struct
    import io, os

    class myFile(file):

        def byte(self):
            return file.read(self, 1)

        def uInt16(self):
            tmp = file.read(self, 2)
            return struct.unpack(">H", tmp)[0]

    jpeg = myFile('grafx_ui.s00_\\08521678_Unknown.jpg', 'rb')

    try:
        height = -1
        width = -1
        EOI = -1

        type_check = jpeg.read(2)
        if type_check != b'\xff\xd8':
            print("Not a JPG")
        else:
            byte = jpeg.byte()
            while byte != b"":
                while byte != b'\xff':
                    byte = jpeg.byte()
                while byte == b'\xff':
                    byte = jpeg.byte()

                # FF D8    SOI  Start of Image
                # FF D0..7 RST  DRI Define Restart Interval inside CompressedData
                # FF 00    Masked FF inside CompressedData
                # FF D9    EOI  End of Image
                hasChunk = ord(byte) not in range(0xD0, 0xDA) + [0x00]
                if hasChunk:
                    ChunkSize = jpeg.uInt16() - 2
                    ChunkOffset = jpeg.tell()
                    Next_ChunkOffset = ChunkOffset + ChunkSize

                # Find bytes \xFF \xC0..C3 that mark the Start of Frame
                if (byte >= b'\xC0' and byte <= b'\xC3'):
                    # Found SOF0..3 data chunk - read it and quit
                    jpeg.seek(1, os.SEEK_CUR)
                    h = jpeg.uInt16()
                    w = jpeg.uInt16()
                    #break
                elif (byte == b'\xD9'):
                    # Found End of Image
                    EOI = jpeg.tell()
                    break
                else:
                    # Seek to next data chunk
                    print "Pos: %.4x %x" % (jpeg.tell(), ChunkSize)
                    if hasChunk:
                        jpeg.seek(Next_ChunkOffset)

                byte = jpeg.byte()

            width = int(w)
            height = int(h)
            print("Width: %s, Height: %s JpgFileDataSize: %x" % (width, height, EOI))
    finally:
        jpeg.close()

It depends on the output of file, which I am not sure is standardized on all systems.
Some JPEGs don't report the image size:

    import subprocess, re
    image_size = list(map(int, re.findall('(\d+)x(\d+)', subprocess.getoutput("file " + filename))[-1]))

Stumbled upon this one, but you can get it by using the following, as long as you import numpy:

    import numpy as np
    [y, x] = np.shape(img[:,:,0])

It works because you ignore all but one color, and then the image is just 2D, so shape tells you how big it is. Still kinda new to Python, but it seems like a simple way to do it.
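The marker-scanning logic used in several answers above can also be checked without a real image: JPEG segments after the SOI marker carry a big-endian length, so a minimal SOF0 stream can be synthesized in memory. A self-contained sketch (Python 3; the names are mine, not from the answers, and fill bytes between markers are not handled):

```python
import io
import struct

def jpeg_size(stream):
    """Walk JPEG segments until a SOFn marker (C0-CF, excluding the
    DHT/DNL/DAC markers C4/C8/CC) and return (width, height)."""
    if stream.read(2) != b'\xff\xd8':
        raise ValueError('not a JPEG')
    while True:
        marker = stream.read(2)
        if len(marker) < 2 or marker[0] != 0xFF:
            raise ValueError('marker expected')
        code = marker[1]
        length = struct.unpack('>H', stream.read(2))[0]
        if 0xC0 <= code <= 0xCF and code not in (0xC4, 0xC8, 0xCC):
            stream.read(1)  # skip the precision byte
            height, width = struct.unpack('>HH', stream.read(4))
            return width, height
        stream.seek(length - 2, 1)  # length includes its own two bytes

# Synthesized stream: SOI, a 16-byte APP0 segment, then a SOF0 header
# for a 200x256 image (1 component, so the SOF0 payload is 9 bytes).
SAMPLE = (b'\xff\xd8'
          + b'\xff\xe0' + struct.pack('>H', 16) + b'\x00' * 14
          + b'\xff\xc0' + struct.pack('>H', 11) + b'\x08'
          + struct.pack('>HH', 256, 200) + b'\x01' + b'\x01\x11\x00')

print(jpeg_size(io.BytesIO(SAMPLE)))  # -> (200, 256)
```

This is the same skip-segments-until-SOFn idea as the draco snippet, just written against an in-memory stream so the offsets can be verified directly.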
https://techstalking.com/programming/python/how-to-obtain-image-size-using-standard-python-class-without-using-external-library/
"Add new jobs" do not run any jobs if original build failed
RESOLVED FIXED in mozilla58
Status People (Reporter: whimboo, Assigned: bstack) Tracking Details Attachments (1 attachment)

Please have a look at the following try build: The original build for Linux64 failed because of infra issues (S3 outage) yesterday. So I retriggered the build once the problems went away. Later I wanted to add some Firefox UI functional jobs, and did so by selecting via "Add new jobs". Submitting the request worked fine and the "add-new" jobs appeared. But none of the selected test jobs is getting scheduled, because they all rely on the original build. We talked with Armen on IRC, and as he also mentioned, it was working fine with mozci earlier because in such a case it would have created a new build job. But that is no longer the case with the re-implementation. Brian, can you please have a look? Thanks.

Flags: needinfo?(bstack)

Ah, that is indeed a change in behavior in actions.json. I'll work on a patch now! Thanks for finding this; there will probably be a long tail of differences in implementations.

Flags: needinfo?(bstack)
Assignee: nobody → bstack
Status: NEW → ASSIGNED

Comment on attachment 8910450 [details]
Bug 1400223 - Merge tasks added by action tasks into graphs used for subsequent tasks

::: taskcluster/taskgraph/actions/util.py:75 (Diff revision 1)
>     params,
>     to_run,
>     label_to_taskid)
> +    write_artifact('task-graph.json', optimized_task_graph.to_json())
> +    write_artifact('label-to-taskid.json', label_to_taskid)
> +    write_artifact('to-run.json', list(to_run))

Is there a doc to update about these?

::: taskcluster/taskgraph/util/taskcluster.py:113 (Diff revision 1)
> +    data = {'continuationToken': response.get('continuationToken')}
> +    else:
> +        break
> +
> +    # We can sort on expires because all of these tasks should be created with the
> +    # same expires time so they end up in order from earliest to latest action

clever!
Attachment #8910450 - Flags: review?(dustin) → review+ Comment on attachment 8910450 [details] Bug 1400223 - Merge tasks added by action tasks into graphs used for subsequent tasks > clever! Now that I think about this, I realize this is not generically true across all namespaces. Within the context of action tasks this should be true, but this is a function in taskgraph/util and not action specific. Should we leave it as-is and call it an implementation detail to sort out later or maybe I should do the sorting in action tasks? I think you can document the function as sorting by expiration, and leave it up to other users to sort differently if desired. Pushed by ryanvm@gmail.com: Merge tasks added by action tasks into graphs used for subsequent tasks r=dustin Status: ASSIGNED → RESOLVED Closed: 2 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla58 Product: Taskcluster → Taskcluster Graveyard
https://bugzilla.mozilla.org/show_bug.cgi?id=1400223
4 Observables & Subjects in Practice
Written by Marin Todorov

By this point in the book, you understand how observables and different types of subjects work, and you've learned how to create and experiment with them in a Swift playground. It could be a bit challenging, however, to see the practical use of observables in everyday development situations such as binding your UI to a data model, or presenting a new controller and getting output back from it. It's OK to be a little unsure how to apply these newly acquired skills to the real world. In this book, you'll work through theoretical chapters such as Chapter 2, "Observables," and Chapter 3, "Subjects", as well as practical step-by-step chapters — just like this one! In the "… in practice" chapters, you'll work on a complete app. The starter Xcode project will include all the non-Rx code. Your task will be to add the RxSwift framework and add other features using your newly-acquired reactive skills. That doesn't mean to say you won't learn a few new things along the way — au contraire! In this chapter, you'll use RxSwift and your new observable superpowers to create an app that lets users create nice photo collages — the reactive way.

Getting started

Open the starter project for this chapter: Combinestagram. It takes a couple of tries to roll your tongue just right to say the name, doesn't it? It's probably not the most marketable name, but it will do. Install all pods and open Combinestagram.xcworkspace. Refer to Chapter 1, "Hello RxSwift," for details on how to do that. Select Assets/Main.storyboard and you'll see the interface of the app you will bring to life: In the first screen, the user can see the current photo collage and has buttons to either clear the current list of photos or to save the finished collage to disk.
Additionally, when the user taps on the + button at the top-right, they will be taken to the second view controller in the storyboard where they will see the list of photos in their Camera Roll. The user can add photos to the collage by tapping on the thumbnails. The view controllers and the storyboard are already wired up, and you can also peek at UIImage+Collage.swift to see how the actual collage is put together. In this chapter, you are going to focus on putting your new skills to practice. Time to get started!

Using a subject/relay in a view controller

You'll start by adding a BehaviorRelay<[UIImage]> property to the controller class and store the selected photos in its value. As you learned in Chapter 3, "Subjects", the BehaviorRelay class works much like you're used to with plain variables: you can manually change their value property any time you want. You will start with this simple example and later move on to subjects and custom observables. Open MainViewController.swift and add the following inside the body of MainViewController:

    private let bag = DisposeBag()
    private let images = BehaviorRelay<[UIImage]>(value: [])

Since no other class will use those two constants, you define them as private. Encapsulation FTW! The dispose bag is owned by the view controller. As soon as the view controller is released, all your observable subscriptions will be disposed as well. This makes Rx subscription memory management very easy: Simply throw subscriptions in the bag and they will be disposed alongside the view controller's deallocation. However, that won't happen for this specific view controller, since it's the root view controller and it isn't released before the app quits. You'll see the clever dispose-upon-deallocation mechanism at work later on in this chapter for the other controller in the storyboard. At first, your app will always build a collage based on the same photo.
No worries; it's a nice photo from the Barcelona countryside, which is already included in the app's Asset Catalog. Each time the user taps +, you will add that same photo, one more time, to images. Find actionAdd() and add the following to it:

    let newImages = images.value + [UIImage(named: "IMG_1907.jpg")!]
    images.accept(newImages)

First, you get the latest collection of images emitted by the relay, fetching it via its value property, and then you append one more image to it. Don't mind the force-unwrapping after the UIImage initialization; we're keeping things simple by skipping error handling for this chapter. Next, you use the relay's accept(_) to emit the updated set of images to any observers subscribed to the relay. The initial value of the images relay is an empty array, and every time the user taps the + button, the observable sequence produced by images emits a new .next event with the new array as an element. To permit the user to clear the current selection, scroll up and add the following to actionClear():

    images.accept([])

With a few lines of code in this section, you neatly handled the user input. You can now move on to observing images and displaying the result on screen.

Adding photos to the collage

Now that you have images wired up, you can observe for changes and update the collage preview accordingly. In viewDidLoad(), create the following subscription to images. Even though it's a relay, you can subscribe to it directly, since it conforms to ObservableType, much like Observable itself does:

    images
      .subscribe(onNext: { [weak imagePreview] photos in
        guard let preview = imagePreview else { return }
        preview.image = photos.collage(size: preview.frame.size)
      })
      .disposed(by: bag)

You subscribe for .next events emitted by images. For every event, you create a collage with the helper method collage(images:size:) provided for arrays of type UIImage. Finally, you add this subscription to the view controller's dispose bag.
In this chapter, you are going to subscribe to your observables in viewDidLoad(). Later in the book, you will look into extracting these into separate classes and, in the last chapter, structure them into an MVVM architecture. You now have your collage UI together; the user can update images by tapping the + bar item (or Clear) and you update the UI in turn. Run the app and give it a try! If you add the photo four times, your collage will look like this: Wow, that was easy! Of course, the app is a bit boring right now, but don't worry — you will add the ability to select photos from Camera Roll in just a bit.

Driving a complex view controller UI

As you play with the current app, you'll notice the UI could be a bit smarter to improve the user experience. For example:

- You could disable the Clear button if there aren't any photos selected just yet, or in the event the user has just cleared the selection.
- Similarly, there's no need for the Save button to be enabled if there aren't any photos selected.
- You could also disable Save for an odd number of photos, as that would leave an empty spot in the collage.
- It would be nice to limit the amount of photos in a single collage to six, since more photos simply look a bit weird.
- Finally, it would be nice if the view controller title reflected the current selection.

If you take a moment to read through the list above one more time, you'll certainly see these modifications could be quite a hassle to implement the non-reactive way. Thankfully, with RxSwift you simply subscribe to images one more time and update the UI from a single place in your code. Add this subscription inside viewDidLoad():

    images
      .subscribe(onNext: { [weak self] photos in
        self?.updateUI(photos: photos)
      })
      .disposed(by: bag)

Every time there's a change to the photo selection, you call updateUI(photos:).
You don't have that method just yet, so add it anywhere inside the class body:

    private func updateUI(photos: [UIImage]) {
      buttonSave.isEnabled = photos.count > 0 && photos.count % 2 == 0
      buttonClear.isEnabled = photos.count > 0
      itemAdd.isEnabled = photos.count < 6
      title = photos.count > 0 ? "\(photos.count) photos" : "Collage"
    }

In the above code, you update the complete UI according to the ruleset above. All of the logic is in a single place and easy to read through. Run the app again, and you will see all the rules kick in as you play with the UI: By now, you're probably starting to see the real benefits of Rx when applied to your iOS apps. If you look through all the code you've written in this chapter, you'll see there are only a few simple lines that drive the entire UI!

Talking to other view controllers via subjects

In this section of the chapter, you will connect the PhotosViewController class to the main view controller in order to let the user select arbitrary photos from their Camera Roll. That will result in far more interesting collages! First, you need to push PhotosViewController to the navigation stack. Open MainViewController.swift and find actionAdd(). Comment out the existing code and add this code in its place:

    let photosViewController = storyboard!.instantiateViewController(
      withIdentifier: "PhotosViewController") as! PhotosViewController
    navigationController!.pushViewController(photosViewController, animated: true)

Above, you instantiate PhotosViewController from the project's storyboard and push it onto the navigation stack. Run the app and tap + to see the Camera Roll. The very first time you do this, you'll need to grant access to your Photo Library: Once you tap OK you will see what the photos controller looks like. The actual photos might differ on your device, and you might need to go back and try again after granting access. The second time around, you should see the sample photos included with the iPhone Simulator.
If you were building an app using the established Cocoa patterns, your next step would be to add a delegate protocol so that the photos controller could talk back to your main controller (that is, the non-reactive way): With RxSwift, however, you have a universal way to talk between any two classes — an Observable! There is no need to define a special protocol, because an Observable can deliver any kind of message to any one or more interested parties — the observers.

Creating an observable out of the selected photos

You'll next add a subject to PhotosViewController that emits a .next event each time the user taps a photo from the Camera Roll. Open PhotosViewController.swift and add the following near the top:

    import RxSwift

You'd like to add a PublishSubject to expose the selected photos, but you don't want the subject publicly accessible, as that would allow other classes to call onNext(_) and make the subject emit values. You might want to do that elsewhere, but not in this case. Add the following properties to PhotosViewController:

    private let selectedPhotosSubject = PublishSubject<UIImage>()
    var selectedPhotos: Observable<UIImage> {
      return selectedPhotosSubject.asObservable()
    }

Here, you define both a private PublishSubject that will emit the selected photos and a public property named selectedPhotos that exposes the subject's observable. Subscribing to this property is how the main controller can observe the photo sequence, without being able to interfere with it. PhotosViewController already contains the code to read photos from your Camera Roll and display them in a collection view. All you need to do is add the code to emit the selected photo when the user taps on a collection view cell. Scroll down to collectionView(_:didSelectItemAt:). The code inside fetches the selected image and flashes the collection cell to give the user a bit of visual feedback. imageManager.requestImage(...)
gets the selected photo and gives you image and info parameters to work with in its completion closure. In that closure, you'd like to emit a .next event from selectedPhotosSubject. Inside the closure, just after the guard statement, add:

    if let isThumbnail = info[PHImageResultIsDegradedKey as NSString] as? Bool, !isThumbnail {
      self?.selectedPhotosSubject.onNext(image)
    }

You use the info dictionary to check if the image is the thumbnail or the full version of the asset. imageManager.requestImage(...) will call that closure once for each size. In the event you receive the full-size image, you call onNext(_) on your subject and provide it with the full photo. That's all it takes to expose an observable sequence from one view controller to another. There's no need for delegate protocols or any other shenanigans of that sort. As a bonus, once you remove the protocols, the controllers' relationship becomes very simple:

Observing the sequence of selected photos

Your next task is to return to MainViewController.swift and add the code to complete the last part of the schema above: namely, observing the selected photos sequence. Find actionAdd() and add the following just before the line where you push the controller onto the navigation stack:

    photosViewController.selectedPhotos
      .subscribe(
        onNext: { [weak self] newImage in
        },
        onDisposed: {
          print("Completed photo selection")
        }
      )
      .disposed(by: bag)

Before you push the controller, you subscribe for events on its selectedPhotos observable. You are interested in two events: .next, which means the user has tapped a photo, and also when the subscription is disposed. You'll see why you need that in a moment. Insert the following code inside the onNext closure to get everything working.
It's the same code you had before, but this time it adds the photo from Camera Roll:

    guard let images = self?.images else { return }
    images.accept(images.value + [newImage])

Run the app, select a few photos from your Camera Roll, and go back to see the result. Cool!

Disposing subscriptions — review

The code seemingly works as expected, but try the following: Add a few photos to a collage, go back to the main screen, and inspect the console. Do you see a message saying, "Completed photo selection"? You added an onDisposed closure, but it never gets called! That means the subscription is never disposed and never frees its memory! How so? You subscribe an observable sequence and throw it in the main screen's dispose bag. This subscription (as discussed in previous chapters) will be disposed of either when the bag object is released, or when the sequence completes via an error or completed event. Since you neither destroy the main view controller to release its bag property, nor complete the photos sequence, your subscription just hangs around for the lifetime of the app! To give your observers some closure, you could emit a .completed event when that controller disappears from the screen. This would notify all observers that the subscription has completed to help with automatic disposal. Open PhotosViewController.swift and add a call to your subject's onCompleted() method in the controller's viewWillDisappear(_:):

    selectedPhotosSubject.onCompleted()

Perfect! Now, you're ready for the last part of this chapter: taking a plain old boring function and converting it into a super-awesome and fantastical reactive class.

Creating a custom observable

So far, you've tried BehaviorRelay, PublishSubject, and an Observable. To wrap up, you'll create your own custom Observable and turn a plain old callback API into a reactive class. You'll use the Photos framework to save the photo collage — and since you're already an RxSwift veteran, you are going to do it the reactive way!
You could add a reactive extension on PHPhotoLibrary itself, but to keep things simple, in this chapter you will create a new custom class named PhotoWriter: Creating an Observable to save a photo is easy: If the image is successfully written to disk you will emit its asset ID and a .completed event, or otherwise an .error event.

Wrapping an existing API

Open Classes/PhotoWriter.swift — this file includes a couple of definitions to get you started. First, as always, add an import of the RxSwift framework:

    import RxSwift

Then, add a new static method to PhotoWriter, which will create the observable you will give back to code that wants to save photos:

    static func save(_ image: UIImage) -> Observable<String> {
      return Observable.create { observer in
      }
    }

save(_:) will return an Observable<String>, because, after saving the photo, you will emit a single element: the unique local identifier of the created asset. Observable.create(_) creates a new Observable, and you need to add all the meaty logic inside that last closure. Add the following to the Observable.create(_) parameter closure:

    var savedAssetId: String?
    PHPhotoLibrary.shared().performChanges({
    }, completionHandler: { success, error in
    })

In the first closure parameter of performChanges(_:completionHandler:), you will create a photo asset out of the provided image; in the second one, you will emit either the asset ID or an .error event. Add inside the first closure:

    let request = PHAssetChangeRequest.creationRequestForAsset(from: image)
    savedAssetId = request.placeholderForCreatedAsset?.localIdentifier

You create a new photo asset by using PHAssetChangeRequest.creationRequestForAsset(from:) and store its identifier in savedAssetId. Next insert into the completionHandler closure:

    DispatchQueue.main.async {
      if success, let id = savedAssetId {
        observer.onNext(id)
        observer.onCompleted()
      } else {
        observer.onError(error ??
Errors.couldNotSavePhoto)
      }
    }

If you got a success response back and savedAssetId contains a valid asset ID, you emit a .next event and a .completed event. In case of an error, you emit either a custom or the default error. With that, your observable sequence logic is completed. Xcode should already be warning you that you're missing a return statement. As a last step, you need to return a Disposable out of that outer closure, so add one final line to Observable.create({}):

    return Disposables.create()

That wraps up the class nicely. The complete save() method should look like this:

    static func save(_ image: UIImage) -> Observable<String> {
      return Observable.create({ observer in
        var savedAssetId: String?
        PHPhotoLibrary.shared().performChanges({
          let request = PHAssetChangeRequest.creationRequestForAsset(from: image)
          savedAssetId = request.placeholderForCreatedAsset?.localIdentifier
        }, completionHandler: { success, error in
          DispatchQueue.main.async {
            if success, let id = savedAssetId {
              observer.onNext(id)
              observer.onCompleted()
            } else {
              observer.onError(error ?? Errors.couldNotSavePhoto)
            }
          }
        })
        return Disposables.create()
      })
    }

If you've been paying attention, you might be asking yourself, "Why do we need an Observable that emits just a single .next event?" Take a moment to reflect on what you've learned in the previous chapters. For example, you can create an Observable by using any of the following:

- Observable.never(): Creates an observable sequence that never emits any elements.
- Observable.just(_:): Emits one element and a .completed event.
- Observable.empty(): Emits no elements followed by a .completed event.
- Observable.error(_): Emits no elements and a single .error event.

As you see, observables can produce any combination of zero or more .next events, possibly terminated by either a .completed or an .error. In the particular case of PhotoWriter, you are only interested in one event since the save operation completes just once.
You use .completed for successful writes, and .error if a particular write failed. You get a big bonus point if you're screaming "But what about Single?" about now. Indeed, what about Single?

RxSwift traits in practice

In Chapter 2, "Observables," you had the chance to learn about RxSwift traits: specialized variations of the Observable implementation that are very handy in certain cases. In this chapter, you're going to do a quick review and use some of the traits in the Combinestagram project! Let's start with Single.

Single

As you know from Chapter 2, Single is an Observable specialization. It represents a sequence which can emit just once either a .success(Value) event or an .error. Under the hood, a .success is just a .next plus a .completed pair. This kind of trait is useful in situations such as saving a file, downloading a file, loading data from disk, or basically any asynchronous operation that yields a value. You can categorize two distinct use-cases of Single:

- For wrapping operations that emit exactly one element upon success, just as PhotoWriter.save(_) earlier in this chapter. You can directly create a Single instead of an Observable. In fact, you will update the save(_) method in PhotoWriter to create a Single in one of this chapter's challenges.
- To better express your intention to consume a single element from a sequence and ensure that, if the sequence emits more than one element, the subscription will error out. To achieve this, you can subscribe to any observable and use .asSingle() to convert it to a Single. You'll try this just after you've finished reading through this section.

Maybe

Maybe is quite similar to Single, with the only difference being that the observable may not emit a value upon successful completion. If we keep to the photograph-related examples, imagine this use-case for Maybe: your app is storing photos in its own custom photo album. You persist the album identifier in UserDefaults and use that ID each time to "open" the album and write a photo inside.
You would design an open(albumId:) -> Maybe<String> method to handle the following situations:

- In case the album with the given ID still exists, just emit a .completed event.
- In case the user has deleted the album in the meanwhile, create a new album and emit a .next event with the new ID so you can persist it in UserDefaults.
- In case something is wrong and you can't access the Photos library at all, emit an .error event.

Just like other traits, you can achieve the same functionality by using a "vanilla" Observable, but Maybe gives more context both to you as you're writing your code and to the programmers coming to alter the code later on. Just as with Single, you can either create a Maybe directly by using Maybe.create({ ... }) or convert any observable sequence via .asMaybe().

Completable

The final trait to cover is Completable. This variation of Observable allows only for a single .completed or .error event to be emitted before the subscription is disposed of. You can convert an observable sequence to a completable by using the ignoreElements() operator, in which case all next events will be ignored, with only a completed or error event emitted, just as required for a Completable. You can also create a completable sequence by using Completable.create { ... } with code very similar to that you'd use to create other observables or traits. You might notice that Completable simply doesn't allow for emitting any values and wonder why you would need a sequence like that. You'd be surprised at the number of use-cases wherein you only need to know whether an async operation succeeded or not. Let's look at an example before going back to Combinestagram. Let's say your app auto-saves the document while the user is working on it. You'd like to asynchronously save the document in a background queue and, when completed, show a small notification or an alert box onscreen if the operation fails.
Let’s say you wrapped the saving logic into a function saveDocument() -> Completable. This is how easy it is then to express the rest of the logic:

saveDocument()
  .andThen(Observable.from(createMessage))
  .subscribe(onNext: { message in
    message.display()
  }, onError: { e in
    alert(e.localizedDescription)
  })

The andThen operator allows you to chain more completables or observables upon a success event and subscribe for the final result. In case any of them emits an error, your code will fall through to the final onError closure. I’ll assume you’re delighted to hear that you will get to use Completable two chapters later in the book. And now back to Combinestagram and the problem at hand! Subscribing to your custom observable The current feature — saving a photo to the Photos library — falls under one of those special use-cases for which there is a special trait. Your PhotoWriter.save(_) observable emits just once (the new asset ID), or it errors out, and is therefore a great case for a Single. Now for the sweetest part of all: making use of your custom-designed Observable and kicking serious butt along the way! Open MainViewController.swift and add the following inside the actionSave() action method for the Save button:

guard let image = imagePreview.image else { return }

PhotoWriter.save(image)
  .asSingle()
  .subscribe(
    onSuccess: { [weak self] id in
      self?.showMessage("Saved with id: \(id)")
      self?.actionClear()
    },
    onError: { [weak self] error in
      self?.showMessage("Error", description: error.localizedDescription)
    }
  )
  .disposed(by: bag)

Above, you call PhotoWriter.save(image) to save the current collage. Then you convert the returned Observable to a Single, ensuring your subscription will get at most one element, and display a message when it succeeds or errors out. Additionally, you clear the current collage if the write operation was a success. Note: asSingle() ensures that you get at most one element by throwing an error if the source sequence emits more than one.
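To make the three traits concrete, here is a compact sketch of creating each one directly. This is illustrative code, not from the chapter’s project: the helper functions, names and error type are hypothetical placeholders, and RxSwift is assumed to be available via the project’s Podfile.

```swift
import Foundation
import RxSwift

enum PhotosError: Error { case noAccess }

// Hypothetical placeholders standing in for real Photos-library calls.
func albumExists(_ id: String) -> Bool { true }
func createAlbum() -> String { "new-album-id" }

// Single: emits exactly one .success (a .next plus a .completed) or an .error.
func saveCollage(named name: String) -> Single<String> {
    Single.create { single in
        if !name.isEmpty {
            single(.success("asset-id-for-\(name)"))
        } else {
            single(.error(PhotosError.noAccess))
        }
        return Disposables.create()
    }
}

// Maybe: like Single, but may complete without emitting a value.
func open(albumId: String) -> Maybe<String> {
    Maybe.create { maybe in
        if albumExists(albumId) {
            maybe(.completed)              // album still there, nothing to emit
        } else {
            maybe(.success(createAlbum())) // emit the new ID to persist
        }
        return Disposables.create()
    }
}

// Completable: only .completed or .error, no values at all.
func saveDocument() -> Completable {
    Completable.create { completable in
        completable(.completed)            // pretend the save succeeded
        return Disposables.create()
    }
}
```

You can also reach each of these traits from a plain Observable via asSingle(), asMaybe() or ignoreElements(), as described above.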
Give the app one last triumphant run, build up a nice photo collage and save it to disk. Don’t forget to check your Photos app for the result! With that, you’ve completed Section 1 of this book — congratulations! You are not a young Padawan anymore, but an experienced RxSwift Jedi. However, don’t be tempted to take on the Dark Side just yet. You will get to battle networking, thread switching, and error handling soon enough! Before that, you must continue your training and learn about one of the most powerful aspects of RxSwift. In Section 2, “Operators and Best Practices,” operators will allow you to take your Observable superpowers to a whole new level! Challenges Before you move on to the next section, there are two challenges waiting for you. You will once again create a custom Observable — but this time with a little twist. Challenge 1: It’s only logical to use a Single You’ve probably noticed that you didn’t gain much by using .asSingle() when saving a photo to the Camera Roll. The observable sequence already emits at most one element! Well, you are right about that, but the point was to provide a gentle introduction to .asSingle(). Now you can improve the code on your own in this very challenge. Open PhotoWriter.swift and change the return type of save(_) to Single<String>. Then replace Observable.create with Single.create. This should clear most errors. There is one last thing to take care of: Observable.create receives an observer as a parameter, so you can emit multiple values and/or terminating events. Single.create receives as a parameter a closure, which you can use only once to emit either a .success(T) or an .error(E) value. Complete the conversion yourself, and remember that the parameter is a closure, not an observer object, so you call it like this: single(.success(id)). Challenge 2: Add custom observable to present alerts Open MainViewController.swift and scroll towards the bottom of the file.
Find the showMessage(_:description:) method that came with the starter project. The method shows an alert onscreen and runs a callback when the user taps the Close button to dismiss the alert. That does sound quite similar to what you’ve already done for PHPhotoLibrary.performChanges(_), doesn’t it? To complete this challenge, code the following:

- Add an extension method to UIViewController that presents an alert onscreen with a given title and message and returns a Completable.
- Add a Close button to allow the user to close the alert.
- Dismiss the alert controller when the subscription is disposed, so that you don’t have any dangling alerts.

In the end, use the new completable to present the alert from within showMessage(_:description:). As always, if you run into trouble, or are curious to see the provided solution, you can check the completed project and challenge code in the projects folder for this chapter. You can peek in there anyway, but do give it your best shot first!
https://www.raywenderlich.com/books/rxswift-reactive-programming-with-swift/v4.0/chapters/4-observables-subjects-in-practice
Imagine you have a scenario wherein you want your COM server to be called from JavaScript, which invokes an async method. Upon completion of the async method, you would like to notify the JavaScript via a callback function that the task has been completed. Doing the async work is simple, but the callback to notify the JavaScript that the task has been completed is tricky. What can we do in such scenarios? If you are implementing an ActiveX control, you need to create a thread pool when the object initializes or when it receives the first request. Then, each send request would be dispatched to the thread pool and return back to the caller. For example, if we refer to the diagram below, the ActiveX control's equivalent of PrintAsync() would be:

PrintAsync(CallbackObject)
{
    DoWork(CallbackObject); // queue request to thread pool; work is not actually done in this function!
    return;                 // returns immediately
}

DoWork() queues the request to the thread pool and does not do the work. The thread pool thread then does the work and uses the completion object to signal back to the original caller that the request is done. An example of code that interops with JavaScript and does async work looks like this:

public void MethodCalledByJavaScript()
{
    Dispatcher dispatcher = Dispatcher.CurrentDispatcher;
    ThreadPool.QueueUserWorkItem((state) =>
    {
        // do async work here
        dispatcher.Invoke(new Action(() =>
        {
            // call back into JavaScript here
        }));
    });
}

When JavaScript calls into MethodCalledByJavaScript(), it returns immediately and allows subsequent JavaScript code to continue executing, while at the same time spawning a new thread to do the async work. When the code in the thread finishes executing, it uses the dispatcher to queue a work item on the UI thread to call back into JavaScript. More info on the Dispatcher class: the Dispatcher class (System.Windows.Threading namespace) is available in the following .NET versions: .NET 3.0 (base version 2.0)
http://blogs.msdn.com/b/dsnotes/archive/2013/10/01/how-to-get-javascript-working-using-callback-with-com.aspx
KWayland::Client::EventQueue #include <event_queue.h> Detailed Description Wrapper class for the wl_event_queue interface. The EventQueue is needed if a different thread is used for the connection. If the interface wrappers are held in a different thread than the connection thread, an EventQueue is needed for the thread which holds the interface wrappers. A common example is a dedicated connection thread while the interface wrappers are created in the main thread. All interface wrappers are set up to support the EventQueue in the most convenient way. The EventQueue needs only to be passed to the Registry. The EventQueue will then be passed to all created wrappers through the tree. The EventQueue can be used as a drop-in replacement for any wl_event_queue pointer as it provides matching cast operators. Definition at line 55 of file event_queue.h. Member Function Documentation Adds the proxy to the EventQueue. Definition at line 76 of file event_queue.cpp. Adds the proxy of type wl_interface (e.g. wl_compositor) to the EventQueue. Definition at line 135 of file event_queue.h. Adds the proxy wrapper class of type T referencing the wl_interface to the EventQueue. Definition at line 142 of file event_queue.h. Destroys the data held by this EventQueue, so that the instance can be set up with a new wl_event_queue interface once there is a new connection available. Definition at line 41 of file event_queue.cpp. Dispatches all pending events on the EventQueue. Definition at line 67 of file event_queue.cpp. Returns true if the EventQueue is set up. Definition at line 47 of file event_queue.cpp. Releases the wl_event_queue interface. After the interface has been released, the EventQueue instance is no longer valid and can be set up with another wl_event_queue interface. Definition at line 35 of file event_queue.cpp. Creates the event queue for the display. Note: this will not automatically set up the dispatcher. When using this method, one needs to ensure that dispatch gets invoked whenever new events need to be dispatched.
Definition at line 52 of file event_queue.cpp. Creates the event queue for the connection. This method also connects the eventsRead signal of the ConnectionThread to the dispatch method. Events will be automatically dispatched without the need to call dispatch manually. Definition at line 61 of file event_queue.cpp.
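The dedicated-connection-thread pattern from the Detailed Description can be sketched roughly as follows. This is an illustrative approximation based on the KWayland client API (ConnectionThread, Registry, the `connected` signal and `initConnection()`), not verbatim project code; treat the exact calls and signal names as assumptions to verify against the headers.

```cpp
#include <KWayland/Client/connection_thread.h>
#include <KWayland/Client/event_queue.h>
#include <KWayland/Client/registry.h>
#include <QThread>

using namespace KWayland::Client;

void setupWayland(QObject *parent)
{
    auto *connection = new ConnectionThread;
    auto *thread = new QThread(parent);
    connection->moveToThread(thread);   // the connection lives in its own thread
    thread->start();

    QObject::connect(connection, &ConnectionThread::connected,
                     [connection, parent] {
        // Created in the main thread, which holds the interface wrappers.
        auto *queue = new EventQueue(parent);
        queue->setup(connection);       // hooks eventsRead up to dispatch()

        // Only the Registry needs the queue; wrappers it creates inherit it.
        auto *registry = new Registry(parent);
        registry->setEventQueue(queue);
        registry->create(connection);
        registry->setup();
    });
    connection->initConnection();
}
```

Because `setup(ConnectionThread*)` wires the `eventsRead` signal to `dispatch()`, no manual dispatching is needed in this variant.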
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1EventQueue.html
Flexible Grayscale OLED Hookup Guide Introduction We’ve been hearing about and seeing flexible screens at CES for years now, but now you can finally get one in your hands and bend a screen! You can’t fold it like paper, but you can bend it. The interface is 3-wire SPI and each pixel requires 4 bits. This means you will need a processor capable of storing a local array of 80*32 = 2,560 bytes in order to truly flex (pun intended) the power of the grayscale display. Basic 8-bit Arduinos can communicate with the display and do things like text, but graphics will be tricky. Required Materials To get started, you’ll need a microcontroller to control everything in your project, such as:

- SparkFun ESP32 Thing (DEV-13907)
- Raspberry Pi 3 (DEV-13825)
- Particle Photon (Headers) (WRL-13774)

Tools You will need a soldering iron, solder, and general soldering accessories. Suggested Reading: Serial Peripheral Interface (SPI), Logic Levels. Hardware Overview Let’s look over a few characteristics of the flexible OLED so we know a bit more about how it behaves. Pins The characteristics of the available pins on the flexible OLED breakout are outlined in the table below. Hardware Assembly The flexible OLED is fairly simple to connect to your microcontroller. You can either solder headers to the OLED breakout or solder wires straight to the breakout pins. If you’ve not soldered headers to a board, make sure to check out our tutorial here on soldering. Hookup Table The onboard buffer means that you can hook the display straight up to 3.3V or 5V logic without the need for any logic conversion circuitry. Therefore, you can just connect the pins directly to the I/O on your microcontroller. Simply connect the pins to their assignments in the below table and we’ll be ready to go. These pins can be changed in software later if you need to use any of them to control other parts of your project. Next, you’ll need to download and install the SparkFun Flexible Grayscale OLED Breakout library.
You can do this through the Arduino library manager or by manually installing it after clicking the button below.

Download the SparkFun Flexible Grayscale Library

Before we get started developing a sketch, let’s look at the available functions of the library.

- void command(uint8_t c); — Sends the display a command byte.
- void data(uint8_t c); — Sends the display a data byte.
- void setColumnAddress(uint8_t add); — Sets the column address.
- void setPageAddress(uint8_t add); — Sets the page address.

LCD Drawing Functions

- void clearDisplay(uint8_t mode); — Clears the screen buffer in the OLED’s memory. Pass in mode = CLEAR_DISPLAY to clear the memory of the display, mode = CLEAR_BUFFER to clear the display buffer, or mode = CLEAR_ALL to clear both.
- void display(void); — Moves display memory to the screen to draw the image in memory.
- void setCursor(uint8_t x, uint8_t y); — Set cursor position to (x, y).
- void invert(boolean inv); — Turns every black pixel white, turns all white pixels black.
- void setContrast(uint8_t contrast); — Changes the contrast value anywhere between 0 and 255.
- void flipVertical(boolean flip); — Does a vertical mirror of the screen.
- void flipHorizontal(boolean flip); — Does a horizontal mirror of the screen.
- void setPixel(uint8_t x, uint8_t y); — Draw a pixel using the current fore color and current draw mode in the screen buffer’s x,y position.
- void setPixel(uint8_t x, uint8_t y, uint8_t color, uint8_t mode); — Draw a pixel with NORM or XOR draw mode in the screen buffer’s x,y position.
- void line(uint8_t x0, uint8_t y0, uint8_t x1, uint8_t y1); — Draw line using current fore color and current draw mode from x0,y0 to x1,y1 of the screen buffer.
- void line(uint8_t x0, uint8_t y0, uint8_t x1, uint8_t y1, uint8_t color, uint8_t mode); — Draw line using color and mode from x0,y0 to x1,y1 of the screen buffer.
- void lineH(uint8_t x, uint8_t y, uint8_t width); — Draw horizontal line using current fore color and current draw mode from x,y to x+width,y of the screen buffer.
- void lineH(uint8_t x, uint8_t y, uint8_t width, uint8_t color, uint8_t mode); — Draw horizontal line using color and mode from x,y to x+width,y of the screen buffer.
- void lineV(uint8_t x, uint8_t y, uint8_t height); — Draw vertical line using current fore color and current draw mode from x,y to x,y+height of the screen buffer.
- void lineV(uint8_t x, uint8_t y, uint8_t height, uint8_t color, uint8_t mode); — Draw vertical line using color and mode from x,y to x,y+height of the screen buffer.
- void rect(uint8_t x, uint8_t y, uint8_t width, uint8_t height); — Draw rectangle using current fore color and current draw mode from x,y to x+width,y+height of the screen buffer.
- void rect(uint8_t x, uint8_t y, uint8_t width, uint8_t height, uint8_t color, uint8_t mode); — Draw rectangle using color and mode from x,y to x+width,y+height of the screen buffer.
- void rectFill(uint8_t x, uint8_t y, uint8_t width, uint8_t height); — Draw filled rectangle using current fore color and current draw mode from x,y to x+width,y+height of the screen buffer.
- void rectFill(uint8_t x, uint8_t y, uint8_t width, uint8_t height, uint8_t color, uint8_t mode); — Draw filled rectangle using color and mode from x,y to x+width,y+height of the screen buffer.
- void circle(uint8_t x, uint8_t y, uint8_t radius); — Draw circle with radius using current fore color and current draw mode with center at x,y of the screen buffer.
- void circle(uint8_t x, uint8_t y, uint8_t radius, uint8_t color, uint8_t mode); — Draw circle with radius using color and mode with center at x,y of the screen buffer.
- void circleFill(uint8_t x0, uint8_t y0, uint8_t radius); — Draw filled circle with radius using current fore color and current draw mode with center at x,y of the screen buffer.
- void circleFill(uint8_t x0, uint8_t y0, uint8_t radius, uint8_t color, uint8_t mode); — Draw filled circle with radius using color and mode with center at x,y of the screen buffer.
- void drawChar(uint8_t x, uint8_t y, uint8_t c); — Draws a character at position (x, y).
- void drawChar(uint8_t x, uint8_t y, uint8_t c, uint8_t color, uint8_t mode); — Draws a character using a color and mode at position (x, y).
- void drawBitmap(uint8_t * bitArray); — Draws a preloaded bitmap.
- uint16_t getDisplayWidth(void); — Gets the width of the OLED.
- uint16_t getDisplayHeight(void); — Gets the height of the OLED.
- void setDisplayWidth(uint16_t); — Sets the width of the OLED.
- void setDisplayHeight(uint16_t); — Sets the height of the OLED.
- void setColor(uint8_t color); — Sets the color of the OLED.
- void setDrawMode(uint8_t mode); — Sets the drawing mode of the OLED.
- uint8_t *getScreenBuffer(void); — Returns a pointer to the screen buffer.

Font Settings

- uint8_t getFontWidth(void); — Gets the current font width as a byte.
- uint8_t getFontHeight(void); — Gets the current font height as a byte.
- uint8_t getTotalFonts(void); — Return the total number of fonts loaded into the MicroOLED’s flash memory.
- uint8_t getFontType(void); — Returns the font type number of the current font (Font types shown below).
- boolean setFontType(uint8_t type); — Sets the font type (Font types shown below).
- uint8_t getFontStartChar(void); — Returns the starting ASCII character of the current font.
- uint8_t getFontTotalChar(void); — Return the total characters of the current font.

Rotation and Scrolling

The following functions will scroll the screen in the various specified directions of each function. Start and stop indicate the range of rows/columns that will be scrolling.

- void scrollRight(uint8_t start, uint8_t stop);
- void scrollLeft(uint8_t start, uint8_t stop);
- void scrollUp(uint8_t start, uint8_t stop);
- void scrollStop(void);

Example Code Now that we have our library installed, we can get started playing around with our examples to learn more about how the screen behaves. Example 1 - Text To get started, open up Example1_Text under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example1_Text.
Upon opening this example, you’ll notice that our void loop() is empty. This is because we only need to draw the image to our OLED one time in order for it to stay there. We first initialize our screen with CS connected to pin 10 and RES connected to pin 9, with the line SSD1320 flexibleOLED(10, 9);. Then in our setup loop we use flexibleOLED.begin(160, 32); to begin a display that is 160x32 pixels. We then use the following lines to first clear the display, set the font, the location where we’d like to type, and the text we’d like to type. The final line tells the display to show what we’ve just written to the display buffer.

language:c
flexibleOLED.clearDisplay(); //Clear display and buffer
flexibleOLED.setFontType(1); //Large font
flexibleOLED.setCursor(28, 12);
flexibleOLED.print("Hello World!");
flexibleOLED.setFontType(0); //Small font
flexibleOLED.setCursor(52, 0);
flexibleOLED.print("8:45:03 AM");
flexibleOLED.display();

This will write the text to the display when our microcontroller runs the setup loop and leave it there; the output should look something like the below image. Example 2 - Graphics To get started, open up Example2_Graphics under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example2_Graphics. This example will draw a grayscale image from pre-converted image data, in this case, an image of a macaque. In order to convert your own images to a format readable by the OLED, check out this neat Python script for converting Bitmaps to arrays for the grayscale OLED. First you’ll need a *.bmp file that is 160 pixels wide and 32 pixels tall. Once you have your *.bmp, generating an image array is as simple as running the python script from the command line like below. (Make sure you put in the proper file paths)

language:bash
python <path to bmptoarray.py> <pathway to image.bmp>

The output will be placed in the output.txt file in the same directory as bmptoarray.py, and will look something like the below image.
This large array must then be copied into a *.h file in the same folder as your sketch. Go ahead and name it something memorable, my sketch folder looks like this, with my array sitting in the TestImage.h file Then, in our sketch, we’ll need to make sure we include the file containing this array, so make sure to put an #include "TestImage.h" at the top of your sketch. Also make sure you comment out any other image files that may be included. If you haven’t gone ahead and replaced the macaque with your own image, the output should look like the below image, otherwise, it should obviously look like whatever image you’ve chosen to display on your OLED. Example 3 - Lines To get started, open up Example3_Lines under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example3_Lines. This example draws a few shapes on the display. Once again, it simply writes the image to the display and leaves it there. Play around with the parameters that draw each rectangle and circle to determine how this affects their positioning and size. The stock example code should look something like the below image. Example 4 - BMP Eater In this example, we’ll feed bitmaps directly into the screen using a serial terminal like Tera Term. If you’re not too familiar with using a terminal, check out our overview of serial terminal basics and download Tera Term. This is useful because we don’t have to convert our bitmaps into a prog_mem or anything. To get started we’ll first have to make sure our microcontroller can properly parse the serial input into pixel data. Go ahead and open up Example4_BMP_Eater under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example4_BMP_Eater. Once you have this open and uploaded, check out the getBitmap() function, which checks the structure of what we’re sending over serial and then writes it to the screen. Now that our microcontroller is ready for data, it’s time to open up Tera Term and start sending data. 
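If you are curious what the conversion script is doing conceptually, here is a small illustrative sketch (not the actual bmptoarray.py) showing how two 4-bit grayscale pixels pack into one byte — which is exactly why a 160x32 frame needs 160*32/2 = 2,560 bytes of buffer.

```python
# Illustrative 4-bit packing sketch; the real script also parses the BMP header.
WIDTH, HEIGHT = 160, 32

def pack_4bit(pixels):
    """Pack 8-bit grayscale values (0-255) into 4-bit pairs, high nibble first."""
    assert len(pixels) % 2 == 0
    out = bytearray()
    for i in range(0, len(pixels), 2):
        hi = pixels[i] >> 4       # quantize 0-255 down to 0-15
        lo = pixels[i + 1] >> 4
        out.append((hi << 4) | lo)
    return bytes(out)

frame = [0x80] * (WIDTH * HEIGHT)     # mid-gray test frame
packed = pack_4bit(frame)
print(len(packed))                    # 2560
```

Each output byte holds two neighboring pixels, so the packed frame is half the pixel count in size.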
A new instance of Tera Term should prompt you to enter the COM port. Be sure to enter the port that your microcontroller is on. Once we’ve done this, we’ll need to change the baud rate of our terminal to match the microcontroller’s baud of 57600. Do this by going to Setup > Serial Port… and select 57600 from the drop-down menu. Now that we’ve opened a connection to the OLED we can start sending images to it. To do this, all we need to do is go to File > Send File… and select the bitmap we want to send to our screen. Go to Documents > Arduino > Libraries > SparkFun_Flexible_Grayscale_OLED_Breakout > Examples > Example4_BMP_Eater. This folder should contain a few bitmaps. If you got fancy and created your own bitmap in the second example, you can load that up as well. Select your file, make sure you’re sending it in a binary format (the image below shows the binary box checked). Uploading the image should show the display refresh line by line as it gets new data to chew on. The process looks something like the below GIF. Example 5 - All The Text To get started, open up Example5_AllTheText under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example5_AllTheText. This example displays all of the text capabilities of the OLED. Take a look at the text example functions below to see how each one writes the corresponding text. 
language:c
void smallTextExample()
{
  printTitle("Small text", 0);

  flexibleOLED.setFontType(0); //Small text
  byte thisFontHeight = flexibleOLED.getFontHeight();

  flexibleOLED.clearDisplay(); //Clear display RAM and local display buffer

  flexibleOLED.setCursor(0, thisFontHeight * 3);
  flexibleOLED.print("ABCDEFGHIJKLMNOPQRSTUVWXYZ");
  flexibleOLED.setCursor(0, thisFontHeight * 2);
  flexibleOLED.print("abcdefghijklmnopqrstuvwxyz");
  flexibleOLED.setCursor(0, thisFontHeight * 1);
  flexibleOLED.print("1234567890!@#$%^&*(),.<>/?");
  flexibleOLED.setCursor(0, thisFontHeight * 0);
  flexibleOLED.print(";:'\"[]{}-=_+|\\~`");

  flexibleOLED.display();
  delay(2000);
}

Changing the type of text is simply a matter of using setFontType() and changing the font used by the screen. Also notice how we must use different cursor positions for our lines of text to prevent them from overlapping each other.

language:c
void largeTextExample()
{
  printTitle("Large text", 0);

  flexibleOLED.setFontType(1); //Larger text
  byte theDisplayHeight = flexibleOLED.getDisplayHeight();
  byte thisFontHeight = flexibleOLED.getFontHeight();

  flexibleOLED.clearDisplay(); //Clear display RAM and local display buffer

  flexibleOLED.setCursor(0, theDisplayHeight - (thisFontHeight * 1));
  flexibleOLED.print("ABCDEFGHIJKLMNOPQ");
  flexibleOLED.setCursor(0, theDisplayHeight - (thisFontHeight * 2));
  flexibleOLED.print("abcdefghij1234567");

  flexibleOLED.display();
  delay(2000);
}

Uploading this example should yield an output on your screen similar to the one shown in the image below. Example 6 - Pong This next example will play us a nice little game of fake pong. To get started, open up Example6_Pong under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example6_Pong. The meat and potatoes of this pong example is contained in the shapeExample() function, shown below.

language:c
void shapeExample()
{
  printTitle("Shapes!", 0);

  // Silly pong demo. It takes a lot of work to fake pong...
  int paddleW = 3;  // Paddle width
  int paddleH = 15; // Paddle height

  // Paddle 0 (left) position coordinates
  int paddle0_Y = (flexibleOLED.getDisplayHeight() / 2) - (paddleH / 2);
  int paddle0_X = 2;
  // Paddle 1 (right) position coordinates
  int paddle1_Y = (flexibleOLED.getDisplayHeight() / 2) - (paddleH / 2);
  int paddle1_X = flexibleOLED.getDisplayWidth() - 3 - paddleW;
  int ball_rad = 2; // Ball radius
  // Ball position coordinates
  int ball_X = paddle0_X + paddleW + ball_rad;
  int ball_Y = random(1 + ball_rad, flexibleOLED.getDisplayHeight() - ball_rad);

  // ... (velocity setup, animation loop header and ball-movement code omitted) ...

    // Change paddle 0's direction if it hit top/bottom
    if ((paddle0_Y <= 1) || (paddle0_Y > flexibleOLED.getDisplayHeight() - 2 - paddleH))
    {
      paddle0Velocity = -paddle0Velocity;
    }
    // Change paddle 1's direction if it hit top/bottom
    if ((paddle1_Y <= 1) || (paddle1_Y > flexibleOLED.getDisplayHeight() - 2 - paddleH))
    {
      paddle1Velocity = -paddle1Velocity;
    }

    // Draw the Pong Field
    flexibleOLED.clearDisplay(CLEAR_BUFFER); //Save time. Only clear the local buffer.
    // Draw an outline of the screen:
    flexibleOLED.rect(0, 0, flexibleOLED.getDisplayWidth() - 1, flexibleOLED.getDisplayHeight());
    // Draw the center line
    flexibleOLED.rectFill(flexibleOLED.getDisplayWidth() / 2 - 1, 0, 2, flexibleOLED.getDisplayHeight());
    // Draw the Paddles:
    flexibleOLED.rectFill(paddle0_X, paddle0_Y, paddleW, paddleH);
    flexibleOLED.rectFill(paddle1_X, paddle1_Y, paddleW, paddleH);
    // Draw the ball:
    flexibleOLED.circle(ball_X, ball_Y, ball_rad);
    // Actually draw everything on the screen:
    flexibleOLED.display();
    //delay(25); // Delay for visibility
  }
  delay(1000);
}

Most of this function is simply math to move the paddles and ball around the screen and check for collisions. The actual drawing of the objects is executed in the last few lines of the function, right before the flexibleOLED.display() call. The shapeExample() function is called repeatedly in our void loop() to progress the positions of the Pong pieces. The OLED should look something like the below GIF with this code uploaded.
Example 7 - Logo To get started, open up Example7_Logo under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example7_Logo. This example simply shows us how to display what was already in the OLED’s buffer. All we have to do is initialize the screen without clearing the buffer, give the flexibleOLED.display() command, and the OLED will show the SparkFun logo. It’ll look similar to the image below. Example 8 - Noise Drawing To get started, open up Example8_NoiseDrawing under File > Examples > SparkFun Flexible Grayscale OLED Breakout > Example8_NoiseDrawing. This example writes noise directly to the display and also to the buffer. However, the buffer is incapable of grayscale so we will only get black and white noise when calling the writeToBuffer() function. We can see upon closer inspection that each of these functions writes noise from the A0 and A1 pins, so make sure these aren’t connected to anything. The output will look something like the below image. Notice how the noise from the buffer is only in black and white. Resources and Going Further Now that you’ve successfully got your flexible grayscale OLED display up and running, it’s time to incorporate it into your own project! For more information, check out the resources below: - Schematic (PDF) - Eagle Files (ZIP) - Flexible Grayscale OLED Display Datasheet - SSD1320 Command Set - SSD1320 Protocol Datasheet - GitHub - Product Repo - Hardware design files - Arduino Library - Library and example code. - BMP to Array - Python script to convert bitmaps to an Arduino prog_mem array when outputting grayscale images to OLEDs. - SFE Product Showcase: SparkFun Flexible Grayscale OLED Breakout Need some inspiration for your next project? Check out some of these related tutorials:
https://learn.sparkfun.com/tutorials/flexible-grayscale-oled-hookup-guide
from django.urls import reverse

def myview(request):
    return HttpResponseRedirect(reverse('arch-summary', args=[1945]))

You can also pass kwargs instead of args. For example:

>>> reverse('admin:app_list', kwargs={'app_label': 'auth'})
'/admin/auth/'

args and kwargs cannot be passed to reverse() at the same time. If no match can be made, reverse() raises a NoReverseMatch exception. The reverse() function can reverse a large variety of regular expression patterns for URLs, but not every possible one. The main restriction at the moment is that the pattern cannot contain alternative choices using the vertical bar ("|") character. You can quite happily use such patterns for matching against incoming URLs and sending them off to views, but you cannot reverse such patterns. The current_app argument allows you to provide a hint to the resolver indicating the application to which the currently executing view belongs. This current_app argument is used as a hint to resolve application namespaces into URLs on specific application instances, according to the namespaced URL resolution strategy. The urlconf argument is the URLconf module containing the URL patterns to use for reversing. By default, the root URLconf for the current thread is used.

Note: The string returned by reverse() is already urlquoted. For example:

>>> reverse('cities', args=['Orléans'])
'.../Orl%C3%A9ans/'

Applying further encoding (such as urllib.parse.quote()) to the output of reverse() may produce undesirable results.

reverse_lazy()

A lazily evaluated version of reverse(). It is useful for when you need to use a URL reversal before your project’s URLConf is loaded. Some common cases where this function is necessary are:

- providing a reversed URL as the url attribute of a generic class-based view.
- providing a reversed URL to a decorator (such as the login_url argument for the django.contrib.auth.decorators.permission_required() decorator).
- providing a reversed URL as a default value for a parameter in a function’s signature.

resolve()

The resolve() function can be used for resolving URL paths to the corresponding view functions. It has the following signature:

resolve(path, urlconf=None)

path is the URL path you want to resolve. As with reverse(), you don’t need to worry about the urlconf parameter. The function returns a ResolverMatch object that allows you to access various metadata about the resolved URL. If the URL does not resolve, the function raises a Resolver404 exception (a subclass of Http404).

class ResolverMatch

route — The route of the matching URL pattern.

namespaces — The list of individual namespace components in the full instance namespace for the URL pattern that matches the URL, i.e., if the namespace is foo:bar, then namespaces will be ['foo', 'bar'].

A ResolverMatch object can then be interrogated to provide information about the URL pattern that matches a URL:

# Resolve a URL
match = resolve('/some/path/')
# Print the URL pattern that matches the URL
print(match.url_name)

A ResolverMatch object can also be assigned to a triple:

func, args, kwargs = resolve('/some/path/')

One possible use of resolve() would be to test whether a view would raise a Http404 error before redirecting to it:

from urllib.parse import urlparse
from django.urls import resolve
from django.http import Http404, HttpResponseRedirect

def myview(request):
    next = request.META.get('HTTP_REFERER', None) or '/'
    response = HttpResponseRedirect(next)
    # modify the request and response as required, e.g. change locale
    # and set corresponding locale cookie
    view, args, kwargs = resolve(urlparse(next)[2])
    kwargs['request'] = request
    try:
        view(*args, **kwargs)
    except Http404:
        return HttpResponseRedirect('/')
    return response

get_script_prefix()

Normally, you should always use reverse() to define URLs within your application. However, if your application constructs part of the URL hierarchy itself, you may occasionally need to generate URLs.
In that case, you need to be able to find the base URL of the Django project within its Web server (normally, reverse() takes care of this for you). You can then call get_script_prefix(), which will return the script prefix portion of the URL for your Django project. If your Django project is at the root of its web server, this is always "/".
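Conceptually, resolve() is the inverse of reverse(): it matches an incoming path against the URL patterns and hands back the metadata of the winning pattern. A toy sketch of that matching loop (again, an illustration only — the urlpatterns table and toy_resolve are hypothetical, and Django's real ResolverMatch carries many more attributes):

```python
import re
from types import SimpleNamespace

# Hypothetical patterns table: (compiled regex, view name).
urlpatterns = [
    (re.compile(r"^/articles/(?P<year>[0-9]{4})/$"), "arch-summary"),
    (re.compile(r"^/cities/(?P<name>[^/]+)/$"), "cities"),
]

def toy_resolve(path):
    # Try each pattern in order; first match wins.
    for pattern, name in urlpatterns:
        m = pattern.match(path)
        if m:
            # Django returns a ResolverMatch (func, args, kwargs,
            # url_name, route, namespaces, ...); we return a stub.
            return SimpleNamespace(url_name=name, kwargs=m.groupdict())
    # Django raises Resolver404 (a subclass of Http404) here.
    raise LookupError("no pattern matched " + path)

match = toy_resolve("/articles/1945/")
print(match.url_name, match.kwargs)  # arch-summary {'year': '1945'}
```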
https://docs.djangoproject.com/en/3.1/ref/urlresolvers/
Thomas, Can it wait just a few more days? Thanks, dims --- Thomas Sandholm <sandholm@mcs.anl.gov> wrote: > Now when 1.1 has been labeled I would like to merge the fixes I put into > the dynamic_deserilization branch into the trunk so that they get included > into the next release. > I have done a successful merge in my workspace, but I wanted to give you a > heads up too with things that have been merged before committing. > > -dynamic deserialization support described at: > > -SOAP Header dirty flag bug fix > -Added support for getting default namespace in XMLUtils getNamespace, > getFullQNameFromString > -Fix for correct namespace generation for types in Java to wrapped WSDL > generation > -xsd:union support > -added meta data generation to enum emitter > > If I don't hear any objections I will merge it into the trunk tomorrow. > > Thanks, > Thomas > > > At 06:03 PM 6/8/2003 -0400, Glen Daniels wrote: > > >I did drop a label ("axis1_1"), but I did not cut an actual branch, > >figuring we can do that from the label if/when necessary. > > > >--Glen > > > > > -----Original Message----- > > > From: Davanum Srinivas [mailto:dims@yahoo.com] > > > Sent: Sunday, June 08, 2003 4:01 PM > > > To: axis-dev@ws.apache.org > > > Subject: Re: 1.1 pre-release (please test) > > > > > > > > > Thanks). > > > > > > > > Thomas Sandholm <sandholm@mcs.anl.gov> > The Globus Project(tm) <> > Ph: 630-252-1682, Fax: 630-252-1997 > Argonne National Laboratory > ===== Davanum Srinivas - __________________________________ Do you Yahoo!? The New Yahoo! Search - Faster. Easier. Bingo.
http://mail-archives.apache.org/mod_mbox/axis-java-dev/200306.mbox/%3C20030610162046.12482.qmail@web12809.mail.yahoo.com%3E
The first line contains an integer t, the number of test cases, followed by t lines each containing an integer K. For each K, output the smallest palindrome larger than K.

Input:
2
808
2133

Output:
818
2222

Warning: large Input/Output data, be careful with certain languages

Can anyone tell me what is wrong with my output in the last submission. It works fine for the sample output on my machine

same holds good for me too. I suspect some error in the checking methodology. If not share the input for which the program failed with the mail being sent.

This is fairly straight forward thing to do with java

Hi codechef folks ! Why is the above program giving NZEC (non-zero exit code) error? It is working fine on my machine. May I know where is the bug in the above program? Thanks for looking into it !

hey this code giving out put in my compiler better check it suggest me if any mistakes

@ADMIN my code is workin fine in my computer and its workin fine for abt 400 digits and more.... but wat is the problem i cant understand im getin runtime error...... plz reply....

@ravi The input can have numbers up to (and including) 1000000 digits; 400 digits is too little.

i think the longest number you can store in c ,c++ is in "unsigned long int" and it doesn't store upto 1000000 digits any clue????

i m having some problem in understanding the last line of the problem.. just tell me is 100 a palindrome. since hundred can be read as 00100.

No, 00100 has leading zeroes and is not a number.

my code working fine in my computer i am using devcpp but when i submit here it gave me compile error.. can any one tell me why

Please take a look at for a sample solution in c or c++.
i checked for my 1000000 and my running time s fast but not able to get the ans my code s

#include <vector>
#include <list>
#include <map>
#include <set>
#include <queue>
#include <deque>
#include <stack>
#include <bitset>
#include <algorithm>
#include <functional>
#include <numeric>
#include <utility>
#include <sstream>
#include <iostream>
#include <iomanip>
#include <cstdio>
#include <cmath>
#include <cstdlib>
#include <ctime>
using namespace std;
#define fore(i,a,b) for(int i=a;i<b;i++)
#define vi vector<int>
#define sz size()
#define all(a) a.begin(),a.end()
int main(){
    int t,flag=0;
    long v,k;
    vector<long> res;
    cin>>t;
    fore(z,0,t) {
        cin>>v;
        if(v/10<1) cout<<v<<endl;
        else {
            k=v;
            flag=0;
            while(flag==0) {
                k++;
                stringstream ss;
                string cal;
                ss<<k;
                cal=ss.str();
                for(int i=0;i<=(cal.sz+1)/2;i++) {
                    if(cal[i]==cal[cal.sz-i-1]) {flag=1;}
                    else { flag=0; break; }
                }
                if(flag==1) {res.push_back(k); }
            }
        }
    }
    fore(i,0,res.sz) cout<<res[i]<<endl;
    return 0;
}

It says the number can be 1000000 digits long, not just 1000000 which is only 7 digits long.
can anyone plz tell me wts wrong with this code a compiler error is being shown

import java.io.*;
import java.lang.Math;

class Enter{
    int rev(int num) {
        int s=0;
        int len=String.valueOf(num).trim().length()-1;
        while(num>0) {
            int r=num % 10;
            num=num/10;
            s=s+r*(int)Math.pow(10,len);
            len--;
        }
        return s;
    }
    void find(int arr[]) {
        for(int i=0;i<arr.length;i++){
            int num1,flag=0,temp=arr[i];
            temp++;
            do{
                num1=rev(temp);
                int l=String.valueOf(num1).trim().length();
                if(temp==num1&&l!=1){
                    System.out.println();
                    System.out.println(temp);
                    flag=1;
                }
                else{
                    temp++;
                }
            }while(flag==0);
        }
    }
    void input() throws java.lang.Exception {
        BufferedReader br= new BufferedReader(new InputStreamReader(System.in));
        String str=br.readLine();
        int tot=Integer.parseInt(str);
        int[] arr;
        arr = new int[tot];
        for(int i=0;i<tot;i++){
            System.out.println();
            str=br.readLine();
            arr[i]=Integer.parseInt(str);
        }
        find(arr);
    }
}

class Palindrom1{
    public static void main (String[] args) throws java.lang.Exception{
        Enter obj= new Enter();
        obj.input();
    }
}

Please read the FAQ and the Sample Solutions

@admin wats the limit for test cases ?

9 seconds

actually i talkin about max value for first line of input.

10

@admin, max value for first line of input is not specified in the problem.. thank u.

i submitted solution for this problem,it shows wrong answer,can it possible to tell,is it any logical error or error in format of output wat i m printing..ie spacing or new line.. soln id is 110546, 110544

So long as your output is in accordance with what is specified in the output format, there should be no problems. There are no issues with newlines.

thank you,finally i got where i m wrong.

admin, can you tell me what is the compilation error my submission is giving. It compiles fine on VC++

You have submitted a blank file. That is the reason why it is giving a compile error.

i have written my program in java ...but after submitting its showing following compilation error....but its wrking fine on my pc...smbdy plzz help....
sources/Main.java:9: class Palindrome is public, should be declared in a file named Palindrome.java public class Palindrome { ^ 1 error Read the FAQ. can u tell me for what input am i getting wrong answer??? soln id : [120718] no problem...got it... @admin. I tried my code on all types of inputs. It works fine in my sys. I checked for values having 1000000 digits too. But its not getting accepted here. It says wrong answer. Atleast can you tell me which case my code is failing. Thank you. I am printing the output as an when I process a test case, should I print the results after processing all test cases? Thanks No, printing them the way you are doing is fine. You might want to take at corner cases. Large test cases need not necessarily be the correct way to test your program. Got it admin. Minor bug. I am happy now. Thanks anyway. i am getting a wrong answer all the time. could you tell me the the common places where mistakes are made in this problem. i've tried single digit numbers, all 9s and a few random numbers. my approach is to copy the first half of the string to the last (after reversing). if the resulting number is smaller than the orig no. i add 1 to the first half of the number and copy again. also, should the new line be printed at the end of the last input? @ admin can u please suggest the problem in my latest code ... i have tried different variants but each time i am getting the same Runtime Error(OTHER). Also, what kind of runtime is the code giving, to be exact ? how can a input of 1000000 digits can be taken in language c can anyone help????????? A char array? bt can an array be declared to hold 1000000 elements it shows error on my compiler array size too large will it wrk on cc compiler???????????????????? bt can an array be declared to hold 1000000 elements it shows error on my compiler "array size too large"will it wrk on codechef compiler???????????????????? Declare it outside your function, as a global entity. i think my code is correct ... 
why it is showing Wrong Answer please help It is showing Wrong Answer because it is not correct, so you shouldn't think it is :P someone please send me some test cases id: dalchand@iitkgp.ac.in ... with answer :) @stephen no it is correct... :P can someone tell me about the input/output format... i'm new at codechef :( I'm taking iput as: //MAX = 500000 scanf("%d",&u); c = getchar(); while( u > 0){ n = 0; do{ c = getchar(); if(c == 'n') break; if(n < MAX-1) A[n]=c; else B[n-MAX+1]=c; n++; }while(c!='n'); if(n < MAX){ A[n]=' '; B[0]=' '; } else{ A[MAX-1]=' '; B[n-MAX+1]=' '; } //and puting output as printf("%s%sn",A,B); // A and B are char array each of size MAX it this the correct way? sorry ? mark is null character ' ' ' 0 ' Please stop spamming. Your code is incorrect and no one is going to mail you the test cases :) Please check out . Input is provided via stdin and you have to print the output to stdout. @Aniruddha : i submitted a solution in JAVA which is working fine in my pc. Here its giving a runtime error - NZEC. I looked at the error definition and catcheed all the errors. But still get the same error. Can you please tell me what is the exact problem with my code??? The first thing I did was test your code on an input of maximal size. It threw an exception. Surely you've tried that? Is there anyone who solved this one in java. Mine code is working fine on my computer (even with 1000000000 digits input also), but I am getting time limit exceed. @admin please check my solution :) my submission ID is 160207 ... tell me if there is any problem with input/output format.. The input / output format is mentioned in the problem statement. Please check it yourself. hii, i checked my code to many time but it is always giving wrong answere can u pls tell me where my code is wrong. my code giving correct answer on my pc. my submision id is 176543 No, it is not giving the correct answer on your pc; you can't have tried many test cases at all. 
Almost every single one I tried failed. I input 9 as a test case, and the program exits before I have a chance to enter other test cases. I enter 100 and get 111 when the answer is of course 101. I enter 99, and the program quits again.. @admin:My code is working fine in my PC having gcc And also giving output for 10 test cases in 0.004 secs but is showing time limit exceed after submission ..Plz tell me where I am making mistake.I am new to codechef >Help! What test cases did you try? The most obvious test case to try that may take some time is the largest possible, ie something with 1000000 digits. You obviously haven't tried that :P hello , can you please tell what is the problem in my solution . my solution ID is 183285. thanks for your time . @stephen :Thanks for ur help I got my mistake. what is the range of t ?? Getting a wrong answer for the submission 185356. Anyone willing to tell me why ? @admin,@stephen,@Aniruddha can u please tell what is the problem with code submission id 187096 and 187105 . Running fine in my computer even for more larger values than the specified limit. No it isn't. Your code doesn't even work on an input of 100 digits, let alone a million digits. sorry my mistake ... its million digits not million number ... extremely sorry @admin, @stephen Any guesses to what may be wrong with the submission 185356 ? @stephen,@Aniruddha please check i have changed code but this time system is giving TLE...can u tell me problem is Slower I/O or my Logic. So that i can work on that part. submission id 187507 submission id is 187589 @stephen,@Aniruddha PLEASE IGNORE MY ABOVE TWO POST AND please check i have changed code but this time system is giving TLE...can u tell me problem is Slower I/O or my Logic. So that i can work on that part. 
submission id is 187589 PLEASE IGNORE MY ABOVE TWO POST AND Whats wrong in this code for "The next palindrome" My code is #include<stdio.h> #define max 10 int check(long int val); int main() {int t,i,j; long int k[max]; scanf("%d",&t); for(i=0;i<t;i++) scanf("%ld",&k[i]); {k[i]++; if(k[i]<1000000) {while(!check(k[i])) k[i]++; printf("%ldn",k[i]);} } return 0; int check(long int val) {int c=0; long int a=val; while(a!=0) {c=c*10+a%10; a=a/10; if(c==val) return 1; else Its running on my pc...but i am not able to submit this code....help me Please don't post code. Read the FAQ. For one thing, your code will obviously fail when there are more than 10 test cases. It also has no hope on an input with 1000000 digits. #include<stdio.h>#include<string.h>int main(void){ int t,num,n,j,len; char str[100],str1[100]; scanf("%d",&t); while(t){ scanf("%d",&num); for(num++;;num++){ n=num; j=0; while(n != 0){ str[j++]=(n%10)+48; n/=10; } str[j]=' '; len=strlen(str); //strcpy(str1,str); //strrev(str1); strlen(str); j=0; while(len--){ str1[j++]=str[len]; } str1[j]=' '; if(strcmp(str,str1)==0){ printf("%sn",str); t--; break; } } }}can u temme wat's wrong in da code..i m gettn da answere but its not gettn accepted here.. can any one let me know how to take input such a number having million digits? if i take it in array,then how should i proceed? i'm new to codechef.....plz help me plz help me i would love to read more from you on this Can anyone find the error in my program............. Somebody who has solved the problem please tell me in which test case my program is not giving the right output. What about the single digit numbers?? Do we have to simply print a number just greater to them(For 5,it will be 6 and For 9,it will be 11)?? i'm getting correct answer according to following program but its not getting accepted here. 
Can somebody find out d mistake in the following: #include<iostream>using namespace std;int main(){ long long int m,n,i,r,j,s,t,a[500]; cin>>t; for(i=1;i<=t;i++) { cin>>n; n=n+1; for(j=1;j<=100000;j++) {m=n; s=0; while(n!=0) {r=n%10; s=(s*10)+r; n=n/10;} if(m==s) { a[i]=s; j=100001;} m++; n=m;} } for(i=1;i<=t;i++) cout<<a[i]<<endl; return 0; } 100000 digits means value or 1000000 digits? how is it possible to deal with such large values, do we need to store it as character array then do calculations?? It says digits; it means digits. im getting a runtime (other) error, can you please point out what that error may be, its working fine on codeblocks on windows using default compile which i think is g++ here my code: #include<iostream>#include<string>#include<stdlib.h>using namespace std;void add(string &,int ,int ,unsigned long int&);int main(){ unsigned long long int k; string palin; unsigned long int digits; unsigned long int counter=0; unsigned int temp; int consec_tries=0; char c,d; //char no[50]={' '}; cin>>k; while(k--){ consec_tries=0; counter=0; cin>>palin; digits=palin.length(); add(palin,1,digits-1-counter,digits); while( counter < digits/2 ){ c=palin[digits-counter-1]; d=palin[counter]; if( c==d ) { consec_tries++; counter++; } else if( c<d ){ //palin+= palin[counter] - palin[digit-counter-1]; palin.at(digits-counter-1)=palin.at(counter); counter++; consec_tries=-100; } else {//c>d //palin+=(palin[counter]+10- palin[digits-1-counter])* temp; palin[digits-counter-1]=palin[counter]; counter++; add(palin,1,(digits-counter-1),digits); //add 1 at digits-counter pos ie next position consec_tries=-100; } if(consec_tries == (digits/2)) break; else if( counter==(digits/2)){ counter=0; consec_tries=0; continue; } } cout<<palin<<endl; }return 0;}void add(string &pal,int no,int pos,unsigned long int &no_digits){ int val,counter=0; char c=pal[pos]; string temp="1"; //c=cstr[pos]; if( pal.at(pos) != '9'){ pal.at(pos)=atoi(&c)+no+'0'; } else{ while(pal.at(pos)=='9' && 
counter<pal.length()){ cout<<"hell"<<endl; pal.at(pos)='0'; pos--; //since higher digit is at lower index counter++; } if( pos>-1) pal[pos]=atoi(&pal[pos])+no+'0'; else{ pal=temp.append(pal); no_digits++; } }} Well I have tried this problem out and it shows wrong answer, could you please tell me for what inputs it gives the wrong answer...... my solution id is 270161...plzz help, thanks in advance @Sarthak. well try out the cases like 99,999 etc.. and check what result u get for'em I am keeping on getting wrong answer. Pl help my submission id is 274092 I resubmitted but still gettin the wrong answer. Admin Pl help my submission code is 274109 @karthik Check your output for 9,191,1991,19991... i am submitting the following code ..... but online judge is displaying it as a wrong answer ..... its showing the correct output in my system ........ i am using dev c++ as a compiler.... please help me out ........ #include<iostream>int main(void){long long int a,n,i=0,c; int t,j; std::cin>>t; for(j=1;j<=t;j++) { std::cin>>n; if(n>1000000) { break; } while(1) {i=0; n++; c=n; while(c!=0) { a=c%10; i=i*10+a; c=c/10; } if(n==i) { std::cout<<n; break; } } if(j<t) std::cout<<"n"; } return 0;} @PRATHMESH SWAROOP: Please do not post your code here. Also please read the problem statement atleast 3 times, it will help you. It is "1000000 digits" not the number "1000000". So think about how to take input as long as 1000000 digits first and then try to find the next palindrome. Hint: You might have a look at fgets(). in my computer ri8 input n output is coming.....but here says wrong. can u PLZZZZZ tell me what is wrong in my code(in C)... { int s,p; scanf("%d",&p); for(s=0;s<p;s++) { long int n,i; scanf("%ld",&n); for(i=n+1;i<1000000;i++) long int m,r=0,k; m=i; while(i!=0) k=i%10; r=10*r+k; i=i/10; if(m==r) {printf("%ldn",m);break;} i=m; return(0); You didn't even read the comment directly above yours.. thanx.....4 reply. My solution is running fine on all inputs locally. 
I checked that each number (string) I produce is a palindrome. By my algorithm, I am pretty certain to make sure that I return the smalles number possilbe. Also (which I overlookd) now I add 1 before I run the algorithm in order to get a larger solution. Still.. wrong answer. Everything runs fine with my test cases. 30 10010000001100000119919333999339993339999119991911199199998769871908008800712341011029999999998948100010010001 101100110011001001211929334433340433343399999999919202200298899892111811880081331111111100110001999949100110110101 Which seems fine. Still no luck. Any ideas? My code in Java crashes with NZEC. I tried giving a million digit number as an input, and it ran on my system(in 9 minutes altough) , i tried all other cases i could think of and they ran as expected. So, it should be able to run on the judge and should be declared as TLE but its reporting a runtime error in the first place. admin please give some directions. Last Submission ID: 332042 Yes, that definitely won't work, since new lines have an \r character in them (see the FAQ). #include <iostream>#include <vector>#include <cmath>using namespace std;int palin (int);int main(){ int m; int o; vector <int> u; // TO STORE INPUT NO. cin>>o; for (int e = 0; e < o; e++){ cin>>m; u.push_back(m); } for (int h = 0 ; h < u.size(); h++){ int t = u[h] ; // No which will be checked for palindrome int b = palin(t);//Calling function 'palin' which'll give next palindrome No. cout<<endl<<b<<endl; } return 0;}/*....................................................... 
FUNCTION TO CHECK NO IS PALINDROME OR NOT ...................................................*/ int palin(int i){ int t = i+1; int u = i+1; int j = 0; int k = 0; /* LOOP TO CALUCATE NO OF DIGITS */ while (u > 0){ u = u/10; k++; } /*LOOP TO GET REVERSE NO.*/ while (t > 0){ k--; int a = t%10; t = t/10; j = j + a*pow(10, k); } if ((i+1) == j){ return j; } else palin(i+1); } This code is working according question on my PC, but here it is giving wrong answer,please specify the inputs for which it's not working properly. Did you read how big K can be? @Admin, when i submitted my soln, i got internal error occured in the system error . when does that happen ? My code is working perfectly fine in turbo c compiler. but when m submitting the soln it gives runtime error. plz help me how can i know and rectify my error. iwant to know where actually error lies Accepted Solutions Wrong??!! My code is wworking fine with all test cases i tried. But I'm getting wrong answer again and again. So I took 2 solutions from the accpeted ones and the gave the same input set to both. Interestingly both generated different outputs!!!! And worst of all for the sample inputs given above they are not producing the above mentioned outputs!!! PLZ CHECK!!!!!!!!! It only took me about 30 seconds of testing random inputs on your code to find one that didn't work. I entered 9798 and you print 9879, which isn't a palindrome. As for accepted solutions being wrong; it is possible an accepted solution can give a wrong answer to a particular test case, but it is also easily possible you are testing them incorrectly. What solutions do you think are incorrect, and on what input? Well I have tried this problem out and it shows wrong answer, could you please tell me for what inputs it gives the wrong answer...... plzz help, thanks in advance,.... my solution is>>>...... One immediate mistake is that your program doesn't allow enough space to read in a string with a million digits. 
@Stephen i am declaring a character array f[1000000] so why is it not providing enough space????????help!!!!! An array of length 1000000 can hold a string of at most 999999 digits. That's the first thing anyone learns in C :P sry for my blunder!!!!!!!!!!!! still wrong answer plz help!!!!!! plzzz help admin still i am gettting a wrong answer... hello admin and stephen could you please tell the problem with my code.my solution is at: showing wrong answer but seem to show results for test cases mentioned in problem and in the discussions so far.Plz help.Thanks in advance. I'm afraid you'll kick yourself when you see it :) Try reading your code, one line at a time, from the top.. yeah really kicked myself!!! thanks by the way. wht abt my query my solution is why i am getting a wrong answer????plz help stephen.........thanx in advance...... @admin: submitted by sanjaynambiar is giving wrong output , so how can this be submitted succesfully Ex: Output by his algo k value = 289882 Actual output = 290092 [Sanjaynambiar algo gives this answer] Expected 289982[ Bcz that no. is nearer to k value] I didn't checked it out for other successfull submitters, but if your script passes that type of mistake than it will always give wrong answer if someone submitted the correct solution ... Any correct solution will always be accepted. While the judge's test cases try to cover as many possibilities as possible, it is obviously impossible to test every single test case - so it is always possible that a wrong solution could be judged correct, while failing on some test cases that weren't provided. I really don't know what are your test cases, again one of your succesfull submitter fails some the basic value spandan dutta Value -- Next Palindrome 100--101 R101--201 W102--201 W103--201 W104--201 W105--201 W106--201 W107--201 W 108--201 W 109--111 R110--111 R R-Right answer Q - Wrong answer *Q--> W -- Wrong That's not the output that program produces. You must be testing it incorrectly. 
Sory for that compare the old file with new one. I have one query.. Test case 1. No of test cases 100 Value is in 10^6 digits spandan dutta time in execution real 0m5.023suser 0m4.660ssys 0m0.328s My solution time in execution real 0m4.696suser 0m3.664ssys 0m0.324s Still on codechef execution time of spandan is 0.00 and mine is 0.20. Is that i am missing something, please do let me know as i am quite curious to know my mistake . Can anyone tell me what is WRONG in this code??? At least gcc cant find any..... Please don't post your code in the comments. Read the input constraints again. I tested my code for all inputs here... It gives correct output... Bur still getting a wrong answer error.... which test case is my code failing!! somebody help!! It is easy to find test cases it fails on if you test systematically. For example, it doesn't work on an input of 9.. @admin: I checked some posted codes randomly and found following errors. if we give input 00100 output should be 101. but program was giving output 00200. if input is 876547982output should be something greater than this number, but the output was. 876545678 admin its my humble request please revise the test cases and look into all the codes again. thank you The first input you provided is not an integer, and the problem says you should enter an integer. As for the second; I have already talked about this in earlier comments. No matter what test cases are provided, it is always possible for a wrong program to be accepted. Just make sure yours is right and you will always be accepted. My program is working for all possible test cases, even for the cases for which some of the accepted programs are not working(as stated in comments) , but still i am getting wrong answer,,,please help..!!! My code is available at..."". Did you not read how big K could be? @Stephen..I converted all my variables to long still i am getting the wrong answer. Modified code available at... and thanks for your prompt response. 
Why would long help? I'll say it again, have you read how large K can be? so is there anyway by which this one can be solved without using a character array. You're going to have to store the million digits somewhere, so I suppose the answer is no, there isn't. somebody plz help me to get rid of runtime error!!!!!!!!!! #include<stdio.h>#include<string.h>int main () { char a[50][50]; int i=0,flg=1,j,m,l,t,k; scanf("%d",&t); getchar();while(t>i) { flg=1; for(j=0;(a[i][j]=getchar())!='n';j++); a[i][j]=' '; l=strlen(a[i]); if(l%2==0) m=(l/2)-1; else m=l/2; //printf("l=%d m=%d",l,m); for(j=0;j<=m;j++) { a[i][m-j]=(((a[i][m-j]-48)+flg)%10)+48; if(l%2!=0) a[i][m+j]=a[i][m-j]; else a[i][m+j+1]=a[i][m-j]; if(a[i][m-j]==48) flg=1; else flg=0; } if(flg==1) { for(k=l;k>0;k--) a[i][k+1]=a[i][k]; a[i][0]='1'; a[i][l]='1'; //a[i][l+1]=' '; } i++; } for(i=0;i<t;i++) printf("%sn",a[i]); return 0; Have you tested your code on an input with a million digits? First of all thanks to Stephen for your response :) and very sorry for such a late reply for that. This comment is a reply to your post Stephen Merriman - 1st Nov,2010 02:42:00. I understand that accepted solutions may fail for some particular test cases which the judge fails to check. But the test cases i checked included the sample input given in the problem statement. And as I reported earlier, two different accepted code produced different answers. One of the code was generatingsome strange output for the sample input 808, while other one was generating the output 8008 for 808, when it is clearly mentioned 818. please check. Thanks :) And there was a post by nikhil (nikhil - 17th Nov,2010 02:00:16.) saying my accepted solution was wrong :) actually what happened is that, I was getting all wrong answers and so I just tried submitting an accepted solution, which generated incorrect outputs as I've mentioned above. Codechef is a head ache now as I think. When ever i post a solution it comes with some error. 
I quit codechef for some day say, month then again if i try , no luck. So, I decided to check some one else answer. As I wrote the code in python and I found someone answer in python posted for this question , ( ). I compile it and get shocked. for 1 -> 11 is the next palindrome but for 2 -> 22 is the palindome next to it. Can any one tell me why? Because this is a correct answer. So, I need to check whether I am wrong or code chef is wrong. Because from my point of view 1->11, 2->11, 3->11 ?????? Neither is correct. The first palindrome after 1 is 2. If an accepted solution fails on a certain case, then that case can't be part of the judge's tests. @Stephen: Could you please help me figure out the test case for which my code is producing the wrong answer. My code is here : Thanks in advance, Shachindra A C the codechef has became a headache....dont know what wong with it /////////////using c++////////////////////// #include<iostream>using namespace std;int reverseNumber(int number){ int reversedNumber = 0; while(number != 0) { reversedNumber = (reversedNumber * 10) + (number % 10); number /= 10; } return reversedNumber;}int main(){ int t; int n; int p; int counter=0; cin>>t; int a[t]; do { cin>>n; p=reverseNumber(n); if(n==p && n<=1000000) { a[counter]=p; ++counter; } else { for(int i=0;i<=1000000;i++) { n++; p=reverseNumber(n); if(n==p && n<=1000000) { a[counter]=p; ++counter; break; } } } t=t-1; } while(t!=0); for(int i=0;i<=counter-1;i++){ cout<<a[i]<<endl;} system("pause"); } RESUlT:WRONG ANSWER I tried the same code with java :it geves RUNTIME ERROR //////////////java/////////////////// import java .util.Scanner;public class Main{int reverseNumber(int number){ int reversedNumber = 0; while(number != 0) { reversedNumber = (reversedNumber * 10) + (number % 10); number /= 10; } return reversedNumber;} public static void main(String[] args) { // TODO code application logic here Main m=new Main(); m.go(); } public void go() { Scanner scanner=new 
Scanner(System.in); int t; int n; int palin; t=scanner.nextInt(); do { n=scanner.nextInt(); palin=reverseNumber(n); if(n==palin && n<=1000000) { System.out.println(palin); } else { for(int i=0;i<1000;i++) { n++; palin=reverseNumber(n); if(n==palin && n<=1000000) { System.out.println(palin); break; } } } t=t-1; } while(t!=0); }} You should start by reading the problem statement again. Did you read how large the input can be? #!/usr/bin/perlmy $num = $ARGV[0];START:my $num_pal = reverse($num); my $count;if ( $num == $num_pal){ print "the palindrome is $num n"; $num = $num+1; $count++; if ( $count == 2) { exit; } else { goto START; }}else {$num = $num + 1 ;goto START;} Hello Admin, I have been getting the same runtime error(NZEC) for my code in python.Its running perfectly fine on my system with python2.5 and python 2.7.I have tried several times.Please check the code - 465889 and let me know the details.I have also cross checked 1 million testcases of max possible sizes with the accepted code.It's working fine.Help me please Please help i am getting an output as "Wrong Answer" while submission , but the code works fine on my system: Have checked for the following cases I/P <9 where I/P like 999 0r 99 0r Increase in digit and others please help last submmsion @vivsapru Thanks in advance @Admin Hello Please check my solution it works correctly on my system ..... Plx pass me the test case fpr which it is failing Giving runtime error, though its working fine on my end... 
#include<iostream>using namespace std;int palnext(char *);int main(){ char input[1000000]; int n; cin>>n; while(n!=0) { cin>>input; palnext(input); n--; cout<<input<<"n"; } cin.get(); cin.get(); return 0;}int palnext(char* a){ int half,temp,j=0,i=0; while(a[i]!=' ') i++; half=i/2; int flag=1,flag2=1; temp=i-1; while(j<half) { flag=1; while(a[j]>a[temp]) { flag2=0; a[temp]++; } while(a[j]<a[temp]) { if(flag) { a[temp-1]++; flag=0; } flag2=0; a[temp]--; } j++; temp--; } if(flag2) { if(i%2==0) { a[half-1]++; a[half]++; } else a[half]++; } return 0;} Hi, if someone willing to help please look at solution id 541427 considered all possible cases which came to mind.....still missing something i guess bcoz geting wrong answer I have considered all the cases(I hope so), but it still gives the wrog answer. My solution id is 548588. I hope there is not a problem in displaying the output. import java.util.Scanner;/** * * @author roger */public class Main { /** * @param args the command line arguments */ public static void main(String[] args) { // TODO code application logic here boolean flag=true;int x=0; Scanner s=new Scanner(System.in); int a=s.nextInt(); while(a>0) { String str=s.next(); int i=0; x=Integer.parseInt(str); x=x+1; str=Integer.toString(x); while(i<str.length()) { while (flag) {if(str.charAt(i)==str.charAt(str.length()-1-i)) {i++; flag=false; } else { x=x+1; str=Integer.toString(x); } } flag=true; } System.out.println(str); a--; }}}the code above is running good in my system but giving NZEC ERROR PLEASE tell mem what's wrong with this code ASAP..
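A recurring theme in the replies above is that K can have up to 1,000,000 digits, so it cannot fit in any built-in integer type of C, C++ or Java: it has to be handled as a string, and the answer built by mirroring the left half rather than by counting upward. A minimal Python sketch of that approach (an illustration only, not an official or judge-tested solution):

```python
def _increment(digits: str) -> str:
    # Add 1 to a decimal string, propagating the carry by hand so the
    # sketch also works on inputs far too long for machine integers.
    out = list(digits)
    i = len(out) - 1
    while i >= 0 and out[i] == "9":
        out[i] = "0"
        i -= 1
    if i < 0:
        return "1" + "".join(out)
    out[i] = str(int(out[i]) + 1)
    return "".join(out)

def next_palindrome(k: str) -> str:
    # Smallest palindrome strictly greater than the decimal string k
    # (k is assumed to have no leading zeros).
    n = len(k)
    half = k[: (n + 1) // 2]
    # Candidate: mirror the left half onto the right half.
    cand = half + half[: n // 2][::-1]
    if cand > k:  # equal lengths, so string order == numeric order
        return cand
    # Mirrored value is not larger: bump the left half and mirror again.
    half = _increment(half)
    if len(half) > (n + 1) // 2:
        # Carry rippled all the way (9, 99, 999, ...): answer is 10...01.
        return "1" + "0" * (n - 1) + "1"
    return half + half[: n // 2][::-1]

for k in ["808", "2133", "9", "99", "195"]:
    print(k, "->", next_palindrome(k))
```

This reproduces the sample cases (808 → 818, 2133 → 2222) and the tricky edges discussed in the thread: all-nines inputs grow by a digit (99 → 101), and the next palindrome after 1 is 2, not 11.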
https://www.codechef.com/problems/PALIN
Tuesday 7 September 2021

Paweł Marks, VirtusLab

Greetings from the Scala 3 team! We are glad to announce that Scala 3.0.2 is now officially out. As no critical bugs have been found in the previously released Scala 3.0.2-RC2, it has been promoted to 3.0.2 and is the current stable Scala version.

Recently, we have become more and more confident that we can release stable versions of the compiler frequently and regularly. So, we have decided that the blog posts should focus more on stable features available to all users in the latest release of Scala. So here, as a refresher, is a summary of the most significant changes introduced in 3.0.2.

What's new in 3.0.2

Improved insertion of semicolons in logical conditions

Scala 3's indentation-based syntax is aimed at making your code more concise and readable. As it gets broader adoption, we consistently improve its specification to eliminate corner cases that might lead to ambiguities or counterintuitive behaviours. Thanks to #12801 it is now allowed for a logical expression in an if statement or expression to continue in the following line if it starts in the same line as the if keyword, e.g.

if foo
  (bar)
then //...

can now be used instead of

if foo(bar)
then //...

If your intention is to have a block of code evaluating into a single condition, you should add a new line and indentation directly after if, e.g.

if
  val cond = foo(bar)
  cond
then //...

so code like below would NOT be valid

if val cond = foo(bar)
   cond
then //...

Towards better null safety in the type system

The compiler option -Yexplicit-nulls modifies Scala's standard type hierarchy to allow easier tracing of nullable values by performing strict checks directly at the level of the type system rather than just relying on conventions (e.g. this prevents you from writing code like val foo: Option[String] = Some(null), which would otherwise be valid Scala, although very likely to cause a NullPointerException at some further point).
After the recently introduced changes, with this option enabled the Null type becomes a subtype of Matchable instead of inheriting directly from Any, making the code below compile (this used to compile before only without strict nullability checking).

def foo(x: Matchable) = x match {
  case null => ()
}

Method search by type signature

You can now browse the documentation of Scala's API not only by names of methods but also by their type, in a Hoogle-like manner (but with Scala syntax), thanks to integration with Inkuire brought by #12375. To find methods with the desired signature, simply type into scaladoc's search bar the type you would expect them to have after eta-expansion (as if they were functions rather than methods).

Typing escape hatch for structural types

Structural types come in handy in many situations, e.g. when one wants a compromise between the safety of static typing and ease of use when dealing with dynamically changing schemas of domain data structures. They have, however, some limitations. Among others, structural typing doesn't normally play well with method overloading, because some types of reflective dispatch algorithms (including JVM reflection) might not be able to choose the overloaded method alternative with the right signature without knowing upfront the exact types of the parameters after erasure. Consider the following snippet.

class Sink[A] {
  def put(x: A): Unit = {}
}

val a = Sink[String]()
val b: { def put(x: String): Unit } = a

This code won't compile. This is because when Sink[String] gets erased to Sink[Object] (as it's seen from the JVM's perspective), the method's signature becomes put(x: Object): Unit, while for the structural type it remains unchanged as put(x: String): Unit. These would not match at run time, and therefore Sink[String] cannot be treated as a subtype of { def put(x: String): Unit }.
We might, however, try to write a better method dispatch algorithm ourselves instead of relying on the JVM's default one to make this work. To assure the compiler that we know what we're doing, we'll need to use the new Selectable.WithoutPreciseParameterTypes marker trait. Currently it's an experimental feature (introduced by #12268), so you'll be able to use it only with a snapshot or nightly version of the compiler, and you'll need to annotate all subtypes of this trait with @experimental.

import annotation.experimental

@experimental
trait MultiMethodSelectable extends Selectable.WithoutPreciseParameterTypes:
  // smartly choose the right method implementation to call
  def applyDynamic(name: String, paramTypes: Class[_]*)(args: Any*): Any = ???

@experimental
class Sink[A] extends MultiMethodSelectable:
  def put(x: A): Unit = {}

val a = new Sink[String]
val b: MultiMethodSelectable { def put(x: String): Unit } = a

This snippet will compile, as the compiler no longer performs the precise signature check for b.

Other changes

Besides that, Scala 3.0.2 introduced multiple small improvements, mainly in metaprogramming, and fixed a handful of bugs. You can see the detailed changelog on GitHub.

What's next

We have decided that it is the right time for the first minor version after the initial release of Scala 3. Together with stable version 3.0.2, we have released the first release candidate for Scala 3.1. You can already use 3.1.0-RC1 and test not only new experimental features like safer exceptions, but also Scastie embedded in Scaladoc pages, improvements in JVM bytecode generation, the possibility to configure compiler warnings, and lots of smaller improvements and fixes all across the board. You can find the full changelog for 3.1.0-RC1 on GitHub. You can expect the stable release of Scala 3.1 in the middle of October.
Contributors

Thank you to all the contributors who made the release of 3.0.2 possible 🎉

According to git shortlog -sn --no-merges 3.0.1..3.0.2 these are:

94 Martin Odersky
60 Liu Fengyun
47 Kacper Korban
28 Filip Zybała
18 Andrzej Ratajczak
17 Guillaume Martres
15 Jamie Thompson
10 bjornregnell
9 tanishiking
8 Dylan Halperin
8 Anatolii Kmetiuk
8 Tom Grigg
7 Paweł Marks
5 Som Snytt
5 changvvb
5 Michał Pałka
5 Krzysztof Romanowski
4 Aleksander Boruch-Gruszecki
4 Sébastien Doeraene
4 Nicolas Stucki
3 Phil
3 Magnolia.K
2 xuwei-k
2 Ben Plommer
2 Florian Schmaus
2 Lukas Rytz
2 Maciej Gorywoda
2 Markus Sutter
2 Roman Kotelnikov
2 Stéphane Micheloud
2 noti0na1
2 vincenzobaz
1 Ondrej Lhotak
1 KazuyaMiayshita
1 odersky
1 Julian Mendez
1 Anton Sviridov
1 GavinRay97
1 EnzeXing
1 Tomas Mikula
1 Tomasz Godzik
1 Vaastav Arora
1 Vadim Chelyshov
1 Will Sargent
1 Zofia Bartyzel
1 Dale Wijnand
1 Bjorn Regnell
1 dmitrii.naumenko
1 Adrien Piquerez
1 Meriam Lachkar
1 Martin
1 Olivier Blanvillain
1 Lorenzo Gabriele

Library authors: Join our community build

Scala 3 now has a set of widely-used community libraries that are built against every nightly Scala 3 snapshot. Join our community build to make sure that our regression suite includes your library.
https://www.scala-lang.org/blog/2021/09/07/scala-3.0.2-released.html
I am building a Python utility that will involve mapping integers to word strings, where many integers might map to the same string. From my understanding, Python interns short strings and most hard-coded strings by default, saving memory overhead as a result by keeping a "canonical" version of the string in a table. I thought that I could benefit from this by interning string values, even though string interning is built more for key hashing optimization.

I wrote a quick test that checks string equality for long strings, first with just strings stored in a list, and then strings stored in a dictionary as values. The behavior is unexpected to me:

import sys

top = 10000

non1 = []
non2 = []
for i in range(top):
    s1 = '{:010d}'.format(i)
    s2 = '{:010d}'.format(i)
    non1.append(s1)
    non2.append(s2)

same = True
for i in range(top):
    same = same and (non1[i] is non2[i])
print("non: ", same)   # prints False

del non1[:]
del non2[:]

with1 = []
with2 = []
for i in range(top):
    s1 = sys.intern('{:010d}'.format(i))
    s2 = sys.intern('{:010d}'.format(i))
    with1.append(s1)
    with2.append(s2)

same = True
for i in range(top):
    same = same and (with1[i] is with2[i])
print("with: ", same)  # prints True

###############################

non_dict = {}
non_dict[1] = "this is a long string"
non_dict[2] = "this is another long string"
non_dict[3] = "this is a long string"
non_dict[4] = "this is another long string"

with_dict = {}
with_dict[1] = sys.intern("this is a long string")
with_dict[2] = sys.intern("this is another long string")
with_dict[3] = sys.intern("this is a long string")
with_dict[4] = sys.intern("this is another long string")

print("non: ", non_dict[1] is non_dict[3] and non_dict[2] is non_dict[4])      # prints True ???
print("with: ", with_dict[1] is with_dict[3] and with_dict[2] is with_dict[4]) # prints True

I thought that the non-dict checks would result in a "False" print-out, but I was clearly mistaken.
Would anyone know what is happening, and whether string interning would yield any benefits at all in my case? I could have many, many more keys than distinct values if I consolidate data from several input texts, so I am searching for a way to save memory. (Maybe I will have to use a database, but that is outside the scope of this question.) Thank you in advance!

One of the optimizations performed by the bytecode compiler, similar to but distinct from interning, is that it will use the same object for equal constants in the same code block. The string literals here:

non_dict = {}
non_dict[1] = "this is a long string"
non_dict[2] = "this is another long string"
non_dict[3] = "this is a long string"
non_dict[4] = "this is another long string"

are in the same code block, so equal strings end up represented by the same string object.
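The answer above can be checked directly. The sketch below (CPython-specific: constant collapsing and the behavior of `is` on equal strings are implementation details, not language guarantees) shows that equal literals compiled into one code object share storage, strings built at run time do not, and sys.intern restores sharing, which is exactly the memory benefit the question is after when many keys map to few values:

```python
import sys

def literals_share():
    # Equal string constants in the same code object are collapsed
    # by CPython's compiler into a single object.
    a = "this is a long string"
    b = "this is a long string"
    return a is b

def runtime_strings_share():
    # Strings constructed at run time are distinct objects, even when equal.
    a = "-".join(["this", "is", "a", "long", "string"])
    b = "-".join(["this", "is", "a", "long", "string"])
    return a is b

def interned_share():
    # sys.intern() maps equal strings back to one canonical object.
    a = sys.intern("-".join(["this", "is", "a", "long", "string"]))
    b = sys.intern("-".join(["this", "is", "a", "long", "string"]))
    return a is b

print(literals_share(), runtime_strings_share(), interned_share())
```

On CPython this prints True False True: the dictionary test in the question passed not because of interning, but because all four values were literals in one code block.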
http://jakzaprogramowac.pl/pytanie/59482,in-python-why-do-separate-dictionary-string-values-pass-quot-in-quot-equality-checks-string-interning-experiment
#if defined(__x86_64__)

#include <architecture/i386/asm_help.h>

/*
 * _ctx_start((void *func)(int arg1, ..., argn),
 *            int arg1, ..., argn, ucontext_t *ucp)
 *
 * %rdi - func
 * %rsi - arg1
 * %rdx - arg2
 * %rcx - arg3
 * %r8  - arg4
 * %r9  - arg5
 * WRONG!
 * (8*(n-6))(%rsp)   - argn
 * (8*(n + 1))(%rsp) - ucp, %rbp setup to point here (base of stack)
 */
TEXT
LABEL(__ctx_start)
	popq	%rax		/* accounted for in makecontext() */
	/* makecontext will simulate 6 parameters at least */
	/* Or it could just set these in the mcontext... */
	popq	%rdi
	popq	%rsi
	popq	%rdx
	popq	%rcx
	popq	%r8
	popq	%r9
	callq	*%rax		/* call start function */
	movq	%r12, %rsp	/*
				 * setup stack for completion routine;
				 * ucp is now at top of stack
				 */
	movq	(%rsp), %rdi
	CALL_EXTERN(__ctx_done)
	/* should never return */
	int	$5		/* trap */
#endif /* __x86_64__ */
http://opensource.apple.com/source/Libc/Libc-594.9.4/x86_64/gen/_ctx_start.S
This chapter discusses the Object Type Translator (OTT), which is used to map database object types and named collection types to C structs for use in OCI applications. This chapter contains these topics:

What Is the Object Type Translator?

Using OTT with OCI Applications

The OTT (Object Type Translator) assists in the development of C language applications that make use of user-defined types in an Oracle server. With SQL CREATE TYPE statements, you can create object types. The definitions of these types are stored in the database, and can be used in the creation of database tables. Once these tables are populated, an OCI application can access the objects stored in them. OTT simplifies that access by automatically generating appropriate struct declarations. In OCI, the application also must call an initialization function generated by OTT. In addition to creating structs that represent stored data types, OTT also generates parallel indicator structs that indicate whether an object type or its fields are NULL.

The Object Type Translator (OTT) converts database definitions of object types and named collection types into C struct declarations that can be included in an OCI application. You must explicitly invoke OTT to translate database types to C representations. On most operating systems, OTT is invoked on the command line. It takes as input an intype file, and it generates an outtype file, one or more C header files, and an optional implementation file. The following is an example of a command that invokes OTT:

ott userid=scott intype=demoin.typ outtype=demoout.typ code=c hfile=demo.h\
initfile=demov.c

This command causes OTT to connect to the database with user name 'scott'. You are prompted for the password. The implementation file (demov.c) contains the function to initialize the type version table with information about the user-defined types translated. Later sections of this chapter describe each of these parameters in more detail.
Sample demoin.typ file:

CASE=LOWER
TYPE emptype

Sample demoout.typ file:

CASE = LOWER
TYPE SCOTT.EMPTYPE AS emptype
VERSION = "$8.0"
HFILE = demo.h

In this example, the demoin.typ file contains the type to be translated, preceded by TYPE (for example, TYPE emptype). The structure of the outtype file is similar to the intype file, with the addition of information obtained by OTT. Once OTT has completed the translation, the header file contains a C struct representation of each type specified in the intype file, and a NULL indicator struct corresponding to each type. For example, if the employee type listed in the intype file was defined as

CREATE TYPE emptype AS OBJECT (
  name VARCHAR2(30),
  empno NUMBER,
  deptno NUMBER,
  hiredate DATE,
  salary NUMBER
);

the header file generated by OTT (demo.h) includes, among other items, the following declarations:

struct emptype
{
  OCIString * name;
  OCINumber empno;
  OCINumber deptno;
  OCIDate hiredate;
  OCINumber salary;
};
typedef struct emptype emptype;

struct emptype_ind
{
  OCIInd _atomic;
  OCIInd name;
  OCIInd empno;
  OCIInd deptno;
  OCIInd hiredate;
  OCIInd salary;
};
typedef struct emptype_ind emptype_ind;

A sample implementation file, demov.c, produced by this command contains:

#ifndef OCI_ORACLE
#include <oci.h>
#endif

sword demov(OCIEnv *env, OCIError *err)
{
  sword status = OCITypeVTInit(env, err);
  if (status == OCI_SUCCESS)
    status = OCITypeVTInsert(env, err, "SCOTT", 5, "EMPTYPE", 7, "$8.0", 4);
  return status;
}
See Also:For more information about these types, see "OTT Data Type Mappings" The following sections describe these aspects of using the OTT: Creating Types in the Database The remaining sections of the chapter discuss the use of the OTT with OCI, the CREATE TYPEstatement, see Oracle Database SQL Language Reference The next step is to invoke OTT. OTT parameters can be specified on the command line, or in a file called a configuration file. Certain parameters can also be specified in the intype file. If a parameter is specified in more than one place, its value on the command line takes:"The OTT Command Line" A configuration file is a text file that contains OTT parameters. Each non-blank line in the file contains one parameter, with its associated value or values. If more than one parameter is put on a line, only the first one is used. No whitespace can. For example, on Solaris, the file specification is $ORACLE_HOME/precomp/admin/ottcfg.cfg. See your operating system-specific documentation for further information. The intype file gives a list of user defined types for OTT to translate. The parameters CASE, HFILE, INITFUNC, and INITFILE can appear in the intype file. See Also:"The Intype File" On most operating systems, OTT is invoked on the command line. You can specify the input and output files, and the database connection information, among other things. Consult your operating system-specific documentation to see how to invoke OTT. The following is an example of an OTT invocation from the command line: ott userid=bren intype=demoin.typ outtype=demoout.typ code=c \ hfile=demo.h initfile=demov.c Note:No spaces are permitted around the equals sign (=). The following sections describe the elements of the command line used in this example. See Also:For a detailed discussion of the various OTT command line options, see "OTT Reference" Specifies the database connection information that OTT uses. 
In "OTT Command Line Invocation Example", OTT attempts to connect with user name 'bren' and is then prompted for the password.

The intype parameter specifies the name of the intype file that is used. In "OTT Command Line Invocation Example", the intype file is demoin.typ.

The outtype parameter specifies the name of the outtype file. In "OTT Command Line Invocation Example", the name of the outtype file is specified as demoout.typ.

Note: If the file specified by the outtype keyword exists, it is overwritten when OTT runs. If the name of the outtype file is the same as the name of the intype file, the outtype information overwrites the intype file.

The code parameter specifies the target language for the translation. The following options are available:

C (equivalent to ANSI_C)
ANSI_C (for ANSI C)
KR_C (for Kernighan & Ritchie C)

There is currently no default option, so this parameter is required. Struct declarations are identical in both C dialects; only the style of the generated function declarations differs.

The hfile parameter specifies the header file into which the generated structs are written. In "OTT Command Line Invocation Example", the generated structs are stored in a file called demo.h.

Note: If the file specified by the hfile keyword exists, it is overwritten when OTT runs.

The initfile parameter specifies the name of the C source file into which the type initialization function is to be written.

Note: If the file specified by the initfile keyword exists, it is overwritten when OTT runs.

When running OTT, the intype file tells OTT which database types should be translated, and it can also control the naming of the generated structs. The intype file can be a user-created file, or it can be the outtype file of a previous invocation of OTT. A CASE=LOWER entry requests lowercase C identifiers. However, this CASE option is only applied to those identifiers that are not explicitly mentioned in the intype file. Thus, employee and ADDRESS would always result in C structures employee and ADDRESS, respectively. The members of these structures would be named in lowercase. If AS is not used to translate a type or attribute name, the database name of the type or attribute is used as the C identifier name, except that the CASE option is observed, and any characters that cannot be mapped to a legal C identifier character are replaced by an underscore.

The OTT may need to translate additional types that are not listed in the intype file. This is because the declarations of the types listed there can use other types, which OTT must also translate to generate complete declarations.
If you specify FALSE as the value of the TRANSITIVE parameter, then OTT does not generate types that are not specified in the intype file.

A normal case-insensitive SQL identifier can be spelled in any combination of upper and lowercase. Quoted identifiers are required when a name is an OTT-reserved word, for example, TYPE "CASE". Therefore, when a name is quoted, the quoted name must be in uppercase if the SQL identifier was created in a case-insensitive manner, for example, CREATE TYPE Case. If an OTT-reserved word is used to refer to the name of a SQL identifier but is not quoted, OTT reports a syntax error in the intype file.

See Also: For a more detailed specification of the structure of the intype file and the available options, see "Structure of the Intype File"

When OTT generates a C struct from a database type, the struct contains one element corresponding to each attribute of the object type. The data types of the attributes are mapped to C types that can represent Oracle's object data types. The data types found in Oracle include a set of predefined, primitive types, and provide for the creation of user-defined types, such as object types and collections.

See Also: The indicator struct (struct emptype_ind) is explained in the section "Null Indicator Structs"

The data types in the struct declarations (OCIString, OCINumber, OCIDate, OCIInd) are used here to map the data types of the object type attributes. The NUMBER data type of the empno attribute maps to the OCINumber data type, for example. These data types can also be used as the types of bind and define variables.

This section describes the mappings of Oracle object attribute types to C types generated by OTT. The following section, "OTT Type Mapping Example", includes examples of many of these different mappings. Table 15-1 lists the mappings from types that you can use as attributes to object data types that are generated by OTT.

Note: For REF, varray, and nested table types, OTT generates a typedef.
The type declared in the typedef is then used as the type of the data member in the struct declaration. For examples, see the next section, "OTT Type Mapping Example". The Oracle C data types to which OTT maps non-object database attribute types are structures which, except for OCIDate, are opaque.

The following example demonstrates the various type mappings created by OTT. Given the following database types:

CREATE TYPE my_varray AS VARRAY(5) of integer;
CREATE TYPE object_type AS OBJECT (object_name VARCHAR2(20));
CREATE TYPE my_table AS TABLE OF object_type;
CREATE TYPE other_type AS OBJECT (object_number NUMBER);

OTT generates, among other declarations:

typedef OCIArray my_varray;   /* used in many_types */
typedef OCITable my_table;    /* used in many_types */
typedef OCIRef other_type_ref;

struct object_type
{
  OCIString * object_name;
};

Object types that are used as attributes of a type being translated may also require translation (in the preceding example, the object_type attribute); the indicator entry for such an attribute is the NULL indicator struct (object_type_ind) corresponding to the nested object type (if TRANSITIVE=TRUE). Varrays and nested tables contain the NULL information for their elements. The data type for all other elements of a NULL indicator struct is OCIInd.

See Also: "NULL Indicator Structure" for more information about atomic nullity

For example, for a type Book_t with attributes title and author, the NULL indicator struct is:

struct Book_t_ind
{
  OCIInd _atomic;
  OCIInd title;
  OCIInd author;
};

Note that the NULL indicator struct corresponding to the author attribute can be obtained from the author object itself. See OCIObjectGetInd().
Once the header file has been included, the OCI application can access and manipulate object data in the host language format. Figure 15-1 shows the steps involved in using OTT with the OCI for the simplest applications: 15-1 Using OTT with OCI Within the application, the OCI program can perform bind and define operations using program variables declared to be of types that appear data type mapping and manipulation functions that are specifically designed to work on attributes of object types and named collection types. The following are examples of the available functions: data type. other chapters of this guide. OTT generates a C initialization function if requested. The initialization function tells the environment, for each object type used in the program, which version of the type is used. You can specify a name for the initialization function when invoking OTT with the INITFUNC option, or you can an environment handle is created by an explicit OCI object call, for example, by calling OCIEnvCreate(), you must also explicitly call the initialization functions. All the initialization functions must be called for each explicitly created environment handle. This gives each handle access to all the Oracle data types used in the entire program. If an environment handle is implicitly created by embedded SQL statements, such as EXEC SQL CONTEXT USE and EXEC SQL CONNECT, the handle is initialized implicitly, and the initialization functions need not be called. This is only relevant when Pro*C/C++ is being combined with OCI applications. The following example shows an initialization function. 
Given an intype file, ex2c.typ, containing

TYPE BREN.PERSON
TYPE BREN.ADDRESS

and the command line

ott userid=bren intype=ex2c outtype=ex2co hfile=ex2ch.h initfile=ex2cv.c

OTT generates the following initialization function in ex2cv.c:

#ifndef OCI_ORACLE
#include <oci.h>
#endif

sword ex2cv(OCIEnv *env, OCIError *err)
{
  sword status = OCITypeVTInit(env, err);
  if (status == OCI_SUCCESS)
    status = OCITypeVTInsert(env, err, "BREN", 5, "PERSON", 6, "$8.0", 4);
  if (status == OCI_SUCCESS)
    status = OCITypeVTInsert(env, err, "BREN", 5, "ADDRESS", 7, "$8.0", 4);
  return status;
}

The function ex2cv() creates the type version table and inserts the types BREN.PERSON and BREN.ADDRESS.

When a header file is generated by OTT and an environment handle is explicitly created in the program, then the implementation file must also be compiled and linked into the executable. The C initialization function supplies version information about the types processed by OTT. It adds to the type-version table the name and version identifier of every OTT-processed object data type. The type-version table is used by Oracle's type manager to determine which version of a type a particular program uses. Different initialization functions generated by OTT at different times can add some of the same types to the type version table. When a type is added more than once, Oracle ensures the same version of the type is registered each time. It is the OCI programmer's responsibility to declare a function prototype for the initialization function, and to call the function.

Note: In the current release of Oracle, each type has only one version. Initialization of the type version table is required only for compatibility with future releases of Oracle.

Parameters that can appear on the OTT command line or in a CONFIG file control the behavior of OTT.
This section provides detailed information about the following topics:

Where OTT Parameters Can Appear
Structure of the Intype File
Nested Included File Generation
OTT Restriction on File Name Comparison

The following conventions are used in this chapter to describe OTT syntax:

Italic strings are variables or parameters to be supplied by the user.

Strings in UPPERCASE are entered as shown, except that case is not significant.

OTT keywords are listed in a lowercase monospaced font in examples and headings, but are printed in uppercase in text to make them more distinctive.

The syntax of the OTT command line is:

OTT [userid=connect_string]
[intype=in_file_name]
[outtype=out_file_name]
code=C|ANSI_C|KR_C
[initfile=init_file_name]
[initfunc=init_func_name]
[hfile=file_name]
[errtype=file_name]
[config=file_name]
[case=SAME|LOWER|UPPER|OPPOSITE]
[schema_names=ALWAYS|IF_NEEDED|FROM_INTYPE]
[transitive=TRUE|FALSE]
[URL=url]

Note: Generally, the order of the parameters following the OTT command does not matter. If the HFILE parameter is omitted and the intype file does not name a header file for some type, an error is reported. Therefore, it is safe to omit the HFILE parameter only if the INTYPE file was previously generated as an OTT OUTTYPE file.

If the intype file is omitted, the entire schema is translated. See the parameter descriptions in the following section for more information. The following is an example of an OTT command line statement (you are prompted for the password):

OTT userid=marc intype=in.typ outtype=out.typ code=c hfile=demo.h errtype=demo.tls case=lower

The following sections describe each of the OTT command line parameters.

The USERID parameter specifies the user name, password, and optional database name (Oracle Net Services database specification string). If the database name is omitted, the default database is assumed. The syntax of this parameter is:

userid=username/password[@db_name]

The USERID parameter is optional. If omitted, OTT automatically attempts to connect to the default database as user OPS$username, where username is the user's operating system user name. If this is the first parameter, "USERID=" can be omitted, and the password and the database name can also be omitted, as shown here:

OTT username ...

For security purposes, when you enter only the user name, you are prompted for the rest of the entry.
The INTYPE parameter specifies the name of the file from which to read the list of object type specifications. OTT translates each type in the list. The syntax for this parameter is:

intype=filename

"INTYPE=" can be omitted if USERID and INTYPE are the first two parameters, in that order, and "USERID=" is omitted. If INTYPE is not specified, all types in the user's schema are translated.

OTT username filename...

The intype file can be thought of as a makefile for type declarations. It lists the types for which C struct declarations are needed.

See Also: "Structure of the Intype File" describes the format of the intype file

If the file name on the command line or in the intype file does not include an extension, an operating system-specific extension such as "TYP" or ".typ" is added.

The OUTTYPE parameter names a file into which OTT writes type information for all the object data types it processes. This includes all types explicitly named in the intype file, and can include additional types that are translated because they are used in the declarations of other types that must be translated (if TRANSITIVE=TRUE). If this file name does not include an extension, an operating system-specific extension such as "TYP" or ".typ" is added.

The CODE parameter gives the desired host language for OTT output, which is specified as CODE=C, CODE=KR_C, or CODE=ANSI_C. "CODE=C" is equivalent to "CODE=ANSI_C". The syntax is:

CODE=C|KR_C|ANSI_C

There is no default value for this parameter; it must be supplied.

The INITFILE parameter specifies the name of the file where the OTT-generated initialization function is to be written. The initialization function is not generated if this parameter is omitted. If the file name does not include an extension, an operating system-specific extension such as "C" or ".c" is added. The syntax is:

initfile=filename

The INITFUNC parameter is used only in OCI programs; it specifies the name of the initialization function. If omitted, the name of the initialization function is derived from the name of the INITFILE. The HFILE parameter specifies the name of the include file to be generated. The HFILE specified on the command line is also used when a type not mentioned in the intype file must be generated because other types require it, and these other types are declared in two or more different files, and TRANSITIVE=TRUE.
If the file name of an HFILE on the command line or in the intype file does not include an extension, an operating system-specific extension such as "H" or ".h" is added. The syntax is:

hfile=filename

The CONFIG parameter specifies the name of the OTT configuration file, which lists commonly used parameter specifications. Parameter specifications are also read from a system configuration file in an operating system-dependent location. All remaining parameter specifications must appear on the command line, or in the intype file. The syntax is:

config=filename

Note: A CONFIG parameter is not allowed in the CONFIG file.

If the ERRTYPE parameter is supplied, OTT writes a listing of the intype file to the ERRTYPE file, along with all informational and error messages. Informational and error messages are sent to the standard output whether ERRTYPE is specified or not. Essentially, the ERRTYPE file is a copy of the intype file with error messages added. In most cases, an error message includes a pointer to the text that caused the error. If the file name of an ERRTYPE on the command line or in the INTYPE file does not include an extension, an operating system-specific extension such as "TLS" or ".tls" is added.

The CASE parameter affects the case of generated C identifiers. If CASE=SAME, the case of letters is not changed; if CASE=LOWER, all uppercase letters are converted to lowercase; if CASE=UPPER, all lowercase letters are converted to uppercase; if CASE=OPPOSITE, all uppercase letters are converted to lowercase, and vice versa. The syntax is:

CASE=[SAME|LOWER|UPPER|OPPOSITE]

This option affects only those identifiers (attributes or types) not explicitly mentioned in the intype file. Case conversion takes place after a legal identifier has been generated. Note that the case of the C struct identifier for a type specifically mentioned in the INTYPE option is the same as its case in the intype file. For example, if the intype file includes the following line:

TYPE Worker

then OTT generates

struct Worker {...};

On the other hand, if the intype file were written as

TYPE wOrKeR

OTT generates

struct wOrKeR {...};

following the case of the intype file. Case-insensitive SQL identifiers not mentioned in the intype file appear in uppercase if CASE=SAME, and in lowercase if CASE=OPPOSITE.
A SQL identifier is case-insensitive if it was not quoted when it was declared.

The SCHEMA_NAMES parameter offers control over qualifying the database name of a type from the default schema with a schema name in the outtype file. The outtype file generated by OTT contains information about the types processed by OTT, including the type names.

See Also: "SCHEMA_NAMES Usage"

The TRANSITIVE parameter takes the values TRUE (the default) or FALSE. It indicates whether type dependencies not explicitly listed in the intype file are to be translated, or not. If TRANSITIVE=TRUE is specified, then types needed by other types but not mentioned in the intype file are generated. If TRANSITIVE=FALSE is specified, then types not mentioned in the intype file are not generated, even if they are used as attribute types of other generated types.

OTT uses JDBC (Java Database Connectivity), the Java interface for connecting to the database. The default value of the URL parameter is:

URL=jdbc:oracle:oci8:@

The OCI8 driver is for client-side use with an Oracle installation. To specify the Thin driver (the Java driver for client-side use without an Oracle installation):

URL=jdbc:oracle:thin:@host:port:sid

where host is the name of the host on which the database is running, port is the port number, and sid is the Oracle SID.

OTT parameters can appear on the command line, or in a configuration file named on the command line with CONFIG=filename. In addition, parameters are also read from a default configuration file in an operating system-dependent location. This file must exist, but can be empty. Parameters in a configuration file must appear one in each line, with no whitespace on the line.

The intype and outtype files list the types translated by OTT, and provide all the information needed to determine how a type or attribute name is translated to a legal C identifier. These files contain one or more type specifications. These files can also contain the CASE, HFILE, INITFILE, and INITFUNC parameters.
See Also: For an example of a simple user-defined intype file, and of the full outtype file that OTT generates from it, see "The Outtype File" A type specification in the intype file names an object data type that is to be translated. A type specification in the outtype file names an object data type that has been translated. The following are examples of type specifications: TYPE employee TRANSLATE SALARY$ AS salary DEPTNO AS department TYPE ADDRESS TYPE PURCHASE_ORDER AS p_o The structure of a type specification is as follows, where square brackets indicate optional inputs: TYPE type_name [AS type_identifier] [VERSION [=] version_string] [HFILE [=] hfile_name] [TRANSLATE{member_name [AS identifier]}...] The syntax of type_name is: [schema_name.]type_name where schema_name is the name of the schema that owns the given object data type, and type_name is the name of the type. The default schema is that of the user running OTT. The default database is the local database. The components of a type specification are described next. type_name is the name of an Oracle object data type. type_identifier is the C identifier used to represent the type. If omitted, the default name mapping algorithm is used. See Also: "Default Name Mapping" version_string is the version string of the type that was used when the code was generated by a previous invocation of OTT. The version string is generated by OTT and written to the outtype file, which can be used as the intype file when OTT is executed again. The version string does not affect the operation of OTT, but is eventually used to select which version of the object data type should be used in the running program. hfile_name is the name of the header file in which the declarations of the corresponding struct or class appear. If hfile_name is omitted, the file named by the command-line HFILE parameter is used if a declaration is generated.
member_name is the name of an attribute (data member) that is to be translated to the following identifier. identifier is the C identifier used to represent the attribute in the user program. Identifiers can be specified in this way for any number of attributes. The default name mapping algorithm is used for the attributes that are not mentioned. An object data type may need to be translated for one of two reasons: It appears in the intype file. It is required to declare another type that must be translated, and TRANSITIVE=TRUE. If a type that is not mentioned explicitly is required by types declared in exactly one file, OTT writes the translation of the required type to the same file as the explicitly declared types that require it. If a type that is not mentioned explicitly is required by types declared in two or more different files, OTT writes the translation of the required type to the global HFILE file. Every HFILE generated by OTT #includes other necessary files, and #defines a symbol constructed from the name of the file, which can be used to determine whether the HFILE has already been included. If you invoke OTT with the following command, then it generates the two following header files. ott scott tott95i.typ outtype=tott95o.typ code=c A program can conditionally include tott95b.h without having to worry whether tott95b.h depends on the include file by using the following construct: #ifndef TOTT95B_ORACLE #include "tott95b.h" #endif Using this technique, the programmer does not need to know whether the header has already been included. The first file included by tott95b.h is a system header file, which contains type and function declarations that the Pro*C/C++ or OCI programmer can find useful. This is the only case in which OTT uses angle brackets in a #include. Next, the file tott95a.h is included. This file is included because it contains the declaration of "struct px1", which tott95b.h requires. When the user's intype file requests that type declarations be written to more than one file, OTT determines which other files each HFILE must include, and generates the necessary #includes.
Note that OTT uses quotes in this #include. When a program including tott95b.h is compiled, the search for tott95a.h begins where the source program was found, and thereafter follows an implementation-defined search rule. If tott95a.h cannot be found in this way, a complete file name (for example, an absolute path name on Linux) must be used. The outtype file generated by OTT is an input parameter to Pro*C/C++. From the point of view of Pro*C/C++, it is the Pro*C/C++ intype file. This file matches database type names to C struct names. This information is used at run-time to ensure that the correct database type is selected into the struct. If a type appears with a schema name in the outtype file (Pro*C/C++ intype file), the type is found in the named schema during program execution. If the type appears without a schema name, the type is found in the default schema to which the program connects, which can be different from the default schema OTT used. If SCHEMA_NAMES is set to FROM_INTYPE, and the intype file reads: TYPE Person TYPE david.Dept TYPE sam.Company then the Pro*C/C++ application that uses the OTT-generated structs uses the types sam.Company, david.Dept, and Person. Using Person without a schema name refers to the Person type in the schema to which the application is connected. If OTT and the application both connect to schema david, the application uses the same type (david.Person) that OTT used. If OTT connected to schema david but the application connects to schema jana, the application uses the type jana.Person. This behavior is appropriate only if the same "CREATE TYPE Person" statement has been executed in schema david and schema jana. On the other hand, consider the case where OTT connects to schema david, SCHEMA_NAMES=FROM_INTYPE is specified, and the user's intype file includes either TYPE Person or TYPE david.Person but does not mention the type david.Address, which is used as a nested object type in type david.Person.
If "TYPE david.Person" appeared in the intype file, "TYPE david.Person" and "TYPE david.Address" appear in the outtype file. If "TYPE Person" appeared in the intype file, "TYPE Person" and "TYPE Address" appear in the outtype file. If the david.Address type is embedded in several types translated by OTT, but is not explicitly mentioned in the intype file, the decision of whether to use a schema name is made the first time OTT encounters the embedded david.Address type. If, for some reason, the user wants type david.Address to have a schema name but does not want type Person to have one, the user should explicitly request TYPE david.Address in the intype file. The main point is that in the usual case in which each type is declared in a single schema, it is safest for the user to explicitly list each type, with its schema name, in the intype file. When OTT translates a database name to a legal C identifier, multibyte characters that have single-byte equivalents are converted to those single-byte equivalents. Next, the name is converted from the OTT character set to the compiler character set. The compiler character set is a single-byte character set. Name translation does not alter single-byte letters that appear in the compiler character set, so legal C identifiers are not altered. Name translation can, for example, translate accented single-byte characters such as "o" with an umlaut or "a" with an accent grave to "o" or "a", and can translate a multibyte letter to its single-byte equivalent. Name translation typically fails if the name contains multibyte characters that lack single-byte equivalents. In this case, the user must specify name translations in the intype file. OTT does not detect a naming clash caused by two or more database identifiers being mapped to the same C name, nor does it detect a naming problem where a database identifier is mapped to a C keyword. Currently, OTT determines whether two files are the same by comparing the file names provided by the user on the command line or in the intype file. One potential problem can occur when OTT needs to know whether two file names refer to the same file.
For example, if the OTT-generated file foo.h requires a type declaration written to foo1.h, and another type declaration written to /private/elias/foo1.h, OTT should generate one #include if the two files are the same, and two #includes if the files are different. In practice, though, it would conclude that the two files are different, and would generate two #includes, as follows: #ifndef FOO1_ORACLE #include "foo1.h" #endif #ifndef FOO1_ORACLE #include "/private/elias/foo1.h" #endif If foo1.h and /private/elias/foo1.h are different files, only the first one is included. If foo1.h and /private/elias/foo1.h are the same file, a redundant #include is written. Therefore, if a file is mentioned several times on the command line or in the intype file, each mention of the file should use exactly the same file name.
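To illustrate the CASE parameter described earlier, here is a sketch of the identifier mapping a case-insensitive SQL name would undergo under each setting. This is not part of OTT itself; mapIdentifier is a hypothetical helper written only for illustration:

```java
public class CaseDemo {
    // Hypothetical helper mirroring the documented CASE option for
    // identifiers that are not explicitly listed in the intype file.
    static String mapIdentifier(String name, String mode) {
        switch (mode) {
            case "LOWER":
                return name.toLowerCase();
            case "UPPER":
                return name.toUpperCase();
            case "OPPOSITE":
                // Convert uppercase to lowercase, and vice versa
                StringBuilder sb = new StringBuilder();
                for (char c : name.toCharArray()) {
                    sb.append(Character.isUpperCase(c)
                            ? Character.toLowerCase(c)
                            : Character.toUpperCase(c));
                }
                return sb.toString();
            default:
                return name; // SAME: leave the case unchanged
        }
    }

    public static void main(String[] args) {
        // An unquoted (case-insensitive) SQL identifier is stored in
        // uppercase, so CASE=SAME yields uppercase and CASE=OPPOSITE
        // yields lowercase, matching the documentation above.
        System.out.println(mapIdentifier("WORKER", "SAME"));
        System.out.println(mapIdentifier("WORKER", "OPPOSITE"));
    }
}
```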
http://docs.oracle.com/cd/E18283_01/appdev.112/e10646/oci15ott.htm
In my introduction to MicroProfile, I described a simple meeting coordination application I had written. In this article I'll take you through how to use MicroProfile to write the application. The application is available in GitHub. The purpose of this article is to go over using MicroProfile 1.0, so it will just cover the backend Java logic, and not the user interface, which is provided in GitHub. From the command line Clone the Git repository, check out the start branch, and in Eclipse import the project as an existing project. From Eclipse If you prefer to clone the Git repository from Eclipse: - In Eclipse, switch to the Git perspective. - Click Clone a Git repository from the Git Repositories view. - Enter URI - Click Next, then click Next again accepting the defaults. - From the Initial branch drop-down list, click start. - Select Import all existing Eclipse projects after clone finishes, then click Finish. - Switch to the Java EE perspective. The meetings project is automatically created in the Project Explorer view. Creating the MeetingManager class The first part of this application is a CDI-managed bean that manages the meetings. This bean is application-scoped, meaning there is only one instance. At this point the MicroProfile 1.0 release has not stated a preference for a persistence mechanism, so this example simply stores information in memory. This means that we need shared state, and an application-scoped bean ensures that state is shared by all clients. - Create a new class: right-click the meetings project, then click New > Class… - Name the class MeetingManager, then click Finish. Eclipse brings up the MeetingManager class in the Java editor. The first thing to do is make it a managed bean. - Above the class type definition add @ApplicationScoped, which is in the package javax.enterprise.context: import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class MeetingManager { - Save the file. The next step is to create a map to store the meeting information.
There could be concurrent requests, so we need to ensure a thread-safe data structure is used: - Just after the class definition add the following code: private ConcurrentMap<String, JsonObject> meetings = new ConcurrentHashMap<>(); - This code introduces three new classes, all of which need to be imported. The relevant imports are: import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import javax.json.JsonObject; - Save the file. In this example a JsonObject is being stored; this is defined by the JSON-Processing standard Java API. Many CRUD-style applications just need to take JSON input and store it away. While you might expect to convert the JSON to some form of domain object, in this application that would be overkill, so we just pass JsonObjects around. Taking this approach somewhat breaks the separation of concerns between business logic and protocol handling, so do not take this as a best practice: it's just a convenient approach for this application. The bean needs to have four different operations: add a meeting, get a specific meeting, get all the meetings, and start a meeting. Some will be very simple. To add the operations: - Add a method to add a new meeting. This stores the meeting away using the JSON id attribute. It only stores it once, so subsequent calls are essentially ignored. public void add(JsonObject meeting) { meetings.putIfAbsent(meeting.getString("id"), meeting); } - Add a method to get a meeting: public JsonObject get(String id) { return meetings.get(id); } - Add a method to list all the meetings. This is the second most complicated operation.
It essentially loops around all the values in the map created earlier and adds them to a JsonArrayBuilder to be returned: public JsonArray list() { JsonArrayBuilder results = Json.createArrayBuilder(); for (JsonObject meeting : meetings.values()) { results.add(meeting); } return results.build(); } This method introduces three new types: a JsonArray, a JsonArrayBuilder, and the Json class. A JsonArray represents an array of JsonObjects. A JsonArrayBuilder is used to create a JsonArray. The Json class provides utility methods for constructing the JSON builder classes. Add the following imports: import javax.json.JsonArray; import javax.json.JsonArrayBuilder; import javax.json.Json; Finally, you need to create the method to start a meeting. The key thing to understand about JsonObjects is that they are read-only. This means that, once they are created, they cannot be changed, so to start a meeting you need to clone the existing one. This code is hidden in a helper class provided in the Git repository. The method is shown below. It essentially creates a new JsonObjectBuilder and copies the entries across. In some cases (not used in this article) the clone may want to not have a field copied across; in that case a list of keys to ignore can be provided. public static JsonObjectBuilder createJsonFrom(JsonObject user, String ... ignoreKeys) { JsonObjectBuilder builder = Json.createObjectBuilder(); List<String> doNotCopy = Arrays.asList(ignoreKeys); for (Map.Entry<String, JsonValue> entry : user.entrySet()) { if (!doNotCopy.contains(entry.getKey())) { builder.add(entry.getKey(), entry.getValue()); } } return builder; } This method introduces two new types: a JsonValue and a JsonObjectBuilder. A JsonValue is the superclass of all the JSON types. A JsonObjectBuilder is used to create a JsonObject. The code also uses the standard Java Collections API.
Add the following imports: import javax.json.JsonValue; import javax.json.JsonObjectBuilder; import java.util.Arrays; import java.util.List; import java.util.Map; - When the application user starts a new meeting, their action provides a new JsonObject with the meeting ID and a URL for joining the meeting. To ensure the meeting is started, the meeting ID and URL are fetched from the input parameter meeting. The JsonObject for the existing meeting is fetched from memory as existingMeeting. The existing meeting is then cloned using the helper method above, and the meeting URL is added by calling add on the JsonObjectBuilder returned from the helper method. A JsonObject is then built from the builder. Finally, the meetings map has the meeting replaced, assuming that the existing meeting is still bound. This ensures thread-safe updates, so if two calls to startMeeting run at once only one will win. public void startMeeting(JsonObject meeting) { String id = meeting.getString("id"); String url = meeting.getString("meetingURL"); JsonObject existingMeeting = meetings.get(id); JsonObject updatedMeeting = MeetingsUtil.createJsonFrom(existingMeeting).add("meetingURL", url).build(); meetings.replace(id, existingMeeting, updatedMeeting); } - Save the file. Creating the MeetingService class The second part of this example is the JAX-RS service endpoint. This makes a REST API available externally via HTTP. - Create a new class called MeetingService. Eclipse brings up the MeetingService class in the Java editor. Most Java EE beans are automatically considered CDI-managed beans by default, but not JAX-RS beans. JAX-RS beans need to be annotated with a CDI scope to become CDI-managed. In this case we want the normal JAX-RS behaviour of a bean instance per request, but we need it to be CDI-managed.
This can be done using the CDI request scope: - Above the class type definition add @RequestScoped, which is in the package javax.enterprise.context: import javax.enterprise.context.RequestScoped; @RequestScoped public class MeetingService { - Save the file. The next step is to make the bean a JAX-RS bean. This is done using the JAX-RS Path annotation: - Above the class type definition add @Path, which is in the package javax.ws.rs. This takes a single value, which is the default path used to access the JAX-RS resource that this bean will manage: @Path("meetings") public class MeetingService { - This introduces the new type Path, which needs to be imported: import javax.ws.rs.Path; - Save the file. The meeting service needs two objects injected to perform its actual behaviour. The first is the MeetingManager class and the second is a JAX-RS class for managing URI processing: - Just after the class definition, add the code below. The Inject annotation tells CDI to inject the MeetingManager CDI bean. The Inject annotation is from the package javax.inject: @Inject private MeetingManager manager; - Import the Inject annotation: import javax.inject.Inject; - Next, add the code below. The Context annotation tells the JAX-RS runtime to inject the UriInfo object: @Context private UriInfo info; - The Context annotation and UriInfo interface are in the javax.ws.rs.core package, so import Context and UriInfo: import javax.ws.rs.core.Context; import javax.ws.rs.core.UriInfo; - Save the file. The JAX-RS bean needs four methods to respond to the resource requests. JAX-RS uses annotations to work out which methods map to which operations. Although it is common for JAX-RS beans to receive information as annotated content, the JAX-RS specification only requires this when using XML via JAXB.
Every JAX-RS provider supports JSON binding to Java beans so, in general, this isn't an issue, but since this is using MicroProfile we are sticking to JSON-P, which is the only required mapping of JSON to Java. - The method to add the operation is shown below. The PUT annotation says to call this method when the HTTP PUT method is called. The Consumes annotation tells it that the method expects JSON to be received. The method takes a JsonObject. It calls the MeetingManager service to add the meeting and then returns a 201 Created response with a link to the created resource. @PUT @Consumes(MediaType.APPLICATION_JSON) public Response add(JsonObject m) { manager.add(m); UriBuilder builder = info.getBaseUriBuilder(); builder.path(MeetingService.class).path(m.getString("id")); return Response.created(builder.build()).build(); } The method introduces several new classes that need to be imported: PUT and Consumes are in package javax.ws.rs; Response, MediaType, and UriBuilder are all in javax.ws.rs.core; JsonObject is in the javax.json package. Take care when importing MediaType and Response as Java contains multiple classes with these names: import javax.ws.rs.Consumes; import javax.ws.rs.PUT; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import javax.ws.rs.core.UriBuilder; import javax.json.JsonObject; - A method to list all the meetings that exist is very simple. The GET annotation is used to indicate this will be called on an HTTP GET request, and the Produces annotation indicates that JSON will be returned to the client: @GET @Produces(MediaType.APPLICATION_JSON) public JsonArray list() { return manager.list(); } This method introduces two new classes that need to be imported: both GET and Produces are in the javax.ws.rs package.
Take care when importing Produces as Java EE contains multiple classes with this name: import javax.ws.rs.GET; import javax.ws.rs.Produces; import javax.json.JsonArray; - To get the details of a single meeting we need a method that responds to a child path of the path provided on the class definition. This can be done using the Path annotation on the method. The Path value can contain either a literal or, in this case, a named entity that can then be passed in as a method parameter. The PathParam annotation is used on the method parameter to indicate which part of the path this parameter should be provided from. @GET @Path("{id}") @Produces(MediaType.APPLICATION_JSON) public JsonObject get(@PathParam("id") String id) { return manager.get(id); } This method introduces one new class that needs to be imported: PathParam is in the javax.ws.rs package. Take care when importing PathParam as Java EE contains multiple classes with this name: import javax.ws.rs.PathParam; - Finally, you need to write a method that starts the meeting. This method responds to an HTTP POST, which is indicated using the POST annotation. In this case it responds to a specific resource instance and, to ensure that the JsonObject and the path don't have conflicting information, the ID in the JsonObject is overwritten by the one from the path. In this application it is not important, but if there were a security constraint on the URL, it could be crucial. @POST @Path("{id}") @Consumes(MediaType.APPLICATION_JSON) public void startMeeting(@PathParam("id") String id, JsonObject m){ JsonObjectBuilder builder = MeetingsUtil.createJsonFrom(m); builder.add("id", id); manager.startMeeting(builder.build()); } This method introduces one new class which needs to be imported: POST is in the javax.ws.rs package: import javax.ws.rs.POST; import javax.json.JsonObjectBuilder; - Save the file.
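The MeetingManager built earlier leans on two ConcurrentMap guarantees: putIfAbsent() makes duplicate adds harmless, and the three-argument replace() only swaps a value if it is still the one that was read. A minimal, standalone sketch of those semantics (plain strings stand in for JsonObjects so it runs without a JSON-P implementation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CasDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> meetings = new ConcurrentHashMap<>();

        // putIfAbsent: the second add with the same id is ignored,
        // just like MeetingManager.add
        meetings.putIfAbsent("m1", "planning");
        meetings.putIfAbsent("m1", "retro");
        System.out.println("value=" + meetings.get("m1"));

        // replace(key, expected, updated) succeeds only if the current
        // value still equals 'expected' -- this is what makes
        // startMeeting safe under concurrent calls
        boolean first = meetings.replace("m1", "planning", "planning+url");
        boolean second = meetings.replace("m1", "planning", "other+url");
        System.out.println("first=" + first + " second=" + second);
    }
}
```

The second replace() fails because the stored value is no longer "planning", which is exactly why two concurrent startMeeting calls cannot both win.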
Creating the MeetingApplication class The last step is to tell JAX-RS that this module should be treated as a JAX-RS application. There are a few ways to do this, but the simplest is with a class: - Create a new class called MeetingApplication with the superclass javax.ws.rs.core.Application. When the class opens in the editor, add an annotation to tell the JAX-RS runtime where to dispatch requests to REST endpoints from: - Before the class definition add the @ApplicationPath annotation with a value of "/rest/": import javax.ws.rs.ApplicationPath; @ApplicationPath("/rest/") - Save the file. The application is done and you are ready to run it. You can check that you've copied the code correctly by comparing the classes against the code in GitHub on the Master branch of the repository. In Eclipse, you might see some warnings, which you can ignore. For example, the HTML problems are because Eclipse doesn't understand the AngularJS tags which are used to define the application's UI. The next article in this series uses WebSockets and CDI events to notify the client about changes to the meeting. All code is in GitHub. Continue reading Writing a simple MicroProfile application (4): Using WebSockets and CDI events
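Once deployed, the finished service can be exercised over HTTP. The host, port, and context root below are assumptions (adjust them for your server); the sketch only builds the requests with java.net.http, so it runs without a live server:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RestSketch {
    public static void main(String[] args) {
        // Assumed base URL -- depends on where the app is deployed
        String base = "http://localhost:9080/meetings/rest/meetings";
        String json = "{\"id\":\"m1\",\"title\":\"Standup\"}";

        // PUT /rest/meetings adds a meeting (maps to MeetingService.add)
        HttpRequest add = HttpRequest.newBuilder(URI.create(base))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // POST /rest/meetings/{id} starts it (maps to startMeeting)
        HttpRequest start = HttpRequest.newBuilder(URI.create(base + "/m1"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"meetingURL\":\"https://example.com/m1\"}"))
                .build();

        System.out.println(add.method() + " " + add.uri());
        System.out.println(start.method() + " " + start.uri());
    }
}
```

Sending either request with java.net.http.HttpClient would then return the 201 Created (for the PUT) described in the tutorial.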
https://developer.ibm.com/wasdev/docs/writing-simple-microprofile-application/
tisuchi left a reply on Laravel Str_slug Not Working For Unicode. str_slug or the facade version Str::slug doesn't work with non-ASCII strings. You can instead use this approach. Check this solution: tisuchi left a reply on Nested Loop In Laravel Blade @DARRENCHAN - In this case, you can easily use a relationship here. @foreach ($users as $user) @foreach ($user->friends as $friend) // Now I want to check if like this way. @if ($friend->id == $friend->user->id) @endif @endforeach @endforeach I just use the user relationship from the friend model here. Try this. tisuchi left a reply on Form Request Validation Redirect What do you mean by control? What exactly do you want to do after request validation? tisuchi left a reply on Login Via User Id. If I get you correctly, you are looking for something like this. Auth::loginUsingId(1); Here, 1 is the userId. You can even use the remember me option there too. // Login and "remember" the given user... Auth::loginUsingId(1, true); tisuchi left a reply on Nested Loop In Laravel Blade Have you tried with $loop->parent? Normally it returns the parent loop data. tisuchi left a reply on How To Retrieve Env Variable? @JLRDW - I think env() is used for Environment Configuration variables. Isn't that so? Ref: tisuchi left a reply on How To Retrieve Env Variable? Have you tried this way in your controller? echo env('MY_APP_URL'); This line should print out anywhere (controller / models / view) in your application. tisuchi left a reply on Where Do Laravel Store All The Configuration Files? Normally it stores them in the config directory. All of the configuration files for the Laravel framework are stored in the config directory. Each option is documented, so feel free to look through the files and get familiar with the options available to you. Ref: tisuchi left a reply on How To Follow The "How To Manage An Open Source Project" Series? Go to this link and click the "Subscribe to Series" button.
tisuchi left a reply on How To Store Multiple Selection Data? tisuchi left a reply on Making A Route Public For Anyone To View The route seems public already. Make sure, there is no authentication checking in the __construct() method in your ProjectsController. tisuchi left a reply on How To Store Multiple Selection Data? I have a route idea like this way- In your view, form selection should be like this- <select name="countries[]" multiple="multiple"> ... Now in your method in the controller, you should insert like this- $countries = $request->input('countries'); foreach($countries as $country){ UserCountry::create([ 'country_id' => $country, 'user_id' => Auth::id() ]); } Hope it will work for you. tisuchi left a reply on Undefined Index @PAULCATALIN97 - I haven't noticed that you have received data from the user in the wrong way. It should be like that. Can you try with this code in your controller? $skills = $request->input('skills'); foreach($skills as $skill){ Joburi::create([ 'titlu' => $request->input('titlu'), 'descriere' => $request->input('descriere'), 'salariu_estimativ' => $request->input('salariu_estimativ'), 'oras' => $request->input('oras'), 'id_skill'=>$skill, ]); } tisuchi left a reply on Undefined Index I just update your whole code here... 
<div class="dropdown-field"> <select data- @foreach($skills as $skill) <option value="{{$skill->id}}" >{{strtoupper($skill->name)}}</option> @endforeach </select> <div class="box-footer"> Here is the controller- class AdaugaJobController extends Controller { public function __construct() { $this->middleware('auth'); } public function index() { $skills = SkillsEmployee::all(); return view('adaugajob', compact('skills')); } public function store(Request $request) { $data = $request->validate([ 'titlu' => 'required|string|max:255', 'descriere' => 'required|string|max:255', 'salariu_estimativ' => 'required|string|max:255', 'oras'=> 'required', 'id_skill'=>'', ]); $skills = $data['skills']; foreach($skills as $skill){ Joburi::create([ 'titlu' => $data['titlu'], 'descriere' => $data['descriere'], 'salariu_estimativ' => $data['salariu_estimativ'], 'oras' => $data['oras'], 'id_skill'=>$skill, ]); } if($data){ return redirect()->route('adaugajob')->withSuccess('S-a incarcat cu success!'); }else{ return redirect()->route('adaugajob')->withDanger('Nu s-a incarcat! A aparut o eroare.'); } } } I think it should work. tisuchi left a reply on Undefined Index @PAULCATALIN97 - This is the wrong way. It should be like this- $skills = $data['skills']; tisuchi left a reply on Undefined Index @PAULCATALIN97 - What exact error you are facing? tisuchi left a reply on Undefined Index @PAULCATALIN97 - It depends on what you want. If you want to allow the user to choose more than one skill at a time, then you need to do like that way. I have updated my answer. Check this. tisuchi left a reply on Undefined Index I think it needs to be an array name="skills[]". <select data- tisuchi left a reply on Using Cookies To Increase Page View Count @ROOTTECH - If you use packages, then have you checked these packages? tisuchi left a reply on Why My Update Is Not Working? There could be few possibilities. 
The first thing that comes to mind is that you have to make sure you have fillable properties in your Post model. It should be like this- class Post extends Model { protected $fillable = ['title', 'details']; ... } tisuchi left a reply on Using Cookies To Increase Page View Count Have you read this? Maybe you will get some idea. tisuchi left a reply on How To Create Unique Slug In Laravel? There are lots of resources on that. You can follow this tutorial- Or you can even use a package for that. And google it, you will get more. tisuchi left a reply on How To Redirect If User Is Not Authenticated? @DEVFREY - Yeah.. True. Thanks man. :) @darrenchan you can add this suggestion to your list too... tisuchi left a reply on How To Redirect If User Is Not Authenticated? You can use the auth middleware in your route. tisuchi left a reply on Why I Am Getting Error? You need to make sure that the requested user is logged in. Change your code like this way- if(Auth::check()){ echo "Welcome " . Auth::user()->name; } tisuchi left a reply on Trying To Get Property 'id' Of Non-object Make sure the requested user is logged in. Otherwise, you can wrap your code like this way- public function __construct() { if(Auth::check()){ $this->currentuser = Auth::user()->id; } } tisuchi left a reply on Routes - Weird Behaviour It's because the browser cannot send the PATCH method. You need to use GET or POST, but in your form you can use a hidden field that contains the method request. <form method="post"> <input type="hidden" name="_method" value="put"> Etc.. </form> tisuchi left a reply on Unnecessary Posts. Hello @jeffreyway Over the last 1 hour, there are a few unnecessary posts in a different language that are neither relevant nor readable. You need to take care of such kind of spamming...
tisuchi left a reply on Laravel Telescope Not Working On A Fresh Install (wrong Path) Have you tried to set your APP_URL in the .env file? tisuchi left a reply on PHP Carbon Not Working As Expected. Normally you need to register the date column in your model. For example- class ModelName extends Model { protected $dates = ['birthday']; ... tisuchi left a reply on Expected Status Code 422 But Received 500 Since it's a 500 error, you can get error details in the storage/logs/laravel.log file. The log file might look like this: ../laravel-2019-05-14.log. You will get more details there. tisuchi left a reply on How To Use Event And Listener? Hi, I personally suggest you read the documentation first. There is a video from Jeffrey. If you are looking for more details on understanding Event and Listener, watch this video. tisuchi left a reply on Count Distinct Children Within Relationship First of all, in the event table, the session_id field is missing. You have forgotten to add that. Now, you have a few options to figure it out. Using a relationship: in that case, you can pass an anonymous function in eager loading and then do the filtration. In your relationship, you can define more logic. For example- public function events() { return $this->hasMany('App\event','session_id', 'id')->where('put your logic here')->where('Add more logic'); } tisuchi left a reply on Validation: Sometimes|Required Does it trigger any error? You may print out the validation error details to check what's required there. tisuchi left a reply on Expected Status Code 200 But Received 201. @HJORTUR17 - hmm... I think he used it unconsciously. It should be 201. tisuchi left a reply on Expected Status Code 200 But Received 201. Normally the status code of creating a record is 201. However, you are expecting 200 instead of 201.
Here is the code- $this->post($thread->path().'/athugasemdir', $reply->toArray()) ->assertStatus(200); Here, the asserted status code should be assertStatus(201) since it's posting something. tisuchi left a reply on ToArray(), WhereMonth(), WhereYear() Deprecated? You can still use whereMonth() and whereYear() in the database query. tisuchi left a reply on How To Update Multi Data Into Database What problem are you encountering now? tisuchi left a reply on BelongsToMany With 4 Tables I think you can define your relationships like this: users has many user_has_schools. user_has_schools belongs to a school. user_has_schools belongs to a role. Once you successfully define the relationships, you can easily access schools and roles from the user model by following the Nested Eager Loading rules. Ref: tisuchi left a reply on PHP Pinterest Integration. Have you checked this? Recently I have used this. I feel it's one of the best packages for PHP until now. tisuchi left a reply on Model::create($data) With Carbon Dates Fields In $data Returns Carbon Object. @AMITSHAHC - Can you show your real code? How do you store start_time? tisuchi left a reply on How To Show Message (MOTD) On All Pages? There are a few ways to do that. I think this tutorial will be easy for you to understand. tisuchi left a reply on Model::create($data) With Carbon Dates Fields In $data Returns Carbon Object. By default, the model makes Carbon instances of created_at and updated_at. However, if you need to make a Carbon instance of any other field, e.g. start_time, you need to define it in your model. Just add this in your model: protected $dates = [ 'start_time', ]; Hope it won't act differently now. tisuchi left a reply on The PUT Method Is Not Supported For This Route. Supported Methods: GET, HEAD, POST. @MUAZZAMAZAZ - Just explaining @foram's idea a bit. You just add a PUT hidden field after declaring the form tag like this- {!!
Form::model($destination, array('action' => array('[email protected]', $destination->id),'enctype'=>'multipart/form-data', 'method' => 'POST')) !!} <input type="hidden" name="_method" value="PUT"> . . . tisuchi left a reply on Can't Login To Admin Section On Server I suggest you just understand the concept first. Here are some resources how to define relationships. Hope it will help you to increase your skills.
https://laracasts.com/@TISUCHI
Hello, I'm completely new to the Allegro library so please be understanding. I want to write a class whose constructor will draw a bitmap on the display. #include "main.h" #include <allegro5/allegro.h> using namespace std; class Bouncer { public: ALLEGRO_BITMAP *bouncer; Bouncer( int a, int x, int y, int r, int g, int b ); ~Bouncer(void); }; #include "Bouncer.h" Bouncer::Bouncer( int a, int x, int y, int r, int g, int b ) { bouncer = al_create_bitmap(a,a); al_set_target_bitmap(bouncer); al_clear_to_color(al_map_rgb(r,g,b)); al_draw_bitmap(bouncer, x, y, 0 ); } Of course it is not working, because my al_draw_bitmap(bouncer, x, y, 0) should draw onto my display's backbuffer. However, I don't know how to connect my class with a display created in my "main.cpp" file ALLEGRO_DISPLAY *Window = NULL; if( !(Window = al_create_display(640, 480) ) ) { cerr << "Failed to create display!" << endl; return -1; } I tried solving it by putting an external declaration in "main.h" #include <allegro5/allegro.h> extern ALLEGRO_DISPLAY *Window; but then I get an unresolved external symbol error. Thanks for your help in advance. You could do this: ALLEGRO_STATE state; al_store_state(&state, ALLEGRO_STATE_TARGET_BITMAP); // change target bitmap, etc al_restore_state(&state); al_draw_bitmap(bouncer, x, y, 0); Or you could use al_set_target_backbuffer(al_get_current_display()). --RTFM | Follow Me on Google+ | I know 10 people You probably don't want to be doing all that in your constructor, and you don't need to make your ALLEGRO_DISPLAY* an extern one (by the way, it is an unresolved symbol because you didn't define it in any source file - using extern only declares it). Try separating out the functions. Isn't ALLEGRO_DISPLAY *Window = NULL; a definition of Window? Thanks a lot for the answers. They helped me a lot.
I guess I should stick to the idea of not creating everything in the constructor, since this simple bitmap is intended to be a character in my game for the half-term examination, so I will need lots of other methods =) Isn't ALLEGRO_DISPLAY *Window = NULL; a definition of Window? Yes it is. Did you include main.h in main.cpp? And in Bouncer.cpp? I have already deleted that source code, but as far as I remember, I didn't include "main.h" in the "main.cpp" file but I did for "bouncer.h". My programs were never complicated enough to use the "extern" specifier so I'm not used to it. For the future: I need to include such a header with extern variables in "main.cpp" as well, right? Well, you should declare the variable as extern once in a header, define it once in a source file that includes the header, and include that header whenever you need to access it. Don't know why it didn't work for you, because you had Window declared as extern in your Bouncer.cpp through includes and you had it defined in main.cpp.
https://www.allegro.cc/forums/thread/608926/938692
Each page in the virtual address space of a process is owned by the access mode that created the page. For example, pages in the program region that were provided by the image file are owned by user mode. When an image calls a system service, the service probes the pages to be used to determine whether an access violation would occur if the image attempts to read or write one of the pages. If an access violation would occur on the first page specified, the service returns a value of -1 in both longwords of the return address array. If the retadr argument is not specified, no information is returned. 13.5.4 Working Set Paging The initial size of a process's working set is usually defined by the process's working set default (WSDEFAULT) quota. The maximum size of a process's working set is normally defined by the process's working set quota (WSQUOTA). When ample memory is available, a process's working-set upper growth limit can be expanded by its working set extent (WSEXTENT). When the image refers to a page that is not in memory, a page fault occurs and the page is brought into memory. Use the pagcnt argument to specify the number of pages to add to or subtract from the current working set size. The new working set size is returned in wsetlm. Pages locked in the working set remain there until they are explicitly unlocked with the Unlock Pages in Working Set (SYS$ULWSET) system service or until program execution ends. The format is as follows: Specifying a Range of Addresses Use the inadr argument to specify the range of addresses to be locked. The range of addresses of the pages actually locked is returned in the retadr argument. Specifying the Access Mode Use the acmode argument to specify the access mode to be associated with the pages you want locked. 13.5.5 Process Swapping The operating system balances the needs of all the processes currently executing, providing each with the system resources it requires on an as-needed basis. The memory management routines balance the memory requirements of the process. Thus, the sum of the working sets for all processes currently in physical memory is called the balance set.
When a process whose working set is in memory becomes inactive---for example, to wait for an I/O request or to hibernate---the entire working set or part of it may be removed from memory to provide space for another process's working set to be brought in for execution. This removal from memory is called swapping. The working set may be removed in two ways: When a process is swapped out of the balance set, all the pages (both modified and unmodified) of its working set are swapped, including any pages that had been locked in the working set. A privileged process may lock itself in the balance set. While pages can still be paged in and out of the working set, the process remains in memory even when it is inactive. To lock itself in the balance set, the process issues the Set Process Swap Mode (SYS$SETSWM) system service, as follows: $SETSWM_S SWPFLG=#1 This call to SYS$SETSWM disables process swap mode. You can also disable swap mode by setting the appropriate bit in the STSFLG argument to the Create Process (SYS$CREPRC) system service; however, you need the PSWAPM privilege to alter process swap mode. A process can also lock particular pages in memory with the Lock Pages in Memory (SYS$LCKPAG) system service. These pages are not part of the process's working set, but they are forced into the process's working set. When pages are locked in memory with this service, the pages remain in memory even when the remainder of the process's working set is swapped out of the balance set. These remaining pages stay in memory until they are unlocked with SYS$ULKPAG. SYS$LCKPAG can be useful in special circumstances, for example, for routines that perform I/O operations to devices without using the operating system's I/O system. You need the PSWAPM privilege to issue SYS$LCKPAG or SYS$ULKPAG. 
13.5.6 Sections A section is a disk file or a portion of a disk file containing data or instructions that can be brought into memory and made available to a process for manipulation and execution. A section can also be one or more consecutive page frames in physical memory or I/O space; such sections, which require you to specify page frame number (PFN) mapping, are discussed in Section 13.5.6.15. Sections are either private or global (shared). When modified pages in writable disk file sections are paged out of memory during image execution, they are written back into the section file rather than into the paging file, as is the normal case with files. (However, copy-on-reference sections are not written back into the section file.) The use of disk file sections involves these two distinct operations: The Create and Map Section (SYS$CRMPSC) system service creates and maps a private section or a global section. Because a private section is used only by a single process, creation and mapping are simultaneous operations. In the case of a global section, one process can create a permanent global section and not map to it; other processes can map to it. A process can also create and map a global section in one operation. The following sections describe the creation, mapping, and use of disk file sections. In each case, operations and requirements that are common to both private sections and global sections are described first, followed by additional notes and requirements for the use of global sections. Section 13.5.6.9 discusses global page-file sections. 13.5.6.1 Creating Sections To create a disk file section, follow these steps: Before you can use a file as a section, you must open it using OpenVMS Record Management Services (RMS). 
The following example shows the OpenVMS RMS file access block ($FAB) and $OPEN macros used to open the file and the channel specification to the SYS$CRMPSC system service necessary for reading an existing file: #include <rms.h> #include <rmsdef.h> #include <string.h> #include <secdef.h> struct FAB secfab; main() { unsigned short chan; unsigned int status, retadr[2], pagcnt=1, flags; char *fn = "SECTION.TST"; /* Initialize FAB fields */ secfab = cc$rms_fab; secfab.fab$l_fna = fn; secfab.fab$b_fns = strlen(fn); secfab.fab$l_fop = FAB$M_CIF; secfab.fab$b_rtv = -1; /* Create a file if none exists */ status = SYS$CREATE( &secfab, 0, 0 ); if ((status & 1) != 1) LIB$SIGNAL( status ); flags = SEC$M_EXPREG; chan = secfab.fab$l_stv; status = SYS$CRMPSC(0, &retadr, 0, 0, 0, 0, flags, chan, pagcnt, 0, 0, 0); if ((status & 1) != 1) LIB$SIGNAL( status ); } In this example, the file options parameter (FOP) indicates that the file is to be opened for user I/O; this parameter is required so that OpenVMS RMS assigns the channel using the access mode of the caller. OpenVMS RMS returns the channel number on which the file is accessed; this channel number is specified as input to SYS$CRMPSC (chan argument). The same channel number can be used for multiple create and map section operations. The option RTV=-1 tells the file system to keep all of the pointers to be mapped in memory at all times. If this option is omitted, SYS$CRMPSC requests the file system to expand the pointer areas, if necessary. Storage for these pointers is charged to the BYTLM quota, which means that opening a badly fragmented file can fail with an EXBYTLM failure status. Too many fragmented sections may cause the byte limit to be exceeded. The file may be a new file that is to be created while it is in use as a section. In this case, use the $CREATE macro to open the file. If you are creating a new file, the file access block (FAB) for the file must specify an allocation quantity (ALQ parameter).
You can also use SYS$CREATE to open an existing file; if the file does not exist, it is created. The following example shows the required fields in the FAB for the conditional creation of a file: GBLFAB: $FAB FNM=<GLOBAL.TST>, - ALQ=4, - FAC=PUT,- FOP=<UFO,CIF,CBT>, - SHR=<PUT,UPI> . . . $CREATE FAB=GBLFAB When the $CREATE macro is invoked, it creates the file GLOBAL.TST if the file does not currently exist. The CBT (contiguous best try) option requests that, if possible, the file be contiguous. Although section files are not required to be contiguous, better performance can result if they are. 13.5.6.3 Defining the Section Extents After the file is opened successfully, SYS$CRMPSC can create a section either from the entire file or from certain portions of it. The following arguments to SYS$CRMPSC define the extents of the file that constitute the section: The flags argument to SYS$CRMPSC defines the following section characteristics: Table 13-2 shows the flag bits that must be set for specific characteristics. When you specify section characteristics, the following restrictions apply: If the section is a global section, you must assign a character string name (gsdnam argument) to it so that other processes can identify it when they map it. The format of this character string name is explained in Section 13.5.6.6. The flags argument specifies the following types of global sections: Group global sections can be shared only by processes executing with the same group number. The name of a group global section is implicitly qualified by the group number of the process that created it. When other processes map it, their group numbers must match. A temporary global section is automatically deleted when no processes are mapped to it, but a permanent global section remains in existence even when no processes are mapped to it. A permanent global section must be explicitly marked for deletion with the Delete Global Section (SYS$DGBLSC) system service. 
You need the user privileges PRMGBL and SYSGBL to create permanent group global sections or system global sections (temporary or permanent), respectively. A system global section is available to all processes in the system. Optionally, a process creating a global section can specify a protection mask (prot argument), restricting all access or a type of access (read, write, execute, delete) to other processes. 13.5.6.6 Global Section Name
http://h71000.www7.hp.com/doc/731final/5841/5841pro_042.html
20 January 2011 10:24 [Source: ICIS news] SINGAPORE (ICIS)--Freights for spot shipments from the Middle East to northeast (NE) Asia are set to surge on tight tonnage. A vessel owner was heard negotiating a cargo booking for 17,000 tonnes of mono-ethylene glycol (MEG) from Assaluyeh to eastern China. A charterer's freight ideas were heard in the high $40/tonne levels, but the owner was firm on fixing at $57-58/tonne, sources said. "We believe the charterer may eventually fix at our levels but we are waiting for [its] final decision," the vessel owner said. The regional vessel owner had fully booked February Contract of Affreightment (COA) cargoes and has no balance space for spot cargoes. "We are seeing spot chemicals enquiries for 17,000-20,000 tonnes [of] methanol and/or mono-ethylene glycol, but we are fully booked for February," the owner added. Contributing to rising freights was the lack of prompt vessel space in the Middle East, as most vessels were delayed on their return journey from northeast Asia. The delays were due to backlogs in eastern Chinese ports. In other key chemical ports along the Yangtze river, the waiting time was around three to five days, the agent said. A cargo booking for 15,000-20,000 tonnes of chemicals from the Middle East to eastern China was also heard.
http://www.icis.com/Articles/2011/01/20/9427682/middle-east-to-northeast-asia-freights-to-surge-on-tight-tonnage.html
Simple Humanoid Walking and Dancing Robot (Arduino) Introduction: Simple Humanoid Walking and Dancing Robot (Arduino). This robot can be made as a beginner's robot to introduce yourself to the field of robotics. Let's get into making the robot!! Step 1: Tools and Material Required The bill of materials is as follows: - Micro servo or any other (qty. 4) ($6 for four). - Arduino UNO (you can use other models, but keep in mind that using other models may affect stability). ($6.45) - Wires. ($1.6 for 1 m) - Perfboard (not necessary, I did not use one). ($0.78 for one, sold as kit) Thin sheet wood (2 pieces measuring 6.2 x 4.6 cm). This measurement can vary if you are using any other type of servos. Cardboard (small pieces required). Total: around $16 including cardboard and sheet wood. The required tools include: - Soldering iron (with solder). - Hot glue gun. - Cutter. - Hacksaw (to cut wood). - Printer cable for Arduino. Step 2: Preparing Servos The first thing that you need to do is to attach the servo horns to the servos. Keep in mind that you can't just attach them to the servo; it has to be done properly or you will encounter problems later. This video explains how to attach the servo horns to the servos; click here to view it. (The video is not mine but was made by Karl Wendt.) Now that you have attached the horns to the servos, it is time to glue the servos together. The two servo sets are attached in different ways, so be careful while gluing them. I have attached pictures which you can use as reference to attach the servos together. Try to keep the horns as close to 90 degrees as possible. After that, take a piece of cardboard, glue both of the servos onto it and cut the extra cardboard away. One way to ensure that you do this properly is to make sure that the screw-mounting parts of both servos are touching. Make sure to keep the wire part on the outside.
Step 3: Attaching the Arduino Once that is done, take a rectangular piece of cardboard (mine measured 6.4 x 5.4 cm) and hot glue it to the backside of the servos as shown in the pictures. You can round the corners to make it look a bit neater. After that, take the Arduino and hot glue the backside of the Arduino (the white side) to the cardboard as shown in the pictures. This MAY damage your Arduino (highly unlikely), so if you do this, it will be at your own risk. The only thing left to do now (in terms of hardware) is the wiring. Step 4: Wiring the Servos! The way I have done it, it looks a bit confusing, but believe me, it isn't. The red wires of the servos are positive, the brown are negative and the orange wires are the signal wires. So, all you have to do is to solder all the red wires together and then solder a stiff piece of wire to them which connects to the 5v pin of the Arduino. Then, solder all the brown wires together and attach a solid piece of wire to them. This wire will connect to the GND pin which is below the 5v pin. (Wiring diagram attached) After that, you will be left with four orange wires, or signal wires. Before soldering them to a stiff wire for connection, you have to understand the naming of the servos. Looking from the pin side of the Arduino, the servo on the top right is the right thigh servo. The one below it is the right foot servo. The servo at the top left is the left thigh servo and the one below it is the left foot servo. Connect the right thigh signal wire to pin 5, the left thigh signal wire to pin 11, the left foot signal wire to pin 3 and the right foot signal wire to pin 9. Make sure that all of the signal wires are attached to the pins with a squiggly line before them (PWM pins). (Wiring diagram attached) Step 5: Choosing Power Source Now comes the time to choose whether you will be powering the robot using USB power (which is inconvenient and causes occasional disturbances) or using six AA cells.
I chose the latter, although it's entirely your choice what you want to do. The problem with AA cells is that they run out in a few hours. If you chose USB power, all you have to do is to connect a printer cable and power it with that (mine didn't work like that, it kept on falling), but if you chose AA cells then it's a little bit more complicated. If you are using USB power (which I don't recommend), you can use a power bank to make it more portable. First of all, you have to create the 9v power supply, which is just three 3v AA battery packs soldered to each other in series. Then, you have to wire the 9v pack to the power jack on the Arduino (the wiring diagram is attached). The power jack has 3 pins: one on top, one on bottom and one on the right (pic attached). The one at the top and the one at the right are the ground pins. The negative wire of your battery pack connects to one of them. The one at the bottom is the 9v pin; the positive wire of your battery pack gets soldered to that. Once again, soldering wires directly to that can damage your Arduino (although it's unlikely), so do this at your own risk. Step 6: Adding Feet The next step in creating the robot is to attach feet to the base of the robot. For the feet, I used a 6.2 x 4.6 cm piece of thin sheet wood. At first, I had used cardboard, but that was very floppy and unsuitable, so I opted for sheet wood. All you have to do now is to hot glue the wood onto the feet servos of the robot. Try to glue the servos at the exact middle of the feet. Step 7: Programming Now that we are done with the hardware part of the robot, it is time to work on the software. Install the Arduino IDE, then go to Tools, click on Ports and select the one with Arduino/Genuino next to it. Then, enter the code I have provided into the software and click on the right arrow sign at the top left. After some time, it should say Done Uploading, and if you have done everything right, your robot should start to move!!
If you are having any problems, mention them in the comments section and I will try to answer them ASAP. I have uploaded the code for walking, jumping and a few dances. Of course, once you are familiar with the software, you can write your own code for the robot. If you made some mistakes while making the robot, you might have to tweak the code just a little bit. Once again, if you need help, just ask for it in the comments section. (Note: Yes I know, the way I programmed this robot is very inefficient, but this was my first micro-controller project and I did not know how to use functions, so hopefully the coding will be better in future contests!) Step 8: Understanding the Code In this step, I will try to explain the code the best I can. Let's do the dancing code. ----------------------------------------------------------------------------------------------------------------------------------------------------- #include <Servo.h> <><> This command is used for including the Servo library. Servo rightfoot; <><> This command creates a Servo object with the name 'rightfoot'. This will be used to address the servo later. Servo rightthigh; <><> This command creates a Servo object with the name 'rightthigh'. This will be used to address the servo later. Servo leftfoot; <><> This command creates a Servo object with the name 'leftfoot'. This will be used to address the servo later. Servo leftthigh; <><> This command creates a Servo object with the name 'leftthigh'. This will be used to address the servo later. void setup() <><> This includes the initial commands that the Arduino will run through once before going through the loops. { <><> Marks the beginning of the setup commands. rightfoot.attach(9); <><> This command attaches the Servo object 'rightfoot' to pin 9. rightthigh.attach(5); <><> This command attaches the Servo object 'rightthigh' to pin 5. leftfoot.attach(3); <><> This command attaches the Servo object 'leftfoot' to pin 3.
leftthigh.attach(11); <><> This command attaches the Servo object 'leftthigh' to pin 11. } <><> Marks the ending of the setup commands. void loop() <><> This is the code that will be repeated multiple times to do the actions. { <><> Marks the beginning of the loop of commands. leftfoot.write(10); <><> This command is used to make the servo go to its initial position after each loop. leftthigh.write(90); <><> This command is used to make the servo go to its initial position after each loop. rightthigh.write(105); <><> This command is used to make the servo go to its initial position after each loop. rightfoot.write(180); <><> This command is used to make the servo go to its initial position after each loop. delay(1000); <><> This creates a 1 sec delay before the next command is executed. The value is in milliseconds. leftfoot.write(17); <><> leftthigh.write(95); <><> ......... <><> I did not write out all the commands, because all the commands that follow are basically the same as these. ......... <><> I did not write out all the commands, because all the commands that follow are basically the same as these. } <><> Marks the ending of the loop of commands. (I have attached a picture which shows how the servo degrees work.) ----------------------------------------------------------------------------------------------------------------------------------------------------- I used more or less the same commands in all of the programs I have coded, so I don't think further explanation is needed. This could have been done in a much shorter and more efficient way, but this was my first time working with a micro-controller, so I used basic commands as they seemed to get the job done :). If you need further explanation of any part of the code, feel free to ask in the comments. Step 9: Conclusion Now that the robot is done, what next? Well, you should now be familiar with robotics and also with writing code for the Arduino.
You can continue to code new functions for this robot or even make a new and bigger robot. This instructable was created mainly with the purpose of introducing people to the field of micro-controllers and robotics. I think that this is an excellent beginner's robot, as it is easy and cheap to make. I originally got this idea from this instructable here, but the person who made it made a mistake or something, I'm not sure, because my robot as well as many other people's robots in the comments section were not working, so I thought that I would re-write my own version of that instructable. Another thing that motivated me to write this instructable were the contests going on at Instructables at this time (such as the micro-controller contest) which had very good prizes. Winning those contests would equip me with gear which would be useful for future instructables. Thank you for viewing my instructable. awesome.... I will make this stuff... Good luck! :) Send me code i have made the same but my robot is not walking properly please help me I have put together a robot like yours. Your design is so simple that it is perfect to teach robotics. I have modified the code to make it walk smoothly. You can find the code here:... Using "for" loops allows you to vary the speed of any movement. It's amazing dude!!! Nice work :D Thanks again for your inspiring Instructable. Superb But i have a doubt.? Is there any way to speed up the walking and the human-like dancing functions? This robot was my first ever robot so I tried to keep it simple. This robot is not stable enough to go faster than this. Going faster may cause slipping or falling over. Thanks for the comment !!! Thanks for your reply. Me too an Indian. Love to be developing Indian Technology. very creative.. this robot is so cute. you should name it. Thanks!! I'll think about naming it haha Good keep going.:) Love from INDIA.:) Thanks a lot!!
☺ really inspiring for a beginner like me -_- Not sure if you are being sarcastic or not based on that emoji lol. If you are then feel free to ask me anything about the project which you find confusing. That looks fun :) Yeah, it really is. It's educational too!
http://www.instructables.com/id/Simple-Humanoid-Walking-and-Dancing-Robot-Arduino/
GD::Polyline - Polyline object and Polygon utilities (including splines) for use with GD use GD; use GD::Polyline; # create an image $image = new GD::Image (500,300); $white = $image->colorAllocate(255,255,255); $black = $image->colorAllocate( 0, 0, 0); $red = $image->colorAllocate(255, 0, 0); # create a new polyline $polyline = new GD::Polyline; # add some points $polyline->addPt( 0, 0); $polyline->addPt( 0,100); $polyline->addPt( 50,125); $polyline->addPt(100, 0); # polylines can use polygon methods (and vice versa) $polyline->offset(200,100); # rotate 60 degrees, about the centroid $polyline->rotate(3.14159/3, $polyline->centroid()); # scale about the centroid $polyline->scale(1.5, 2, $polyline->centroid()); # draw the polyline $image->polydraw($polyline,$black); # create a spline, which is also a polyine $spline = $polyline->addControlPoints->toSpline; $image->polydraw($spline,$red); # output the png binmode STDOUT; print $image->png; Polyline.pm extends the GD module by allowing you to create polylines. Think of a polyline as "an open polygon", that is, the last vertex is not connected to the first vertex (unless you expressly add the same value as both points). For the remainder of this doc, "polyline" will refer to a GD::Polyline, "polygon" will refer to a GD::Polygon that is not a polyline, and "polything" and "$poly" may be either. The big feature added to GD by this module is the means to create splines, which are approximations to curves. GD::Polyline defines the following class: GD::Polyline A polyline object, used for storing lists of vertices prior to rendering a polyline into an image. new GD::Polyline->new class method Create an empty polyline with no vertices. 
$polyline = new GD::Polyline; $polyline->addPt( 0, 0); $polyline->addPt( 0,100); $polyline->addPt( 50,100); $polyline->addPt(100, 0); $image->polydraw($polyline,$black); In fact GD::Polyline is a subclass of GD::Polygon, so all polygon methods (such as offset and transform) may be used on polylines. Some new methods have thus been added to GD::Polygon (such as rotate) and a few updated/modified/enhanced (such as scale) in this module. See section "New or Updated GD::Polygon Methods" for more info. Note that this module is very "young" and should be considered subject to change in future releases, and/or possibly folded in to the existing polygon object and/or GD module. The following methods (defined in GD.pm) are OVERRIDDEN if you use this module. All effort has been made to provide 100% backward compatibility, but if you can confirm that has not been achieved, please consider that a bug and let the author of Polyline.pm know. scale $poly->scale($sx, $sy, $cx, $cy) object method -- UPDATE to GD::Polygon::scale Scale a polything along the x-axis by $sx and along the y-axis by $sy, about center point ($cx, $cy). The center point ($cx, $cy) is optional -- if these are omitted, the function will scale about the origin. To flip a polything, use a scale factor of -1. For example, to flip the polything top to bottom about the line y = 100, use: $poly->scale(1, -1, 0, 100); The following methods are added to GD::Polygon, and thus can be used by polygons and polylines. Don't forget: a polyline is a GD::Polygon, so GD::Polygon methods like offset() can be used, and they can be used in GD::Image methods like filledPolygon(). rotate $poly->rotate($angle, $cx, $cy) object method Rotate a polything through $angle (clockwise, in radians) about center point ($cx, $cy).
The center point ($cx, $cy) is optional -- if these are omitted, the function will rotate about the origin. In this function and other angle-oriented functions in GD::Polyline, a positive $angle corresponds to clockwise rotation. This is opposite of the usual Cartesian sense, but that is because the raster is opposite of the usual Cartesian sense in that the y-axis goes "down". centroid ($cx, $cy) = $poly->centroid($scale) object method Calculate and return ($cx, $cy), the centroid of the vertices of the polything. For example, to rotate something 180 degrees about its centroid: $poly->rotate(3.14159, $poly->centroid()); $scale is optional; if supplied, $cx and $cy are multiplied by $scale before returning. The main use of this is to shift a polything to the origin like this: $poly->offset($poly->centroid(-1)); segLength @segLengths = $poly->segLength() object method In array context, returns an array of the lengths of the segments in the polything. Segment n is the segment from vertex n to vertex n+1. Polygons have as many segments as vertices; polylines have one fewer. In a scalar context, returns the sum of the array that would have been returned in the array context. segAngle @segAngles = $poly->segAngle() object method Returns an array of the angles of each segment from the x-axis. Segment n is the segment from vertex n to vertex n+1. Polygons have as many segments as vertices; polylines have one fewer. Returned angles will be on the interval 0 <= $angle < 2 * pi and angles increase in a clockwise direction. vertexAngle @vertexAngles = $poly->vertexAngle() object method Returns an array of the angles between the segments into and out of each vertex. For polylines, the vertex angle at vertex 0 and the last vertex are not defined; however, $vertexAngle[0] will be undef so that $vertexAngle[1] will correspond to vertex 1. Returned angles will be on the interval 0 <= $angle < 2 * pi and angles increase in a clockwise direction.
Note that this calculation does not attempt to figure out the "interior" angle with respect to "inside" or "outside" the polygon, but rather, just the angle between the adjacent segments in a clockwise sense. Thus a polygon with all right angles will have vertex angles of either pi/2 or 3*pi/2, depending on the way the polygon was "wound".

toSpline

$poly->toSpline() object method & factory method

Create a new polything which is a reasonably smooth curve using cubic spline algorithms, often referred to as Bezier curves. The "source" polything is called the "control polything". If it is a polyline, the control polyline must have 4, 7, 10, or some number of vertices equal to 3n+1. If it is a polygon, the control polygon must have 3, 6, 9, or some number of vertices equal to 3n.

$spline = $poly->toSpline();
$image->polydraw($spline,$red);

In brief, groups of four points from the control polyline are considered "control points" for a given portion of the spline: the first and fourth are "anchor points", and the spline passes through them; the second and third are "director points". The spline does not pass through director points; however, the spline is tangent to the line segment from an anchor point to its adjacent director point. The next portion of the spline reuses the previous portion's last anchor point. The spline will have a cusp (non-continuous slope) at an anchor point unless the anchor point and its adjacent director points are collinear.

In the current implementation, toSpline() returns a fixed number of segments in the returned polyline per set-of-four control points. In the future, this and other parameters of the algorithm may be configurable. For more info on Bezier splines, see [ref needed].

addControlPoints

$polyline->addControlPoints() object method & factory method

So you say: "OK. Splines sound cool. But how can I get my anchor points and their adjacent director points to be collinear so that I have nice smooth curves from my polyline?" Relax!
For The Lazy: addControlPoints() to the rescue. addControlPoints() returns a polyline that can serve as the control polyline for toSpline(), which returns another polyline which is the spline. Is your head spinning yet? Think of it this way:

If you have a polyline, and you have already put your control points where you want them, call toSpline() directly. Remember, only every third vertex will be "on" the spline. You get something that looks like the spline "inscribed" inside the control polyline.

If you have a polyline, and you want all of its vertices on the resulting spline, call addControlPoints() and then toSpline():

$control = $polyline->addControlPoints();
$spline  = $control->toSpline();
$image->polyline($spline,$red);

You get something that looks like the control polyline "inscribed" inside the spline.

Adding "good" control points is subjective; this particular algorithm reveals its author's tastes. In the future, you may be able to alter the taste slightly via parameters to the algorithm. For The Hubristic: please build a better one! And for The Impatient: note that addControlPoints() returns a polyline, so you can pile up the calls like this, if you'd like:

$image->polyline($polyline->addControlPoints()->toSpline(),$mauve);

polyline

$image->polyline(polyline,color) object method

polydraw

$image->polydraw(polything,color) object method

$image->polydraw($poly,$black)

This method draws the polything as expected (polygons are closed, polylines are open) by simply checking the object type and calling either $image->polygon() or $image->polyline(). Please see the file "polyline-examples.pl" that is included with the distribution.

The Polyline.pm module is copyright 2002, Daniel J. Harasty. It is distributed under the same terms as Perl itself. See the "Artistic License" in the Perl source code distribution for licensing terms.

The latest version of Polyline.pm is available at your favorite CPAN repository and/or along with GD.pm by Lincoln D. Stein at:
http://search.cpan.org/~lds/GD-2.07/GD/Polyline.pm
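To tie the pieces together, here is a hypothetical sketch that uses only the methods documented above (the coordinates and colors are illustrative):

```perl
use GD;
use GD::Polyline;

my $image = new GD::Image(200, 200);
my $white = $image->colorAllocate(255, 255, 255);
my $red   = $image->colorAllocate(255, 0, 0);

# build a simple polyline using the addPt() method shown above
my $poly = new GD::Polyline;
$poly->addPt(  0,   0);
$poly->addPt( 50,  25);
$poly->addPt(100,   0);

# rotate a quarter turn clockwise about the centroid, then smooth
# the result into a spline and draw it
$poly->rotate(3.14159/2, $poly->centroid());
my $spline = $poly->addControlPoints()->toSpline();
$image->polydraw($spline, $red);
```

Because addControlPoints() and toSpline() both return polylines, the two calls chain naturally, as noted in the addControlPoints() section.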
I need to write a code that tells me "It is freezing" when it is below 32 and "It is not freezing" when it is above 32 degrees. I'm a beginner at these things and a friend helped me with the code below, so don't assume I understand everything about programming. This code only converts C->F. How can I modify it to say "It is freezing" and "It is not freezing"? Can you please post the code? Thanks.

Code:
//robert cruz
#include <iostream>
using namespace std;
int main (void){
    //define variables
    float c = 0.0;
    int f = 0;
    //request inputs
    cout << "Enter temperature in C ";
    cin >> c;
    //calculate
    f = c * 9 / 5 + 32;
    //show results
    cout << endl << " This temperature in F is " << f << endl;
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/106874-celsius-farenheit-program.html
- arrays, i.e. name/value pairs, and the new dicts in 8.5 which bridge a gap to list-type collections
- VFS, the file system model used on disk, and now available for arbitrary mapping through TclVFS

- array get foo::bar *
- glob foo/bar/*
- info vars foo::bar::*
- namespace children foo::bar *
- info procs foo::bar::*
- file chan
- info args foo
- info loaded
- package names
- pack slaves .foo.bar (Tk)
- mk::select db.foo bar (Metakit)
- select bar from foo (SQL)

I think the dichotomy between fs and array is perhaps weaker than your analysis suggests: just as we can have an array("a,b") which resembles a 2-dimensional array, we can look at array("a/b") and a vfs element "a/b" - it is possible to interpret this hierarchically, but we are not really logically required to do so.

In the standard tcl fs, we can't open and read or write a file whose name is the same as a directory, but other file systems (e.g. mkvfs, httpvfs) don't share this limitation. It's logically possible to treat a vfs as a flat mapping path->content; / need have no special interpretation.

There's an implementation problem with trying to present a vfs as an array. One can use trace to implement file read/write as array element read/write, but trace isn't strong enough to replace glob with array names, because there's insufficient information provided to the trace to enable one to construct the kind of return value expected. It seems to me that trace array is nearly useless.

-- CMcC
http://wiki.tcl.tk/13124
A GenericStack, like Array and List, is a container for storing elements. It has one type parameter and all elements of the stack must be of the specified type. Here is a small example program for initializing and working with a GenericStack.

import haxe.ds.GenericStack;

class Main {
  static public function main() {
    var myStack = new GenericStack<Int>();
    for (ii in 0...5)
      myStack.add(ii);
    trace(myStack); //{4, 3, 2, 1, 0}
    trace(myStack.pop()); //4
  }
}

Trivia: FastList

In Haxe 2, the GenericStack class was known as FastList. Since its behavior more closely resembled a typical stack, the name was changed for Haxe 3.

The Generic in GenericStack is literal. It is attributed with the :generic metadata. Depending on the target, this can lead to improved performance on static targets. See Generic for more details.
https://haxe.org/manual/std-GenericStack.html
Data Format Appendix

Using Spring XML

This example shows how to configure the data type just once and reuse it on multiple routes. You can also define reusable data formats as Spring beans.

Serialization

Dependencies

This data format is provided in camel-core so no additional dependencies are needed.

JAXB

JAXB is a Data Format which uses the JAXB2 XML marshalling standard, which is included in Java 6, to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload.

Using Spring XML

The following example shows how to use JAXB to unmarshal using Spring, configuring the jaxb data type. You can set the partNamespace attribute with the QName of the destination namespace. An example of the Spring DSL can be found above.

Schema Location

Available as of Camel 2.14

The JAXB Data Format supports specifying the SchemaLocation when marshaling the XML. It can be configured using either the Java DSL or the XML DSL.

Marshal data that is already XML

Available as of Camel 2.14.1

This behaviour is controlled by the mustBeJAXBElement option.

XmlRootElement objects

Available as of Camel 2.17.2

The JAXB Data Format option objectFactory has a default value equal to false. This is related to a performance degradation. For more information look at the issue CAMEL-10043. For the marshalling of non-XmlRootElement JAXB objects you'll need to call JaxbDataFormat#setObjectFactory(true).

XmlBeans

XmlBeans is a Data Format which uses the XmlBeans library to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload.

Dependencies

To use XmlBeans in your camel routes you need to add a dependency on camel-xmlbeans which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions).
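The pom.xml fragment referred to above typically looks like the following sketch (substitute the version placeholder for the release you use):

```xml
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-xmlbeans</artifactId>
    <!-- use the same version as your other camel components -->
    <version>x.x.x</version>
</dependency>
```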
XStream

XStream is a Data Format which uses the XStream library to marshal and unmarshal Java objects to and from XML.

Using the Java DSL

If you would like to configure the XStream instance used by Camel for the message transformation, you can simply pass a reference to that instance on the DSL level.

From Camel 2.2.0, you can set the encoding of the XML in the XStream DataFormat by setting the Exchange's property with the key Exchange.CHARSET_NAME, or by setting the encoding property on XStream from the DSL or Spring config.

Setting the type permissions of the XStream DataFormat

In Camel, one can always use a custom processing step in the route to filter and block certain XML documents from being routed to XStream's unmarshal step. From Camel 2.16.1 and 2.15.5, type permissions can be configured on the XStream data format itself.

CSV

The CSV Data Format uses Apache Commons CSV to handle CSV payloads (Comma Separated Values) such as those exported/imported by Excel. As of Camel 2.15.0, it uses Apache Commons CSV 1.1, which is based on a completely different set of options.

Available options until Camel 2.15

Available options as of Camel 2.15

Marshalling a Map to CSV

The component allows you to marshal a Java Map (or any other message type that can be converted into a Map) into a CSV payload.

Unmarshalling a CSV message into a Java List

Marshalling a List<Map> to CSV

Available as of Camel 2.1

If you have multiple rows of data you want to be marshalled into CSV format you can now store the message payload as a List<Map<String, Object>> object, where the list contains a Map for each row.

File Poller of CSV, then unmarshaling

Given a bean which can handle the incoming data, your route then looks as follows.

Marshaling with a pipe as delimiter

Using autogenColumns, configRef and strategyRef attributes inside XML DSL

Available as of Camel 2.9.2 / 2.10 and deleted for Camel 2.15

You can customize the CSV Data Format to make use of your own CSVConfig and/or CSVStrategy. Also note that the default value of the autogenColumns option is true. The following example should illustrate this customization.
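As a sketch of what such a customization can look like in the XML DSL (the ids are illustrative, and remember these attributes were removed again in Camel 2.15):

```xml
<!-- myCsvConfig and myCsvStrategy are illustrative ids of beans providing
     your own CSVConfig/CSVStrategy instances -->
<dataFormats>
    <csv id="customCsv" autogenColumns="false"
         configRef="myCsvConfig" strategyRef="myCsvStrategy"/>
</dataFormats>
```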
Using the skipFirstLine option while unmarshaling

Available as of Camel 2.10 and deleted for Camel 2.15

You can instruct the CSV Data Format to skip the first line, which contains the CSV headers. This can be done using the Spring/XML DSL or the Java DSL.

Unmarshaling with a pipe as delimiter

This can likewise be done using the Spring/XML DSL or the Java DSL.

Issue in CSVConfig

It looks like that doesn't work. You have to set the delimiter as a String!

Dependencies

Options

Marshal

In this example we marshal the file content to a String object in UTF-8 encoding.

Unmarshal

In this example we unmarshal the payload from the JMS queue to a String object using UTF-8 encoding, before it is processed by the newOrder processor.

Dependencies

This data format is provided in camel-core so no additional dependencies are needed.

HL7 DataFormat

The HL7 component ships with an HL7 data format that can be used to marshal or unmarshal HL7 model objects.

marshal = from Message to byte stream (can be used when responding using the HL7 MLLP codec)
unmarshal = from byte stream to Message (can be used when receiving streamed data from the HL7 MLLP codec)

To use the data format, simply create an instance and invoke the marshal or unmarshal operation in the route builder. There is a shorthand syntax in Camel for well-known data formats that are commonly used; then you don't need to create an instance of the HL7DataFormat object.

EDI DataFormat

We encourage end users to look at Smooks, which supports EDI and Camel natively.

JSON

JSON is a Data Format to marshal and unmarshal Java objects to and from JSON. For JSON to object marshalling, Camel provides integration with three popular JSON libraries:

- The XStream library and Jettison
- The Jackson library
- Camel 2.10: The GSon library

Every library requires adding the special camel component (see the "Dependencies..." paragraphs further down). By default Camel uses the XStream library.
Direct, bi-directional JSON <=> XML conversions

As of Camel 2.10, Camel supports direct, bi-directional JSON <=> XML conversions via the camel-xmljson data format, which is documented separately.

Using JSON data format with the XStream library

Using JSON data format with the Jackson library

Using JSON data format with the GSON library

Using JSON in Spring DSL

When using a Data Format in Spring DSL you need to declare the data formats first. This is done in the DataFormats XML tag. You can then refer to this id in the route.

Excluding POJO fields from marshalling

As of Camel 2.10

When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. First create one or more marker classes, then use the @JsonView annotation to include/exclude certain fields. The annotation also works on getters. You can then use the JacksonDataFormat to marshal the above POJO to JSON.

The GSON library supports a similar feature through the notion of ExclusionStrategies: GsonDataFormat accepts an ExclusionStrategy in its constructor, which can, for example, skip fields annotated with @ExcludeAge when marshalling to JSON.

Configuring field naming policy

Available as of Camel 2.11

The GSON library supports specifying policies and strategies for mapping from JSON to POJO fields. A common naming convention is to map JSON fields using lower case with underscores. We may have such a JSON string which we want to map to a POJO that has corresponding getters/setters. We can then configure the org.apache.camel.component.gson.GsonDataFormat in a Spring XML file as shown below. Notice we use the fieldNamingPolicy property to set the field mapping. This property is an enum from GSON, com.google.gson.FieldNamingPolicy, which has a number of pre-defined mappings. If you need full control you can use the property FieldNamingStrategy and implement a custom com.google.gson.FieldNamingStrategy where you can control the mapping.
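A sketch of such a bean definition (the bean id is illustrative; LOWER_CASE_WITH_UNDERSCORES is one of the pre-defined com.google.gson.FieldNamingPolicy values):

```xml
<!-- the id is illustrative; fieldNamingPolicy is the property described above -->
<bean id="gsonWithUnderscores"
      class="org.apache.camel.component.gson.GsonDataFormat">
    <property name="fieldNamingPolicy" value="LOWER_CASE_WITH_UNDERSCORES"/>
</bean>
```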
And use it in Camel routes by referring to its bean id as shown:

Include/Exclude fields using the jsonView attribute with JacksonDataFormat

Available as of Camel 2.12

Instead of creating a separate data format instance, you can directly specify your JSON view inside the Java DSL, and the same in the XML DSL.

Setting the serialization include option for Jackson marshal

Available as of Camel 2.13.3/2.14

If you want to marshal a POJO to JSON and the POJO has some fields with null values that you want to skip, then you can set an annotation on the POJO; but this requires you to include that annotation in your POJO source code. You can also configure the Camel JsonDataFormat to set the include option, from either the Java DSL or the XML DSL.

Unmarshalling from JSON to POJO with dynamic class name

Available as of Camel 2.14

If you use Jackson to unmarshal JSON to a POJO, then you can now specify a header in the message that indicates which class name to unmarshal to. The header has the key CamelJacksonUnmarshalType; if that header is present in the message, then Jackson will use it as the FQN of the POJO class to unmarshal the JSON payload as. Notice that this behavior is enabled out of the box from Camel 2.14 onwards. For JMS end users there is the JMSType header from the JMS spec that indicates that also. To enable support for JMSType you would need to turn that on on the Jackson data format, from either the Java DSL or the XML DSL.

Unmarshalling from JSON to List<Map> or List<pojo>

Available as of Camel 2.14

If you are using Jackson to unmarshal JSON to a list of map/pojo, you can now specify this by setting useList="true" or by using the org.apache.camel.component.jackson.ListJacksonDataFormat.
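For instance, a Spring XML sketch of a list-unmarshalling data format (the id and POJO name are illustrative) could look like:

```xml
<dataFormats>
    <!-- unmarshals a JSON array into a List of the given POJO type -->
    <json id="jack" library="Jackson" useList="true"
          unmarshalTypeName="org.example.TestPojo"/>
</dataFormats>
```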
For example with Java you can do as shown below. And if you use the XML DSL then you configure the use of a list with the useList attribute, and you can also specify the POJO type.

Using a custom Jackson ObjectMapper

Available as of Camel 2.17

You can use a custom Jackson ObjectMapper instance, configured as shown below, where myMapper is the id of the custom instance that Camel will look up in the Registry.

Using custom Jackson modules

Available as of Camel 2.15

You can use custom Jackson modules by specifying their class names using the moduleClassNames option as shown below. When using moduleClassNames the custom Jackson modules are not configured, but created using the default constructor and used as-is. If a custom module needs any custom configuration, then an instance of the module can be created and configured, and then you use moduleRefs to refer to the module. Multiple modules can be specified separated by comma, such as moduleRefs="myJacksonModule,myOtherModule".

Enabling or disabling features using Jackson

Available as of Camel 2.15

Features from the following classes can be enabled or disabled on the Jackson data format:

- com.fasterxml.jackson.databind.SerializationFeature
- com.fasterxml.jackson.databind.DeserializationFeature
- com.fasterxml.jackson.databind.MapperFeature

To enable a feature use the enableFeatures option. From Java code you can use the type-safe methods from the camel-jackson module.

Converting Maps to POJOs using Jackson

Available since Camel 2.16

The Jackson ObjectMapper can be used to convert maps to POJO objects. The Jackson component comes with a data converter that can be used to convert a java.util.Map instance to non-String, non-primitive and non-Number objects. If there is a single ObjectMapper instance available in the Camel registry, it will be used by the converter to perform the conversion. Otherwise the default mapper will be used.
Formatted JSON marshalling (pretty-printing)

Available as of Camel 2.16

Using the prettyPrint option one can output well-formatted JSON while marshalling, in both the XML and the Java DSL. Please note that as of Camel 2.16 there are 5 different overloaded json() DSL methods which support the prettyPrint option in combination with other settings for JsonLibrary, unmarshalType, jsonView etc.

Integrating Jackson with Camel's TypeConverters

Available as of Camel 2.17

The camel-jackson module allows Jackson to be integrated as a Type Converter in the Camel registry. This works in a similar way to how camel-jaxb integrates with the type converter. However camel-jackson must be explicitly enabled, which is done by setting some options on the CamelContext properties, as shown below. The camel-jackson type converter integrates with JAXB, which means you can annotate a POJO class with JAXB annotations that Jackson can leverage.

Dependencies for XStream

To use JSON in your camel routes you need to add a dependency on camel-xstream which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions).

Dependencies for Jackson

To use JSON in your camel routes you need to add a dependency on camel-jackson which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions).

Dependencies for GSON

To use JSON in your camel routes you need to add a dependency on camel-gson which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions).

TidyMarkup

TidyMarkup is a Data Format that uses TagSoup to tidy up HTML.
It can be used to parse ugly HTML and return it as pretty well-formed HTML.

Bindy

Bindy binds data to one or many Plain Old Java Objects (POJOs) and converts the data according to the type of the java property. POJOs can be linked together with one-to-many relationships available in some cases. Moreover, for data types like Date, Double, Float, Integer, Short, Long and BigDecimal, you can provide the pattern to apply during the formatting of the property. For BigDecimal numbers, you can also define the precision and the decimal or grouping separators.

Decimal* = Double, Integer, Float, Short, Long

Format supported

This first release only supports comma separated value fields and key value pair fields (e.g. FIX messages). To work with camel-bindy, you must first define your model in a package (e.g. com.acme.model) and for each model class (e.g. Order, Client, Instrument, ...) add the required annotations (described hereafter) to the class or fields.

Multiple models

If you use multiple models, each model has to be placed in its own package to prevent unpredictable results.

case 1 : separator = ','

The separator used to segregate the fields in the CSV record is ',':

10, J, Pauline, M, XD12345678, Fortis Dynamic 15/15, 2500, USD,08-01-2009

case 2 : separator = ';'

Compared to the previous case, the separator here is ';' instead of ',':

10; J; Pauline; M; XD12345678; Fortis Dynamic 15/15; 2500; USD; 08-01-2009

case 3 : separator = '|'

Compared to the previous case, the separator here is '|' instead of ';':

10| J| Pauline| M| XD12345678| Fortis Dynamic 15/15| 2500| USD| 08-01-2009

case 4 : separator = '\",\"'

Applies for Camel 2.8.2 or older

When the field to be parsed of the CSV record contains ',' or ';', which is also used as the separator, we should find another strategy to tell camel-bindy how to handle this case.
To define a field containing the data with a comma, you will use simple or double quotes as delimiters (e.g. '10', 'Street 10, NY', 'USA' or "10", "Street 10, NY", "USA"). Remark: in this case, the first and last characters of the line, which are simple or double quotes, will be removed by bindy.

"10","J","Pauline"," M","XD12345678","Fortis Dynamic 15,15" 2500","USD","08-01-2009"

From Camel 2.8.3/2.9 or newer, bindy will automatically detect if the record is enclosed with either single or double quotes and automatically remove those quotes when unmarshalling from CSV to Object. Therefore do not include the quotes in the separator, but simply do as below:

case 5 : separator & skipfirstline

This feature is interesting when the client wants to have, in the first line of the file, the names of the data fields:

order id, client id, first name, last name, isin code, instrument name, quantity, currency, date

To inform bindy that this first line must be skipped during the parsing process, we use the attribute:

case 6 : generateHeaderColumns

To add the header as the first line of the CSV generated, the attribute generateHeaderColumns must be set to true in the annotation like this:

As a result, bindy during the marshaling process will generate CSV like this:

order id, client id, first name, last name, isin code, instrument name, quantity, currency, date
10, J, Pauline, M, XD12345678, Fortis Dynamic 15/15, 2500, USD,08-01-2009

case 7 : carriage return

If the platform where camel-bindy will run is not Windows but Macintosh or Unix, then you can change the crlf property like this. Three values are available: WINDOWS, UNIX or MAC. Additionally, if for some reason you need to add a different line ending character, you can opt to specify it using the crlf parameter.
In the following example, we can end the line with a comma followed by the newline character:

case 8 : isOrdered

Sometimes, the order to follow during the creation of the CSV record from the model is different from the order used during the parsing. Then, in this case, we can use the attribute isOrdered = true to indicate this, in combination with the attribute 'position' of the DataField annotation.

Remark: pos is used to parse the file or stream, while position is used to generate the CSV.

case 1 : pos

This parameter/attribute represents the position of the field in the CSV record. As you can see in this example the position starts at '1' but continues at '5' in the class Order. The numbers from '2' to '4' are defined in the class Client (see hereafter).

case 2 : pattern

The pattern allows you to enrich or validate the format of your data.

case 3 : precision

The precision is helpful when you want to define the decimal part of your number.

case 4 : position is different in output

The position attribute will inform bindy how to place the field in the CSV record generated. By default, the position used corresponds to the position defined with the attribute 'pos'. If the position is different (that means that we have an asymmetric process when comparing marshaling with unmarshaling) then we can use 'position' to indicate this.
Here is an example. This attribute of the annotation @DataField must be used in combination with the attribute isOrdered = true of the annotation @CsvRecord.

case 5 : required

If a field is mandatory, simply use the attribute 'required' set to true. If this field is not present in the record, then an error will be raised by the parser with the following information:

Some fields are missing (optional or mandatory), line :

case 6 : trim

If a field has leading and/or trailing spaces which should be removed before they are processed, simply use the attribute 'trim' set to true.

case 7 : defaultValue

If a field is not defined then it uses the value indicated by the defaultValue attribute. This attribute is only applicable to optional fields.

4. FixedLengthRecord

The FixedLengthRecord annotation is used to identify the root class of the model. It represents a record (= a line of a file/message containing fixed-length formatted data) and can be linked to several child model classes. This format is a bit particular because the data of a field can be aligned to the right or to the left. When the size of the data does not completely fill the length of the field, we can then add 'pad' characters.

The hasHeader/hasFooter parameters are mutually exclusive with isHeader/isFooter. A record may not be both a header/footer and a primary fixed-length record.

case 1 : simple fixed length record

This simple example shows how to design the model to parse/format a fixed message:

10A9PaulineMISINXD12345678BUYShare2500.45USD01-08-2009

case 2 : fixed length record with alignment and padding

This more elaborate example shows how to define the alignment for a field and how to assign a padding character, which is ' ' here:

10A9 PaulineM ISINXD12345678BUYShare2500.45USD01-08-2009

case 3 : field padding

Sometimes, the default padding defined for the record cannot be applied to a field, as we have a number format where we would like to pad with '0' instead of ' '.
In this case, you can use in the model the attribute paddingField to set this value.

10A9 PaulineM ISINXD12345678BUYShare000002500.45USD01-08-2009

case 4 : fixed length record with delimiter

Fixed-length records sometimes have delimited content within the record. The firstName and lastName fields are delimited with the '^' character in the following example:

10A9Pauline^M^ISINXD12345678BUYShare000002500.45USD01-08-2009

As of Camel 2.11 the 'pos' value(s) in a fixed-length record may optionally be defined using ordinal, sequential values instead of precise column numbers.

case 5 : fixed length record with record-defined field length

Occasionally a fixed-length record may contain a field that defines the expected length of another field within the same record. In the following example the length of the instrumentNumber field value is defined by the value of the instrumentNumberLen field in the record:

10A9Pauline^M^ISIN10XD12345678BUYShare000002500.45USD01-08-2009

case 6 : fixed length record with header and footer

Bindy will discover fixed-length header and footer records that are configured as part of the model -- provided that the annotated classes exist either in the same package as the primary @FixedLengthRecord class, or within one of the configured scan packages. The following text illustrates two fixed-length records that are bracketed by a header record and a footer record:

101-08-2009
10A9 PaulineM ISINXD12345678BUYShare000002500.45USD01-08-2009
10A9 RichN ISINXD12345678BUYShare000002700.45USD01-08-2009
9000000002

case 7 : skipping content when parsing a fixed length record (Camel 2.11.1)

It is common to integrate with systems that provide fixed-length records containing more information than needed for the target use case. It is useful in this situation to skip the declaration and parsing of those fields that we do not need.
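Pulling together the @CsvRecord and @DataField attributes described in the CSV sections above, a minimal model class might look like this (a non-runnable sketch: it assumes camel-bindy on the classpath, and the field names are illustrative):

```java
// Hypothetical bindy CSV model; the annotation attributes used here
// (separator, skipFirstLine, pos, required, trim, precision, pattern)
// are the ones documented above.
@CsvRecord(separator = ",", skipFirstLine = true)
public class Order {

    @DataField(pos = 1)
    private int orderNr;

    @DataField(pos = 2, required = true, trim = true)
    private String clientNr;

    @DataField(pos = 3, precision = 2)
    private BigDecimal amount;

    @DataField(pos = 4, pattern = "dd-MM-yyyy")
    private Date orderDate;

    // getters and setters omitted
}
```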
To accommodate this, Bindy will skip forward to the next mapped field within a record if the 'pos' value of the next declared field is beyond the cursor position of the last parsed field. Using absolute 'pos' locations for the fields of interest (instead of ordinal values) causes Bindy to skip content between two fields. Similarly, it is possible that none of the content beyond some field is of interest. In this case, you can tell Bindy to skip parsing of everything beyond the last mapped field by setting the ignoreTrailingChars property on the @FixedLengthRecord declaration.

5. Message

The Message annotation is used to identify the class of your model that will contain key value pair fields. This kind of format is used mainly in Financial Exchange Protocol (FIX) messages. Nevertheless, this annotation can be used for any other format where data are identified by keys. The key pair values are separated from each other by a separator, which can be a special character like a tab delimiter (unicode representation: \u0009) or a start of heading (unicode representation: \u0001).

"FIX information"

More information about FIX can be found on this web site:

To work with FIX messages, the model must contain Header and Trailer classes linked to the root message class, which could be an Order class. This is not mandatory but will be very helpful when you use camel-bindy in combination with camel-fix, which is a FIX gateway based on the quickFix project.

case 1 : separator = 'u0001'

The separator used to segregate the key value pair fields in a FIX message is the ASCII '01' character or, in unicode format, '\u0001'. This character must be escaped a second time to avoid a java runtime error. Here is an example:

8=FIX.4.1 9=20 34=1 35=0 49=INVMGR 56=BRKR 1=BE.CHM.001 11=CHM0001-01 22=4 ...

and how to use the annotation.

Look at the test cases

The ASCII characters like tab, ... cannot be displayed in a WIKI page.
So, have a look at the test cases of camel-bindy to see exactly how the FIX message looks (src\test\data\fix\fix.txt) and at the Order, Trailer and Header classes (src\test\java\org\apache\camel\dataformat\bindy\model\fix\simple\Order.java).

case 1 : tag

This parameter represents the key of the field in the message.

case 2 : different position in output

If the tags/keys that we will put in the FIX message must be sorted according to a predefined order, then use the attribute 'position' of the annotation @KeyValuePairField.

7. Section

In FIX messages of fixed length records, it is common to have different sections in the representation of the information: header, body and footer. The purpose of the annotation @Section is to inform bindy about which class of the model represents the header (= section 1), the body (= section 2) and the footer (= section 3). Only one attribute/parameter exists for this annotation.

case 1 : section

A. Definition of the header section
B. Definition of the body section
C. Definition of the footer section

8. OneToMany

The purpose of the annotation @OneToMany is to allow working with a List<?> field defined in a POJO class, or with a record containing repetitive groups.

Restrictions OneToMany

Be careful: the one-to-many of bindy does not allow handling repetitions defined on several levels of the hierarchy.

The relation OneToMany ONLY WORKS in the following cases:

- Reading a FIX message containing repetitive groups (= groups of tags/keys)
- Generating a CSV with repetitive data

case 1 : generating CSV with repetitive data

Here is the CSV output that we want:

Claus,Ibsen,Camel in Action 1,2010,35
Claus,Ibsen,Camel in Action 2,2012,35
Claus,Ibsen,Camel in Action 3,2013,35
Claus,Ibsen,Camel in Action 4,2014,35

Remark: the repetitive data concern the title of the book and its publication date, while the first name, last name and age are common. The classes used to model this are simple: the Author class contains a List of Book. Very simple, isn't it!!!
case 2 : Reading a FIX message containing a group of tags/keys

Here is the message that we would like to process in our model:

"8=FIX 4.19=2034=135=049=INVMGR56=BRKR"
"1=BE.CHM.00111=CHM0001-0158=this is a camel - bindy test"
"22=448=BE000124567854=1"
"22=548=BE000987654354=2"
"22=648=BE000999999954=3"
"10=220"

Tags 22, 48 and 54 are repeated.

Using the Java DSL

The next step consists of instantiating the DataFormat bindy class associated with this record type and providing Java package name(s) as a parameter. For example, the following uses the class BindyCsvDataFormat (which corresponds to the class associated with the CSV record type), configured with the "com.acme.model" package name to initialize the model objects configured in this package.

Alternatively, you can use a named reference to a data format, which can then be defined in your Registry, e.g. your Spring XML file.

The Camel route will pick up files in the inbox directory, unmarshal CSV records into a collection of model objects and send the collection to the route referenced by 'handleOrders'. The collection returned is a List of Map objects. Each Map within the list contains the model objects that were marshalled out of each line of the CSV. The reason behind this is that each line can correspond to more than one object. This can be confusing when you simply expect one object to be returned per line. Each object can be retrieved using its class name.

Assuming that you want to extract a single Order object from this map for processing in a route, you could use a combination of a Splitter and a Processor, as per the following.

It is really easy to use Spring as your favorite DSL language to declare the routes to be used for camel-bindy. The following example shows two routes, where the first will pick up records from files, unmarshal the content and bind it to the model. The result is then sent to a POJO (doing nothing special) and placed into a queue.
The second route will extract the POJOs from the queue and marshal the content to generate a file containing the CSV record. The example above is for Camel 2.16 onwards.

Be careful: please verify that your model classes implement Serializable, otherwise the queue manager will raise an error.

Dependencies

To use Bindy in your Camel routes you need to add a dependency on camel-bindy, which implements this data format. If you use Maven you can just add the following to your pom.xml, substituting the version number for the latest release (see the download page for the latest versions).

XML

The GZip data format is a message compression and de-compression format. It uses the same deflate algorithm that is used in the Zip data format, although some additional headers are provided. This format is produced by the popular gzip/gunzip tool.

Options

There are no options provided for this data format.

Marshal

In this example we marshal a regular text/XML payload to a compressed payload employing the gzip compression format and send it to an ActiveMQ queue called MY_QUEUE.

Unmarshal

In this example we unmarshal a gzipped payload from an ActiveMQ queue called MY_QUEUE to its original format, and forward it for processing to the UnGZippedMessageProcessor.

Dependencies

This data format is provided in camel-core, so no additional dependencies are needed.

Castor

Protobuf - Protocol Buffers

SOAP DataFormat

ElementNameStrategy

An element name strategy is used for two purposes. The first is to find an XML element name for a given object and SOAP action when marshalling the object into a SOAP message. The second is to find an Exception class for a given SOAP fault name.

Using the Java DSL

Using SOAP 1.2

Available as of Camel 2.11. When using the XML DSL there is a version attribute you can set on the <soapjaxb> element. And in the Camel route:

Multi-part
The ServiceInterfaceStrategy should be initialized with a boolean parameter that indicates whether the mapping strategy applies to the request parameters or to the response parameters.

Multi-part Request
Multi-part Response

You can also have the camel-soap DataFormat ignore header content altogether by setting the ignoreUnmarshalledHeaders value to true.

Holder Object mapping

Examples

Webservice client

The following route supports marshalling the request and unmarshalling a response or a fault. The snippet below creates a proxy for the service interface and makes a SOAP call to the above route.

Webservice Server

Dependencies

To use the SOAP data format in your Camel routes you need to add the following dependency to your pom.

Crypto

Available as of Camel 2.3. PGP available as of Camel 2.9.

Options

Basic Usage

At its most basic, all that is required to encrypt/decrypt an exchange is a shared secret key. If one or more instances of the Crypto data format are configured with this key, the format can be used to encrypt the payload in one route (or part of one) and decrypt it in another. For example, using the Java DSL as follows:

Specifying the Encryption Algorithm

Changing the algorithm is a matter of supplying the JCE algorithm name. If you change the algorithm, you will need to use a compatible key.

Specifying an Initialization Vector

Some crypto algorithms, particularly block algorithms, require configuration with an initial block of data known as an Initialization Vector. In the JCE this is passed as an AlgorithmParameterSpec when the Cipher is initialized. To use such a vector with the CryptoDataFormat you can configure it with a byte[] containing the required data.

Hashed Message Authentication Codes (HMAC)

To avoid attacks against the encrypted data while it is in transit, the CryptoDataFormat can also calculate a Message Authentication Code for the encrypted exchange contents, based on a configurable MAC algorithm.
The calculated HMAC is appended to the stream after encryption, and is separated from the stream again in the decryption phase. The MAC is then recalculated and verified against the transmitted version to ensure nothing was tampered with in transit. For more information, see the documentation on Message Authentication Codes.

Supplying Keys Dynamically

When using a Recipient List or a similar EIP, the recipient of an exchange can vary dynamically. Using the same key across all recipients may be neither feasible nor desirable. It would be useful to be able to specify keys dynamically on a per-exchange basis. The exchange could then be dynamically enriched with the key of its target recipient before being processed by the data format. To facilitate this, the data format allows keys to be supplied dynamically via the message header below:

CryptoDataFormat.KEY "CamelCryptoKey"

PGP Message

The PGP data format can create and decrypt/verify PGP messages of the following PGP packet structure (entries in brackets are optional, ellipses indicate repetition, a comma represents sequential composition, and a vertical bar separates alternatives):

Public Key Encrypted Session Key ..., Symmetrically Encrypted Data | Sym. Encrypted and Integrity Protected Data, (Compressed Data,) (One Pass Signature ...,) Literal Data, (Signature ...,)

Since Camel 2.16.0 the Compressed Data packet is optional; before that, it was mandatory.

PGPDataFormat Options

Create your keyring, entering a secure password.
If you need to import someone else's public key so that you can encrypt a file for them.
The following files should now exist and can be used to run the example.

PGP Decrypting/Verifying of Messages Encrypted/Signed by Different Private/Public Keys

Since Camel 2.12.2. Since Camel 2.12.3.

Support of Sub-Keys and Key Flags in PGP Data Format Marshaler

Since Camel 2.12.3. Since Camel 2.13.

Dependencies

To use the Crypto data format in your Camel routes you need to add the following dependency to your pom.
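The append-verify-split flow described above can be sketched with standard-library primitives. This is a Python illustration of the idea, not Camel's CryptoDataFormat, and the "ciphertext" is just placeholder bytes standing in for an encrypted payload.

```python
import hmac
import hashlib

# Sketch of the HMAC flow described above: append a MAC after the
# (pretend-encrypted) bytes, then split, recompute and compare on the
# receiving side. compare_digest avoids timing side channels.
KEY = b"shared-secret"
MAC_LEN = hashlib.sha256().digest_size

def seal(ciphertext):
    return ciphertext + hmac.new(KEY, ciphertext, hashlib.sha256).digest()

def open_sealed(stream):
    body, mac = stream[:-MAC_LEN], stream[-MAC_LEN:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("MAC mismatch: payload was tampered with")
    return body

sealed = seal(b"encrypted-bytes")
assert open_sealed(sealed) == b"encrypted-bytes"
```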
See Also

Syslog DataFormat

Available as of Camel 2.6. The syslog data format is used for working with RFC 3164 and RFC 5424 messages. This component supports the following:

- UDP consumption of syslog messages
- Agnostic data format, using either plain String objects or SyslogMessage model objects
- Type Converter from/to SyslogMessage and String
- Integration with the camel-mina component
- Integration with the camel-netty component
- Camel 2.14: Encoder and decoder for the camel-netty component
- Camel 2.14: Support for RFC 5424 as well

Maven users will need to add the following dependency to their pom.xml for this component:

RFC 3164 Syslog protocol

RFC 5424 Syslog protocol (available as of Camel 2.14)

To expose a syslog listener service, we reuse the existing camel-mina or camel-netty component and just use the SyslogDataFormat to marshal and unmarshal messages.

Exposing a Syslog listener
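As an aside on the message format itself: the PRI part at the front of an RFC 3164/5424 message encodes facility and severity in a single number (facility * 8 + severity). Here is a minimal sketch of decoding it, independent of camel-syslog's SyslogMessage model.

```python
import re

# Minimal RFC 3164/5424-style PRI parsing (illustrative only, not the
# camel-syslog data format). <34> means facility 4, severity 2.
def parse_pri(message):
    m = re.match(r"<(\d{1,3})>", message)
    if not m:
        raise ValueError("no PRI part")
    pri = int(m.group(1))
    return pri >> 3, pri & 0x07   # (facility, severity)

facility, severity = parse_pri("<34>Oct 11 22:14:15 mymachine su: 'su root' failed")
print(facility, severity)  # 4 2
```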
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=24186021&showComments=true&showCommentArea=true
Fay is a proper subset of Haskell that compiles to JavaScript. Thus it is by definition a statically typed, lazy, pure functional language. If you want a more thorough introduction to Fay, please read Paul Callaghan's Web Programming in Haskell and Oliver Charles's 24 Days of Hackage: fay.

The original intention of Fay was to use Haskell on the client side. If you use a Haskell web framework such as Yesod or Snap, then with Fay you can use the same language on both the client and server sides, and some code can actually be shared. However, because Fay is simply a subset of Haskell that compiles to JavaScript with no dependencies on the client side, you can use it on the server side too, in combination with Node.js. I am not saying it is actually a good idea to write server code in Fay, but it is at least fun to investigate the feasibility.

Here is a web server example written in Fay.

{-# LANGUAGE EmptyDataDecls #-}
module Hello where

EmptyDataDecls is required because JavaScript types are represented by empty data declarations in Fay.

import FFI

The FFI module provides a foreign function interface.

data Http
data HttpServer
data Request
data Response

Http, HttpServer, Request and Response are the JavaScript types we use in this example. They are represented by empty data declarations.

requireHttp :: Fay Http
requireHttp = ffi "require('http')"

This is a simple example of an FFI declaration. It returns the result of require('http') as a Http instance. Fay is a monad which is similar to the IO monad. Because an FFI function often has side effects, the Fay monad is used to represent this.
createServer :: Http -> (Request -> Response -> Fay ()) -> Fay HttpServer
createServer = ffi "%1.createServer(%2)"

consoleLog :: String -> Fay ()
consoleLog = ffi "console.log(%1)"

listen :: HttpServer -> Int -> String -> Fay ()
listen = ffi "%1.listen(%2, %3)"

writeHead :: Response -> Int -> String -> Fay ()
writeHead = ffi "%1.writeHead(%2, %3)"

end :: Response -> String -> Fay ()
end = ffi "%1.end(%2)"

These FFI declarations use the %1, %2, ... placeholders that correspond to the arguments we specify in the type. Most Fay types are automatically serialized and deserialized. Note that we can only use point-free style in FFI functions.

main :: Fay ()
main = do
  http <- requireHttp
  server <- createServer http (\req res -> do
    writeHead res 200 "{ 'Content-Type': 'text/plain' }"
    end res "Hello World\n"
    )
  listen server 1337 "127.0.0.1"
  consoleLog "Server running at"

main is the entry point to our web server example. Its return type is Fay () because a Fay program can't do anything without interacting with the world outside. Because we have already wrapped all the Node.js APIs we use, we can program as if we were writing a normal Haskell program. Compare our Fay web server program with the original Node.js program. Except for the FFI bindings, the main code is almost the same as before. However, our version is much more type-safe!

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at');
http://kseo.github.io/posts/2014-03-11-fay-with-nodejs.html
save an image as a file in product_images_olbs

Hello, I'm using product_images_olbs, and it's working well. I'm trying to modify the code so that I save the uploaded image under a different name in a file, but it doesn't work! Here's my code:

def get_image(self, cr, uid, id):
    each = self.read(cr, uid, id, ['link', 'filename', 'image'])
    if each['link']:
        try:
            (filename, header) = urllib.urlretrieve(each['filename'])
            f = open(filename, 'rb')
            data = f.read()
            img = base64.encodestring(data)
            f.close()
            with open('test.jpg', 'wb') as f2:
                f2.write(data)
                f2.close()
        except:
            img = ''
    else:
        img = each['image']
    return img

This code works when it's launched outside OpenERP! Can someone help me figure out why the file test.jpg is not created?
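The save step can be reproduced in isolation outside Odoo. The sketch below uses made-up bytes and paths purely for illustration (base64.encodebytes is the Python 3 spelling of the encodestring call above); writing to an absolute path and not swallowing exceptions with a bare except are the two things that make a silently missing test.jpg easy to diagnose.

```python
import base64
import os
import tempfile

# Standalone re-creation of the save step from the question, with no
# Odoo/OpenERP imports: encode bytes the way the 'image' field expects,
# then write the raw bytes to an absolute path. A relative 'test.jpg'
# lands in whatever the server's working directory happens to be.
data = b"\xff\xd8\xff\xe0 fake jpeg bytes"   # stand-in for downloaded data
img = base64.encodebytes(data)               # what would go into 'image'

path = os.path.join(tempfile.gettempdir(), "test.jpg")
with open(path, "wb") as f2:                 # 'with' closes the file itself
    f2.write(data)

assert os.path.exists(path)
assert base64.decodebytes(img) == data
```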
https://www.odoo.com/forum/help-1/question/save-an-image-as-a-file-in-product-images-olbs-27824
Writing DSLs in Groovy

Summary

In this presentation recorded at QCon London 2009, after a short introduction to DSLs, Scott Davis plays with the keyboard, showing how to approach the creation of a DSL by typing working snippets of Groovy code that get executed in front of the audience.

Bio

Author of the book Groovy Recipes: Greasing the Wheels of Java, Scott has been involved in creating web sites in Grails since 2006. Scott teaches public and private classes on Groovy and Grails for start-ups and Fortune 100 companies. He is the co-founder of the Groovy/Grails Experience conference and ThirstyHead.com, a training company that specializes in Groovy and Grails.

Should read "scripting with Groovy" by Hermann Schmidt

In the last 10 minutes or so Scott quickly demonstrates how to extend the meta class of a closed (final) Java class (Integer) to do the ubiquitous "2.hours + 10.minutes" example. That's more like it.

2.hours and 10.minutes by Hossam Karim

class Movie(var title: String, var duration: Int) {
  override def toString = title + " runs for " + duration + " minutes"
}

implicit def units(i: Int) = new {
  def hours = i * 60
  def minutes = i
  def and(j: Int) = i + j
}

val starWars = new Movie("Star Wars", 2.hours and 30.minutes)
println(starWars)

great presentation by Gilad Manor

The video seems to not work anymore... by Dragan Stankovic
Re: The video seems to not work anymore... by Dragan Stankovic
Re: The video seems to not work anymore... by Floyd Marinescu
Re: Should read by Scott Davis

grade by Gene De Lisa

The videographer was a bit clueless on the terminal screenshots, zooming in on the left side of directory listings. We were all wondering what the permissions were and not what he was talking about, right? As a presenter he gets a D. The first half is mostly about him. Count how many times he uses the word "I". That is the problem with these conferences.
The entire point is gratifying the ego of the presenter; the anti-pattern to Kathy Sierra's "you rock".
https://www.infoq.com/presentations/Writing-DSL-in-Groovy-Scott-Davis
Makes a copy of an entry, its DN, and its attributes. #include "slapi-plugin.h" Slapi_Entry *slapi_entry_dup( const Slapi_Entry *e ); This function takes the following parameter: Entry that you want to copy. This function returns the new copy of the entry. If the structure cannot be duplicated (for example, if no more virtual memory exists), the slapd program terminates. This function returns a copy of an existing Slapi_Entry structure. You can call other front-end functions to change the DN and attributes of this entry. When you are no longer using the entry, you should free it from memory by calling the slapi_entry_free() function.
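The copy-then-mutate pattern that slapi_entry_dup() enables can be illustrated in miniature with Python's copy module. This is only an analogy, not the SLAPI API: the duplicate is fully independent, so changing its DN and attributes leaves the original entry untouched (with garbage collection playing the role of slapi_entry_free()).

```python
import copy

# Analogy for slapi_entry_dup(): duplicate an "entry", then freely
# change the copy's DN and attributes without touching the original.
entry = {"dn": "uid=jdoe,ou=people,dc=example,dc=com",
         "attrs": {"cn": ["John Doe"], "mail": ["jdoe@example.com"]}}

dup = copy.deepcopy(entry)          # like slapi_entry_dup(e)
dup["dn"] = "uid=jdoe,ou=staff,dc=example,dc=com"
dup["attrs"]["mail"].append("john@example.com")

assert entry["dn"].startswith("uid=jdoe,ou=people")     # original unchanged
assert entry["attrs"]["mail"] == ["jdoe@example.com"]
```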
http://docs.oracle.com/cd/E19693-01/819-0996/aaigo/index.html
perl -MFile::Find -we '$x='File::Find'; print ${ $y = "$x::VERSION" } print $y'

perl -MFile::Find -we '$x='File::Find'; print ${ $y = $x."::VERSION" } print $y'

/J\

package x;
$VERSION = 'bar';

package Foo;
$VERSION = '1.01';

package main;
$bar = '0.01';
$x = 'Foo';
print ${"$x::VERSION"}, $/;
print ${$x . "::VERSION"}, $/;

----
0.01
1.01

The first goes and gets the thing in $x::VERSION (the $VERSION variable in the x:: namespace), then dereferences it as a symbolic reference in the main:: namespace. The second takes the $x variable in the main:: namespace and uses it to create a variable name that is used as a symbolic reference. In this case, the variable $Foo::VERSION.

I'm not sure on the other questions, but you don't have a comma or semi-colon after the ${ ... }. Maybe that's changing your
http://www.perlmonks.org/index.pl/jacques?node_id=373962
What is React Native for Web? React Native Web is an awesome web development library available for frontend developers. The beauty of React Native Web is that it can be used to run an application on any platform just using a single codebase. React Native for web makes it possible to run React Native components and APIs on the web with the help of React DOM. In other words, React Native Web makes it easy to bring your React Native app to the web. Who maintains React Native Web? React Native was originally developed by Facebook in 2015, and React Native Web was created that same year by Nicolas Gallagher as an open source GitHub repository. React Native supports the web from version 0.60 upwards. You may already know that, with React Native, developers are able to build cross-platform mobile apps which support both Android and iOS. But what about the web? That is exactly the issue React Native for Web was created to address. Using React Native Web, developers can consolidate a React Native app into a single codebase without having to develop and maintain two codebases for both mobile and web, and without a loss in app performance. Their apps will render correctly on the web and perform just as well as they do on mobile devices. Why should you use React Native for Web? As mentioned above, the core advantage of React Native Web is that you can write the code once and share it across multiple platforms. Another important advantage of this library is its native-quality interactions. Regardless of whether you are using it on your personal computer or in the browser of your mobile device, you get support for multiple input modes such as touch, mouse, or keyboard. For example, if you create a <Button/> with an onLongPress property, it will be handled correctly across all platforms. The other bright side of this library is its support for accessibility. React Native Web includes APIs that enable developers to build more accessible apps. 
The highly supported accessibility components of the React Native Web are accessible, accessibilityLabel, importantForAccessibility, accessibilityRole, and accessibilityLiveRegion. You should consider using these features in order to ensure that every user has equal access to your app. There are also other interesting features such as RTL (Right-to-Left) support, which automatically flips your app layout to support right-to-left languages. This is critical for developers who are planning to expand into new markets. It’s also noteworthy that React Native Web supports server-side rendering, and you can integrate it with some popular tools like Gatsby or Next. You can find examples of those tools in the React Native Web repository on GitHub. To sum it up, the advantages of React Native Web are: - Single codebase to share across multiple platforms and devices - Native-quality interactions - Support for accessibility - RTL support - Server side rendering support - Integration with static pages What companies use React Native Web? Even though React Native for Web is a relatively recent library and some features are still missing, it is used to power massive websites and web apps such as Twitter, Major League Soccer, Expo, Flipkart, Uber, DataCamp, and The Times. How does React Native Web work? React Native Web enables you to develop multi-platform applications by providing browser-compatible implementations of React Native's core components. For instance, the View component used in React Native has a DOM-based version that is aware of how to render a div. React Native Web utilizes this (and other) translations to properly render mobile components in the browser. Even though every React Native component is not supported, enough of them are that you can translate most of your React Native codebase into a fully functional web app. In addition to core components, styles for React and React Native are written differently. 
With React, many developers use plain CSS or a CSS preprocessor like Sass. But in React Native, all styles are written with JavaScript, as there is no DOM or selectors. React Native Web corrects this by rendering styles with Javascript instead of using CSS. This gives developers the benefit of writing a single set of styles which applies to both native mobile and web. Implementing react-native-web Below is the step-by-step process for converting React Native apps into web apps using React Native Web. Requirements - NodeJS version 8.x.x or above and npm/yarn - Basic understanding of React Native, ReactJS and ES6 JavaScript features will be useful Getting Started: Creating a React Native App To begin, you should go to the terminal and install the necessary command line tools. The first one will help you scaffold a React Native app, while the other one will run it. npm install -g create-react-app expo After installing the above tools, run the below command from the terminal in order to create a new React project. create-react-app reactnativeweb-demo This will create a new directory. An important feature of this project creation tool is that it comes with integrated support for aliasing react-native-web to react-native. Next, you will have to install a couple of more dependencies to make this project work as required. For that, run the below command after traversing inside the newly created project folder. yarn add --dev babel-plugin-module-resolver The babel plugin (babel-plugin-module-resolver) will be helpful in resolving the project modules when compiling with Babel. The Babel compiler is used by React Native internally. Now you have to install the key dependencies without which the project cannot be run. 
yarn add react-native expo react-native-web

The final step is to create a file named .babelrc at the root of the project directory with the below code snippet:

{
  "plugins": [
    [
      "module-resolver",
      {
        "alias": {
          "^react-native$": "react-native-web"
        }
      }
    ]
  ]
}

Running on the Web

Before proceeding to build a demo app, we have to check whether the current configuration works properly. For that, open your src/App.js file and replace its content with the following code snippet:

import React from "react"
import { StyleSheet, Text, View } from "react-native"

class App extends React.Component {
  render() {
    return (
      <View>
        <Text style={styles.text}>Hello, world!</Text>
      </View>
    )
  }
}

const styles = StyleSheet.create({
  text: {
    fontWeight: "bold",
    fontSize: 30
  }
})

export default App

Here, we're primarily using UI components built with the react-native API for the web application. In React Native terms, View is equal to a div in HTML and Text is equal to a p or a span. There's a strong resemblance between the APIs of the mobile and web UIs, but there are also some major differences to grasp and understand. In order to run the application, go to the terminal window, type npm start, and execute it. By hitting the URL in your browser window, you can see the text "Hello, world!" rendered as output.
Also, React Native Web is the ideal choice from the outset if you are planning to build both web and mobile apps together, as it offers maximum code sharing across multiple platforms. With React, on the other hand, you will have to maintain the view files separately. If you are going to build only a website with no mobile version, React will be the smarter choice. Bug fixes, issues, and planned features for React Native Web in 2021 Although React Native is one of the hottest frameworks right now, it's not without its limitations. The same is true for React Native Web. React Native Web’s chief axes for improvement are its lack of maturity and some missing features. It still has a long way to go in order to overcome various performance issues, limitations, and challenges. One such issue is that if your project contains some libraries that rely on native dependencies, they won’t work on the web. And even if you only use libraries which are free of native dependencies, you can’t expect them to work 100% of the time. Also, not all of the React Native APIs are available in the browser (although some are still in the process of being translated). The following are some of the key bug fixes planned for the React Native Web library in the coming months. - Nested FlatList errors. - Send onRequestClose to the correct modal even with animations. - TextInput selectTextOnFocus prop (Safari). Below are some new features that are still under development for React Native Web. - Provide better API for opening in new tab - Implementing BackHandler for web - Enabling window/body based scrolling for all ScrollViews - Support image source objects - Implementing refocusing trigger-element after closing modal - Linking API: Support for New Tab - Implementing refocusing trigger-element after closing modal - Adding support for TV Devices - Adding support for opening a link in a new tab. 
Building Universal Apps with React Native Creating universal apps is a dream that came true for many developers, and React Native Web is close to making that dream a reality. The critical factor that makes React Native a perfect fit for creating universal apps is that it is a pure UI language. It specifies some base components which define UI primitives, and they are considered to be independent of the platform which runs them. All of the components we can create in React Native are based on primitives like <View>, <Text>, or <Image>, which are basic elements that make sense to any visual interface, no matter where it is run. When it comes to styling, styling react-native-web components is exactly the same as styling react-native components. In the event that you want to have specific styling for the web, you can always write the conditional styling using the Platform.OS === ‘web’ check. Building with React Native Web using Crowdbotics The Crowdbotics App Builder is designed to facilitate the rapid development of universal software applications. It allows developers to scaffold and deploy working apps quickly by identifying the best packages for a given feature set. Our App Builder runs on RAD stack, which consists of React Native and Django. This enables you to set up and configure React Native Web in a React Native mobile app generated on the Crowdbotics platform. To make things even easier, we're planning to release in-app support for React Native Web during Q4 2020, which will create universal, cross-platform apps from a single codebase by default. Wrapping up Web and mobile development are growing and changing at a rapid pace. While it’s true that a mobile app can give you more control and better performance, it is often advisable to also release a web application for true cross-platform user engagement. React Native Web offers you the ability to create a highly scalable web application without having to maintain separate code bases for mobile and web. 
Even though this library is not yet an official part of the React Native project, its increasing popularity could revolutionize how we think about universal app development.
https://blog.crowdbotics.com/the-state-of-react-native-for-web-in-2021/
Comment on Tutorial - How to use 'implements Runnable' in Java By Emiley J Comment Added by : oxesibilux Comment Added at : 2017-04-19 23:03:19 Comment on Tutorial : How to use 'implements Runnable' in Java By Emiley J oxesibil. Hi I am new to struts framework. Can u tell how to View Tutorial By: Nithya at 2012-11-27 14:03:46 3. its not working. View Tutorial By: raja at 2012-08-29 11:44:45 4. Thank you for this. I am a beg and this is making View Tutorial By: mac at 2011-08-23 13:47:15 5. package practice_bo; import practic View Tutorial By: nishi at 2011-12-09 02:00:20 6. I am doing a project to for sending sms from the s View Tutorial By: visalakshi at 2010-03-30 20:55:50 7. Can anybody tell me how to send and receive SMS us View Tutorial By: Bharat Lahori at 2010-02-04 07:14:19 8. sir needed help i am compiler dev c++ which shown View Tutorial By: shahab at 2010-05-05 13:17:53 9. Thanks for this nice comparision table for Spring View Tutorial By: guddu at 2009-10-17 03:08:59 10. guys i am getting this error..plz tell me the solu View Tutorial By: Ajit at 2010-06-17 07:38:51
http://java-samples.com/showcomment.php?commentid=40929
import java.net.*;

class DatagramPeer {
    private DatagramSocket au_Send;
    static DatagramPeer dgp;
    private int port = 30314; // pop this into the DatagramSocket constructor to see a different effect

    DatagramPeer() {
        try {
            au_Send = new DatagramSocket(); // like this, the system allocates a port.
            System.out.println("DatagramPeer's socket is at address : " + this.au_Send.getLocalSocketAddress());
        } catch (SocketException exSocket) {
            exSocket.printStackTrace();
        }
    }

    public static void main(String[] args) {
        dgp = new DatagramPeer();
        PeerClass pc = new PeerClass(dgp.au_Send.getLocalSocketAddress());
    }

    static class PeerClass {
        private DatagramSocket dgs;

        PeerClass(SocketAddress sa) {
            try {
                dgs = new DatagramSocket();
                dgs.connect(sa);
                System.out.println("dgs is " + dgs.isConnected() + " connected.");
                System.out.println("This inner PeerClass's socket is NOW connected, on port " + dgs.getPort());
                System.out.println("The DatagramPeer's socket is NOT connected, and so returns " + dgp.au_Send.getPort() + " for the port.");
                dgp.au_Send.connect(dgs.getLocalSocketAddress());
            } catch (Exception ep) {
                ep.printStackTrace();
            }
            System.out.println("PeerClass is connected to remote socket at : " + dgs.getRemoteSocketAddress());
            System.out.println("PeerClass's own local socket is at address : " + dgs.getLocalSocketAddress());
            System.out.println("But now DatagramPeer's socket is connected too, and so returns " + dgp.au_Send.getPort());
        }
    }
}

For some baffling reason, the DatagramSocket on the serverClient never gets made, in serverclient.java, line 31:

UDPsocket = new DatagramSocket(UDPPORT);
However, you are creating the DatagramSocket "unbound", and I'm guessing what you actually mean to do is something like this:

UDPsocket = new DatagramSocket(UDPPORT);

i.e. you have to pass it the port that it should be listening on.

You can use .getLocalSocketAddress(), which will return the port as part of the SocketAddress field.

Thanks, but how much longer am I going to get away with programming this all on one machine?! It seems to work adequately with 2 clients. Will that very soon just become a disaster with more than 2 clients? Can I do dev with 2 client engines on my PC, and the server on my MacBook? I can make this gameplay initially just a JFrame of 800x800, to move letters around, so that each window is in perfect sync... until the network looks solid? Thanks
https://www.experts-exchange.com/questions/28391833/java-null-DatagramSocket-in-RTS-server-puzzling-me.html
CC-MAIN-2018-13
refinedweb
447
51.55
send a message to another process

#include <sys/kernel.h>
#include <sys/sendmx.h>

int Sendmx( pid_t pid,
            unsigned sparts,
            unsigned rparts,
            struct _mxfer_entry *smsg,
            struct _mxfer_entry *rmsg );

The kernel function Sendmx() sends a message, taken from the array of buffers pointed to by smsg, to the process identified by pid. Any reply is placed in the array of buffers pointed to by rmsg. The number of elements in the send array is given by sparts, while the number of elements in the receive array is given by rparts. The size of these arrays must not exceed _QNX_MXTAB_LEN (defined in <limits.h>).

If you Send to a process that doesn't exist, or one that dies while you're BLOCKED on it, Sendmx() returns -1 and errno is set to ESRCH. Sendmx() may be interrupted by a signal, in which case it returns -1 and errno is set to EINTR.

It's quite common to send two-part messages consisting of a fixed header and a buffer of data; Sendmx() lets you describe each part with its own _mxfer_entry, so the parts need not be copied into one contiguous buffer. The example below demonstrates this.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/kernel.h>
#include <sys/sendmx.h>

/* Define all messages that are sent and replied */
#define WRDATA 1
#define STOP   2

/* The sizeof(type) == sizeof(status) in all messages */
struct msg_wrdata {
    short unsigned type;
    short unsigned nbytes;
};
struct msg_wrdata_reply {
    short unsigned status;
};
struct msg_stop {
    short unsigned type;
};
struct msg_stop_reply {
    short unsigned status;
};

/* Define the union of all messages */
union {
    short unsigned type;
    short unsigned status;
    struct msg_wrdata       wrdata;
    struct msg_wrdata_reply wrdata_reply;
    struct msg_stop         stop;
    struct msg_stop_reply   stop_reply;
} msg;

char buffer[1000];

void server( void );
void client( pid_t child );
int  wrdata( pid_t pid, char *buf, unsigned int nbytes );
int  stop( pid_t pid );

void main( void )
{
    pid_t child;

    if( child = fork() )
        client( child );
    else
        server();
    exit( EXIT_SUCCESS );
}

void server( void )
{
    pid_t pid;
    unsigned nbytes;
    struct _mxfer_entry mx;

    for( ;; ) {
        _setmx( &mx, &msg, sizeof msg );
        pid = Receivemx( 0, 1, &mx );
        nbytes = sizeof( msg.status );
        switch( msg.type ) {
        case WRDATA:
            printf( "Server WRDATA %d ", msg.wrdata.nbytes );
            /*
             * For speed you could have the receive read the
             * data in one gulp rather than invoke Readmsgmx.
             * You would need a different structure for the
             * server which included the max number of bytes
             * of data you wished to read.
             */
            _setmx( &mx, buffer, msg.wrdata.nbytes );
            Readmsgmx( pid, sizeof( msg.wrdata ), 1, &mx );
            fwrite( buffer, msg.wrdata.nbytes, 1, stdout );
            fflush( stdout );
            msg.wrdata_reply.status = EOK;
            break;
        case STOP:
            /*
             * Note that for this example we terminate without
             * replying to show that the client Send unblocks.
             */
            printf( "Server STOP\n" );
            fflush( stdout );
            return;
        default:
            printf( "Server unknown message %04X\n", msg.type );
            msg.status = ENOSYS;
            break;
        }
        _setmx( &mx, &msg, nbytes );
        Replymx( pid, 1, &mx );
    }
}

void client( pid_t child )
{
    int r;

    printf( "Client WRDATA\n" );
    r = wrdata( child, "Hello world!\n", 13 );
    printf( "Client WRDATA %d %d\n", r, errno );
    printf( "Client STOP\n" );
    r = stop( child );
    printf( "Client STOP %d %d\n", r, errno );
}

int wrdata( pid_t pid, char *buf, unsigned int nbytes )
{
    union {
        struct msg_wrdata       s;
        struct msg_wrdata_reply r;
    } wmsg;
    struct _mxfer_entry mx[2];

    /* Set up the message header. */
    wmsg.s.type = WRDATA;
    wmsg.s.nbytes = nbytes;
    _setmx( &mx[0], &wmsg, sizeof( wmsg.s ) );

    /* Set up the message data description. */
    _setmx( &mx[1], buf, nbytes );

    /* The two-part send; the reply (status only) overwrites wmsg. */
    if( Sendmx( pid, 2, 1, mx, mx ) == -1 )
        return( -1 );
    if( wmsg.r.status != EOK ) {
        errno = wmsg.r.status;
        return( -1 );
    }
    return( nbytes );
}

int stop( pid_t pid )
{
    union {
        struct msg_stop       s;
        struct msg_stop_reply r;
    } smsg;
    struct _mxfer_entry mx[2];

    smsg.s.type = STOP;
    _setmx( mx, &smsg.s, sizeof( smsg.s ) );
    _setmx( mx + 1, &smsg.r, sizeof( smsg.r ) );
    if( Sendmx( pid, 1, 1, mx, mx + 1 ) == -1 )
        return( -1 );
    if( smsg.r.status != EOK ) {
        errno = smsg.r.status;
        return( -1 );
    }
    return( 0 );
}

Classification: QNX. Sendmx() is a macro.
See also: Creceive(), Creceivemx(), errno, Receive(), Receivemx(), Reply(), Replymx(), Readmsg(), Readmsgmx(), Send(), Sendfd(), Sendfdmx(), Writemsg(), Writemsgmx(), Trigger()
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/qnx/sendmx.html
CC-MAIN-2022-33
refinedweb
621
66.23
The Constant concept represents data that can be manipulated at compile-time. At its core, Constant is simply a generalization of the principle behind std::integral_constant to all types that can be constructed at compile-time, i.e. to all types with a constexpr constructor (also called Literal types). More specifically, a Constant is an object from which a constexpr value may be obtained (through the value method) regardless of the constexprness of the object itself. All Constants must be somewhat equivalent, in the following sense. Let C(T) and D(U) denote the tags of Constants holding objects of type T and U, respectively. Then, an object with tag D(U) must be convertible to an object with tag C(T) whenever U is convertible to T, as determined by is_convertible. The interpretation here is that a Constant is just a box holding an object of some type, and it should be possible to swap between boxes whenever the objects inside the boxes can be swapped. Because of this last requirement, one could be tempted to think that specialized "boxes" like std::integral_constant are prevented from being Constants because they are not able to hold objects of any type T (std::integral_constant may only hold integral types). This is false; the requirement should be interpreted as saying that whenever C(T) is meaningful (e.g. only when T is integral for std::integral_constant) and there exists a conversion from U to T, then a conversion from D(U) to C(T) should also exist. The precise requirements for being a Constant are embodied in the following laws; the minimal complete definition is the pair of functions value and to, satisfying those laws. Let c be an object with tag C, which represents a Constant holding an object with tag T. The first law ensures that the value of the wrapped object is always a constant expression by requiring the following to be well-formed: constexpr auto x = value<decltype(c)>(); This means that the value function must return an object that can be constructed at compile-time.
It is important to note how value only receives the type of the object and not the object itself. This is the core of the Constant concept; it means that the only information required to implement value must be stored in the type of its argument, and hence be available statically. The second law that must be satisfied ensures that Constants are basically dumb boxes, which makes it possible to provide models for many concepts without much work from the user. The law simply asks for the expression to<C>(i) to be valid, where i is an arbitrary Constant holding an internal value with a tag that can be converted to T, as determined by the hana::is_convertible metafunction. In other words, whenever U is convertible to T, a Constant holding a U is convertible to a Constant holding a T, if such a Constant can be created. Finally, the tag C must provide a nested value_type alias to T, which allows us to query the tag of the inner value held by objects with tag C; in other words, C::value_type must be T for any object c with tag C. In certain cases, a Constant can automatically be made a model of another concept. In particular, if a Constant C is holding an object of tag T, and if T models a concept X, then C may in most cases model X by simply performing whatever operation is required on its underlying value, and then wrapping the result back in a C. More specifically, if a Constant C has an underlying value (C::value_type) which is a model of Comparable, Orderable, Logical, or Monoid up to EuclideanRing, then C must also be a model of those concepts. In other words, when C::value_type models one of the listed concepts, C itself must also model that concept. However, note that free models are provided for all of those concepts, so no additional work must be done. While it would be possible in theory to provide models for concepts like Foldable too, only a couple of concepts are useful to have as Constant in practice.
Providing free models for the concepts listed above is useful because it allows various types of integral constants (std::integral_constant, mpl::integral_c, etc.) to easily have models for them just by defining the Constant concept. Constant is actually the canonical embedding of the subcategory of constexpr things into the Hana category, which contains everything in this library. Hence, whatever is true in that subcategory is also true here, via this functor. This is why we can provide models of any concept that works on constexpr things for Constants, by simply passing them through that embedding. Any Constant c holding an underlying value of tag T is convertible to any tag U such that T is convertible to U. Specifically, the conversion is equivalent to converting the underlying constexpr value to U. Also, those conversions are marked as an embedding whenever the conversion of underlying types is an embedding. This is to allow Constants to inter-operate with constexpr objects easily. Strictly speaking, this is sometimes a violation of what it means to be an embedding. Indeed, while there exists an embedding from any Constant to a constexpr object (since Constant is just the canonical inclusion), there is no embedding from a Constant to a runtime object, since we would lose the ability to define the value method (the constexprness of the object would have been lost). Since there is no way to distinguish constexpr and non-constexpr objects based on their type, Hana has no way to know whether the conversion is to a constexpr object or not. In other words, the to method has no way to differentiate between a conversion that is an embedding and one that isn't. To be on the safer side, we could mark the conversion as not-an-embedding. However, if e.g. the conversion from integral_constant_tag<int> to int was not marked as an embedding, we would have to write plus(to<int>(int_<1>), 1) instead of just plus(int_<1>, 1), which is cumbersome.
Hence, the conversion is marked as an embedding, but this also means that code like plus(int_<1>, 1) will be considered valid, which implicitly loses the fact that int_<1> is a Constant, and hence does not follow the usual rules for cross-type operations in Hana. Because of the requirement that Constants be interchangeable when their contents are compatible, two Constants A and B will have a common data type whenever A::value_type and B::value_type have one. Their common data type is an unspecified Constant C such that C::value_type is exactly common_t<A::value_type, B::value_type>. A specialization of the common metafunction is provided for Constants to reflect this. In the same vein, a common data type is also provided from any Constant A to a type T such that A::value_type and T share a common type. The common type between A and T is obviously the common type between A::value_type and T. As explained above in the section on conversions, this is sometimes a violation of the definition of a common type, because there must be an embedding to the common type, which is not always the case. For the same reasons as explained above, this common type is still provided.

hana::value (defined in <boost/hana/fwd/value.hpp>)

Return the compile-time value associated to a constant. This function returns the value associated to a Constant; that value is always a constant expression. The normal way of using value on an object c is value<decltype(c)>(). However, for convenience, an overload of value is provided so that it can be called as value(c). This overload works by taking a const& to its argument, and then forwarding to the first version of value. Since it does not use its argument, the result can still be a constant expression, even if the argument is not a constant expression. value<T>() is tag-dispatched as value_impl<C>::apply<T>(), where C is the tag of T. hana::value is an overloaded function, not a function object. Hence, it can't be passed to higher-order algorithms.
If you need an equivalent function object, use hana::value_of instead. Referenced by boost::hana::literals::operator""_c(), boost::hana::literals::operator""_s(), and boost::hana::optional< T >::optional().

hana::value_of (defined in <boost/hana/fwd/value.hpp>)

Equivalent to value, but can be passed to higher-order algorithms. This function object is equivalent to value, except it can be passed to higher-order algorithms because it is a function object; value can't be, because it is implemented as an overloaded function. value_of is just a thin wrapper over value, and hence it is not tag-dispatched and can't be customized.
https://www.boost.org/doc/libs/1_62_0/libs/hana/doc/html/group__group-Constant.html
CC-MAIN-2018-22
refinedweb
1,451
50.36