Pinned topic: How to write special characters to XML file?
Posted 2012-10-12T04:36:47Z by SystemAdmin | Updated 2012-10-12T10:37:04Z

Hi all, I am trying to use XQuery to translate XML to RDF/XML. I started by writing a script that reads the XML nodes and simply translates them to RDF/XML, producing output of the form:

```
<namespace:node-name rdf:...></namespace:node-name>
```

My script follows:

```
let $lt := "&lt;"
let $gt := "&gt;"
for $node in doc("doc.xml")//*
return concat($lt, "namespace", ":", node-name($node), $gt)
```

Unfortunately it did not do the trick: the output still contains the escaped form &lt; rather than a real <. I tried using character references as well, as in "&#code;". Please help.

Accepted answer (SystemAdmin, 2012-10-12T10:37:04Z):

You should write the query to construct a tree of nodes, not to construct lexical XML. The way to write an element node to the tree is to use an element constructor, such as

```
<e x="1" y="{$var}"/>
```

or, if the element name needs to be dynamic, a computed element constructor:

```
element {$name} {$content}
```
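A minimal sketch of the accepted answer's advice applied to the poster's query. The namespace prefix and URI (`ns`, `http://example.org/ns`) are hypothetical stand-ins for whatever namespace the poster intends; the point is that a computed element constructor builds real nodes, so the serializer handles `<` and `>` itself:

```
declare namespace ns = "http://example.org/ns";  (: hypothetical namespace URI :)

for $node in doc("doc.xml")//*
return
  (: computed element constructor: builds a node, not a string,
     so no manual &lt;/&gt; escaping is needed :)
  element { QName("http://example.org/ns", concat("ns:", local-name($node))) }
          { $node/text() }
```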
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014896781
```java
/*
 * @(#)SelectionKey.java	1.24 03/12/19
 *
 * Copyright 2004 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.nio.channels;

import java.io.IOException;


/**
 * A token representing the registration of a {@link SelectableChannel} with a
 * {@link Selector}.
 *
 * <p> A selection key is created each time a channel is registered with a
 * selector.  A key remains valid until it is <i>cancelled</i> by invoking its
 * {@link #cancel cancel} method, by closing its channel, or by closing its
 * selector.  Cancelling a key does not immediately remove it from its
 * selector; it is instead added to the selector's
 * <i>cancelled-key set</i> for removal during the
 * next selection operation.  The validity of a key may be tested by invoking
 * its {@link #isValid isValid} method.
 *
 * <a name="opsets">
 *
 * <p> A selection key contains two <i>operation sets</i> represented as
 * integer values.  Each bit of an operation set denotes a category of
 * selectable operations that are supported by the key's channel.
 *
 * <ul>
 *
 *   <li><p> The <i>interest set</i> determines which operation categories will
 *   be tested for readiness the next time one of the selector's selection
 *   methods is invoked.  The interest set is initialized with the value given
 *   when the key is created; it may later be changed via the {@link
 *   #interestOps(int)} method. </p></li>
 *
 *   <li><p> The <i>ready set</i> identifies the operation categories for which
 *   the key's channel has been detected to be ready by the key's selector.
 *   The ready set is initialized to zero when the key is created; it may later
 *   be updated by the selector during a selection operation, but it cannot be
 *   updated directly. </p></li>
 *
 * </ul>
 *
 * <p> That a selection key's ready set indicates that its channel is ready for
 * some operation category is a hint, but not a guarantee, that an operation in
 * such a category may be performed by a thread without causing the thread to
 * block.  A ready set is most likely to be accurate immediately after the
 * completion of a selection operation.  It is likely to be made inaccurate by
 * external events and by I/O operations that are invoked upon the
 * corresponding channel.
 *
 * <p> This class defines all known operation-set bits, but precisely which
 * bits are supported by a given channel depends upon the type of the channel.
 * Each subclass of {@link SelectableChannel} defines an {@link
 * SelectableChannel#validOps() validOps()} method which returns a set
 * identifying just those operations that are supported by the channel.  An
 * attempt to set or test an operation-set bit that is not supported by a key's
 * channel will result in an appropriate run-time exception.
 *
 * <p> It is often necessary to associate some application-specific data with a
 * selection key, for example an object that represents the state of a
 * higher-level protocol and handles readiness notifications in order to
 * implement that protocol.  Selection keys therefore support the
 * <i>attachment</i> of a single arbitrary object to a key.  An object can be
 * attached via the {@link #attach attach} method and then later retrieved via
 * the {@link #attachment attachment} method.
 *
 * <p> Selection keys are safe for use by multiple concurrent threads.  The
 * operations of reading and writing the interest set will, in general, be
 * synchronized with certain operations of the selector.  Exactly how this
 * synchronization is performed is implementation-dependent: In a naive
 * implementation, reading or writing the interest set may block indefinitely
 * if a selection operation is already in progress; in a high-performance
 * implementation, reading or writing the interest set may block briefly, if at
 * all.  In any case, a selection operation will always use the interest-set
 * value that was current at the moment that the operation began. </p>
 *
 *
 * @author Mark Reinhold
 * @author JSR-51 Expert Group
 * @version 1.24, 03/12/19
 * @since 1.4
 *
 * @see SelectableChannel
 * @see Selector
 */

public abstract class SelectionKey {

    /**
     * Constructs an instance of this class.
     */
    protected SelectionKey() { }


    // -- Channel and selector operations --

    /**
     * Returns the channel for which this key was created.  This method will
     * continue to return the channel even after the key is cancelled. </p>
     *
     * @return  This key's channel
     */
    public abstract SelectableChannel channel();

    /**
     * Returns the selector for which this key was created.  This method will
     * continue to return the selector even after the key is cancelled. </p>
     *
     * @return  This key's selector
     */
    public abstract Selector selector();

    /**
     * Tells whether or not this key is valid.
     *
     * <p> A key is valid upon creation and remains so until it is cancelled,
     * its channel is closed, or its selector is closed. </p>
     *
     * @return  <tt>true</tt> if, and only if, this key is valid
     */
    public abstract boolean isValid();

    /**
     * Requests that the registration of this key's channel with its selector
     * be cancelled.  Upon return the key will be invalid and will have been
     * added to its selector's cancelled-key set.  The key will be removed from
     * all of the selector's key sets during the next selection operation.
     *
     * <p> If this key has already been cancelled then invoking this method has
     * no effect.  Once cancelled, a key remains forever invalid. </p>
     *
     * <p> This method may be invoked at any time.  It synchronizes on the
     * selector's cancelled-key set, and therefore may block briefly if invoked
     * concurrently with a cancellation or selection operation involving the
     * same selector. </p>
     */
    public abstract void cancel();


    // -- Operation-set accessors --

    /**
     * Retrieves this key's interest set.
     *
     * <p> It is guaranteed that the returned set will only contain operation
     * bits that are valid for this key's channel.
     *
     * <p> This method may be invoked at any time.  Whether or not it blocks,
     * and for how long, is implementation-dependent. </p>
     *
     * @return  This key's interest set
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public abstract int interestOps();

    /**
     * Sets this key's interest set to the given value.
     *
     * <p> This method may be invoked at any time.  Whether or not it blocks,
     * and for how long, is implementation-dependent. </p>
     *
     * @param  ops  The new interest set
     *
     * @return  This selection key
     *
     * @throws  IllegalArgumentException
     *          If a bit in the set does not correspond to an operation that
     *          is supported by this key's channel, that is, if
     *          <tt>set & ~(channel().validOps()) != 0</tt>
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public abstract SelectionKey interestOps(int ops);

    /**
     * Retrieves this key's ready-operation set.
     *
     * <p> It is guaranteed that the returned set will only contain operation
     * bits that are valid for this key's channel. </p>
     *
     * @return  This key's ready-operation set
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public abstract int readyOps();


    // -- Operation bits and bit-testing convenience methods --

    /**
     * Operation-set bit for read operations.
     *
     * <p> Suppose that a selection key's interest set contains
     * <tt>OP_READ</tt> at the start of a selection operation.  If the selector
     * detects that the corresponding channel is ready for reading, has reached
     * end-of-stream, has been remotely shut down for further reading, or has
     * an error pending, then it will add <tt>OP_READ</tt> to the key's
     * ready-operation set and add the key to its selected-key set. </p>
     */
    public static final int OP_READ = 1 << 0;

    /**
     * Operation-set bit for write operations. </p>
     *
     * <p> Suppose that a selection key's interest set contains
     * <tt>OP_WRITE</tt> at the start of a selection operation.  If the
     * selector detects that the corresponding channel is ready for writing,
     * has been remotely shut down for further writing, or has an error
     * pending, then it will add <tt>OP_WRITE</tt> to the key's ready set and
     * add the key to its selected-key set. </p>
     */
    public static final int OP_WRITE = 1 << 2;

    /**
     * Operation-set bit for socket-connect operations. </p>
     *
     * <p> Suppose that a selection key's interest set contains
     * <tt>OP_CONNECT</tt> at the start of a selection operation.  If the
     * selector detects that the corresponding socket channel is ready to
     * complete its connection sequence, or has an error pending, then it will
     * add <tt>OP_CONNECT</tt> to the key's ready set and add the key to its
     * selected-key set. </p>
     */
    public static final int OP_CONNECT = 1 << 3;

    /**
     * Operation-set bit for socket-accept operations. </p>
     *
     * <p> Suppose that a selection key's interest set contains
     * <tt>OP_ACCEPT</tt> at the start of a selection operation.  If the
     * selector detects that the corresponding server-socket channel is ready
     * to accept another connection, or has an error pending, then it will add
     * <tt>OP_ACCEPT</tt> to the key's ready set and add the key to its
     * selected-key set. </p>
     */
    public static final int OP_ACCEPT = 1 << 4;

    /**
     * Tests whether this key's channel is ready for reading.
     *
     * <p> An invocation of this method of the form <tt>k.isReadable()</tt>
     * behaves in exactly the same way as the expression
     *
     * <blockquote><pre>
     * k.readyOps() & OP_READ != 0</pre></blockquote>
     *
     * <p> If this key's channel does not support read operations then this
     * method always returns <tt>false</tt>. </p>
     *
     * @return  <tt>true</tt> if, and only if,
     *          <tt>readyOps()</tt> <tt>&</tt> <tt>OP_READ</tt> is
     *          nonzero
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public final boolean isReadable() {
        return (readyOps() & OP_READ) != 0;
    }

    /**
     * Tests whether this key's channel is ready for writing.
     *
     * <p> An invocation of this method of the form <tt>k.isWritable()</tt>
     * behaves in exactly the same way as the expression
     *
     * <blockquote><pre>
     * k.readyOps() & OP_WRITE != 0</pre></blockquote>
     *
     * <p> If this key's channel does not support write operations then this
     * method always returns <tt>false</tt>. </p>
     *
     * @return  <tt>true</tt> if, and only if,
     *          <tt>readyOps()</tt> <tt>&</tt> <tt>OP_WRITE</tt>
     *          is nonzero
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public final boolean isWritable() {
        return (readyOps() & OP_WRITE) != 0;
    }

    /**
     * Tests whether this key's channel has either finished, or failed to
     * finish, its socket-connection operation.
     *
     * <p> An invocation of this method of the form <tt>k.isConnectable()</tt>
     * behaves in exactly the same way as the expression
     *
     * <blockquote><pre>
     * k.readyOps() & OP_CONNECT != 0</pre></blockquote>
     *
     * <p> If this key's channel does not support socket-connect operations
     * then this method always returns <tt>false</tt>. </p>
     *
     * @return  <tt>true</tt> if, and only if,
     *          <tt>readyOps()</tt> <tt>&</tt> <tt>OP_CONNECT</tt>
     *          is nonzero
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public final boolean isConnectable() {
        return (readyOps() & OP_CONNECT) != 0;
    }

    /**
     * Tests whether this key's channel is ready to accept a new socket
     * connection.
     *
     * <p> An invocation of this method of the form <tt>k.isAcceptable()</tt>
     * behaves in exactly the same way as the expression
     *
     * <blockquote><pre>
     * k.readyOps() & OP_ACCEPT != 0</pre></blockquote>
     *
     * <p> If this key's channel does not support socket-accept operations then
     * this method always returns <tt>false</tt>. </p>
     *
     * @return  <tt>true</tt> if, and only if,
     *          <tt>readyOps()</tt> <tt>&</tt> <tt>OP_ACCEPT</tt>
     *          is nonzero
     *
     * @throws  CancelledKeyException
     *          If this key has been cancelled
     */
    public final boolean isAcceptable() {
        return (readyOps() & OP_ACCEPT) != 0;
    }


    // -- Attachments --

    private volatile Object attachment = null;

    /**
     * Attaches the given object to this key.
     *
     * <p> An attached object may later be retrieved via the {@link #attachment
     * attachment} method.  Only one object may be attached at a time; invoking
     * this method causes any previous attachment to be discarded.  The current
     * attachment may be discarded by attaching <tt>null</tt>. </p>
     *
     * @param  ob
     *         The object to be attached; may be <tt>null</tt>
     *
     * @return  The previously-attached object, if any,
     *          otherwise <tt>null</tt>
     */
    public final Object attach(Object ob) {
        Object a = attachment;
        attachment = ob;
        return a;
    }

    /**
     * Retrieves the current attachment. </p>
     *
     * @return  The object currently attached to this key,
     *          or <tt>null</tt> if there is no attachment
     */
    public final Object attachment() {
        return attachment;
    }

}
```
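As a quick orientation to the API above, here is a hedged usage sketch (the class name `SelectionKeyDemo` and the `"connection-state"` attachment string are mine): register a non-blocking server channel with a selector, inspect the resulting key, then cancel it.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class SelectionKeyDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false); // a channel must be non-blocking to register

        // Registering creates the key; the third argument becomes its attachment.
        SelectionKey key = server.register(selector, SelectionKey.OP_ACCEPT, "connection-state");

        System.out.println(key.isValid());                                // true
        System.out.println(key.interestOps() == SelectionKey.OP_ACCEPT);  // true
        System.out.println(key.attachment());                             // connection-state

        // cancel() invalidates the key immediately; it is removed from the
        // selector's key sets during the next selection operation.
        key.cancel();
        System.out.println(key.isValid());                                // false

        server.close();
        selector.close();
    }
}
```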
http://kickjava.com/src/java/nio/channels/SelectionKey.java.htm
How to use Webpack’s new “magic comment” feature with React Universal Component + SSR

Webpack 2.4.0, which came out a few weeks ago, launched with a very interesting new feature: “magic comments.” In combination with dynamic imports, magic comments greatly simplify code-splitting + server-side rendering. Now you can name the chunks corresponding to the modules/components you import. Prior to this, an enormous amount of work was needed to determine which chunks to serve in initial requests so async components could also render synchronously. Here’s all it takes:

```javascript
() => import(/* webpackChunkName: 'Anything' */ './MyComponent')
```

Recently I wrote about how to use webpack-flush-chunks to cross-reference moduleIds flushed from react-universal-component or react-loadable with Webpack stats, in order to generate the precise scripts and stylesheets to send from the server. Naming chunks, however, greatly simplifies this process. webpack-flush-chunks supports it as well, but it’s very important to know how to do this manually, since inevitably you will have custom needs.

We’ll start with the server and work our way backwards, saving the magic for last. To begin, we’re going to assume you have a method to flush the chunk names rendered on the server:

```javascript
const appString = ReactDOMServer.renderToString(<App />)
const chunkNames = flushChunkNames()
```

Now you should have an array of chunk names that looks like this:

```javascript
['chunk1', 'chunkWhatever', 'my-chunk-name', 'etc']
```

We’re going to skip a detailed explanation of how to get Webpack compilation stats. In production, I recommend the Webpack plugin stats-webpack-plugin, and during development I recommend webpack-hot-server-middleware to usher the stats to your serverRender function. You can use one of my boilerplates to give all this a try in a few commands.

Once you have the stats, you need to know where to find your chunks and the files they contain.
They exist within stats.assetsByChunkName:

```javascript
stats.assetsByChunkName = {
  chunk1: ['0.js', '0.js.map', '0.css', '0.css.map'],
  chunk2: ['1.js', '1.js.map', '1.css', '1.css.map'],
  chunk3: ['2.js', '2.js.map', '2.css', '2.css.map'],
  main: ['main.js', 'main.js.map', 'main.css', 'main.css.map'],
  vendor: ['vendor.js', 'vendor.js.map'],
  bootstrap: ['bootstrap.js', 'bootstrap.js.map']
}
```

Having familiarity with what assetsByChunkName looks like is half the battle. Notice that a “chunk” isn’t just a JavaScript file, but an array of files. Because you want a flat array rather than an array of arrays, the primary thing we do here is pick the chunks we’re interested in (which are arrays) and then combine/flatten them into a single array:

```javascript
const assets = webpackStats.assetsByChunkName
const filesForChunk = chunk => assets[chunk]
const files = flatten(chunkNames.map(filesForChunk))

function flatten(array) {
  return [].concat(...array)
}
```

This is the easiest part. If things had been this frictionless all along, code-splitting in combination with SSR would have been a far easier nut to crack:

```javascript
const scripts = files.filter(f => f.endsWith('.js'))
const stylesheets = files.filter(f => f.endsWith('.css'))

// and for good measure let's get the publicPath before proceeding
const path = webpackStats.publicPath
```

Now it’s just a matter of creating strings (or perhaps React elements for ReactDOMServer.renderToStaticMarkup) to send to clients:

```javascript
export default function serverRender(req, res) {
  const appString = ReactDOMServer.renderToString(<App />)
  const chunkNames = flushChunkNames() // will get to this soon

  const assets = webpackStats.assetsByChunkName
  const filesForChunk = chunk => assets[chunk]
  const files = flatten(chunkNames.map(filesForChunk))

  const scripts = files.filter(f => f.endsWith('.js'))
  const stylesheets = files.filter(f => f.endsWith('.css'))
  const path = webpackStats.publicPath

  res.send(`
    <!doctype html>
    <html>
      <head>
        ${stylesheets
          .map(f => `<link rel='stylesheet' href='${path}/${f}' />`)
          .join('\n')}
      </head>
      <body>
        <div id="root">${appString}</div>
        ${scripts
          .map(f => `<script src='${path}/${f}'></script>`)
          .join('\n')}
      </body>
    </html>
  `)
}

const flatten = array => [].concat(...array)
```

In a less naive example you would ensure that your bootstrap.js script comes first, followed by chunks, and ending with main.js. It’s a chore to achieve this; it’s one of the things webpack-flush-chunks automatically does for you.

If you were looking for a quick overview of how to weed your way through Webpack stats to utilize the new “magic comment” feature, hopefully you’re pleasantly surprised by how easy it is. What’s left is:

- actually telling Webpack to generate these chunks
- demarcating in code when they are in fact used
- and then flushing the ones used.

To put Webpack chunk names to use, you need a component or package that lets you mark what is considered a chunk and when it is used. Calling import with a magic comment is the easy part. You also need your React component to register the usage of a chunk by the same name. react-universal-component does this for you via the chunkName option:

```javascript
import React from 'react'
import universal from 'react-universal-component'

const asyncComponent = () =>
  import(/* webpackChunkName: 'Anything' */ './MyComponent')

const UniversalComponent = universal(asyncComponent, {
  resolve: () => require.resolveWeak('./Foo'),
  chunkName: 'Anything'
})

export default () =>
  <div>
    <UniversalComponent />
  </div>
```

The important part is that the string you provide for webpackChunkName must match the chunkName option. Notice they are both 'Anything'. It’s a bit redundant, but the former is a static comment and there’s not much we can do about that. From my perspective, expending these lines is the least of my worries. I’m very thankful this feature is finally in Webpack.

What is more nuanced is when asyncComponent is called. You very well could provide a standalone promise rather than a function.
E.g. import(...) without the arrow-function part. But then the client would make an additional request to get that component immediately on page load, even if you did not render it. By guarding the promise with a function, you guarantee it’s not called until <UniversalComponent /> is rendered. React Universal Component handles that internally for you.

But what’s the resolve option? This is the real trick here. It gives your Webpack server a synchronous way to require your component, without the client including it in the parent chunk’s dependencies. See, if you did this:

```javascript
resolve: require('./Foo')
```

then just by the existence of that in main, Webpack would include the very code you are trying to move into another chunk. It would defeat the purpose of code-splitting.

In addition, require.resolveWeak is used by the client on page load when your app is rendered for the first time and you have correctly provided its corresponding chunk (as per Part 1). This avoids React “checksum mismatches,” which lead to an additional render. More importantly, it prevents an unnecessary second request for a chunk corresponding to what was already rendered correctly by SSR. See, even if you correctly rendered the component you want split on the server via renderToString and sent that to the client, the client would soon after replace it with what your main.js expects to render, while it fetches its chunk anyway. Not good.

But does Babel have require.resolveWeak? If you are using Babel for your server, you must also provide the absolute module path:

```javascript
const UniversalComponent = universal(asyncComponent, {
  resolve: () => require.resolveWeak('./Foo'),
  chunkName: 'Anything',
  path: path.join(__dirname, './Foo')
})
```

The rest is left up to React Universal Component, which dynamically toggles between the three methods of importing your module depending on the environment. The universal HoC takes several more options (all of which are optional).
To learn about them, visit the project’s documentation. Here’s a quick summary of the options:

- error — optional <Error /> component
- key — used to find the export you want in a given module
- onLoad — used to take other exports from a module and do things like replace reducers
- minDelay — controlled delay to prevent jank when switching from your loading component to your primary component
- timeout — maximum loading time before the error component shows
- chunkName — the name of the chunk that will be flushed if rendered

If you’ve done everything correctly up to this point, flushing chunks is just a matter of making an additional require in serverRender and calling it with a few parameters:

```javascript
import flushChunks from 'webpack-flush-chunks'
import * as Universal from 'react-universal-component/server'

const appString = ReactDOM.renderToString(<App />)
const { js, styles } = flushChunks(webpackStats, {
  chunkNames: Universal.flushChunkNames(),
  before: ['bootstrap', 'vendor'],
  after: ['main']
})

res.send(`
  <html>
    <head>
      ${styles}
    </head>
    <body>
      <div id='root'>${appString}</div>
      ${js}
    </body>
  </html>
`)
```

The key thing to recognize here is that because of the way Node works (i.e. the event loop) and the way renderToString works (i.e. synchronously), you can perform other synchronous actions immediately after that make use of some global state (namely the arrays/sets behind the scenes containing your chunk names) and guarantee that it is reset for queued requests. flushChunkNames() clears the set of chunk names recorded in the render, and as a result no annoying React provider components are necessary. Just make sure not to call await or kick off any promises before calling res.send.

As for the original example in Part 1, obviously you can use Universal.flushChunkNames() in the same way and do the corresponding work to get scripts and stylesheets manually.
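If you do go the manual route, one chore flushChunks handles for you is file ordering: the webpack runtime (bootstrap) must load first and main.js last. A minimal sketch, assuming the bootstrap/vendor/main chunk file names from the stats example earlier (the helper name orderScripts is mine):

```javascript
// Order flushed files so the webpack runtime loads first and main.js last.
// Chunk file names follow the assetsByChunkName example above; the helper
// name orderScripts is hypothetical: adjust to your own build.
function orderScripts(scripts) {
  const first = ['bootstrap.js', 'vendor.js'] // webpack runtime + vendor bundle
  const last = ['main.js']
  const middle = scripts.filter(f => !first.includes(f) && !last.includes(f))

  return [
    ...first.filter(f => scripts.includes(f)),
    ...middle,
    ...last.filter(f => scripts.includes(f))
  ]
}

console.log(orderScripts(['main.js', '0.js', 'bootstrap.js', '1.js']))
// → [ 'bootstrap.js', '0.js', '1.js', 'main.js' ]
```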
webpack-flush-chunks also has a low-level API that can simplify things for the semi-manual route as well. Specifically, it gives you all the files from rendered chunks, without addressing file ordering, creating strings/components, or getting your main/vendor scripts, etc. The most important thing you will need is the correct Webpack configuration, so make sure to check out that section of the docs.

The previous sections were designed to be easy reading; perhaps, if you’re familiar with this stuff, you’re here after just perusing the code examples. The meat is really in the answers to the following questions you may have.

Almost everywhere we want to do code-splitting, we want to do it dynamically for a bunch of sections of our site/app at the same time. We want to hit a bunch of birds with the same stone. How can we accomplish the following?

```javascript
function createUniversalComponent(page) {
  return universal(import(`./pages/${page}`), {
    resolve: () => require.resolveWeak(`./pages/${page}`)
  })
}
```

Your import no longer needs to be a function, since it’s within a function that isn’t called until “render time.” Magic comments are left out for readability. If you didn’t know, webpack-flush-chunks works without magic comments too: using moduleIds, which you can retrieve from Universal.flushModuleIds(), you can achieve the same on older versions of Webpack!

The way we implement this is a bit different, since we won’t know the page you want to render until “render time.” Let’s give it a look:

```javascript
const MyParentComponent = ({ page }) => {
  const UniversalComponent = createUniversalComponent(page)

  return (
    <div>
      <UniversalComponent />
    </div>
  )
}
```

Now you can create your own HoC and re-use it, perhaps for the entire code-splitting needs of your apps. That’s a lot of bytes you’re saving your clients, at a far cheaper developer price than ever before. And most importantly, SSR is no longer a tradeoff you have to make.

There is one problem with this.
Currently require.resolveWeak can’t resolve dynamic imports the way import can; i.e., you can’t have a string such as `./pages/${page}` where part of the path is dynamic. If you don’t already know, Webpack is perfectly capable of doing this: it handles it by creating chunks for every module in the pages folder. I’ve created an issue for this, and Tobias Koppers has recently prioritized it as Important. Vote it up if you want this fixed.

Here’s how you accomplish this for now (and it’s likely similar to what you’ve been doing in the past, just without SSR):

```javascript
const Page1 = universal(() => import('./pages/Page1'), {
  resolve: () => require.resolveWeak('./pages/Page1')
})

const Page2 = universal(() => import('./pages/Page2'), {
  resolve: () => require.resolveWeak('./pages/Page2')
})

const Page3 = universal(() => import('./pages/Page3'), {
  resolve: () => require.resolveWeak('./pages/Page3')
})

const pages = { Page1, Page2, Page3 }
const createUniversalComponent = page => pages[page]
```

And the implementation of <MyParentComponent /> is the same as before.

One thing to note here is the performance characteristics. This version actually does less work at “render time,” since the components are pre-created. You’re essentially sacrificing memory for CPU at a very critical point in time. I haven’t measured how many cycles/milliseconds creating these components during every render costs, but it’s probably negligible. I’m definitely looking forward to doing the initial HoC implementation. You can also reduce work during render by creating the component once during lifecycle methods such as componentWillMount and componentWillReceiveProps, and then setting it as state. So you have options here. The main takeaway is that you WANT the capability to dynamically choose which “universal component” to render.
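The create-once option above can also be sketched generically. This is a hypothetical memoized factory (the names cache, getUniversalComponent, and fakeCreate are mine), trading a little memory so that no component is re-created on every render:

```javascript
// Hypothetical memoized factory: create each page's universal component once
// and reuse it across renders, instead of re-creating it at render time.
const cache = new Map()

function getUniversalComponent(page, create) {
  if (!cache.has(page)) cache.set(page, create(page))
  return cache.get(page)
}

// Stand-in factory for illustration; in the article this would call universal().
const fakeCreate = page => ({ displayName: `Universal(${page})` })

const a = getUniversalComponent('Page1', fakeCreate)
const b = getUniversalComponent('Page1', fakeCreate)
console.log(a === b) // → true: same instance reused, no re-creation per render
```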
In the future we will likely offer this interface as well:

```javascript
const create = universal(({ page }) => import(`./pages/${page}`), {
  resolve: ({ page }) => require.resolveWeak(`./pages/${page}`),
  path: ({ page }) => path.join(__dirname, `./pages/${page}`)
})

const UniversalComponent = create()

<UniversalComponent page={page} />
```

Then you don’t even need to create your own HoCs, and you avoid wasting precious CPU cycles at render time. Best of all worlds.

Oftentimes your users have few options of where to navigate to, and you want to preload all the options to optimize the experience. You can do so like this:

```javascript
import universal from 'react-universal-component'

const UniversalComponent = universal(() => import('../components/Foo'), {
  resolve: () => require.resolveWeak('./Foo')
})

export default class MyComponent extends React.Component {
  componentWillMount() {
    UniversalComponent.preload()
  }

  render() {
    return <div>{this.props.visible && <UniversalComponent />}</div>
  }
}
```

What’s been covered very little in this article is the <Error /> component that React Universal Component displays for you as needed. It’s an obvious capability, but what isn’t obvious is that you too can trigger it, say, if you’re fetching data in a parent component. It’s very intuitive and saves you from repeating yourself.
Here’s how you do it, for example, using Apollo’s asynchronous HoCs:

```javascript
const UniversalUser = universal(() => import('./User'), {
  resolve: () => require.resolveWeak('./User')
})

const User = ({ loading, error, user }) =>
  <div>
    <UniversalUser isLoading={loading} error={error} user={user} />
  </div>

export default graphql(gql`
  query CurrentUser {
    user {
      id
      name
    }
  }
`, {
  props: ({ ownProps, data: { loading, error, user } }) => ({
    error,
    user
  })
})(User)
```

What this accomplishes, beyond just code re-use, is less jank when switching between your loading and loaded states: <UniversalUser /> will show <Loading /> until both the async import resolves AND the data is returned from your GraphQL server (both of which can operate in parallel). If you were going the route of two separate spinners, it saves you from that trap as well. As for the server, you will of course use Apollo’s amazing recursive promise-resolution solution to populate your component tree with data, while the requires from <UniversalUser /> resolve synchronously and in a split second. So it combines nicely in both environments.

Anyone who’s successfully done code-splitting (with or without SSR) knows that it’s just for your JavaScript chunks. Well, here’s the new hotness:

```javascript
const ExtractCssChunks = require('extract-css-chunks-webpack-plugin')

module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ExtractCssChunks.extract({
          use: {
            loader: 'css-loader',
            options: {
              modules: true,
              localIdentName: '[name]__[local]--[hash:base64:5]'
            }
          }
        })
      }
    ]
  },
  plugins: [new ExtractCssChunks()]
}
```

Look familiar? It took a lot of work to re-orient this plugin. extract-css-chunks-webpack-plugin also supports HMR for your CSS, something the original extract-text-webpack-plugin did not. There’s a lot more to this as well, given you have to configure a matching server environment either with Babel or Webpack, create chunks that inject JS and chunks that have it removed (since stylesheets already have it), etc.
More coming about this soon…

There are still a few remaining pieces of the puzzle missing, namely: the ability to generically fetch data along with [multiple] imports in your `asyncComponent` function. See, it shouldn't have to contain only a call to `import()`. You should be able to specify any data dependencies in this promise, perform calculations, even call `import()` multiple times. I think that sort of frictionlessness is what will evolve the platform.

```js
const asyncWork = async props => {
  const prom = await Promise.all([
    import('./User'),
    import('./Post'),
    fetch(`/user?id=${props.id}`)
  ])

  const User = prom[0].default
  const Post = prom[1].default
  const data = await prom[2].json()

  return (
    <User data={data} {...props}>
      <Post />
    </User>
  )
}

const UniversalComponent = universal(asyncWork)

<UniversalComponent id={123} />
```

Notice how we're requesting 2 chunks + fetching some data in parallel! Being able to code your components without friction like that, and without having to worry about tradeoffs between SSR and code-splitting, is the future. If you've read Bret Victor's Ladder of Abstraction article, you know the importance of things becoming frictionless for platforms to evolve. Up until recently we've kind of been stalled at getting full use of our platform. There shouldn't be certain places where we can do the above things and other places where we can't. We shouldn't be constrained even to the current react-universal-component HoC. There have been solutions that accomplish `asyncWork` above, but they are async-only: you trade SSR for async-only splitting. That absolutely shouldn't be the case.

That said, react-universal-component as it is will remain very important, even if/when the aforementioned problem is solved. It will always have a leg up over solutions that recurse your component tree resolving promises, in the simple fact that it doesn't need to waste cycles on your server doing a "pre-render" to find those promises.
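The "several independent async resources awaited together" pattern that `asyncWork` relies on is runtime-agnostic. As a rough illustration of the same idea in Python's asyncio (a hypothetical sketch with invented stand-in names, not React code):

```python
import asyncio

async def load_module(name):
    # Stand-in for a dynamic import() of a code-split chunk.
    await asyncio.sleep(0.01)
    return {"default": "<%s component>" % name}

async def fetch_user(user_id):
    # Stand-in for fetch(`/user?id=...`) followed by .json().
    await asyncio.sleep(0.01)
    return {"id": user_id, "name": "Ada"}

async def async_work(user_id):
    # Request both "chunks" and the data in parallel, like asyncWork above.
    user_mod, post_mod, data = await asyncio.gather(
        load_module("User"),
        load_module("Post"),
        fetch_user(user_id),
    )
    return user_mod["default"], post_mod["default"], data

user, post, data = asyncio.run(async_work(123))
print(user, post, data["id"])  # <User component> <Post component> 123
```

The point is the shape, not the runtime: the data dependency and the two dynamic imports resolve concurrently, and the component only renders once all three are ready.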
However, next week I'll be releasing the best of all worlds, and you can decide based on your needs. Everything covered here will stay the same, and I myself will 100% continue to use exactly what you've seen today. But for developers who like to fetch data in `componentWillMount`, I have some very exciting things coming your way. Stay tuned!

Check out the 4 boilerplates to frictionlessly try things out with either a Webpack or Babel server, and with or without "magic comments."
http://brianyang.com/how-to-use-webpacks-new-magic-comment-feature-with-react-universal-component-ssr/
Bubble Sort

This article covers programs in Java that sort an array using the bubble sort technique. That is, it includes multiple bubble sort programs in Java.

List of bubble sort programs included in this article:

- Bubble sort program in Java - basic version
- Bubble sort program in Java - complete version
- Bubble sort program in Java - descending order
- Bubble sort with printing of the new array after each sort

If you are not aware of how bubble sort works, refer to Bubble Sort Logic. Now let's move on and create a bubble sort program in Java.

Bubble Sort in Java - Basic Version

The question is: write a Java program to perform bubble sort on 10 elements. The 10 elements must be received from the user at run-time of the program. The program given below is its answer. This program sorts in ascending order.

```java
import java.util.Scanner;

public class CodesCracker {
    public static void main(String[] args) {
        int n = 10, i, j, x;
        int[] array = new int[n];
        Scanner s = new Scanner(System.in);

        System.out.print("Enter 10 Elements in Random Order: ");
        for (i = 0; i < n; i++) {
            array[i] = s.nextInt();
        }

        // Bubble sort: after each outer pass, the largest remaining
        // element settles at the end of the unsorted portion.
        for (i = 0; i < (n - 1); i++) {
            for (j = 0; j < (n - i - 1); j++) {
                if (array[j] > array[j + 1]) {
                    x = array[j];
                    array[j] = array[j + 1];
                    array[j + 1] = x;
                }
            }
        }

        System.out.println("\nThe new sorted array is:");
        for (i = 0; i < n; i++)
            System.out.print(array[i] + " ");
    }
}
```

The snapshot given below shows a sample run of the above bubble sort program in Java, with user input 10, 19, 11, 18, 12, 17, 13, 16, 14, 15 as ten elements in random order. Random order means the elements are neither in ascending nor in descending order:

Bubble Sort in Java - Complete Version

The previous program operates on exactly 10 elements.
Therefore, I've modified that program to let the user define the size of the array as well as its elements.

```java
import java.util.Scanner;

public class CodesCracker {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);

        System.out.print("Enter the Size of Array: ");
        int n = s.nextInt();
        int[] arr = new int[n];

        System.out.print("Enter " + n + " Elements: ");
        for (int i = 0; i < n; i++)
            arr[i] = s.nextInt();

        System.out.println("\n\nSorting the array...");
        for (int i = 0; i < (n - 1); i++) {
            for (int j = 0; j < (n - i - 1); j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
        System.out.println("The array is sorted successfully!");

        System.out.println("\nThe new sorted array is:");
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }
}
```

The sample run with user input 5 as the size and 1, 5, 2, 4, 3 as the five elements is shown in the snapshot given below:

Bubble Sort in Java - Descending Order

To sort an array in descending order using the bubble sort technique, simply replace the greater-than (>) sign with a less-than (<) sign in the condition of the if statement inside the sorting loop. That is, replace:

    if(arr[j]>arr[j+1])

from the above program with:

    if(arr[j]<arr[j+1])

It is only a matter of a single character, yet it changes the behavior of the whole program. Here is a snapshot of a sample run with the same user input as the previous program's sample run:

Bubble Sort in Java - Print Array after Each Sort

This is the last program of this article, created to print the new array after each pass of the bubble sort. This program gives a rough look at how bubble sort actually proceeds.

```java
import java.util.Scanner;

public class CodesCracker {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);

        System.out.print("Enter the Size of Array: ");
        int n = s.nextInt();
        int[] arr = new int[n];

        System.out.print("Enter " + n + " Elements: ");
        for (int i = 0; i < n; i++)
            arr[i] = s.nextInt();

        System.out.println("\nSorting the array...");
        for (int i = 0; i < (n - 1); i++) {
            for (int j = 0; j < (n - i - 1); j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
            // Show the state of the array after this pass.
            System.out.print("\nStep No." + (i + 1) + " -> ");
            for (int j = 0; j < n; j++)
                System.out.print(arr[j] + " ");
        }
        System.out.println("\n\nThe array is sorted successfully!");

        System.out.println("\nThe new sorted array is:");
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }
}
```

Here is its sample run with user input 10 as the size and 1, 10, 2, 9, 3, 8, 4, 7, 6, 5 as the ten elements:

Below is another sample run with the same size and 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 as the ten elements:
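The three Java variants above (ascending, descending, and pass-by-pass printing) boil down to one comparison and one optional trace. As an illustrative sketch in another language, here is the same logic condensed in Python (not part of the original article):

```python
def bubble_sort(arr, descending=False, show_steps=False):
    """Bubble sort in place; flipping the comparison reverses the order."""
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            out_of_order = ((arr[j] < arr[j + 1]) if descending
                            else (arr[j] > arr[j + 1]))
            if out_of_order:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
        if show_steps:
            # Mirrors the "Step No.X -> ..." output of the Java program.
            print("Step No.%d -> %s" % (i + 1, " ".join(map(str, arr))))
    return arr

asc = bubble_sort([1, 10, 2, 9, 3, 8, 4, 7, 6, 5], show_steps=True)
desc = bubble_sort([1, 5, 2, 4, 3], descending=True)
print(asc)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(desc)  # [5, 4, 3, 2, 1]
```

The single flipped comparison is exactly the one-character change described in the descending-order section above.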
https://codescracker.com/java/program/java-program-bubble-sort.htm
Scott Golightly, Microsoft Regional Director
Keane, Inc.

April 2006

Applies To: Visual Studio 2005, Web Services

Summary: Test Web services quickly and more efficiently, without having to write a full-blown application each time, by using .NET reflection and the CodeDOM to dynamically generate a proxy to a Web service. (10 printed pages)

Contents:
Introduction
Dynamic Proxy Library
A Sample Windows Application
Conclusion

Introduction

As companies move more toward using Web services for application integration and Service Oriented Architecture (SOA), the number of Web services inside an organization naturally grows. Managing these services, as well as testing a Web service during development before its client is written, has proved to be an issue. There are many free and commercial products to help fill this need. These tools all generate a proxy to the Web service in order to send it a message. This article looks at a method of using reflection and the CodeDOM to dynamically generate a proxy to a Web service. An application is also created that allows the user to specify a WSDL file, choose a Web method, specify parameters, and then see the results returned by the proxy class.

As most experienced Web developers know, ASP.NET will generate a Web page explaining how to call the Web service when you navigate to an ASP.NET Web service. If you are accessing the Web service from the machine that hosts it and the parameters are all simple types, a page like the one in Figure 1 will be created that also contains a form that lets you call the Web service and view any result that is returned. This page can come in handy when you need to determine whether the Web service is working correctly. Unfortunately this approach has some problems, as the page uses HTTP and will bypass any custom SOAP handlers. If you cannot get direct access to the machine (as is common in most production environments), then the generated page will not have the form to call the Web service.
A developer could open Visual Studio 2005 and use the "Add Web Reference" functionality to create a proxy to the Web service and, in a few minutes, create a simple application to call it. But as your organization uses more Web services, managing all of the different applications used to call individual Web services can become a configuration and management nightmare. One solution is to have a single application that allows you to call many different Web services and to provide the correct parameters. There are several tools that let you not only call a Web service, but also monitor and manage it. Regardless of whether you eventually choose to use the ASP.NET page or a tool, the secret to calling a Web service is to generate the proxy to the Web service at run time. In the rest of this article I will show a class library that creates a proxy for a Web service specified in a WSDL (Web Services Description Language) file at runtime. Using this class library I am able to call many different Web services from the same application. I will also show a Windows application that uses the library to get information about a Web service and allows you to call it.

Figure 1. ASP.NET Web Service Test Page

Dynamic Proxy Library

In order to separate the functionality of calling the Web service from the user interface, I decided to create a class library that generates a proxy to handle all of the interaction with the Web service that I want to call. The class library also allows me to reuse this functionality in many different applications. To interact with the Web service, the class library that generates the dynamic proxy needs to be able to perform five basic functions: parse a valid WSDL file, generate a proxy class for the Web service, return a list of Web methods supported by the Web service, enumerate the parameters for a Web method, and finally allow the user to call a method on the Web service.
To simplify coding later on, I decided to have the constructor of my class take the URI of the WSDL file as a parameter. By limiting each instance of the class to a single URI, I avoid having to reset member variables if the user decides to call a different Web service. The URI is stored in a private variable for later use. The constructor also sets up a ServerCertificateValidationCallback that will handle any issues with invalid SSL certificates. The constructor for my class looks like this:

```vb
Public Sub New(ByVal uri As Uri)
    _uri = uri
    ServicePointManager.ServerCertificateValidationCallback = _
        New RemoteCertificateValidationCallback(AddressOf SSLResult)
End Sub
```

I found that many development Web services (and some production sites) use SSL certificates that are self-signed or that have expired. When using a Web browser this might cause a security dialog like the one in Figure 2 to appear. With a Web service call there is no user interface to allow the user to decide whether the certificate should be trusted, so the caller must programmatically decide whether to accept certain certificate errors. The ServerCertificateValidationCallback lets you determine which class of errors to ignore by returning True for that type of error. In this implementation of the callback, all certificate errors are ignored:

```vb
Public Function SSLResult(ByVal sender As Object, _
        ByVal certificate As X509Certificate, _
        ByVal chain As X509Chain, _
        ByVal sslPolicyErrors As SslPolicyErrors) As Boolean
    ' Accept the certificate regardless of any validation errors.
    Return True
End Function
```

Figure 2. Internet Explorer Security Alert

Once the foundation for communication with the Web service is set up, the next step is to get some basic information about the Web service from the WSDL file. To do this I create a Web request and pass in the URI of the WSDL file. The WebRequest class will handle all the details of parsing and validating the WSDL file. Calling the GetResponse method on the WebRequest object gets a stream that can be passed to the shared Read method of the ServiceDescription object.
The Read method returns a ServiceDescription object that has information about the Web service defined in the WSDL file:

```vb
Dim webReq As WebRequest = WebRequest.Create(_uri)
Dim reqStrm As Stream = webReq.GetResponse.GetResponseStream()
_serviceDesc = ServiceDescription.Read(reqStrm)
_serviceName = _serviceDesc.Services(0).Name
```

The name of the Web service will be needed for several other method calls in the class library, so I store it in a local variable.

Now that I have a description of the Web service, I can generate the proxy class that will allow me to call the methods of the Web service. The purpose of generating the proxy class is to create the same code that would be put into a reference.vb or reference.cs file when you add a Web reference in Visual Studio, and then compile that code into an assembly that we can use immediately from the application that calls our class library.

The first step is to create an instance of the ServiceDescriptionImporter class. The purpose of the ServiceDescriptionImporter class is to allow you to easily import the information in a WSDL file into a CodeDom.CodeCompileUnit. After creating the ServiceDescriptionImporter, I call the AddServiceDescription method and pass in the ServiceDescription object that contains the information about the Web service I want to call. I also set the ProtocolName property on the ServiceDescriptionImporter object to request that I communicate with the Web service using SOAP, and set the CodeGenerationOptions property to instruct the CodeDOM to generate properties for any simple data types exposed by the Web service:

```vb
Dim servImport As New ServiceDescriptionImporter
servImport.AddServiceDescription(_serviceDesc, String.Empty, String.Empty)
servImport.ProtocolName = "Soap"
servImport.CodeGenerationOptions = CodeGenerationOptions.GenerateProperties
```

In versions 1.0 and 1.1 of the .NET Framework, a pair of methods for asynchronous calls would be generated for each Web method.
The pair of methods would be named with Begin and End prefixes before the method name. In version 2.0 of the .NET Framework there is an option to create events to invoke asynchronous methods instead. To generate the asynchronous methods with the Begin/End pair, use CodeGenerationOptions.GenerateOldAsync; to use events to invoke asynchronous methods, use CodeGenerationOptions.GenerateNewAsync. Since I only want to call synchronous methods, I will not specify either option.

Next I want to generate a CodeDOM CodeCompileUnit tree. The CodeCompileUnit class provides a container to store the program graph in. After adding a namespace to the CodeCompileUnit, I import the service description into it:

```vb
Dim ns As New CodeNamespace
Dim ccu As New CodeCompileUnit
ccu.Namespaces.Add(ns)
Dim warnings As ServiceDescriptionImportWarnings
warnings = servImport.Import(ns, ccu)
```

I check the ServiceDescriptionImportWarnings. If the import produced either a NoCodeGenerated warning or a NoMethodsGenerated warning, then I stop processing the data, since there will be nothing for the user to call later on. If the import was successful, then I generate an instance of a CodeDomProvider (in the code that follows it is a Visual Basic provider, but it could just as well be C#, since the languages are interoperable). I then generate the code for the proxy class using the GenerateCodeFromNamespace method. The first parameter is the namespace to use. In the second parameter I provide a StringWriter that will store the code so I can compile it into an assembly later on; it would also be possible to write the code in the StringWriter to a file if you wanted to store it for later inspection. The third parameter is for options such as indenting nested blocks of code, whether to place the opening brace of a block on the same line as the start of the block or on a different line, and other formatting options. Since I will not be directly viewing the code, I do not bother to set any of the options.
```vb
Dim sw As New StringWriter(CultureInfo.CurrentCulture)
Dim prov As New VBCodeProvider
prov.GenerateCodeFromNamespace(ns, sw, Nothing)
```

The last step in generating the assembly is to create a set of compiler parameters and then invoke the compiler on the code that is stored in the StringWriter. The constructor for the CompilerParameters class takes an array of strings as an argument; the strings in the array represent DLLs that should be referenced by the compiler. I then set the GenerateExecutable property to False so it will generate a class library, and set the GenerateInMemory property to True since I do not need to persist the proxy assembly to disk. The next two lines of code instruct the compiler to only break on serious errors. I next create a CompilerResults object to store the results of compiling the code. From the CompilerResults I am able to access the CompiledAssembly property, which is the compiled proxy assembly. I store the assembly in a local variable for later use.

```vb
Dim param As New CompilerParameters(New String() _
    {"System.dll", "System.Xml.dll", _
     "System.Web.Services.dll", "System.Data.dll"})
param.GenerateExecutable = False
param.GenerateInMemory = True
param.TreatWarningsAsErrors = False
param.WarningLevel = 4

Dim results As New CompilerResults(Nothing)
results = prov.CompileAssemblyFromSource(param, sw.ToString())
_proxyAssembly = results.CompiledAssembly
```

Now that I have an assembly that exposes the public properties and methods of the Web service, I can use reflection to retrieve information about the methods on the class. All of the methods that appear in the proxy class are methods that were decorated with the WebMethod attribute in the original code. This class library has a method that returns an array of MethodInfo objects to the program using it. I first get a reference to the class in the proxy assembly that represents the Web service, and then call GetMethods to get the Web methods.
```vb
Dim service As Type
service = _proxyAssembly.GetType(_serviceName)
Return service.GetMethods(BindingFlags.DeclaredOnly Or _
    BindingFlags.IgnoreCase Or BindingFlags.Instance Or _
    BindingFlags.InvokeMethod Or BindingFlags.Public)
```

For the class library to be useful, it must also provide the ability to enumerate the parameters for a given method on the proxy class. I created a simple function that takes the name of the Web method and returns an array of ParameterInfo objects. The code to accomplish this is shown below. I first get the proxy to the Web service as a type and then call GetMethod to get a reference to the method I want to call. I am then able to call GetParameters to return the ParameterInfo array.
You can use the tracing capability of Microsoft Web Service Extensions (WSE) to capture the actual messages being sent between the client and Web server. A full discussion of WSE is beyond the scope of this article. While testing the class library I came across a Web method that required a parameter that was an enumeration. I had to get an instance of the enumeration to parse the value supplied by the user. I added a method to my class library to return a Type object for any one of the objects in the proxy assembly. The method takes as a parameter the name of the type that the user would like returned. The method then finds and returns that type from the proxy assembly. Return _proxyAssembly.GetType(typeName) This class library is dynamically generating an assembly and then instantiating it and calling code in the assembly. It requires full trust permissions in order to do that. I have added the following line to the AssemblyInfo.vb file to ensure that the class library is never instantiated with less than full trust permissions. <Assembly: PermissionSet(SecurityAction.RequestMinimum,Name:="FullTrust")> I also need to make sure that any calling program also acquires full trust permissions before instantiating the class library so I will add a similar line to the AssemblyInfo file of any program that uses this class library. That completes the functionality of the class library that will interact with the Web service. In the next section I will demonstrate a sample application that you can use to call an arbitrary Web service. For a sample application that will use the class library I created the Windows application shown in Figure 3. The application contains a text box for entering the URI to the WSDL file. This could be a file that resides on the hard drive or the URL to a Web service that is half way around the world. 
There is a DataGridView control to show the methods on the Web service and another DataGridView to display the parameters of the currently selected method. I have also added a RichTextBox control to display the data returned from the Web method. Finally, there are two buttons: the first will instantiate the class library and retrieve the method and parameter information, and the second will invoke the Web method using the values the user has provided for the parameters. The rest of this article describes the implementation details of this application. Figure 3. Windows Application to Call An Arbitrary Web Service (Click on the image for a larger picture) As you recall when providing information about methods and parameters, the class library returns an array of MethodInfo or ParameterInfo objects, respectively. The data binding features in the .NET Framework allow us to bind a DataGridView to the array, and it will generate a row for each element in the array and a column for each public property on the class. While this works very well, it tends to make a cluttered interface with a lot of information that is not needed for this application. When the form loads I call a method to display only the necessary columns in each DataGridView. To display just the columns that I want I first set the AutoGenerateColumns property of the DataGridView to false. I then create a new DataGridViewTextBoxColumn, and set some properties for that column. The properties allow me to set many of the visual attributes of the column, like its header text value, whether the text can be changed, the minimum width that the column can be sized to, and the initial width of the column. The most important property is the DataPropertyName. This property tells the column which property from the object it will be displaying. After I have set all the properties that I need to for the column I add it to the Columns collection of the DataGridView. 
The following code shows setting the properties for the first column of the DataGridView that displays method information. The column will have a header of "Return Type" and will show the type of data returned by the Web method. dgvMethods.AutoGenerateColumns = False Dim newColumn As New DataGridViewTextBoxColumn newColumn = New DataGridViewTextBoxColumn With newColumn .Name = "returntype" .DataPropertyName = "returntype" .HeaderText = "Return Type" .ReadOnly = True .MinimumWidth = 100 .Width = 300 End With dgvMethods.Columns.Insert(0, newColumn) I repeat the process for each of the columns that I want to display. For the column that allows the user to provide a value for the parameter in a Web method I do something a little different. I want the column to be editable. For a column in a DataGridView to be editable, the column must have the ReadOnly property value set to false, and the underlying data store must be updateable, as well. The ParameterInfo class doesn't have a writeable property for storing the value. To allow the user to add a value I create a DataGridTextBoxColumn and also set ReadOnly to false. I do not set the DataPropertyName property, so it is not bound to a property on the ParmameterInfo object. When this column is added to the DataGridView it will be editable. newColumn = New DataGridViewTextBoxColumn With newColumn .Name = "ParamValue" .HeaderText = "Value" .ReadOnly = False .MinimumWidth = 100 .Width = 398 End With dgvParameters.Columns.Insert(2, newColumn) After the form has loaded, users will enter the path to the WSDL file. They will then click the button labeled "Get Service" to retrieve information about the methods that are available on the Web service. The first thing I do is instantiate the class library and pass in the URI entered by the user. I then retrieve the array of MethodInfo objects that are the public interface for the Web server. I could just bind that array to the DataGridView. 
I set up an event handler so that every time a different method is selected, the corresponding parameter information is shown in the DataGridVeiw that shows parameters. This is easily accomplished by retrieving the name of the current method and passing it to the GetParameters method on the class library. I can bind the ParameterInfo array directly to the DataGridView. Dim methodName As String = dgvMethods.CurrentRow().Cells("name").Value dgvParameters.DataSource = _wsp.GetParameters(methodName) Once the user has entered in values for all of the parameters, they must be converted from a string back into the data type that the Web method is expecting, and then packaged as an array of objects to send off to the Web method. I use a function to convert the parameter type from its string representation in the DataGridView into the type that the Web method is expecting. If the parameter is a string I return the value passed in since it is already a string. If the parameter is a primitive type (Int32, Boolean, Double, and so on) then I call ChangeType to convert the parameter to the correct type. If the parameter is an enumeration I use the GetRemoteType method on the class library to retrieve the definition for the enumeration and call the Parse method to convert the text representation of the enumeration into the correct value. Finally, if the parameter is an object or other complex data type, I throw an exception. I probably could have done some more work to use reflection and figure out how to instantiate an object and get its public methods, but most of the Web services that I have used do not have objects as parameters so I haven't spent the time to investigate how much work this would be. 
Private Function ConvertParameterDataType(ByVal paramValue As String, _ ByVal pi As ParameterInfo) As Object If paramValue Is Nothing Then Return Nothing If pi.ParameterType.FullName = "System.String" Then Return paramValue End If If pi.ParameterType.IsPrimitive Then Return Convert.ChangeType(paramValue, pi.ParameterType) End If If pi.ParameterType.IsEnum Then Dim enumType As Type = _wsp.GetRemoteType(pi.ParameterType.Name) Return System.Enum.Parse(enumType, paramValue, True) End If Throw New ArgumentException( _ "Unable to convert parameter to a simple type.", pi.Name) End Function To call the Web service I merely need to call the Invoke method on the class library. It takes the name of the method and an object array as parameters and returns an object back. When the results are returned I check to see if the object is Nothing (null in C#), and if so write a message to the user that the call succeeded but did not return any results. If the result is an array I use a For Each loop to call ToString() on each element of the array and add it to the text in the RichTextBox control. For most objects this will print out the class name. If the return value is not an array I add its value to the RichTextBox control. This will show the actual value of the object. While not a perfect replacement for the page generated by ASP.NET, the class library shown in this article will allow you to call most Web services. The class library has the advantage of using SOAP to communicate with the Web service and not be restricted to the local machine. This application does not have all the functionality of many commercial applications for managing Web services, but it does show how they are able to call any Web service. I have used this application successfully during development and testing to call various Web services. It has helped me when looking into exceptions logged by a production Web service to determine if the exception was caused by invalid parameters. 
By harnessing the power of reflection and the CodeDOM in the .NET framework, it is relatively easy to generate a proxy for an arbitrary Web service. Scott Golightly is Microsoft Regional Director and a Senior Principal Consultant with Keane, Inc. in Salt Lake City. He has over 13 years experience helping his clients design and build systems that meet their business needs. When Scott is not working he enjoys fishing, camping, hiking, and spending time with his family. You can reach Scott at Scott_J_Golightly@keane.com.
http://msdn.microsoft.com/en-us/library/aa730835(VS.80).aspx
crawl-002
refinedweb
3,942
54.22
fork, vfork - Create a new process

SYNOPSIS

    #include <unistd.h>

    pid_t fork( void );
    pid_t vfork( void );

Application developers may want to specify an #include statement for <sys/types.h> before the one for <unistd.h> if programs are being developed for multiple platforms. The additional #include statement is not required on Tru64 UNIX systems or by ISO or XSH standards, but may be required on other vendors' systems that conform to these standards.

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows:

    fork():  XSH4.0, XSH4.2, XSH5.0
    vfork(): XSH4.2, XSH5.0

Refer to the standards(5) reference page for more information about industry standards and associated tags.

DESCRIPTION

The fork() and vfork() functions create a new process (child process) that is identical to the calling process (parent process). The child process inherits the following from the parent process:

  - Environment
  - Close-on-exec flags
  - Signal-handling settings
  - Set user ID mode bit
  - Set group ID mode bit
  - Trusted state
  - Profiling on/off status
  - Nice value
  - All attached shared libraries
  - Process group ID
  - tty group ID
  - Current directory
  - Root directory
  - File mode creation mask
  - File size limit
  - Attached shared memory segments
  - Attached mapped file segments
  - All mapped regions, with the same protection and sharing mode as in the parent process
  - [Tru64 UNIX] Message catalog descriptors. These are shared by parent and child processes until a modification is made.
  - The parent's policy and priority settings for the SCHED_FIFO and SCHED_RR scheduling policies (fork() call)
  - Open semaphores. Any semaphores open in the parent process are also open in the child process.

The child process differs from the parent process in the following ways:

  - The child process has a unique process ID that does not match any active process group ID.
  - The parent process ID of the child process matches the process ID of the parent.
  - The child process has its own copy of the parent process's file descriptors.
Each of the child's file descriptors refers to the same open file description with the corresponding file descriptor of the parent process. The child process has its own copy of the parent's open directory streams. Each open directory stream in the child process may share directory stream positioning with the corresponding directory stream of the parent. The child process has its own copy of the parent's message queue descriptors, each of which refers to the same open message description as referred to by the corresponding message queue descriptor of the parent. All semadj values are cleared. Process locks, text locks, and data locks are not inherited by the child process. The child process' values of tms_utime, tms_stime, tms_cutime, and tms_cstime are set to 0 (zero). Any pending alarms are cleared in the child process. [Tru64 UNIX] Any interval timers enabled by the parent process are reset in the child process. Any signals pending for the parent process are cleared for the child process. Address space memory locks established by the parent process through calls to mlockall() or mlock() are not inherited by the child process. Per-process timers created by the parent process are not inherited by the child process. Asynchronous input or asynchronous output operations started by the parent process are not inherited by the child process. If a multithreaded process forks a child process, the new process contains a replica of the calling thread and its entire address space, possibly including the states of mutexes and other resources. Consequently, to avoid errors, the child process should only execute operations it knows will not cause deadlock. The fork() and vfork() functions have essentially the same implementation at the level of the operating system kernel but may differ in how they are supported through different libraries. 
Some libraries, such as libpthread and libc, support fork handler routines that can acquire and release resources that are critical to the child process. Fork handlers therefore allow an application to manage potential deadlock situations that might occur between the parent and child processes. Fork handlers do not work correctly if the application calls vfork() to create the child process. Therefore, applications using libpthread and libc should call fork() to create a child process. For more information about fork handler routines, see pthread_atfork(3). For a complete list of system calls that are reentrant with respect to signals, see signal(4). Upon successful completion, the fork() and vfork() functions return a value of 0 (zero) to the child process and return the process ID of the child process to the parent process. Otherwise, a value of -1 is returned to the parent, no child process is created, and errno is set to indicate the error. The fork() and vfork() functions set errno to the specified values for the following conditions: The limit on the total number of processes executing for a single user would be exceeded. This limit can be exceeded by a process with superuser privilege. There is not enough space left for this process. Functions: exec(2), exit(2), getpriority(2), getrusage(2), plock(2), ptrace(2), semop(2), shmat(2), sigaction(2), sigvec(2), umask(2), wait(2), nice(3), pthread_atfork(3), raise(3), times(3), ulimit(3) Files: signal(4) Others: standards(5) fork(2)
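The return-value convention described above (0 in the child, the child's PID in the parent) can be exercised from Python, whose os module is a thin wrapper over fork(2) on POSIX systems. A minimal sketch; the function name and the exit code 7 are illustrative, not from the man page:

```python
import os

def spawn_child():
    """Fork and demonstrate fork(2)'s return-value convention."""
    parent_pid = os.getpid()
    pid = os.fork()  # wraps fork(2); "returns twice", once in each process
    if pid == 0:
        # Child: fork() returned 0, and our parent PID is the forking process.
        ok = os.getppid() == parent_pid
        os._exit(7 if ok else 1)  # exit immediately, skipping cleanup handlers
    # Parent: fork() returned the child's process ID.
    _, status = os.waitpid(pid, 0)  # reap the child so it doesn't linger as a zombie
    return pid, os.WEXITSTATUS(status)
```

The parent sees a positive child PID and, after waitpid(), the child's exit status; os._exit() is used in the child so that the parent's buffered I/O and cleanup handlers are not run a second time.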
https://nixdoc.net/man-pages/Tru64/man2/vfork.2.html
#19444 closed defect (fixed): sage.misc.functional.log(float(3)) raises an AttributeError
Opened 7 years ago; closed 5 years ago.

Description

import_statements suggests importing log this way:

    sage: import_statements('log')
    # **Warning**: distinct objects with name 'log' in:
    #   - sage.functions.log
    #   - math
    #   - sage.functions
    #   - sage.misc.functional
    from sage.misc.functional import log

While these work:

    sage: math.log(float(3))
    1.0986122886681098
    sage: sage.functions.log.log(float(3))
    1.0986122886681098

This one raises an AttributeError:

    sage: sage.misc.functional.log(float(3))
    Traceback (most recent call last):
    ...
    AttributeError: 'sage.rings.real_double.RealDoubleElement' object has no attribute '_log_base'

A subquestion is why do we have two implementations of log?

Change History (26)

comment:1 Changed 5 years ago
- Branch set to u/chapoton/19444
- Cc tscrim jdemeyer jhpalmieri added
- Commit set to f4c012de47d26ab2fabb24266805ac6dccd5a8cb
- Milestone changed from sage-6.10 to sage-8.1
- Status changed from new to needs_review

comment:2 Changed 5 years ago
This implementation is very weak. How can you assume that this is supposed to be the real logarithm?

    sage: sage.misc.functional.log(complex(3j))
    Traceback (most recent call last):
    ...
    TypeError: can't convert complex to float

versus

    sage: sage.misc.functional.log(CDF(0,3))
    1.0986122886681098 + 1.5707963267948966*I

comment:3 Changed 5 years ago
There is no need to make another perfect log, there is one already in functions.log, which is the one we provide as "log" in the global namespace. We should fix the issue raised here. And later get rid of this function, which is only used 10 times or so.

comment:4 Changed 5 years ago
What is the point of fixing an issue on a useless function? If I open a ticket for the complex case will you fix it ;-? If useless, the log from functional would better be deprecated and this ticket closed as invalid. Anyway, the complaint is about import_statements, which is not fixed by your branch.

comment:5 Changed 5 years ago
And indeed, there is something wrong with find_objects_from_name:

    sage: from sage.misc.dev_tools import find_objects_from_name
    sage: find_objects_from_name('log')
    [<function log at ...>, <function log at ...>, log, <built-in function log>,
     <module 'sage.functions.log' from '/opt/sage/local/lib/python2.7/site-packages/sage/functions/log.pyc'>]

The above is wrong since log is in the global namespace and the answer should have been the log from the global namespace...

comment:6 Changed 5 years ago
And indeed, globals() is restricted to the module where the function is defined... so that its usage is wrong.

comment:7 Changed 5 years ago

comment:8 Changed 5 years ago
- Commit changed from f4c012de47d26ab2fabb24266805ac6dccd5a8cb to 02e1fc084228ff509b6927ce82190acb55426caf
Branch pushed to git repo; I updated commit sha1. This was a forced push. New commits:

comment:9 Changed 5 years ago
ok, done

comment:10 Changed 5 years ago
- Commit changed from 02e1fc084228ff509b6927ce82190acb55426caf to b1a85c98b3d957bc23f9002de7db6cc54e52dce0
Branch pushed to git repo; I updated commit sha1. New commits:

comment:11 Changed 5 years ago
- Reviewers set to Sébastien Labbé
- Status changed from needs_review to needs_work
If it turns out that for some reason I have used the bad log function in my code and I now get the following error message:

    DeprecationWarning: this version of log is no longer used

I will not understand and I will lose time to understand what is happening. I much prefer:

    DeprecationWarning: function sage.misc.functional.log is deprecated, use sage.functions.log or log from the global sage namespace instead

comment:12 Changed 5 years ago
- Commit changed from b1a85c98b3d957bc23f9002de7db6cc54e52dce0 to 465125e6133b5f33493d1f78b98237228bb7dae4
Branch pushed to git repo; I updated commit sha1. New commits:

comment:13 Changed 5 years ago
- Status changed from needs_work to needs_review
Done

comment:14 Changed 5 years ago
This is wrong:

    - k = int(ceil(log(n,2)))
    + k = int(ceil(RDF(n).log(2)))

Do not use floating-point computations when you want an exact answer. An exception to this would be MPFR (used in Sage by RR) because that does give a guarantee of exactness if possible. So, you should either use ZZ or RR to do this computation.

comment:15 Changed 5 years ago
- Status changed from needs_review to needs_work

comment:16 Changed 5 years ago
I would replace ceil(log(n,2)) by (ZZ(n) - 1).nbits(), which is guaranteed to be correct and reasonably fast.

comment:17 Changed 5 years ago
- Commit changed from 465125e6133b5f33493d1f78b98237228bb7dae4 to e09327389baa0b96060ac5ea36cf77b69ee6dab6
Branch pushed to git repo; I updated commit sha1. New commits:

comment:18 Changed 5 years ago
ok; but now there remains an RDF in "src/sage/rings/finite_rings/element_ntl_gf2e.pyx"

comment:19 Changed 5 years ago
- Commit changed from e09327389baa0b96060ac5ea36cf77b69ee6dab6 to 5489453100b096923d1376b82139c4d3952edd83
Branch pushed to git repo; I updated commit sha1. New commits:

comment:20 Changed 5 years ago
- Status changed from needs_work to needs_review
should be good now

comment:21 Changed 5 years ago
In the light of __future__ division, it is better to use // 8 instead of / 8.

comment:22 Changed 5 years ago
- Commit changed from 5489453100b096923d1376b82139c4d3952edd83 to d2f7962a6ee8a2e6e0b174083a44b4ed067b5785
Branch pushed to git repo; I updated commit sha1. New commits:

comment:23 Changed 5 years ago
done

comment:24 Changed 5 years ago
bot is morally green

comment:25 Changed 5 years ago
- Status changed from needs_review to positive_review

comment:26 Changed 5 years ago
- Branch changed from u/chapoton/19444 to d2f7962a6ee8a2e6e0b174083a44b4ed067b5785
- Resolution set to fixed
- Status changed from positive_review to closed
Done.

This should be an easy review. NOTA BENE: Getting rid of this log function (in favor of the one in functions.log) should be done in another ticket. I tried to do that, but stumbled on the infamous import hell (import cycles everywhere). New commits:
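The (ZZ(n) - 1).nbits() replacement from comment 16 can be sketched in plain Python, where int.bit_length() plays the role of nbits(); the helper names below are mine, not from the ticket:

```python
import math

def ceil_log2(n):
    """Exact ceil(log2(n)) for integers n >= 1, using only integer bit
    operations -- the (ZZ(n) - 1).nbits() trick in plain Python."""
    if n < 1:
        raise ValueError("n must be >= 1")
    return (n - 1).bit_length()

def ceil_log2_float(n):
    """The floating-point version criticized in comment 14: fine for small n,
    but rounding to a 53-bit double can make it off by one for large integers."""
    return int(math.ceil(math.log(n, 2)))
```

For example, ceil_log2(2**100 + 1) is exactly 101, whereas the float version has to squeeze 2**100 + 1 into a double first and can lose the +1 entirely.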
https://trac.sagemath.org/ticket/19444
So far we've explored how to use dependency properties and attached properties to create reusable behaviors and triggers. I showed you recently how to refactor an attached property to use the Behavior base class instead. Today, we'll look at the TriggerAction that is also available in System.Windows.Interactivity (either as a part of Expression Blend, or available through the Blend SDK).

If you recall in TextBox Magic, I used an attached property to define an attribute that, when set to true, would bind to the TextChanged event and force data-binding to allow validation and binding without having to lose focus from the text box. Because the action is tied to an event, it makes sense to refactor as a trigger action.

The first step is simply to build a class based on TriggerAction, which requires adding a reference to System.Windows.Interactivity. Like the Behavior class, the TriggerAction can be strongly typed, so we will type it to the TextBox:

    public class TextBoxBindingTrigger : TriggerAction<TextBox>
    {
    }

The method you must override is the Invoke method. This will be called when the action/event the trigger is bound to fires. If you recall from the previous example, we simply need to grab a reference to the binding and force it to update:

    protected override void Invoke(object parameter)
    {
        BindingExpression binding =
            AssociatedObject.GetBindingExpression(TextBox.TextProperty);
        if (binding != null)
        {
            binding.UpdateSource();
        }
    }

That's it! We now have a trigger action that is ready to force the data binding. Now we just need to implement it with a text box and bind it to the TextChanged event.

In Expression Blend, the new action appears in the Behaviors section under assets. You can click on the trigger and drag it onto an appropriate UI element. Because we typed our trigger to TextBox, Blend will only allow you to drag it onto a TextBox. Once you've attached the trigger to a UI action, you can view the properties and set the appropriate event.
In this case, we'll bind to the TextChanged event. Notice how the associated element defaults to the parent, but can be changed in Blend to point to any other TextBox available as well.

Of course, all of this can be done programmatically or through XAML as well. To add the trigger in XAML, simply reference both System.Windows.Interactivity as well as the namespace for your trigger. Then, you can simply add it to the TextBox like this:

    ...
    <TextBox Text="{Binding Name, Mode=TwoWay, NotifyOnValidationError=true,
                    ValidatesOnExceptions=true}"
             Grid.Row="1"> <!-- Grid attribute value lost in extraction; Row="1" is a placeholder -->
        <i:Interaction.Triggers>
            <i:EventTrigger EventName="TextChanged">
                <local:TextBoxBindingTrigger/>
            </i:EventTrigger>
        </i:Interaction.Triggers>
    </TextBox>

As you can see, this is a much more elegant way to create behaviors attached to events, as it allows you to easily attach the trigger as well as define the event that the trigger is coupled with.
https://csharperimage.jeremylikness.com/2009/10/silverlight-behaviors-and-triggers_11.html
Pandas Data Series: Stack two given series vertically and horizontally

Pandas: Data Series Exercise-37 with Solution

Write a Pandas program to stack two given series vertically and horizontally.

Sample Solution:

Python Code:

    import pandas as pd
    series1 = pd.Series(range(10))
    series2 = pd.Series(list('pqrstuvwxy'))
    print("Original Series:")
    print(series1)
    print(series2)
    series1.append(series2)
    df = pd.concat([series1, series2], axis=1)
    print("\nStack two given series vertically and horizontally:")
    print(df)

Sample Output:

    Original Series:
    0    0
    1    1
    2    2
    3    3
    4    4
    5    5
    6    6
    7    7
    8    8
    9    9
    dtype: int64
    0    p
    1    q
    2    r
    3    s
    4    t
    5    u
    6    v
    7    w
    8    x
    9    y
    dtype: object

    Stack two given series vertically and horizontally:
       0  1
    0  0  p
    1  1  q
    2  2  r
    3  3  s
    4  4  t
    5  5  u
    6  6  v
    7  7  w
    8  8  x
    9  9  y
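Note that the sample solution computes series1.append(series2) (the vertical stack) but discards the result, and Series.append was deprecated and later removed in pandas 2.0. A sketch that keeps both stacks via pd.concat; the function name is mine:

```python
import pandas as pd

def stack_series():
    """Return (vertical, horizontal) stacks of the two exercise series."""
    series1 = pd.Series(range(10))
    series2 = pd.Series(list('pqrstuvwxy'))
    # axis=0: one long series of 20 rows (index labels 0-9 repeat)
    vertical = pd.concat([series1, series2], axis=0)
    # axis=1: a 10x2 DataFrame with integer column labels 0 and 1
    horizontal = pd.concat([series1, series2], axis=1)
    return vertical, horizontal
```

pd.concat is the portable spelling for both directions: the axis argument alone decides whether the series are stacked end to end or side by side.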
https://www.w3resource.com/python-exercises/pandas/python-pandas-data-series-exercise-37.php
Meeting:Packaging IRC log 20070123
From FedoraProject

(09:02:18 AM) scop: hello
(09:02:22 AM) lutter: hey
(09:02:26 AM) racor: hi
(09:03:08 AM) rdieter: meeting? spot?
(09:04:37 AM) rdieter: who's here?
(09:05:50 AM) rdieter: Bueller?
(09:06:26 AM) scop: pong, sort of
(09:06:27 AM) ecik [n=ecik] entered the room.
(09:07:36 AM) rdieter: Sounds like we won't much done today.
(09:07:47 AM) rdieter: s/much/get much/
(09:09:13 AM) tibbs: Sorry, arranging my flight to fudcon.
(09:09:22 AM) tibbs: All done now.
(09:09:40 AM) scop: just a note, I went ahead and wrote up provides/obsoletes, and only after doing so, noticed that it was in "ratify" status in the todo
(09:09:51 AM) rdieter left the room (quit: Remote closed the connection).
(09:10:02 AM) scop: I thought it was supposed to be writeup material already for a while...
(09:10:29 AM) tibbs: I don't recall when it was presented to the other committees.
(09:11:02 AM) rdieter1 [n=rdieter1] entered the room.
(09:11:05 AM) scop: it was, and there was a discussion about when to remove "deprecated" provides later FCX+1 or FCX+2 or something
(09:11:12 AM) scop: ...on fedora-maintainers, IIRC
(09:11:31 AM) ***rdieter1 grumbles a bit more about his flaky home network
(09:11:41 AM) rdieter1 is now known as rdieter
(09:11:57 AM) tibbs: The policy is that if other committees have adequate time to object and don't, then the proposal moves from ratify to writeup.
(09:12:21 AM) tibbs: As long as it was formally presented and no objections were raised, we're good to go.
(09:12:31 AM) scop: I think it's kosher
(09:12:33 AM) tibbs: I just don't remember when that happened.
(09:13:16 AM) rdieter: anyone with topics to discuss?
(09:13:59 AM) scop: maybe a few words about language subpackage naming?
(09:14:19 AM) tibbs: I don't remember that being brought up before.
(09:14:33 AM) tibbs: Is there a pending discussion that I missed?
(09:14:49 AM) rdieter: scop: what did you have in mind?
(09:14:55 AM) scop: no, that was just out of the blue, doesn't have to be discussed right now
(09:15:06 AM) scop: the guidelines are IMO pretty clear that perl-foo, python-foo etc should be used, but there are some foo-perl, foo-python packages in the repo and new ones are being introduced
(09:15:23 AM) rdieter: Most folks seem to be using <pkg>-langpack-<lang>
(09:15:35 AM) tibbs: So which kind of language are we discussing?
(09:15:39 AM) tibbs: Human or programming?
(09:15:54 AM) rdieter: foo-perl/foo-python stuff likely simply needs to be fixed, unless there's (very) good reason to do otherwise.
(09:16:17 AM) rdieter: (sorry, I misunderstood, I think scop is talking about programming)
(09:16:23 AM) scop: mostly yes
(09:16:38 AM) racor: depends, foo-perl is something different than perl-foo
(09:16:56 AM) scop: one recent example:
(09:17:04 AM) racor: foo-perl can be a perl-frontend to foo,
(09:17:22 AM) racor: while perl-foo would be the "foo" perl dist/module
(09:17:47 AM) scop: I disagree with "dist/module" having anything to do with it
(09:18:00 AM) Rathann|work left the room (quit: Read error: 113 (No route to host)).
(09:18:22 AM) rdieter: In this case perl-*, python-* is a no-brainer, these seem to be language bindings.
(09:19:04 AM) tibbs: I've always resolved this kind of thing in favor of not using %package -n.
(09:19:25 AM) scop: they are also "perl-frontend to foo", so I guess racor would see it differently?
(09:19:42 AM) racor: scop: Would you agree to perl-module being installed to /usr/lib/perl5?
(09:19:44 AM) tibbs: So if you're building a tarball that spits out language-specific subpackages, use %package perl, not %package -n %{name}-perl
(09:19:55 AM) scop: racor, I don't understand the question
(09:20:15 AM) rdieter: tibbs: isn't that eseentially the same thing?
(09:20:22 AM) racor: app-specific perl-modules may well live outside of /usr/lib/perl*
(09:20:26 AM) scop: rdieter, no, it's exactly the opposite
(09:20:30 AM) lutter: tibbs: I'd much rather all language bindings are prefixed; (perl-foo)
(09:20:32 AM) tibbs: Sorry, the latter should be %package -n perl-%{name}
(09:20:39 AM) lutter: makes it much easier to guess the name of the lang bindings for foo
(09:20:40 AM) rdieter: tibbs: better. (:
(09:20:49 AM) lutter: tibbs: phew
(09:20:50 AM) scop: ah, fooled me too ;)
(09:21:24 AM) tibbs: In other words, I'm disagreeing with lutter here.
(09:21:38 AM) rdieter: my take on the current guidlines are clear: use %package -n perl-%{name}
(09:21:48 AM) scop: rdieter++
(09:22:11 AM) lutter: tibbs: no you're not ;) .. we all want lang bindings to be called perl-foo, python-foo etc.
(09:22:13 AM) rdieter: now whether there is agreement whether this is a "good thing", well...
(09:22:13 AM) ***abadger1999 is here now
(09:22:18 AM) tibbs: I don't.
(09:22:27 AM) scop: anyway, seems that we do need to discuss this sometime and clarify guidelines as there are differing opinions and interpretations
(09:23:08 AM) abadger1999: lutter: tibbs has a "not" in his sentence.
(09:23:10 AM) rdieter: I'd say take further discussion to a wider audience (on the list)
(09:23:20 AM) lutter: abadger1999: yeah, just noticed that .. bummer
(09:23:48 AM) scop: FWIW, spot let obexftp-perl pass in the review, dunno if that can be taken as a statement of opinion
(09:24:01 AM) rdieter: doh.
(09:24:33 AM) abadger1999: scop: He didn't argue with you though. Could be he hadn't thought about it before.
(09:24:47 AM) scop: right, could be
(09:26:29 AM) rdieter: anyone willing to discuss: ?
(09:27:24 AM) scop: OT, was there a meeting last week, BTW? had hardware problems (and still do) and couldn't participate
(09:27:34 AM) Rathann|work [n=rathann] entered the room.
(09:27:36 AM) rdieter: yep.
(09:28:28 AM) rdieter: (though I don't see any irc logs posted for it, not sure who suckered, err, volunteered to do that)
(09:28:57 AM) lutter: rdieter: looks reasonable to me ... I would replace a few of the 'need' with either should or must
(09:29:59 AM) scop: if I read it correctly, the new proposal sort of drops the requirement to add menu entries for GUI apps
(09:30:15 AM) rdieter: huh? (hope not), what makes you say that?
(09:30:18 AM) scop: it leaves it up to the packager
(09:30:36 AM) scop: old: "If a package contains a GUI application, then it needs to also include a properly installed .desktop file."
(09:30:48 AM) scop: new: "If a package contains an application you wish to appear in the Desktop menu, then it needs to also include a properly installed .desktop file."
(09:31:21 AM) rdieter: Fact is (imo, of course), if a packager doesn't want it in the menus, they shouldn't *have* to do it.
(09:31:37 AM) rdieter: I don't care if folks here insist on keeping the original intent, tho
(09:32:10 AM) rdieter: the big thing is simply adding the SHOUD/MUST for compliance with the desktop-entry-spec
(09:32:23 AM) tibbs: For example, do our wine packages install a menu entry for regedit or notepad?
(09:32:27 AM) scop: I think the original intent is fine
(09:32:33 AM) scop: tibbs, I think they do
(09:32:38 AM) tibbs: They're graphical apps, but I doubt they should clutter the menus.
(09:32:52 AM) abadger1999: What are the ramifications of f.d.o compliance?
(09:32:56 AM) abadger1999: tibbs: Yes, they do.
(09:33:11 AM) scop: IIRC wine solves the clutter by adding a separate menu "folder"
(09:33:13 AM) abadger1999: applications::wine::regedit
(09:33:16 AM) rdieter: abadger1999: one result would be non-stupid .desktop files. (:
(09:33:52 AM) abadger1999: I'm just wanting to know if it's going to spark controversy that we'll need to be able to justify.
(09:34:06 AM) rdieter: stupid examples: apps abusing Name vs. GenericName entries.
(09:34:34 AM) tibbs: rdieter: You're going to get a lot of resistance from the gnome people over that.
(09:34:43 AM) abadger1999: So... I take that as a yes ;-)
(09:34:47 AM) scop: isn't name vs genericname a result of some GNOME HIG or something?
(09:35:05 AM) scop: s/name vs genericname/name vs genericname "abuse"/
(09:35:05 AM) rdieter: I know, but they brought it on themselves by using an implementation counter to the desktop-spec.
(09:35:36 AM) tibbs: Perhaps, but we've already established that we're not allowed to require compliance with freedesktop.org standards and specs.
(09:35:50 AM) abadger1999: Err.. When it's for no benefit.
(09:36:04 AM) abadger1999: rdieter, does name/generic name abuse cause problems for kde?
(09:36:34 AM) scop: if spec compliant entries look like crap or confusing in gnome, then I'm afraid we just can't require it
(09:36:45 AM) rdieter: IMO, it causes problems for all desktops. If a user wants to run evolution, gaim, it's nowhere to be found in the menus (as-is)
(09:37:11 AM) f13: rdieter: not every thing, but things concerning the desktop packages should really get somebody from the desktop team to look over them, since it directly effects their packaging.
(09:37:14 AM) rdieter: scop: that's why I'm willing to settle for even a SHOULD guideline.
(09:37:43 AM) lutter: rdieter: have you talked to any of the gnome guys about it ?
(09:38:04 AM) abadger1999: rdieter: What I'm driving towards, is if kde can't work right with the abusing .desktop files then we have a case for enforcing spec compliance.
(09:38:08 AM) rdieter: lutter: not directly, f13 was going to run it by the desktop folks.
(09:38:16 AM) abadger1999: If kde looks fine then it's a change for no apparent benefit.
(09:38:25 AM) abadger1999: s/apparent/visible/
(09:38:30 AM) rdieter: abadger1999: it benefits kde.
(09:39:05 AM) lutter: rdieter: I would prefer if you (or f13) can get the gnome folks to agree
(09:39:32 AM) tibbs: They're violating the spec on purpose; there's little chance that they would agree.
(09:39:37 AM) rdieter: lutter: of course, but don't get your hopes up, since it requires them to actually *change* something.
(09:39:48 AM) rdieter: tibbs: ++
(09:40:36 AM) rdieter: And that sucks, I thought we're here to work toward getting things right, even if it is painful for some in the short-term.
(09:40:47 AM) abadger1999: rdieter: Then maybe we should look into something similar to java naming: .desktop files must comply with f.d.o specs. Since GNOME (a large and important part of our distro doesn't handle f.d.o correctly), we will allow gnome apps to use this format for generic name/name until we can patch the following programs: gnome-panel, (other menu creating apps in gnome)
(09:41:31 AM) rdieter: abadger1999: it's not just gnome, it's apps that the desktop team have use for "default" apps that are currently problematic.
(09:42:02 AM) abadger1999: i'm tempted to say those should be fied to comply with the standard.
(09:42:07 AM) abadger1999: s/fied/fixed/
(09:42:17 AM) rdieter: well, that's a big, duh! (:
(09:42:39 AM) rdieter: but that's just my lowly opinion. (:
(09:42:58 AM) abadger1999: Exempt gnome apps only.
(09:43:12 AM) scop: hm
(09:43:46 AM) scop: many KDE apps are visible in GNOME menus, too
(09:44:31 AM) rdieter: abadger1999: I'm more inclined to call this a SHOULD, instead of including any explicit exemptions.
(09:45:21 AM) abadger1999: When you come down to it, though, a SHOULD doesn't require anything.
(09:45:37 AM) rdieter: abadger1999: it's better than nothing.
(09:45:45 AM) abadger1999: You can mention it in the review but the packager can just ignore it.
(09:46:28 AM) scop: I tend to think that SHOULDs are things that can be skipped, but an explanation is required
(09:46:43 AM) abadger1999: A MUST with eemption should have a condition that states what functionality needs to be included for the exemption to go away.
(09:46:45 AM) scop: ie. not just ignored
(09:46:53 AM) rdieter: Would I prefer this be a MUST item? Absolutely. I just seriously doubt it would get the votes and/or not be vetoed.
(09:47:31 AM) abadger1999: I think the Packaging Committee has to agree whether it's good. But if so I think if it got vetoed we should take it to the Board.
(09:47:59 AM) tibbs: Well, first we should at least consult the gnome folks and get them to officially refuse to comply with the spec (and get their reasons for doing so).
(09:48:16 AM) rdieter: tibbs: fair enough.
(09:48:22 AM) scop: bear in mind that this is a thing that mostly affects the area of the desktop people, not packaging per se
(09:48:44 AM) tibbs: Perhaps they understand that it's not the best thing but have too much to fix all at once; you never know.
(09:48:58 AM) abadger1999: scop: But the "desktop people" we talk about within Red Hat are GNOME devs.
(09:49:14 AM) scop: abadger1999, yes, I'm aware of that :/
(09:49:39 AM) abadger1999: If desktop people is expanded to include community kde packagers we would have a different perspective.
(09:50:33 AM) tibbs: It's already been established that for Fedora, desktop == gnome.
(09:50:43 AM) abadger1999: tibbs: I disagree.
(09:50:49 AM) scop: abadger1999, but that wouldn't change the balance between if this is a thing that should be handled by desktop or packaging folks, no?
(09:50:50 AM) racor: tibbs: --
(09:50:52 AM) tibbs: Well, the mails were pretty clear.
(09:51:11 AM) racor: tibbs: RH != community
(09:51:33 AM) rdieter: racor: Fedora != RH either. (:
(09:51:36 AM) abadger1999: firefox != epiphany?
(09:51:38 AM) abadger1999: :-)
(09:51:55 AM) tibbs: racor: that's what I was saying as well, but reality intrudes.
(09:51:55 AM) scop: Kedora anyone? =)
(09:52:47 AM) rdieter: I think we've ran this one into the ground (for now), any other (votable) topics today?
(09:53:06 AM) racor: I have an item nagging me, but I am not sure if it's our business:
(09:53:16 AM) rdieter: (in the meantime, I'll ping fedora-packaging + fedora-maintainers for more feedback on .desktop files)
(09:53:19 AM) racor: packages accessing remote systems unattendedly and/or w/o opt-in
(09:53:20 AM) abadger1999: rdieter: Could we vote on the non-controversial changes?
(09:53:29 AM) abadger1999: (Or do we not have quorum?)
(09:53:54 AM) rdieter: abadger1999: I'm ok with a vote, if wew have the folks.
(09:54:13 AM) lutter: abadger1999: I really think the gnome guys should make some sort of statement about the proposal
(09:54:21 AM) ***spot is here now
(09:54:28 AM) ***spot snuck out of this mindless meeting
(09:54:32 AM) abadger1999: I count eight.
(09:55:07 AM) rdieter: abadger1999: which do you consider non-controversial?
(09:55:53 AM) tibbs: The part about desktop-file-install not being mandatory seems good to me.
(09:55:57 AM) abadger1999: Everything except f.d.o with the change that scop mntioned.
(09:56:00 AM) tibbs: More examples are always good.
(09:56:13 AM) rdieter: ok.
(09:56:20 AM) spot: umm, since i'm jumping in late, where is the draft?
(09:56:29 AM) abadger1999:
(09:56:36 AM) spot: thx
(09:57:18 AM) rdieter: racor: I think your gripe is valid, and that should be pretty much a no-no.
(09:58:03 AM) rdieter: racor: heck, that's just common sense, not sure if it's worth explicitly mentioning in packaging guidelines.
(09:58:26 AM) spot: umm, why wouldn't we want d-f-i to be mandatory?
(09:58:38 AM) abadger1999: Because it's not necessary.
(09:58:55 AM) abadger1999: Many packages install the desktop file just fine.
(09:58:59 AM) ***rdieter doesn't care strongly either way, if the consensus is to keep in mandatory, so be it.
(09:59:06 AM) tibbs: You do lose the lint-like quality, though.
(09:59:26 AM) scop: "use d-f-i, or explicitly run d-f-validate during build"?
(09:59:31 AM) rdieter: tibbs: some of dfi's lint-like warnings/errors are wrong. (:
(09:59:42 AM) tibbs: For example, lots of packages are now warning about Applications, aren't they?
(09:59:54 AM) tibbs: rdieter: Those should then be bugs that need fixing.
(09:59:56 AM) spot: I think it is far easier to keep d-f-i as mandatory and ensure the correct desktop files get put in place than to try to build an ecosystem to check this
(09:59:56 AM) rdieter: scop: ++ , I think I like that.
(09:59:57 AM) scop: Applications is not a registered category in the spec
(10:00:28 AM) racor: rdieter: I think we can't avoid to: Think along "smolt" (Fedora spyware) and .... (gulp) firefox
(10:00:39 AM) rdieter: ok, I'm sold, MUST: use either d-f-i or d-f-validate on gui .desktop files.
(10:00:57 AM) scop: rdieter, also on ones installed in eg. KDE private locations?
(10:01:06 AM) spot: i still think that since we have no way to be sure that d-f-validate is being run
(10:01:07 AM) scop: I suppose some of those won't validate
(10:01:34 AM) tibbs: Perhaps rpmlint could run the validator on any desktop files in the installed package?
(10:01:36 AM) rdieter: scop: mostly because the validator's errors are simply wrong, but that's fixable.
(10:01:45 AM) tibbs: Or does anyone but me run rpmlint on the installed packages?
(10:01:46 AM) scop: tibbs, it already does
(10:01:53 AM) spot: tibbs: that works at review, but what about after review?
(10:02:08 AM) spot: requiring d-f-i ensures that we're getting the latest desktop file, always, forever and ever amen
(10:02:13 AM) tibbs: Nothing stops us from running rpmlint over the entire distro if we wanted.
(10:02:17 AM) tibbs: We just don't have the infrastructure.
(10:02:24 AM) scop: make rpmbuild run d-f-v in post-%install checks?
(10:02:45 AM) spot: we're trying to hack and abuse the ecosystem to get around a trivial action
(10:02:52 AM) scop: btw, for Fedora would be very nice
(10:02:53 AM) spot: d-f-i will never give us what we don't want
(10:03:12 AM) spot: it will sometimes be irrelevant, but it will always leave us safe.
(10:03:33 AM) rdieter: spot: so your suggestion is simply to keep d-f-i usage mandatory, and call it good?
(10:03:37 AM) spot: i honestly would rather see changes to d-f-i
(10:03:53 AM) spot: such that it checks for an existing desktop file and exits if it is identical
(10:04:12 AM) abadger1999: spot: k. That would be fine with me.
(10:04:22 AM) ***scop doesn't follow... rephrase?
(10:04:58 AM) scop: checks for an existing desktop file where?
(10:05:05 AM) scop: exits if what is identical?
(10:05:05 AM) spot: Why are we trying to make d-f-i not mandatory?
(10:05:50 AM) spot: We're using d-f-i to install the desktop file into the proper location and validate that it is functional.
(10:06:24 AM) rdieter: spot: no good reason, I'll update the proposal to not change d-f-i's mandatory'ness.
(10:06:39 AM) rdieter: sorry folks gotta run to take a little one to pre-school.
(10:07:02 AM) tibbs: d-f-t can be extra busy work if the package is already installing its desktop files properly.
(10:07:18 AM) tibbs: You have to remove them and then use d-f-i to put them back, even if nothing is changing.
(10:07:30 AM) rdieter: tibbs, true, but I think I appreciate spot's point of validation.
(10:07:38 AM) tibbs: So I can understand why some might want to get rid of that step.
(10:07:51 AM) rdieter: tibbs: afaik, no, you don't have to remove anything. You can use d-f-i on a .desktop file *in-place*
(10:08:05 AM) tibbs: Ah, OK. I thought there was a time when this didn't work.
(10:08:21 AM) rdieter: (maybe there was, but not now... (:)
(10:08:27 AM) tibbs: Will d-f-i fail the build if the file is bad enough?
(10:08:28 AM) spot: it certainly works now. :)
(10:08:33 AM) rdieter: tibbs: yes.
(10:08:37 AM) spot: tibbs: yep.
(10:08:53 AM) tibbs: Then it seems to be a useful step.
(10:09:30 AM) scop: its judgement of bad enough could use some tweaking though, but that's an implementation detail
(10:09:49 AM) rdieter is now known as rdieter_away
(10:10:05 AM) spot: scop: indeed, but thats tuning d-f-i, not making it less required. :)
(10:10:37 AM) scop: right
(10:11:00 AM) scop: I can live with d-f-i being mandatory
(10:12:18 AM) scop: spot, any comments wrt cases like obexftp-{perl,python} vs {perl,python}-obexftp in
(10:12:49 AM) scop: we discussed it a bit and there seem to be differing opinions/interpretations of the naming guidelins
(10:14:16 AM) spot: there is a fair amount of precedent for non-CPAN packages to use -perl
(10:14:25 AM) spot: and for CPAN packages to use perl-*
(10:14:32 AM) scop: don't get stuck with CPAN
(10:14:42 AM) spot: uuid-perl-0:1.5.1-2.fc6.i386
(10:14:42 AM) spot: libpreludedb-perl-0:0.9.11.1-2.fc7.i386
(10:14:42 AM) spot: graphviz-perl-0:2.12-2.fc7.i386
(10:14:42 AM) spot: perl-perlmenu-0:4.0-4.fc6.noarch
(10:14:43 AM) spot: netcdf-perl-0:1.2.3-2.fc7.i386
(10:14:43 AM) spot: pcsc-perl-0:1.4.4-3.fc7.i386
(10:14:45 AM) spot: tetex-perltex-0:1.3-1.fc6.noarch
(10:14:49 AM) spot: openbabel-perl-0:2.1.0-0.1.b4.fc7.i386
(10:14:51 AM) spot: claws-mail-plugins-perl-0:2.7.1-1.fc7.i386
(10:14:53 AM) spot: GraphicsMagick-perl-0:1.1.7-6.fc7.i386
(10:14:55 AM) spot: libprelude-perl-0:0.9.12.1-1.fc7.i386
(10:14:57 AM) spot: cyrus-imapd-perl-0:2.3.7-6.fc7.i386
(10:14:59 AM) spot: epylog-perl-0:1.0.3-5.fc7.noarch
(10:15:01 AM) spot: rrdtool-perl-0:1.2.15-9.fc7.i386
(10:15:03 AM) spot: nagios-plugins-perl-0:1.4.5-1.fc7.i386
(10:15:05 AM) spot: (sorry for the flood)
(10:15:07 AM) spot: (and ignore the tetex-perltex false positive)
(10:15:32 AM) scop: (and perl-perlmenu :))
(10:15:42 AM) spot: I think perhaps having the perl Provide is more important in these cases
(10:15:49 AM) spot: perl(obexftp)
(10:16:00 AM) tibbs: To me it's all about whether it's a subpackage of something like cyrus-imapd or mysql or somesuch.
(10:16:14 AM) tibbs: But then I suppose that doesn't have a lot of relevance except to packagers.
(10:16:31 AM) scop: what about ruby, python, tcl, java, etc? they don't have the Provide, at least not all of them
(10:17:14 AM) tibbs: That would be a deficiency in those packaging guidelines, wouldn't it?
(10:17:22 AM) spot: well, we can either mandate the namespace, or mandate the Provide
(10:17:36 AM) spot: which do you guys think will be less controversial? :)
(10:17:45 AM) scop: the Provide is useless unless something is made to use it
(10:17:57 AM) scop: with perl it's automatic, with many others it's nonexistent
(10:18:00 AM) abadger1999: Both cntroversial but namespace less so.
(10:18:24 AM) spot: the namespace leaves us with less load on yum as well
(10:18:34 AM) spot: anywhere we can eliminate extra Provides speeds up yum
(10:19:08 AM) spot: In the specific case of perl, perl packages use the perl() provides almost universally
(10:19:15 AM) scop: hey, by the way, now I remember a thing I have been meaning to bring up many times before but always forgotten
(10:19:16 AM) spot: because the provides are autogenerated
(10:19:38 AM) scop: does anyone see any downsides if rpmbuild would prune self-requires from all packages?
(10:19:42 AM) scop: (at build time, that is)
(10:19:59 AM) tibbs: What do you define as self-requires?
(10:20:22 AM) Rathann|work: tibbs: a case where Requires: == Provides:
(10:20:29 AM) scop: $ rpm -qR perl-URI | grep -F 'perl(URI'
(10:20:40 AM) Rathann|work: ah
(10:20:46 AM) Rathann|work: my bad then
(10:20:47 AM) racor: scop: No, I think this would work
(10:21:01 AM) spot: seems reasonable to me.
(10:21:15 AM) scop: the only case I can think of this being even potentially harmful is
(10:21:17 AM) tibbs: You don't lose much.
(10:21:26 AM) scop: if we have a package that requires virtual "foo" (10:21:39 AM) Rathann|work: so what about my obexftp-perl vs perl-obexftp? (and same for python?) (10:21:41 AM) scop: and provides "foo" itself, but we have several others providing "foo" too (10:22:01 AM) scop: a depsolver could present a list of packages providing foo and let the user choose (10:22:36 AM) spot: scop: yeah, that is a valid problem (10:22:41 AM) abadger1999: I like perl-obexftp and python-obexftp but my opinion is kinda slushy. (10:22:50 AM) racor: rathann: If this package install's it's perl module to /usr/lib/perl*, it must be perl-* (10:23:01 AM) scop: I'm with abadger1999 on that (10:23:22 AM) Rathann|work: scop, racor: will you file bugreports against packages which don't follow that rule? (10:23:25 AM) ***lutter nods violently (10:23:32 AM) racor: scop: I see perl-* tied to /usr/lib/perl* (10:23:37 AM) spot: i've historically sided with the packager's ability to choose, but I can see the benefit of the universal namespace here (10:23:38 AM) scop: Rathann|work, possibly, but this requires more discussion (10:23:52 AM) spot: we also need to revisit whether the "python namespace exception" still applies (10:23:53 AM) scop: Rathann|work, and it is not my intention to block obexftp on this (10:24:06 AM) spot: aka: if the package has "py" in its name, it doesn't need the namespace (10:24:14 AM) lutter: we shold probably grandfather existing packages in if we mandate the namespace (which I am all for) (10:24:17 AM) Rathann|work: scop: but I want to avoid renaming packages later (10:24:44 AM) Rathann|work: so I guess I'll go with perl-foo and python-foo (10:25:05 AM) scop: well, if you want to be extra safe, use Provides (10:25:13 AM) scop: not sure if it's worth it though (10:26:36 AM) racor left the room (quit: "Dinner's waiting ...."). 
(10:28:18 AM) Rathann|work: ok thanks (10:28:23 AM) ***Rathann|work goes home (10:28:23 AM) ***scop needs to go too (10:28:38 AM) ***spot goes back into his meeting of doooom. (10:31:01 AM) scop left the room (quit: "Leaving"). (10:44:30 AM) ecik: are we going to update BuildRequires section of Packaging Guidelines? (10:44:41 AM) ecik: it would be nice, if there were written something about mock (10:54:50 AM) rdieter_away is now known as rdieter (11:16:32 AM) jorge_ [n=jorge] entered the room. (11:26:04 AM) rdieter: ecik: What would you suggest? (because it's not entirely clear from your comment) (11:27:05 AM) ecik: rdieter, BuildRequires section mentions only about rpmdev-rmdevelrpms to find missing BRs (11:28:46 AM) ecik: if we say about finding buildrequires, mock looks like a better solution to find them (11:29:59 AM) rdieter: either way works, but I'd have to say using mock is harder (esp for newbies) (11:31:34 AM) ecik: that's right, but we should show the another good way to check BRs (11:31:44 AM) ecik: we could paste some text from MockTricks (11:32:41 AM) rdieter: sounds reasonable, you willing to write it up and propose it to the committee? (: (11:33:20 AM) ecik: doesn't sound like a big problem thus I can do it :) (11:34:56 AM) rdieter: good man. (11:44:11 AM) ecik: I should put it under PackagingDrafts? (11:50:19 AM) rdieter: yup (11:50:47 AM) rdieter: then send something to the fedora-packaging ml for review/comment. (11:51:31 AM) ecik: ok (12:10:05 PM) f13: rdieter: ping (12:10:28 PM) rdieter: f13: yo (12:10:32 PM) f13: rdieter: some feedback from .desktop proposal. Not every gui app should show up in the menu, things like gconf-editor, eog, evince, etc... they are purposfully left out of the menu (12:10:47 PM) f13: rdieter: I think it would be better worded that any app that should show up in the menu requires a .desktop file. (12:10:53 PM) rdieter: arg, that's what I said, but other folks here wanted otherwise... 
(12:11:04 PM) f13: <@mclasen> it may be good to elaborate on the concept of installing a desktop file without making it visible in the menus (12:11:07 PM) f13: <@mclasen> for the benefit of menu editing (12:11:39 PM) f13: <@mclasen> it would also be good to write something down about the ideas behind GenericName vs Name, (12:11:43 PM) f13: <@mclasen> and the relation to default apps (12:11:46 PM) f13: <@mclasen> but I am not prepared to do that right now (12:12:17 PM) rdieter: I think *purposely* leaving something out of the menus may also be a valid exception, which should/would require justification at review-time. (12:12:50 PM) rdieter: Not sure if it's a case worth mentioning explictly... (I don't care strongly one way or the other). (12:13:35 PM) rdieter: f13: thanks for the feedback, I'll take those comments to the list to see what others have to say. (12:15:19 PM) f13: <@jkeating> mclasen: if we took the part out regarding every gui app having a .desktop file, would you be ok with this proposal going through, as an interim step toward something with even more information? (12:15:23 PM) f13: <@mclasen> doesn't seem to contain anything particularly controversial afaics on quick reading (12:15:27 PM) f13: <@mclasen> does it mention that .desktop files go in /usr/share/applications, normally (12:15:30 PM) f13: <@mclasen> what about the ownership of that directory (12:15:33 PM) f13: <@mclasen> and the resulting Requires (12:15:35 PM) f13: <@mclasen> does every .desktop-file installing package require gnome-session for that directory ? (12:15:39 PM) f13: <@mclasen> like we did for .pc files ? (12:16:23 PM) tibbs: f13: Where is this coming from? 
(12:16:31 PM) f13: <@mclasen> might also be worthwhile to mention that Categories are used to place the entries in suitable menus (12:16:34 PM) f13: tibbs: internal IRC (12:16:47 PM) f13: tibbs: our desktop guys use internal IRC and #fedora-desktop on gimpnet (12:16:54 PM) f13: (since thats where most of gnome is) (12:17:06 PM) f13: trying to manage 3 IRC servers gets to be a bit rediculous (12:17:27 PM) tibbs: I can imagine. (12:17:54 PM) rdieter: Hmm, nothing I type seems to be going through (at least I can't see it) (12:17:59 PM) rdieter: ah, there it is again. (12:18:24 PM) rdieter: re: /usr/share/appliations should be owned, but gnome-session is inappropriate, imo, maybe filesystem? (12:18:27 PM) f13: hrm, seems the gnome-session may be a packaging bug (12:18:39 PM) f13: rdieter: yeah, that was the suggestion I got internal too, make filesystem own it, or a filesystem like package. (12:18:58 PM) f13: xdg-system perhaps? Are there other xdg type dirs and such that would be usefull to have desktop agnostic? (12:19:21 PM) rdieter: f13++, add in /etc/xdg (12:19:57 PM) rdieter: + /usr/share/mime-info (12:20:00 PM) rdieter: + /usr/share/icons (12:20:09 PM) f13: <@halfline> filesystem already owns /usr/share/applications (12:20:19 PM) f13: maybe we should just make filesystem own them all. (12:20:22 PM) f13: a worthwile RFE (12:21:00 PM) rdieter: ok. (12:21:06 PM) rdieter: I'll do it now... (12:21:11 PM) f13: thanks (12:21:42 PM) f13: rdieter: stil not sure if they all really GO in filesystem, or if they should go in a higher level package so that they aren't created for nonX systems. But *shrug*, not my call (12:21:49 PM) f13: the RFE should sort it out (12:23:12 PM) rdieter: imo, your suggestion of xdg-filesystem (or something like that) seems to make the most sense. (12:23:19 PM) f13: yeah. (12:23:26 PM) f13: more packages sucks more but *shrug* (12:23:49 PM) rdieter: I don't care, as long as it's somewhere. 
(: (12:29:15 PM) rdieter:, filesystem: +xdg-using directores (12:35:09 PM) rdieter: f13: now I remember, folks wanted to keep d-f-i usage mandatory, to serve as a sort of lint for .desktp files (12:36:16 PM) rdieter: .desktop files that don't validate need to be fixed. (12:37:33 PM) f13: sure, thats for packages that have .desktop files. Which is different from making every gui package have a .desktop file. (12:37:50 PM) f13: if it HAS one, it should be installed (whether or not it shows up in the menu). (12:38:11 PM) rdieter: f13: .desktop files can have Hidden=true. (: (12:39:16 PM) f13: yeah, should be pointed out the seperation between installed vs visible (12:39:50 PM) rdieter: ok.
On Mon, Dec 16, 2002 at 01:58:11PM +0000, Colin.Helliwell@Zarlink.Com wrote:
> I've been porting the MIPS kernel to our system-on-chip hardware
> (4KEc-based) and have encountered a problem with a pre-emptible patch. The
> original kernel was the 2.4.19 from the CVS server, onto which I applied
> Robert Love's preemptible patch (preempt-kernel-rml-2.4.19-2.patch), plus
> the addition of a #include to softirq.h, and a missing definition for
> release_irqlock() in hardirq.h.
> I've found that when CONFIG_PREEMPT is set, it no longer loads the
> (non-compressed) initrd correctly - about 1.8MB through loading (2MB total)
> I get a Data Bus Error. A typical call trace shown by the oops is shown
> below, and looks a little 'confused' to me, so I'm thinking there may be
> some stack corruption going on?
>
> Address   Function
>
> 801174fc  tasklet_hi_action
> 801af0a4  printChipInfo
> 801af0a4  printChipInfo
> 8013bf50  sys_write
> 801089c4  stack_done
> 80108b28  reschedule
> 801133d0  _call_console_drivers
> 80113ad8  release_console_sem
> 80113848  printk
> 801506b8  sys_ioctl
> 801af0f8  printChipInfo
> 8014ccd4  sys_mkdir
> 801af0a4  printChipInfo
> 80100470  init
> 80100470  init
> 80100840  prepare_namespace
> 80100470  init
> 8010049c  init
> 8010352c  kernel_thread
> 80100420  _stext
> 8010351c  kernel_thread
>
> I wondered if anyone had any thoughts about what might be causing this, or
> had seen this occurring before - were there perhaps some changes made just
> after this point in time (now in the 2.5.x kernel)?
>
> Thanks.
NAME
    Class::Array - array based perl objects

SYNOPSIS
    package My::BaseClass;
    use strict;
    use Class::Array -fields=>
        -public=>    qw(Name Firstname),
        -protected=> qw(Age),
        -private=>   qw(Secret);

    # Method example
    sub age {
        my $self=shift;
        if (@_) {
            my $val=shift;
            if ($val>=18) {
                $self->[Age]=$val;
            } else {
                carp "not old enough";
                $self->[Secret]=$val;
            }
        } else {
            $self->[Age]
        }
    }

    ----

    package My::DerivedClass;
    use strict;
    use My::BaseClass -extend=>
        -public=>  qw(Nickname),
        -private=> qw(Fiancee);

    # The best way to generate an object, if you want to
    # initialize your objects but let parent classes
    # initialize them too:
    sub new {
        my $class=shift;
        my $self= $class->SUPER::new;
        # Class::Array::new will catch the above if
        # no one else does
        # do initialization stuff of your own here
        $self
    }

    sub DESTROY {
        my $self=shift;
        # do some cleanup here
        $self->SUPER::DESTROY;
        # can be called without worries,
        # Class::Array provides an empty default DESTROY method.
    }

    ----

    # package main:
    use strict;
    use My::DerivedClass;

    my $v= new My::DerivedClass;
    $v->[Name]= "Bla";
    $v->age(19);
    $v->[Age]=20;   # gives a compile time error since `Age'
                    # does not exist here

DESCRIPTION
    So you don't like objects based on hashes, since all you can do to prevent mistakes while accessing object data is to create accessor methods which are slow and inconvenient (and you don't want to use deprecated pseudohashes either) - what's left? Some say, use constants and array based objects. Of course it's a mess since the constants and the objects aren't coupled, and what about inheritance? Class::Array tries to help you with that.

    Array based classes give the possibility to access data fields of your objects directly without the need of slow (and inconvenient) wrapper methods, but still with some protection against typos and overstepping borders of privacy.

USAGE
    Usage is somewhat similar to importing from non-object oriented modules.
`use Class::Array', as well as `use ' of any Class::Array derived class, takes a number of arguments. These declare which parent class you intend to use, and which object fields you want to create. See below for an explanation of all options. Option arguments begin with a minus `-' Arguments listed before the first option are interpreted as symbol names to be imported into your namespace directly (apart from the field names). This is handy to import constants and `enum's. (Note that unlike the usual Exporter, the one from Class::Array doesn't look at the @EXPORT* variables yet. Drop me a note if you would like to have that.) - -fields => list This option is needed to set up an initial Class::Array based class (i.e. not extend an existing class). The following arguments are the names of the object fields in this class. (For compatibility reasons with older versions of this module, `-members' is an alias for -fields.) - -extend => list This is used to inherit from an existing class. The following names are created in addition to the member names inherited from the parent class. - -public | -protected | -private => list These options may be used anywhere after the -fields and -extend options to define the scope of the subsequent names. They can be used multiple times. Public means, the member will be accessible from anywhere your class is `use'd. Protected means, the member will only be accessible from the class itself as well as from derived classes (but not from other places your class is used). Private means, the member will only be accessible inside the class which has defined it. (For compatibility reasons with older versions there is also a `-shared' option which is the same as protected.) Note that you could always access all array elements by numeric index, and you could also fully qualify the member name constant in question. The member name is merely not put `before your nose'. The default is protected. - -onlyfields list Optional. 
List the fields you want to use after this option. If not given, all (public) member names are imported. Use this if you have name conflicts (see also the IMPORTANT section). (`-onlymembers' is an alias for -onlyfields.)
- -nowarn
If you have a name conflict, and you don't like being warned all the time, you can use this option (instead of explicitly listing all non-conflicting names with -onlyfields).
- -class => 'Some::Baseclass'
In order to make it possible to define classes independent from module files (i.e. package Some::Baseclass is not located in a file .../Some/Baseclass.pm) you can inherit from such classes by using the -class option. Instead of `use Some::Baseclass ...' you would type `use Class::Array -class=>"Some::Baseclass", ...'.
- -namehash => 'hashname'
By default, there is no way to convert field name strings into the corresponding array index except to use eval ".." to interpret the string as a constant. If you need fast string-to-index lookups, use this option to get a hash with the given name into your namespace that has the field name strings as keys and the array index as value. Use this only if needed since it takes some time to create the hash. Note that the hash only has the fields that are accessible to you. You could use the reverse of the hash to get the field names for an index, i.e. for debugging. There's also a class_array_namehash class method with which you can create the hash (or get the cached copy of it) at a later time:

    class->class_array_namehash( [ aliasname [, some more args ]] )

This returns a reference to the hash. Depending on whether you are in a class inheriting from 'class' or not, or whether you *are* the 'class' or not, you will get a hash containing protected (and your private) fields or not. If 'aliasname' is given, the hash is imported into your namespace with that name.
To get a list of indices for a list of field names, there is also a method: class->class_array_indices( list of fieldnames ) This will croak if a field doesn't exist or is not visible to you. IMPORTANT 1.) Be aware that object member names are defined as constants (like in the `constant' module/pragma) that are independant from the actual object. So there are two sources of mistakes: Use of member names that are also used as subroutine names, perl builtins, or as member names of another array based class you use in the same package (namespace). When a particular name is already used in your namespace and you call `use' on a Class::Array based class, Class::Array will detect this, warn you and either die (if it's about a member name you're about to newly create), or both not import the member name into your namespace and also *remove* the existant entry, so you can't accidentally use the wrong one. You can still access the field by fully qualifying it's constant name, i.e. $v->[My::SomeClass::Name] (note that this way you could also access private fields). Use the -onlyfields or -nowarn options if you don't like the warning messages. Using the member constants from another class than the one the object belongs to. I.e. if you have two classes, `My::House' and `My::Person', perl and Class::Array won't warn you if you accidentally type $me->[Pissoire] instead of $me->[Memoire]. 2.) The `use Class::Array' or `use My::ArraybasedClass' lines should always be the *LAST* ones importing something from another module. Only this way name conflicts can be detected by Class::Array. But there is one important exception to this rule: use of other Class::Array based modules should be even *later*. This is to resolve circularities: if there are two array bases modules named A and B, and both use each other, they will have to create their field names before they can be imported by the other one. To rephrase: always put your "use" lines in this order: 1. 
other, not Class::Array based modules, 2. our own definition, 3. other Class::Array based modules. 3.) Remember that Class::Array relies on the module import mechanism and thus on it's `import' method. So either don't define subroutines called `import' in your modules, or call SUPER::import from there after having stripped the arguments meant for your own import functionality, and specify -caller=> scalar caller() as additional arguments. 4.) (Of course) remember to never `use enum' or `use constant' to define your field names. (`use enum' is fine however for creating *values* you store in your object's fields.) 5.) Don't forget to `use strict' or perl won't check your field names! TIPS To avoid name conflicts, always use member names starting with an uppercase letter (and the remaining part in mixed case, so to distinguish from other constants), and use lowercase names for your methods / subs. Define fields private, if you don't need them to be accessible outside your class. BUGS Scalars can't be checked for conflicts/existence. This is due to a strange inconsistency in perl (5.00503 as well as 5.6.1). This will probably not have much impact. (Note that constants are not SCALARs but CODE entries in the symbol table.) CAVEATS Class::Array only supports single inheritance. I think there's no way to implement multiple inheritance with arrays / member name constants. Another reason not to use multiple inheritance with arrays is that users can't both inherit from hash and array based classes, so any class aiming to be compatible to other classes to allow multiple inheritance should use the standard hash based approach. There is now a -pmixin => class Note that you can still force multiple inheritance by loading further subclasses yourself ('use Classname ()' or 'require Classname') and push()ing the additional classnames onto @ISA. 
(But for Class::Array, subclasses of such a class will look as they would only inherit from the one class that Class::Array has been told of.) NOTE There is also another helper module for array classes (on CPAN), Class::ArrayObjects by Robin Berjon. I didn't know about his module at the time I wrote Class::Array. You may want to have a look at it, too. (Well it's not yet one but I put this in here before it becomes one:) - Q: Why does perl complain with 'Bareword "Foo" not allowed' when I have defined Foo as -public in class X and I have a 'use X;' in my class Y? A: Could it be there is a line 'use Y;' in your X module and you have placed it before defining X's fields? (See also "IMPORTANT" section.) AUTHOR Christian Jaeger <copying@christianjaeger.ch>. SEE ALSO constant, enum, Class::Class, Class::ArrayObjects
Hi, I would appreciate help in resolving this issue. My specification for class Employee states:

    Employee()
    // post: variables set to non-null values

    Employee(String nme, char sex, Date dob, int id, Date start)
    // pre:  start != null
    // post: variables initialised with params passed in

Class Employee extends class Person. This is what I have done for the constructors, which I have taken the details from the specification for:

Code:

    public class Employee extends Person {
        // instance variables
        protected int id;
        protected Date start;
        protected float salary;

        // start constructors
        public Employee() {
            super();
            id = 0;
            salary = 0;
            start =          // <-- the part highlighted in red in the original post
        }

        public Employee(String nme, char sex, Date dob, int number, Date start) {
            super(nme, sex, dob);
            id = number;
            start = new Date(start);
            salary = 0;
        }
        // end of constructors

The problem I have is with setting the date (the line marked above) to a non-null value. How do I do this? I have looked over my previous work and found nothing where we have set the initial state for a date. Your help will be greatly appreciated. Many Thanks
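For reference, one common way to meet a "non-null" post-condition is to construct a fresh default Date in the no-arg constructor. Below is a minimal, self-contained sketch of that idea using java.util.Date and omitting the Person superclass — the coursework's own Date and Person classes aren't shown in the post, so the exact constructor calls here are assumptions, not the course's required API:

```java
import java.util.Date;

public class Employee {
    protected int id;
    protected Date start;
    protected float salary;

    // No-arg constructor: post-condition "variables set to non-null values".
    public Employee() {
        id = 0;
        salary = 0;
        start = new Date(); // current date-time; never null
    }

    // Parameterised constructor: pre-condition start != null.
    public Employee(int number, Date start) {
        id = number;
        // Defensive copy of the mutable Date. Note "this." is needed here
        // because the parameter name shadows the field name.
        this.start = new Date(start.getTime());
        salary = 0;
    }
}
```

If the course's custom Date class offers a no-argument or copy constructor, the same pattern applies with that class instead of java.util.Date.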
ThrowPropsPlugin is a plugin for TweenLite and TweenMax that allows you to simply define an initial velocity for a property (or multiple properties) as well as optional maximum and/or minimum end values, and then it will calculate the appropriate landing position and plot a smooth course based on the easing equation you define (Quad.easeOut by default, as set in TweenLite). This is perfect for flick-scrolling or animating things as though they are being thrown.

For example, let's say a user clicks and drags content and you track its velocity using an ENTER_FRAME handler, and then when the user releases the mouse button, you'd determine the velocity but you can't do a normal tween because you don't know exactly where it should land or how long the tween should last (faster initial velocity would mean a longer duration). You need the tween to pick up exactly where the user left off so that it appears to smoothly continue moving in the same direction and at the same velocity and then decelerate based on whatever ease you define in your tween. Oh, and one more challenge: maybe you want the final resting value to always lie within a particular range so that things don't land way off the edge of the screen. But you don't want it to suddenly jerk to a stop when it hits the edge of the screen; instead, you want it to ease gently into place even if that means going past the landing spot briefly and easing back (if the initial velocity is fast enough to require that). The whole point is to make it look smooth. No problem.

Flick-scrolling example
Rotation example

Note: ThrowPropsPlugin is a membership benefit of Club GreenSock ("Shockingly Green" and corporate levels). If you're not a member yet, you can sign up here. Once you join you'll get access to several plugins like this which can be downloaded from your GreenSock account.
One of the fundamental problems ThrowPropsPlugin solves is when you need to start tweening something at a particular velocity but you don't necessarily know the end value (most tweens require that you know the end value ahead of time). Another tricky part of building a tween that begins at a particular velocity and looks fluid and natural (particularly if you're applying maximum and/or minimum values) is determining its duration. Typically it's best to have a relatively consistent level of resistance, so that if the initial velocity is very fast, it takes longer for the object to come to rest compared to when the initial velocity is slower. You also may want to impose some restrictions on how long a tween can last (if the user drags incredibly fast, you might not want the tween to last 200 seconds). The duration will also affect how far past a max/min boundary the property can potentially go, so you might want to only allow a certain amount of overshoot tolerance. That's why ThrowPropsPlugin has a few static helper methods that make managing all these variables much easier. The one you'll probably use most often is the to() method, which is very similar to TweenLite.to() except that it doesn't have a duration parameter and it adds several other optional parameters. Read the ASDocs for details. The demo video covers this method in the 2nd half too.

Documentation
View the full ASDocs here.
Sample AS3 code:

    import com.greensock.*;
    import com.greensock.plugins.*;
    import com.greensock.easing.*;

    TweenPlugin.activate([ThrowPropsPlugin]);

    //simplest syntax, start tweening mc.x at 200 pixels/second and mc.y at -30 pixels/second
    //so that they both come to a stop in exactly 5 seconds using the Strong.easeOut ease
    TweenLite.to(mc, 5, {throwProps:{x:200, y:-30}, ease:Strong.easeOut});

    //to define max and/or min values, use the object syntax like this:
    TweenLite.to(mc, 5, {throwProps:{x:{velocity:200, min:100, max:600}, y:{velocity:-30, min:0, max:150}}, ease:Strong.easeOut});

    //to have ThrowPropsPlugin automatically determine the duration based on the
    //initial velocity and resistance, use the static to() method like so:
    ThrowPropsPlugin.to(mc, {throwProps:{x:{velocity:200, min:100, max:600}, y:{velocity:-30, min:0, max:150}, resistance:50}, ease:Strong.easeOut});

- Where can I get ThrowPropsPlugin? It isn't in the public downloads!
  ThrowPropsPlugin is a membership benefit of Club GreenSock ("Shockingly Green" and corporate levels) so it isn't in the public downloads. If you're not a member yet, you can sign up here. Once you join you'll get access to several plugins like this which can be downloaded from your GreenSock account.
- Is ThrowPropsPlugin only for tweening the x and y properties?
  Absolutely not. You can tween ANY numeric property of ANY object. For example, you could tween mc.rotation like this:

      TweenLite.to(mc, 2, {throwProps:{rotation:150}, ease:Quad.easeOut});

  Or you can tween a custom property of a custom object like this:

      TweenLite.to(myCustomObject, 2, {throwProps:{myCustomProperty:600}, ease:Back.easeOut});

- Does ThrowPropsPlugin completely automate flick-scrolling and tracking the velocity of the mouse, etc.?
  Nope. The plugin was meant to be relatively small, efficient, and flexible. There are many ways you could track mouse movement, touch events, dragging, etc. so the plugin doesn't impose a particular method on you.
The example in the Plugin Explorer demonstrates one way to track the velocity of the mouse, so feel free to use that as a reference. You simply feed the velocity (and optionally other values like max, min, and resistance) to the plugin and it takes it from there. - Why would I want to use a tween to do this type of motion rather than an ENTER_FRAME or Timer that manually applies deceleration iteratively?Using a solution that is integrated with a tweening engine like TweenLite or TweenMax allows you to do sequencing much more easily and you get better control because you can pause(), resume(), reverse(), jump to any time in the tween, use onComplete/onUpdate callbacks, change timeScale, and more. Overwrite management is built-in too. Plus most other solutions don't give you the smooth handling of the maximum/minimum values. They are typically frame-based too, so they slow down if the frame rate drops and it can be difficult to predict exactly when the object will come to rest. - What is that BlitMask object that's used in the interactive example and where can I get it? Is it necessary?BlitMask isn't necessary at all, but it can significantly improve scrolling performance. It's just another GreenSock tool that aims to make your life easier, your apps run better, and your clients happy. See the dedicated BlitMask page for details. - Can I get the FLA from the video with the code that made the dial rotate?Sure. You can get it here (CS4). - Can I use any ease that I want?Technically yes, but generally only easeOut eases look good. Quad.easeOut, Cubic.easeOut, Sine.easeOut, Expo.easeOut Strong.easeOut, Back.easeOut, etc. are all good options. If you need a detailed explanation as to why other eases don't work well, please ask in the forums or below. - What about panel-based flick-scrolling like when flicking through photos on the iPhone/iPad? Can ThrowPropsPlugin help with that?Actually, that's much simpler than you might think and it doesn't require ThrowPropsPlugin. 
The logic is as follows: - Sense the direction of the flick movement (right or left) - If the velocity is more than a particular (relatively low) threshold, do a simple easeOut tween to the next (or previous) panel. Maybe 0.5 seconds or so. - If the velocity is below that threshold, just do an easeOut tween to the current panel (to snap it back into position) I whipped together a sample set of files that demonstrates panel-based flick scrolling. Get it here. It looks like this: - Is the API locked-down or might it change?No changes are planned to the API right now, but this is a brand new plugin and I'm committed to making improvements and enhancements if the need arises as it gets out into the wild and folks provide feedback. So yes, there could be some changes down the road. Please feel free to submit your feedback and suggestions. Need help? Feel free to post your question on the forums. You'll increase your chances of getting a prompt answer if you provide a brief explanation and include a simplified FLA file (and any class files) that clearly demonstrates the problem.
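As an aside, the panel-flick decision in the list above reduces to a simple threshold check. Here is a sketch in plain JavaScript rather than AS3 (the function name, sign convention, and threshold value are all illustrative assumptions, not from the original files):

```javascript
// Decide which panel to tween to after a flick.
// `velocity` is in pixels/second; here a positive value is assumed to mean
// a rightward drag, which reveals the previous panel.
function targetPanel(currentPanel, velocity, panelCount, threshold = 200) {
  let target = currentPanel;
  if (Math.abs(velocity) >= threshold) {
    // Fast enough: advance to the next or previous panel based on direction.
    target += velocity > 0 ? -1 : 1;
  }
  // Below the threshold, target stays put, which snaps the current panel back.
  // Clamp so we never tween past the first or last panel.
  return Math.min(panelCount - 1, Math.max(0, target));
}
```

A short (roughly 0.5-second) easeOut tween to the returned panel's x position then completes the effect.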
https://greensock.com/throwprops-as
[ ] Uma Maheswara Rao G commented on HDFS-11193:
--------------------------------------------

I think the API itself will not worry about whether it is an EC'ed file or a normal one; the process is the same for all files. But it is worth checking the part that scans blocks specifically for EC'ed files and adding some tests around that. Of course, we can fix issues if the tests find any.

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -----------------------------------------------------------------------------
>
> Key: HDFS-11193
> URL:
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: namenode
> Reporter: Rakesh R
> Assignee: Rakesh R
>
> Erasure coded striped files support the storage policies {{HOT, COLD, ALLSSD}}. An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider all immediate files under that directory and check that the files really match the namespace storage policy. All the mismatched striped blocks should be chosen
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201612.mbox/%3CJIRA.13024393.1480503401000.416854.1480628938551@Atlassian.JIRA%3E
Opened 11 years ago
Closed 11 years ago

#4871 closed (invalid)

def __str__(self):

Description (last modified by )

def __unicode__(self):
    return self.x

I think this needs to be updated to

def __str__(self):
    return self.x

Change History (2)

comment:1 Changed 11 years ago by

Where does this need to be updated? Are you sure you are referring to the documentation for the version you are using (noting that Unicode support is only in recent subversion, not, for example, 0.96)?

comment:2 Changed 11 years ago by

No instances of "return self.x" found in trunk either. Closing ticket as invalid until reporter can clarify.

Note: See TracTickets for help on using tickets.
https://code.djangoproject.com/ticket/4871
» Threads and Synchronization

threads not misbehaving in 1.5

Stephen Bloch
Ranch Hand
Joined: Aug 19, 2003
Posts: 48

posted Apr 14, 2005 12:54:00

I was teaching a bunch of college students about Java threads today, and gave an example in which two threads share an int counter, without giving any consideration to synchronization, and jointly count down from 100 to 0. I showed how most of the time the count goes smoothly, but once in a while you get "78...77...77...76...74...73..." or the like. So then I introduced "synchronized" to fix the problem. Except that some of my students had never seen the problem, so they couldn't tell if it was fixed. Specifically, the students running JDK 1.5 never got duplicate or skipped numbers.

Why not? What has changed in the thread libraries that makes threads behave more predictably? And how can I still illustrate the need for synchronization?

public class DuelingPrints {
    private final static int MAX_SLEEP = 5;
    private static int sharedCounter = 100;

    public static void main(String[] args) {
        System.out.println("About to start everything...");
        startThread("Less Filling");
        startThread("Tastes Great");
        System.out.println("Started everything!");
    }

    private static void startThread(final String label) {
        Runnable action = new PrintingTask(label);
        new Thread(action).start();
    }

    static class PrintingTask implements Runnable {
        private String label;

        public PrintingTask(String label) {
            this.label = label;
        }

        public void run() {
            while (sharedCounter > 0) {
                System.out.println(label + ": " + sharedCounter);
                --sharedCounter;
                try {
                    Thread.sleep((long) (Math.random() * MAX_SLEEP));
                } catch (InterruptedException e) {}
            }
        }
    } // end of inner class PrintingTask
}

(By the way, that code structure was a poor choice pedagogically: synchronizing the run() method doesn't do anything, because the two run() methods have different "this"es. I'll redesign this before the next time I present it. But that's not the question at hand....)
SCJP 1.4

Jayesh Lalwani
Ranch Hand
Joined: Nov 05, 2004
Posts: 502

posted Apr 14, 2005 13:40:00

Can you remove the sleep and try? Possibly, the thread scheduler is waking a thread up when the other goes to sleep. I don't know really, just a thought.

Warren Dew
blacksmith
Ranch Hand
Joined: Mar 04, 2004
Posts: 1332

posted Apr 15, 2005 15:39:00

I believe Java 1.5 is slightly stricter about synchronization than previous versions, for example regarding volatile variables. It's possible that early implementations are even more strict, so that they don't exhibit the synchronization issues you are trying to illustrate.

David Harkness
Ranch Hand
Joined: Aug 07, 2003
Posts: 1646

posted Apr 15, 2005 20:01:00

There are three ways that I see in which this can exhibit synchronization problems.

1. Both threads enter the while loop when the counter is at 1. This results in 0 being printed.
2. One thread gets interrupted right after printing the counter but before decrementing it. This results in duplicated values being printed along with a value being skipped: 10 9 9 7 6 ...
3. One thread gets interrupted in the middle of decrementing the counter, and the other thread decrements the counter during the interruption. This causes the countdown to reset to a previous value, most likely just one step since blocking I/O tends to cause thread interruptions: 10 9 8 7 9 ...

Now that I've typed all that out, I can't think of anything in JDK 1.5 that would make this less likely other than general improvements in thread-switching performance. Perhaps threads are switched more frequently now, and as such there's much less likelihood of the second condition occurring (the most likely of the three, assuming blocking I/O swaps the thread out). Oh, maybe non-blocking I/O is used for the console, and so threads tend to be swapped out in sections of the code less likely to cause issues.

Mr. C Lamont Gilbert
Ranch Hand
Joined: Oct 05, 2001
Posts: 1170
posted Apr 16, 2005 12:41:00

Yes, step one is to remove the sleeping. You are counting on the variable being accessed at EXACTLY the same time, but you put in random sleeping, which works against that. In fact, random sleeping is a common technique to reduce the chance of clashing over a resource. IP uses it for broadcast messages and other reasons.

Also, try to let the computer detect the bad count. Write some code that you can use to verify the count is correct. That is much easier than counting on the printout, especially since counting to 100 happens so fast on computers nowadays that there is probably not reasonably enough time to get a clash.
http://www.coderanch.com/t/232937/threads/java/threads-misbehaving
PAWS/Admin

Revision as of 20:10, 13 October 2021

Task T211096

Introduction

PAWS is a Jupyterhub deployment that runs in the PAWS Cloud VPS project. The main Jupyterhub login is accessible at, and is a public service that can be authenticated to via Wikimedia OAuth. More end-user info is at PAWS. Besides a simple Jupyterhub deployment, PAWS also contains easy access methods for the wiki replicas, the wikis themselves via the OAuth grant, and pywikibot.

Kubernetes cluster

Deployment

The PAWS Kubernetes cluster is built to similar specifications as the Toolforge cluster, deployed using puppet to prepare the system and native kubeadm deployment of the Kubernetes layer. As such, the deployment is nearly identical to the process described for Toolforge. After you have the base layer of Kubernetes deployed using the procedures outlined for Toolforge and the yaml deployed by puppet for PAWS, you can proceed with deploying the Jupyterhub itself.

Upgrading

Upgrading should follow the same schedule and technique as upgrading Toolforge Kubernetes, because this is a similar kubeadm-plus-puppet cluster. Regular upgrades are essential to staying ahead of CVEs (via point-releases) and keeping the cluster's certs fresh. Kubernetes 1.16+ has nice tools for manually refreshing certs, but it isn't a fun situation. Remember to check that something strange isn't in the kubeadm-config configmap in the kube-system namespace if one of the control plane pods isn't staying live! The only special consideration here is that you should make sure that the Jupyterhub helm chart isn't trying to deploy deprecated objects. Objects that are deprecated on Kubernetes will continue to work, but you'll have an issue doing upgrades and deployments of Jupyterhub.
The best thing to do here is probably to try to get a PR in upstream to fix things or upgrade our version of Jupyterhub.

Architecture

We opted to use a stacked control plane like in the original build, but we set it up with a redundant three-node cluster. To maintain HA for the control plane and for the services, two haproxy servers sit in front of the cluster with a floating IP managed with keepalived that should be capable of automatic failover. DNS simply points at that IP. A simple diagram is as follows:

General Build

With the exception of the stacked control plane and specific services, nearly the entire build re-uses the security and puppet design of Toolforge Kubernetes. By using helm 3, we were able to prevent any divergence from secure RBAC and Pod Security Policies. Upgrades should be conducted when Toolforge upgrades are on the same cycle, but the component repositories used (which are separated by major k8s version) allow the upgrade schedules to diverge if required. An ingress exists (not on this diagram) for the deploy-hook service, but that is disabled in the first iteration to work out some kinks in the process.

Floating IP

The floating IP is our second service using a manually-provisioned Neutron port with IP 172.16.1.171/32 that is managed with keepalived, using this procedure: Portal:Cloud VPS/Admin/Keepalived

That is NAT'd to the public IP 185.15.56.57/32.

Ports

At the load balancer layer (haproxy), routing is done by port back to the Kubernetes control plane service on control plane nodes or to the ingresses at the dedicated ingress worker nodes. The control plane is hit at the usual port of TCP 6443, for both the frontend and backend. The ingress layer is served at the well-known web ports (TCP 80 and 443), which hit the dedicated ingress worker nodes on a NodePort service at port 30000. The neutron security group paws-loadbalancer prevents internet clients from contacting the k8s API at this time.
TLS

TLS certs are done via acme-chief and distributed to the haproxy load balancer layer. Therefore, inside the cluster, Kubernetes basically has the TLS ingress bits in helm turned off.

Users

The maintain-kubeusers service used in Toolforge runs on paws, granting the same privileges to admin users in the paws.admin group as would be found for members of the tools.admin group in Toolforge. The certs for these users are automatically renewed as they come close to their expiration date. Where cluster-admin is required directly rather than through the usual impersonation method, such as for using the helm command directly, root@paws-k8s-control-1/2/3 has that access.

Helm

Helm 3 is used to deploy kubernetes applications on the cluster. It is installed by puppet via a Debian package. The community-supported ingress-nginx controller is deployed from its own helm chart, but the ingress objects are all managed in the PAWS helm chart. As this is helm 3, there is no tiller, and RBAC affects what you can do.

General notes

- The control plane uses a converged or "stacked" etcd system. Etcd runs in containers deployed by kubeadm directly on the control plane nodes. Therefore, it is unwise to ever turn off 2 control plane nodes at once, since it will cause problems for the etcd raft election system.
- The control plane and haproxy nodes are part of separate anti-affinity server groups so that Openstack will not schedule them on the same hypervisor. Worker nodes are placed in a soft anti-affinity server group.
- Ingress controllers are deployed to dedicated ingress worker nodes, which also take advantage of being in an anti-affinity server group.
- To see the status of the k8s control plane pods (running coredns, kube-proxy, calico, etcd, kube-apiserver, kube-controller-manager), see kubectl --namespace=kube-system get pod -o wide.
- Prometheus stats and metrics-server are deployed in the metrics namespace during cluster build via kubectl apply -f $yaml-file, just like in the Toolforge deploy documentation.
- Because of the pod security policies in place, all init containers have been removed from the paws-project version of things. Privileged containers cannot be run inside the prod namespace.

Jupyterhub deployment

Jupyterhub & PAWS Components

Jupyterhub is a set of systems deployed together that provide Jupyter notebook servers per user. The three main subsystems for Jupyterhub are the Hub, Proxy, and the Single-User Notebook Server. A really good overview of these systems is available at. PAWS is a Jupyterhub deployment (Hub, Proxy, Single-User Notebook Server) with some added bells and whistles. Some additional PAWS-specific pods in our deployment are:

- db-proxy: A Mysql-proxy plugin script to perform simple authentication to the Wiki Replicas. See We haven't found a replacement for that yet, but efforts are welcome because the code is quite old. task T253134
- nbserve and render: nbserve is an nginx proxy that runs in the cluster at that handles URL rewriting for public URLs to map numerical IDs to Wiki usernames (so we can have URLs like), and render handles the actual rendering of the ipynb notebook as a static page. These images are both essential to how the publishing of PAWS notebooks works.

PAWS also includes customized versions of some Jupyterhub images:

- singleuser: Since this is the environment for end users, there is a fair bit going on here. Our image is a replacement of the upstream one. We set the correct UID and directory. We install the jupyterhub/lab code directly from pip, along with PyWikiBot, a small library to allow importing a notebook like a python package along the lines of import paws.$username.$notebooks_name (called ipynb-paws), and code from to add a public link button. There are other customizations because this is a great surface for doing them.
The general goal is to get a notebook up and running for use on wikis as fast as possible.

- paws-hub: We build upon the upstream Jupyterhub hub image just a touch, adding bits that respect more of the UID settings and adding in a custom culling script. The code for doing OAuth is actually inserted in the helm chart instead.

The other custom image is a deploy-hook, which is undergoing some renovations before it is redeployed in the cluster.

Deployment

- The PAWS repository is at. It should be cloned locally. Then the git-crypt key needs to be used to unlock the secrets.yaml file. See one of the PAWS admins if you should have access to this key.
- PAWS will be deployed in the near future with Travis CI, as it is in tools, and the dashboard is at. The configuration for the Travis builds is at, and builds and deploys launch the travis-script.bash script with appropriate parameters. However, this is not going to work at first, so please deploy via helm directly until the deploy-hook and CI setup is revisited. Tracked in Phabricator Task T256689
- To deploy via helm directly, you need to know some parameters, because the values.yaml file of the helm chart both lacks some sane defaults (TODO) and requires some params no matter what so it deploys the right version of the images. At a bare minimum, you will need to know the right images and tags for some of the images. The command used to deploy it right now, running cd'd into an unlocked git checkout, is:

helm install paws --namespace prod ./paws -f paws/secrets.yaml -f paws/production.yaml

If you are deploying to an actual paws cluster, you will also need the ingress controller:

helm repo add ingress-nginx
helm repo update
kubectl create ns ingress-nginx-gen2
helm install -n ingress-nginx-gen2 ingress-nginx-gen2 ingress-nginx/ingress-nginx --values ingress/values.yaml

Then apply the Pod Security Policy:

kubectl apply -f paws/ingress/nginx-ingress-psp.yaml

and the controllers themselves:

kubectl apply -f paws/ingress/nginx-ingress-psp.yaml
Please note, you will need your dedicated ingress worker nodes deployed (puppet looks for the name prefix paws-k8s-ingress-) for that to do anything, because there are tolerations and affinities for the nodes. If already deployed, do not use the "install" command. Change that to "upgrade" to deploy changes/updates, such as:

helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml -f paws/production.yaml

Database

JupyterHub uses a database in Trove to manage the user state. Credentials are in secrets.yaml.

Moving to sqlite

During ToolsDB outages we can change the db to in-memory sqlite without significant impact. The smoothest way is to do a helm upgrade as root on a control node (as above, in an unlocked checkout) with this command:

helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml -f paws/production.yaml --set=jupyterhub.hub.db.url="sqlite://" --set=jupyterhub.hub.db.type=sqlite

You can roll back to ToolsDB with helm by going into an unlocked checkout of and running helm with:

helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml -f paws/production.yaml

Without using helm

If you don't have an unlocked checkout and you are using your user account on a shell on one of the k8s control plane hosts, you can also manually edit the configmap to do this:

$ kubectl --as admin --as-group system:masters --namespace prod edit configmap hub-config

Write down the existing value and then set hub.db_url to "sqlite://". Restart the hub with:

$ kubectl --as admin --as-group system:masters -n prod delete pod $(kubectl get pods --namespace prod|grep hub|cut -f 1 -d ' ')

To move it back, you can set hub.db_url to the previous value (if you didn't write it down before you changed it, see /home/bstorm/src/paws/paws/secrets.yaml at jupyterhub.hub.db.url) and restart the hub with:

$ kubectl --as admin --as-group system:masters -n prod delete pod $(kubectl get pods --namespace prod|grep hub|cut -f 1 -d ' ')

Deleting user data in case of spam or credential leaks

In the instance a notebook or file hosted on PAWS needs an admin to remove it immediately (vs. asking a user to delete it), you can access all user data via the NFS mounted locally on all k8s nodes.

- SSH to a worker or control node such as paws-k8s-worker-1.paws.eqiad1.wikimedia.cloud.
- Become root with sudo -i
- cd /data/project/paws/userhomes (this is the top level of user homes and paws public pages)
- cd $wiki_user-id, where $wiki_user-id is the numeric id of the user, not the text username
- Remove the offending file with rm as needed.
https://wikitech-static.wikimedia.org/w/index.php?title=PAWS/Admin&diff=prev&oldid=584165
Package Details: python2-guessit 2.1.4-1

Dependencies (5)
- python2 (placeholder, pypy19, python26, stackless-python2)
- python2-dateutil
- python2-babelfish>=0.5.5 (python2-babelfish-git)
- python2-rebulk>=0.9.0 (python2-rebulk-git)
- python2-setuptools (make)

Required by (3)

highway commented on 2016-05-08 15:33
err just the older version of dateutil? the current version is 2.5.3

highway commented on 2016-05-08 13:14
is there a reason that this package requires older versions of dateutil and rebulk?

cgirard commented on 2016-05-06 12:31
Adopted and updated

petecan commented on 2016-05-05 17:11
I do not use subliminal or guessit anymore, so it is better that someone else takes care of this. I have just disowned the package. A python2-guessit-010 package sounds like the way to go, although it seems to me we might not be far from a new subliminal release.

cgirard commented on 2016-05-05 16:53
I am not using subliminal; I have not checked past the PR. You could always upload a python2-guessit-010 package for subliminal. Your call. As I am maintaining python2-guessit-rc (which is not RC anymore) meanwhile, it won't be extra load for me.

petecan commented on 2016-05-05 16:47
Unless I'm missing something, that commit to subliminal was merged on the develop branch only and has not made it to a pypi release nor to master yet, let alone to the subliminal packages in the aur. @cgirard do you want to take over the maintenance of this package?

cgirard commented on 2016-05-03 15:37
This commit has been merged 3 months ago...

petecan commented on 2016-01-30 12:09
Guessit2 is out, but it breaks the API. I will update the package as soon as subliminal sees a new release with these changes on it, hopefully soon enough:

petecan commented on 2014-05-07 08:53
All sorted, thanks katta.
katta commented on 2014-05-07 08:44
I forgot a dependency for the cli interface in the previous file:

katta commented on 2014-05-05 09:37
When I try to import the guessit module I receive this error:

Traceback (most recent call last):
  File "/usr/bin/guessit", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2749, in <module>
    working_set = WorkingSet._build_master()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 444, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 725, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 628, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: stevedore

I solved it by updating to 0.7.1. Please update!

fede commented on 2013-11-13 14:53
Version 0.6.2 PKGBUILD:

# Maintainer: Julien Nicoulaud <julien.nicoulaud@gmail.com>
# Source:
pkgname=python2-guessit
pkgver=0.6.2
pkgrel=1
pkgdesc="A library for guessing information from video files."
arch=(any)
url=""
license=(LGPL)
depends=(python2)
makedepends=(python2-distribute)
changelog=Changelog
conflicts=(${pkgname}-git)
source=({pkgver}.tar.gz)
md5sums=('581162e417d0c3f35534fcb2ea91a72d')

package() {
  cd "${srcdir}/guessit-${pkgver}"
  python2 setup.py install --root="$pkgdir/" --optimize=1
}

acieroid commented on 2013-10-31 09:09
Latest version is 0.6.1, could you update this package? (It is just a matter of updating the version number and the md5sum.) Many thanks.

nicoulaj commented on 2012-11-18 18:11
@Cilyan: It's fixed now.

nicoulaj commented on 2012-10-25 19:25
@Cilyan: the python-guessit package is misnamed; it also provides guessit for python2, and that's why it conflicts with this one. The python-guessit package must be fixed before I can remove provides+conflicts.
Cilyan commented on 2012-10-25 19:00
Hi nicoulaj, you should not conflict with python-guessit, as your package should provide guessit for python2 whereas python-guessit should provide it for python 3, and both can be installed on the same system. Moreover, version 0.5.2 is out.
https://aur.archlinux.org/packages/python2-guessit/?comments=all
Several problems were identified in the old release of Aspose.Slides for JasperReports, including a wrong border format in exported rectangles and the wrong position or cropping of exported images. A problem while writing the presentation to a stream was also observed. In order to get these issues resolved, a new version, Aspose.Slides for JasperReports 1.3.0, has been released. A remarkable feature, PPTX/PPSX export in JasperReports and JasperServer, has been introduced, along with compatibility with the latest version of JasperReports 3.7.1.

In order to view a complete list of fixes and to download Aspose.Slides for JasperReports 1.3.0, please visit this link.
https://forum.aspose.com/t/pptx-ppsx-export-jasperserver-amp-jasperreports-3-7-compatibility/93593
Introduction

Rate limiting is a very powerful feature for securing backend APIs from malicious attacks and for handling unwanted streams of requests from users. In general terms, it allows us to control the rate at which user requests are processed by our server. In this article, we will examine the different approaches to implementing rate limiting in theory, as well as the pros and cons of each. We will also get practical by implementing a selected approach, i.e., the most optimal for our use case, in Node.js.

Prerequisites

In order to follow along effectively as you read through this article, you are expected to have the following:

- A general understanding of how servers handle requests
- A good understanding of how to build REST APIs in Node
- Some experience working with middleware in Node

If you're lacking some or all of these, do not feel intimidated. We will make sure to break things down as much as possible so that you can easily understand every concept we end up exploring.

What is rate limiting, and why should I care? 🤔

Rate limiting is a technique used to control the amount of incoming or outgoing traffic within a network. In this context, network refers to the line of communication between a client (e.g., web browser) and our server (e.g., API). Thus, it is a technique that allows us to handle user requests based on some specified constraint such that:

- There is better flow of data
- There is a reduced risk of attack, i.e., improved security
- The server is never overloaded
- Users can only do as much as is allowed by the developer

For example, we might want to limit the number of requests an unsubscribed user can make to a public API to 1,000 requests per month. Once the user exceeds that number, we can ignore the request and throw an error indicating that the user has exceeded their limit.
Bear in mind that in order for rate limiting to be implemented, there must be a clearly defined constraint (limit), which could be based on any of the following:

- Users: Here the constraint is specific to a user and is implemented using a unique user identifier
- Location: Here the constraint is based on geography and is implemented based on the location from which the request was made
- IP addresses: Here the constraint is based on the IP address of the device that initiates a request

Let us now consider various rate limiting algorithms as well as their pros and cons.

Examining rate limiting algorithms 🧠

As with most engineering problems, there are different algorithms for implementing rate limiting, each with its pros and cons. We will now examine five well-known techniques and determine when they are most efficient and when we should look for another solution.

Fixed window counter

This is probably the most obvious approach to implementing rate limiting. In this approach, we track the number of requests a user makes in each window. Window in this context refers to the space of time under consideration. That is, if I want my API to allow 10 requests per minute, we have a 60-second window. So, starting at 00:00:00, one window will be 00:00:00 to 00:01:00.

Thus, for the first request a user makes in the minute, using an optimized key-value store like a HashMap or Redis, we can store the user's ID against a count, now 1 since this is the first request. See the format below:

On subsequent requests within the same window, we check to see that the user has not exceeded the limit (i.e., the count is not greater than 10). If the user hasn't, we increment the count by one; otherwise, the request is dropped and an error triggered. At the end of the window, we reset every user's record to a count of 0 and repeat the process for the current window.

✅ The pros

- This approach is relatively easy to implement.
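This counting scheme can be sketched with a small in-memory class (the names are illustrative, and a production setup would more likely keep the counts in Redis, as noted above):

```javascript
// Fixed window counter: allow `limit` requests per user within the
// current fixed window of `windowMs` milliseconds.
class FixedWindowCounter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.counts = new Map();  // userId -> request count in the current window
    this.windowStart = null;  // set when the first request arrives
  }

  // Returns true if the request is allowed, false if it should be dropped.
  allow(userId, now = Date.now()) {
    // A new window began: reset every user's record to a count of 0.
    if (this.windowStart === null || now - this.windowStart >= this.windowMs) {
      this.counts.clear();
      this.windowStart = now;
    }
    const count = this.counts.get(userId) || 0;
    if (count >= this.limit) {
      return false; // limit reached: drop the request
    }
    this.counts.set(userId, count + 1);
    return true;
  }
}
```

Note how the reset applies to all users at once, which is exactly what makes the shared window start time unfair to individual users.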
❌ The cons

- This approach isn't entirely accurate, as it is unfair to impose a general window start time on all users. In reality, a user's window should start counting from the time of their first request to 60 seconds later, in this case.
- When there is a burst of traffic towards the end of a window, e.g., at the 55th second, the server ends up doing way more work than is planned per minute. For example, we may have 10 requests from a user between 55 to 60 seconds, and another 10 from the same user in the next window between 0 to 5 seconds. Thus, the server ends up processing 20 requests in 10 seconds for this user.
- In especially large window cycles, e.g., 50 requests per hour (3,600 seconds), the user may end up waiting for a very long time if they reach the limit in the first 10 minutes (600 seconds). That means it takes the user 10 minutes to make 50 requests, but one hour to make 51. This may result in a stampeding of the API immediately after a new window opens up.

Sliding logs

The sliding logs algorithm keeps track of the timestamp for each request a user makes. Requests here can be logged using a HashMap or Redis. In both cases, the requests may be sorted according to time in order to improve operations. The process of logging the requests is illustrated below:

- Retrieve all requests logged in the last window (60 seconds) and check if the number of requests exceeds the allowed limit
- If the number of requests is less than the limit, log the request and process it
- If the number of requests is equal to the limit, drop the request

✅ The pros

- This approach is more accurate, as it calculates the last window per user based on the user's activity and does not impose a fixed window for all users.
- It is unaffected by a surge of requests towards the end of the window, since there is no fixed window.

❌ The cons

- It is not memory-efficient, because we end up storing a new entry for every request made.
- It is also quite expensive to compute, as each request will trigger a calculation on previously saved requests to retrieve the logs from the last minute and then get the count.

Sliding window counter

This approach attempts to optimize some of the inefficiencies of both the fixed window counter and sliding logs techniques. In this technique, the user's requests are grouped by timestamp, and rather than log each request, we keep a counter for each group. It keeps track of each user's request count while grouping them by fixed time windows (usually a fraction of the limit's window size).

Here's how it works. When a user's request is received, we check whether the user's record already exists and whether there is already an entry for that timestamp. If both cases are true, we simply increment the counter on the timestamp.

In determining whether the user has exceeded their limit, we retrieve all groups created in the last window, and then sum the counters on them. If the sum equals the limit, then the user has reached their limit and the incoming request is dropped. Otherwise, the timestamp is inserted or updated and the request processed. As an addition, the timestamp groups can be set to expire after the window time is exhausted in order to control the rate at which memory is consumed.

✅ The pros

- This approach saves more memory, because instead of creating a new entry for every request, we group requests by timestamp and increment the counter.

Token bucket

In the token bucket algorithm, we simply keep a counter indicating how many tokens a user has left and a timestamp showing when it was last updated. This concept originates from packet-switched computer networks and telecomm networks, in which there is a fixed-capacity bucket to hold tokens that are added at a fixed rate (window interval). When a packet is tested for conformity, the bucket is checked to see whether it contains a sufficient number of tokens as required.
If it does, the appropriate number of tokens are removed, and the packet passes for transmission; otherwise, it is handled differently.

In our case, when the first request is received, we log the timestamp and then create a new bucket of tokens for the user. On subsequent requests, we test whether the window has elapsed since the last timestamp was created. If it hasn't, we check whether the bucket still contains tokens for that particular window. If it does, we decrement the tokens by 1 and continue to process the request; otherwise, the request is dropped and an error is triggered.

In a situation where the window has elapsed since the last timestamp, we update the timestamp to that of the current request and reset the number of tokens to the allowed limit.

✅ The pros
- This is an accurate approach, as the window is not fixed across users and, as such, is determined based on a user's activity.
- Memory consumption is minimal since you only have one entry per user, which is used to manage their activity (timestamp and available tokens) over time.

Leaky bucket

The leaky bucket algorithm makes use of a queue that accepts and processes requests in a first-in, first-out (FIFO) manner. The limit is enforced on the queue size. If, for example, the limit is 10 requests per minute, then the queue would only be able to hold 10 requests at a time.

As requests get queued up, they are processed at a relatively constant rate. This means that even when the server is hit with a burst of traffic, the outgoing responses are still sent out at the same rate. Once the queue is filled up, the server will drop any more incoming requests until space is freed up for more.

✅ The pros
- This technique smooths out traffic, thus preventing server overload.

❌ The cons
- Bursty traffic fills the queue with old requests, which can starve more recent requests of being processed.
- It gives no guarantee that a request will be processed in a fixed amount of time.

The CodeLab 👨💻

Now that we have explored rate limiting from a theoretical perspective, it is time for us to get practical.
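Before we do, the leaky bucket behavior described above is easy to see in code. Below is a minimal, dependency-free sketch (the class and method names are illustrative, not part of the tutorial's codebase): a fixed-capacity queue that rejects requests once full, while a separately scheduled leak step drains it one request at a time.

```javascript
// Minimal leaky bucket sketch. Names (LeakyBucket, offer, leak) are
// illustrative, not from the tutorial's codebase.
class LeakyBucket {
  constructor(capacity) {
    this.capacity = capacity; // maximum number of queued requests
    this.queue = [];
  }
  // Called when a request arrives: queue it, or drop it if the bucket is full.
  offer(request) {
    if (this.queue.length >= this.capacity) return false; // dropped
    this.queue.push(request);
    return true; // queued
  }
  // Called on a fixed schedule: process one request, so the outflow
  // rate stays constant regardless of how bursty the inflow is.
  leak(processFn) {
    const request = this.queue.shift();
    if (request !== undefined) processFn(request);
  }
}

// Demo: capacity 3, a burst of 5 requests -> the last 2 are dropped.
const bucket = new LeakyBucket(3);
const results = [1, 2, 3, 4, 5].map(id => bucket.offer(id));
console.log(results); // [ true, true, true, false, false ]

const processed = [];
bucket.leak(r => processed.push(r));
bucket.leak(r => processed.push(r));
console.log(processed); // [ 1, 2 ]
```

In a real server, leak would be driven by a timer (e.g., setInterval) so responses flow out at a constant rate no matter how the requests arrive.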
Below, we have identified certain scenarios in which a rate limiting algorithm is required to achieve the expected outcome. Take your time to go through them and, in each case, try to identify what algorithm you are inclined to use and why. - A fintech company trying to implement a daily transaction value per user capped at $5,000. - Implementing checks on a public books API to ensure that each user can only perform 100 API requests per day (24 hours). In this tutorial, we will be implementing scenario two in Node.js. However, now we need to decide on what algorithm is most appropriate for our use case. If you are feeling up to the challenge, feel free to download the tutorial boilerplate here and try to implement any of the algorithms yourself. Algorithmic thinking What algorithm do we stick with for our use case? As explained above, the fixed window counter and sliding logs are the most inefficient ways to implement rate limiting. That leaves us with sliding window counter, leaky bucket, and token bucket. The leaky bucket algorithm is most applicable in scenarios where, along with rate limiting, we are trying to do some traffic shaping. Traffic shaping (also known as packet shaping) is a bandwidth management technique that delays the flow of certain types of network packets in order to ensure network performance for higher-priority applications. In this context, it describes the ability to manage server resources to process and respond to requests at a certain rate, no matter the amount of traffic it receives. As that is not a major concern in this case, that leaves us with sliding window counter and token bucket algorithm. Either approach will work just fine, but for the sake of this article, we will go with the sliding window counter. We will use this algorithm to keep track of each user’s request count per day (24 hours) while grouping them by a fixed one-hour window. Now, let’s get started! 
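Before wiring anything into Express or Redis, the bookkeeping we just chose — hourly buckets summed over a 24-hour window — can be sketched in plain JavaScript. All names below are illustrative, and timestamps are passed in explicitly so the logic is easy to exercise; the real implementation later in the tutorial stores the same shape of data in Redis.

```javascript
// In-memory sliding window counter sketch (illustrative names throughout).
// Requests are grouped into fixed one-hour buckets, and the limit is checked
// against the sum of all buckets that fall inside the last 24 hours.
const WINDOW_SECONDS = 24 * 60 * 60; // 24-hour window
const BUCKET_SECONDS = 60 * 60;      // group requests per hour
const MAX_REQUESTS = 100;

const store = new Map(); // key: user id / IP -> array of { bucketStart, count }

function hit(user, nowSeconds) {
  const log = store.get(user) || [];
  const windowStart = nowSeconds - WINDOW_SECONDS;

  // Drop buckets that fell out of the window, then total the rest.
  const recent = log.filter(b => b.bucketStart > windowStart);
  const total = recent.reduce((sum, b) => sum + b.count, 0);
  if (total >= MAX_REQUESTS) {
    store.set(user, recent);
    return false; // limit reached -> reject
  }

  // Increment the current hour's bucket, or open a new one.
  const bucketStart = Math.floor(nowSeconds / BUCKET_SECONDS) * BUCKET_SECONDS;
  const last = recent[recent.length - 1];
  if (last && last.bucketStart === bucketStart) {
    last.count++;
  } else {
    recent.push({ bucketStart, count: 1 });
  }
  store.set(user, recent);
  return true; // accepted
}

// Demo: 100 requests pass; the 101st inside the same window is rejected.
const now = 1000000000;
let accepted = 0;
for (let i = 0; i < 101; i++) {
  if (hit('203.0.113.7', now + i)) accepted++;
}
console.log(accepted); // 100
```

Because only one counter per hour is stored per user, memory stays bounded even under heavy traffic — which is exactly the advantage over the sliding logs approach.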
Project setup

To get started, clone this repository on your computer, navigate into the project directory on your terminal, and install the project's dependencies using the command below:

npm i

The boilerplate code contains a simple API that allows us to retrieve a list of books using a GET request to the /books endpoint. Therefore, we will be implementing rate limiting using a middleware layer, which will enforce the limits for each user. All the code for the API lives within the src directory. There is no user authentication in this case, therefore we will identify users using their IP addresses. This is available as a property on the request object for every request made, i.e., req.ip.

Finally, rename the .env.example file to .env, as it contains the project's environment variables. You can now start the server by running the command below:

npm run dev

To the codeground!

Implementing the rate limiter

We will implement our sliding window counter rate limiter algorithm in two ways. In the first, we will use a third-party library, express-rate-limit, and in the other, we will be doing a custom implementation.

Using a third-party library (express-rate-limit)

express-rate-limit is an npm package commonly used as a basic rate limiting middleware for Node. To make use of this plugin, we will have to install it first. Run the command below from your terminal, within the project directory, to do so:

npm i express-rate-limit --save

Next, proceed to the middlewares folder within the project and create a file named rateLimiter.js. This is where we will be writing the rate limiting middleware for our API.
Copy and paste the following code inside this file:

// src/middlewares/rateLimiter.js
import rateLimit from 'express-rate-limit';

export const rateLimiterUsingThirdParty = rateLimit({
  windowMs: 24 * 60 * 60 * 1000, // 24 hrs in milliseconds
  max: 100,
  message: 'You have exceeded the 100 requests in 24 hrs limit!',
  headers: true,
});

In the code snippet above, we imported the npm package into the project. Using the package, we create a middleware that enforces rate limiting based on the options we have passed in, i.e.:

- windowMs – This is the window size (24 hours in our case) in milliseconds
- max – This represents the number of allowed requests per window per user
- message – This specifies the response message users get when they have exceeded the allowed limit
- headers – This specifies whether the appropriate headers should be added to the response showing the enforced limit (X-RateLimit-Limit), current usage (X-RateLimit-Remaining), and time to wait before retrying (Retry-After) when the limit is reached

Now that we have created the middleware, we need to configure our application to use this middleware when handling requests. First, export the middleware from our middleware module by updating the index.js file in the middlewares folder as shown below:

// src/middlewares/index.js
export { default as errorHandler } from './errorHandler';
export { rateLimiterUsingThirdParty } from './rateLimiter';

Next, import the rateLimiterUsingThirdParty middleware and apply it to all application routes:

// src/index.js
// ...Some code here
import { rateLimiterUsingThirdParty } from './middlewares';
// ...Some code here
app.use(rateLimiterUsingThirdParty);
// ...Some more code goes here

Voilà! We are done. Notice that we didn't have to specify the identifier for each user manually. If you go through the docs for this package, as found here on npm, you would notice that this package identifies users by their IP addresses using req.ip by default. Pretty straightforward, right?
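One detail worth noting before moving on: app.use applies the limiter to every route. Express also accepts middleware per route — e.g., app.post('/someRoute', rateLimiterUsingThirdParty, handler), where the route name is hypothetical — so you can protect only the expensive endpoints. The property this relies on is that a middleware placed before a handler runs first and can short-circuit the request by simply not calling next(). A tiny dependency-free sketch of that chaining behavior (no real Express here; all names are illustrative):

```javascript
// Toy middleware chain showing how a limiter placed before a handler
// can short-circuit a request by not calling next().
function runChain(middlewares, req, res) {
  let i = 0;
  const next = () => {
    const fn = middlewares[i++];
    if (fn) fn(req, res, next);
  };
  next();
}

// A fake limiter allowing 2 requests, and a fake handler.
let seen = 0;
const limiter = (req, res, next) => {
  if (++seen > 2) { res.status = 429; return; } // drop: never calls next()
  next();
};
const handler = (req, res) => { res.status = 200; };

const statuses = [];
for (let n = 0; n < 3; n++) {
  const res = {};
  runChain([limiter, handler], {}, res);
  statuses.push(res.status);
}
console.log(statuses); // [ 200, 200, 429 ]
```

The same ordering guarantee is what makes both the third-party limiter and the custom Redis limiter below work as drop-in middleware.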
Now let’s try a slightly more complex approach. A custom implementation (using an Express middleware and Redis) For this implementation, we will be making use of Redis to keep track of each user’s request count and timestamp using their IP addresses. If you do not have Redis installed on your machine, follow the instructions here to do so. Using the command below, install the following packages which allow us to connect to Redis and manipulate time easily within our application. npm i redis moment --save Next, update your rateLimiter.js, file as shown below. The code below is a middleware that handles rate limiting for our API using Redis. Copy and paste it inside rateLimiter.js. import moment from 'moment'; import redis from 'redis'; const redisClient = redis.createClient(); const WINDOW_SIZE_IN_HOURS = 24; const MAX_WINDOW_REQUEST_COUNT = 100; const WINDOW_LOG_INTERVAL_IN_HOURS = 1; export const customRedisRateLimiter = (req, res, next) => { try { // check that redis client exists if (!redisClient) { throw new Error('Redis client does not exist!'); process.exit(1); } // fetch records of current user using IP address, returns null when no record is found redisClient.get(req.ip, function(err, record) { if (err) throw err; const currentRequestTime = moment(); console.log(record); // if no record is found , create a new record for user and store to redis if (record == null) { let newRecord = []; let requestLog = { requestTimeStamp: currentRequestTime.unix(), requestCount: 1 }; newRecord.push(requestLog); redisClient.set(req.ip, JSON.stringify(newRecord)); next(); } // if record is found, parse it's value and calculate number of requests users has made within the last window let data = JSON.parse(record); let windowStartTimestamp = moment() .subtract(WINDOW_SIZE_IN_HOURS, 'hours') .unix(); let requestsWithinWindow = data.filter(entry => { return entry.requestTimeStamp > windowStartTimestamp; }); console.log('requestsWithinWindow', requestsWithinWindow); let 
totalWindowRequestsCount = requestsWithinWindow.reduce((accumulator, entry) => { return accumulator + entry.requestCount; }, 0); // if number of requests made is greater than or equal to the desired maximum, return error if (totalWindowRequestsCount >= MAX_WINDOW_REQUEST_COUNT) { res .status(429) .jsend.error( `You have exceeded the ${MAX_WINDOW_REQUEST_COUNT} requests in ${WINDOW_SIZE_IN_HOURS} hrs limit!` ); } else { // if number of requests made is less than allowed maximum, log new entry let lastRequestLog = data[data.length - 1]; let potentialCurrentWindowIntervalStartTimeStamp = currentRequestTime .subtract(WINDOW_LOG_INTERVAL_IN_HOURS, 'hours') .unix(); // if interval has not passed since last request log, increment counter if (lastRequestLog.requestTimeStamp > potentialCurrentWindowIntervalStartTimeStamp) { lastRequestLog.requestCount++; data[data.length - 1] = lastRequestLog; } else { // if interval has passed, log new entry for current user and timestamp data.push({ requestTimeStamp: currentRequestTime.unix(), requestCount: 1 }); } redisClient.set(req.ip, JSON.stringify(data)); next(); } }); } catch (error) { next(error); } }; There’s quite a lot going on here, so let’s do a step-by-step walkthrough: We installed and imported Redis and Moment.js from npm and initialized all useful constants. We use Redis as an in-memory storage for keeping track of user activity, while Moment helps us accurately parse, validate, manipulate, and display dates and times in JavaScript. Next, we create a middleware, customRedisRateLimiter, within which we are to implement the rate limiting logic. Inside the middleware function’s try block, we check that the Redis client exists and throw an error if it doesn’t. Using the user’s IP address req.ip, we fetch the user’s record from Redis. If null is returned, this indicates that no record has been created yet for the user in question. 
Thus, we create a new record for this user and store it to Redis by calling the set() method on the Redis client.

If a record was found, the value is returned. Thus, we parse that value to JSON and proceed to calculate whether the user is eligible to get a response. In order to determine this, we calculate the cumulative sum of requests made by the user in the last window by retrieving all logs with timestamps that are within the last 24 hours and summing their corresponding requestCount.

If the number of requests in the last window — i.e., totalWindowRequestsCount — is equal to or greater than the permitted maximum, we send a response to the user with a constructed error message indicating that the user has exceeded their limit.

However, if totalWindowRequestsCount is less than the permitted limit, the request is eligible for a response. Thus, we perform some checks to see whether at least one hour has passed since the last log was made. If it has, we create a new log for the current timestamp. Otherwise, we increment the requestCount on the last timestamp and store (update) the user's record on Redis.

Make sure to export and apply the middleware to our Express app as we did in the third-party library implementation.

Whew! That's it. Does this work as desired? Let's see!

Testing

When you test our API from Postman, you get the normal response from localhost:8080/books. [Screenshot of the successful JSON response omitted.] When you have exceeded the permitted limit (i.e., 100 requests in 24 hours), the server returns the 429 error message. [Screenshot of the error response omitted.]

We made it! 🎊 We have now come to the end of this tutorial. 🤗

Conclusion

In this article, we have successfully explored the concept of rate limiting — what it is, how it works, various ways to implement it, and practical scenarios in which it is applicable. We have also done our very own implementation in Node.js, first using a simple third-party library that handles all the heavy lifting for us, then a custom implementation using Redis. I hope you enjoyed doing this with me.
You may find the source code for this tutorial here on GitHub. See you in the next one!✌. 6 Replies to “Understanding and implementing rate limiting in Node.js” 2 of 3 cons of fixed window counter are not fair: – “user’s window should start counting from the time of their first request” -> this is easy to implement. – “burst traffic towards the end of a window” -> it may be issue, if your service is for one customer. It is unlikely, that all your thousands users would make all requests at once. Hi, It looks like using app.use() would limit the rate to the whole API. How would you go about applying rate limit to only a particular POST request while letting users do unlimited GET requests? Michal, You can do this by applying the middleware to the POST route directly instead of `app.use` e.g. `app.post(‘/limitedRoute’, customRedisRateLimiter, (req, res, next) => {})` i have this error :Uncaught Exception: 500 – listen EACCES: permission denied development When the record is null in the Redis store, you create the record, store it and then go to the next middleware. Shouldn’t there be a return statement after the next() instruction to prevent the middleware from executing the rest of the code ? does this work on heroku as well with a running redis add on ? not sure !
https://blog.logrocket.com/rate-limiting-node-js/
DX11 andyhansen posted a topic in Graphics and GPU Programming

I am attempting to use the reflection features of the effects11 library to retrieve the argument list for a pass's vertex shader. When I call GetInputSignatureElementDesc() I get a runtime check error, "Run-Time Check Failure #2 - Stack around the variable ParamDesc was corrupted." Is this something other people have experienced? I'm hoping it isn't due to a memory error buried somewhere in my code. Here is the code where I compile the effect and try to read in a parameter.

[CODE]
ID3DBlob* pErrorBlob;
ID3DBlob* compiled;
HRESULT hr=D3DX11CompileFromFile(file.c_str(),NULL,NULL,NULL,GetProfile(device).c_str(),dwShaderFlags,NULL,NULL,&compiled,&pErrorBlob,NULL);
if( FAILED(hr) ) {
    if( pErrorBlob != NULL ){
        throw runtime_error((char*)pErrorBlob->GetBufferPointer());
        if( pErrorBlob ) pErrorBlob->Release();
    }
    else{
        throw runtime_error("Failed to compile effect "+StringUtil::WstrToStr(file));
    }
}
hr=D3DX11CreateEffectFromMemory(compiled->GetBufferPointer(),compiled->GetBufferSize(),NULL,device.GetDevice(),&effect);
compiled->Release();
if(FAILED(hr)){
    throw runtime_error("Failed to create effect from file");
}
UINT i=0;
ID3DX11EffectTechnique* technique=effect->GetTechniqueByIndex(i++);
while(technique->IsValid()){
    D3DX11_PASS_SHADER_DESC vsDesc;
    technique->GetPassByIndex(0)->GetVertexShaderDesc(&vsDesc);
    D3D11_SIGNATURE_PARAMETER_DESC paramDesc;
    vsDesc.pShaderVariable->GetInputSignatureElementDesc(vsDesc.ShaderIndex,0,&paramDesc);
    technique=effect->GetTechniqueByIndex(i++);
}
[/CODE]

Right now I'm just trying to read the first parameter of the first pass of each technique to see if it will work. After the runtime error is triggered, I can hover my mouse over paramDesc, and it seems to have at least some of the correct data in it.
I can't find any actual examples of how to do effect reflection, so I may be doing something wrong. - Thanks for the quick replies guys! That's exactly the type of thing I was looking for. I was just wondering, would I not want to pack it into a float4, rather than a float3? - [quote name='Adaline' timestamp='1312576195' post='4845203'] Hello Did you effectively try to create your map with R32_FLOAT format ? It seems odd that you can't use this format with DX11 : I use this format for my shadow map with DX10 Here's the [url=""]HW support for DX11[/url] (the page you mentioned is about DX10 formats, not DX11) [/quote] I wasn't able to create a texture with R32_FLOAT, it would give me an error saying that that format was only supported for data buffers at feature level 9_1. andyhansen posted a topic in Graphics and GPU ProgrammingI'm currently working on implementing shadow mapping. I'm using DirectX 11 at feature level 9.1 since that is what my laptop supports. It seems the texture formats supported as render targets at this feature level is pretty limited. The table here shows what is supported. R8G8B8A8_UNORM R8G8B8A8_UNORM_SRGB These seem to be the only supported render target formats. This means that I can only store 8 bits of depth precision, rather than using the full 32. What would be ideal is if I could use R32_FLOAT. Is there a way I can somehow use R8G8B8A8_UNORM or R8G8B8A8_UNORM_SRGB, but use all 32 bits to store a single value, rather than only 8 bits? andyhansen replied to andyhansen's topic in Graphics and GPU ProgrammingThanks for the reply! I'm still having some trouble getting it to work though. My skybox cube is 2 units long along each side. My scene is 4096 units long. If I use the regular depth settings on the viewport when rendering the skybox, I can't see anything except the skybox, which is how it should behave since the geometry is so small. 
When I set the depth settings on the viewport to both 1, I can see for a little ways, but my skybox still appears to intersect the scene geometry. I've made sure i'm using the same viewprojection matrix for both the scene and the skybox. If I increase the scale factor of the world matrix of the skybox to a very high value (1000), I am able to get it to not intersect the geometry. If I have the scale set to 1000, but I set the viewport depth settings back to normal, it will still intersect the scene geometry. So I at least know that settings the depth values to 1 is helping to some degree. Should this be expected? I'd rather not have to worry about setting the scale of the skybox depending on how large the scene is. My understanding is that setting the viewport to always use 1 as the depth value, and LESS_EQUAL as the comparison operator should cause the skybox to never intersect the scene geometry. andyhansen posted a topic in Graphics and GPU ProgrammingI've been trying to implement skybox rendering. I have it mostly working, except that when I move the camera, the polygons of my skybox will sometimes flicker out of view, showing the backdrop color behind it. I'm using a textured cube with the vertex order reversed to represent the skybox. I'm setting the depthstencilstate to this: [code] DepthEnable=TRUE; DepthWriteMask=D3D11_DEPTH_WRITE_MASK_ALL; DepthFunc=D3D11_COMPARISON_LESS_EQUAL; StencilEnable=FALSE; StencilReadMask=D3D11_DEFAULT_STENCIL_READ_MASK; StencilWriteMask=D3D11_DEFAULT_STENCIL_WRITE_MASK;[/code] My vertex shader looks like this: [code] VS_OUT VS(float3 Pos : POSITION, float2 Tex: TEXCOORD0) { VS_OUT output = (VS_OUT)0; output.posH=mul(float4(Pos,1),World); //make z and w the same so that the depth is always considered 1 output.posH=mul(output.posH,ViewProjection).xyww; output.tex0=Tex; return output; }[/code] I'm rendering the skybox after everything else. Could this just be a problem with my graphics card? 
I'm currently running on an intel integrated card on my laptop with dx9. It all works perfectly except for when I start to move or rotate the camera. Edit: I was able to try it on another computer, and the problem disappeared. So it must have something to do with the intel card. Is there any way I could prevent this problem from occurring though? andyhansen posted a topic in Graphics and GPU ProgrammingI've been working on optimizing my render loop to make the best use of the CPU cache. Right now I'm simply keeping a vector of pointers to each object to be rendered. I then iterate through the vector and call each objects Render() method. Inside the render method, each object updates the World matrix that is passed to the shader, then performs the draw call on its geometry. I'm thinking this is very cache inefficient, since each rendered object could be located anywhere in memory. I want to change up the way I do rendering by storing all needed data in a contiguous array. For example struct Model{ XMFLOAT4X4world; ID3D11Buffer* index; ID3D11Buffer* vertices; short iIndexCount; }; Model data[ModelCount]; Then I would loop through data[] and perform the draw call on the data contained at each contiguous location of memory. I would think that this would improve cache usage a lot. The only thing I wonder about is what cache hit the actual draw call would entail. The structure is storing pointers to the index and vertex buffer. Would issuing the draw call cause problems? Would it be better for me to use a command list while looping through the data[] array, and then call the command list after the loop? Or is there an even better way to setup a render loop? I'm open to suggestions. DX11 andyhansen replied to andyhansen's topic in Graphics and GPU ProgrammingOk, I'm guessing it's a driver issue then. I'm on an older laptop with an integrated Intel graphics card. I created a basic application with just a simple window. 
I got the same exception when I tried to create the d3d device, so I'm pretty sure it isn't my code. I'll just ignore it for now. Thank you, that helped a lot. DX11 andyhansen replied to andyhansen's topic in Graphics and GPU ProgrammingI don't even know where to look. My program runs properly as far as I know, it just shows those exceptions in the debug output window, so for now I'm just ignoring them. I'm hoping that someone has encountered these errors before and knows what could cause a _com_error. DX11 andyhansen posted a topic in Graphics and GPU ProgrammingI've been working on porting my DirectX Framework over to DX11. At runtime I've been getting the following type of error: First-chance exception at 0x75b69617 (KernelBase.dll) in Engine.exe: Microsoft C++ exception: _com_error at memory location 0x0021e7dc.. Multiple of these exceptions are thrown. It occurs when I call: D3D11CreateDeviceAndSwapChain [code] //Create the swap chain parameters DXGI_SWAP_CHAIN_DESC sd; ZeroMemory(&sd,sizeof(sd)); sd.BufferCount=1; sd.BufferDesc.Width=Engine->GetWindow()->GetWindowWidth(); sd.BufferDesc.Height=Engine->GetWindow()->GetWindowHeight(); sd.BufferDesc.Format=DXGI_FORMAT_R8G8B8A8_UNORM; sd.BufferDesc.RefreshRate.Numerator=60; sd.BufferDesc.RefreshRate.Denominator=1; sd.BufferUsage=DXGI_USAGE_RENDER_TARGET_OUTPUT; sd.OutputWindow=Engine->GetWindow()->GetHandle(); sd.SampleDesc.Count=1; sd.SampleDesc.Quality=0; sd.Windowed=(Engine->GetWindow()->GetWindowMode()==CWindow::WINDOW); //create the swap chain UINT flags=0; D3D_FEATURE_LEVEL fl; #if defined(DEBUG) || defined(_DEBUG) flags |= D3D11_CREATE_DEVICE_DEBUG; #endif if(FAILED(D3D11CreateDeviceAndSwapChain(NULL,D3D_DRIVER_TYPE_HARDWARE,NULL,0,NULL,NULL,D3D11_SDK_VERSION,&sd,&pSwapChain,&pDevice,&fl,&pImmediateContext))){ Engine->ShowError("Failed to create d3d device."); return; }[/code] I'm using VS C++ 2010 professional. I created the project as an empty Win32 project. Does anyone have any ideas? 
I've been working at this for hours. DX11 andyhansen replied to andyhansen's topic in Graphics and GPU ProgrammingSo to clarify, checking to see if the query is done is quick, but getting the actual result is slow. However, what you said in the first part of your post didn't quite make sense given what the documentation says. I read that you have to provide time in between issuing the predicate and doing the predicated draw call. This is why I want to be able to check if a query is done. That way I can spend time doing other things while the predicate is processed. I am reading from the dx10 sample "draw predicated" at the bottom of the article. Perhaps this has to do with whether the predicate is a hint or not. Is there a way to specify if a predicate is hinted or not? I'm using dx11, but am basing my knowledge of how it works off this sample. DX11 andyhansen replied to andyhansen's topic in Graphics and GPU ProgrammingThank you for your response. In reading about the predicate queries, it looks like there might be a delay between when the query is issued and when it actually gets checked, so I would want to not try and render the geometry until the query actually completes. Meanwhile I would perform other tasks until the query is done, which is why I want to be able to check it with the CPU. Would even checking whether the query is done slow things down too much? I wouldn't be looping doing nothing until the query is finished. I would be doing other things while periodically checking if it is done. DX11 andyhansen posted a topic in Graphics and GPU ProgrammingI'm looking to implement an occlusion culling system using the predicated drawing features of dx11. There is not much documentation on the subject. I read that the predicate query could take some time to complete. If I wanted to check if the query was complete, am I right in using the following code? 
HRESULT hr=pDeviceContext->GetData(pPredicate,NULL,0,0); If the query was finished, hr would be S_OK, and it would be S_FALSE if it were not finished. Is that right? I also am wondering how this works in conjunction with command lists. Would I have to use the predicate to determine whether or not an entire command list gets executed? There really isn't any documentation I can find that says what works inside a command list. The algorithm I want to implement is at
https://www.gamedev.net/profile/175935-andyhansen/?tab=topics
Anonymous Functions in PHP

Before we learn something about anonymous functions, we will take a quick look into a PHP concept known as a variable function. It means that if we append parentheses to a variable, PHP will look for a function whose name matches whatever the variable evaluates to and try to execute it. Say we have the following simple function:

We could then call the function indirectly by using a variable whose value evaluates to the function name. This can be quite useful when the name of the function that you want to execute cannot be determined till run-time. Another example using a class and a static method:

There are times when you need to create a small, localized, throw-away function consisting of a few lines for a specific purpose, such as a callback. It is unwise to pollute the global namespace with this kind of single-use function. For such an event you can create anonymous or lambda functions using create_function. Anonymous functions allow the creation of functions which have no specified name. An example is shown below:

Here we have created a small nameless (anonymous) function which is used as a callback in the preg_replace_callback function. Although create_function lets you create anonymous functions, it is not truly a part of the language itself but a hack. PHP 5.3 introduced true anonymous functions in the base language itself. We create an unnamed function and assign it to a variable, including whatever parameters the function accepts, and then simply use the variable like an actual function. An example is given below:

Note the ending semicolon at the end of the defined function. This is because the function definition is actually a statement, and statements always end with a semicolon. Another example is shown below.

PHP allows functions to be nested inside one another. Although it seems like a side effect of the parser rather than a design decision, it can be quite helpful in some situations. Take the example below.
The function censorString takes a string as a parameter and replaces any censored word given with a string of '*'. The censorString function defines a nested function, replace, that is used as a callback function by preg_replace_callback. Assuming that the 'replace' function is only used by the censorString function in our program, it is better to define it within censorString itself and avoid polluting the global namespace with small single-use functions.

When you define a nested function as above, the inner function does not come into existence until the parent function is executed. Once the parent function (censorString) is executed, the inner function (replace) goes into global scope. Now you can access the inner function from anywhere in your current document. One problem, though, is that calling the parent function again in the current document will cause a redeclaration of the inner function, which will generate an error, as the inner function is already declared. A solution is to use an anonymous function, as shown below. (Note again the semicolon at the end of the inner function.)

Now whenever the censorString function is executed, the inner anonymous function comes into existence. But unlike a normal nested function, it goes out of scope once the parent function ends. So we can repeatedly call the censorString function without throwing a redeclaration error. Another way is to define the function in the callback itself.

Closures are anonymous functions that are aware of their surrounding context. In short, these are anonymous functions which have knowledge about variables not defined within themselves. A simple example will make it clear. Say we want to create an anonymous function that returns a given number multiplied by 5. If we want to return a number multiplied by 7 rather than 5, we have to create another function, and so on for other numbers.
Instead of creating a series of different functions, we can create a closure using the 'use' construct, which allows variables outside the anonymous function to be accessible, or 'closed over', within the current function. Take another example along the above lines. Let's say we want to filter an array of numbers according to a certain criterion; say, all the numbers above 100. The code for the same is given below. Note the use of an anonymous function. Now what if we want to allow all numbers above 400? Then we have to change the anonymous function to the following.
6 Justin Noel June 8th, 2010 at 5:55 pm
At first blush, I don't really like this idea. These examples really seem like a great way to make your code hard to understand. The closures and nested anonymous functions are just going to leave maintenance developers scratching their heads. Is all this really worth the trouble? I'd love to see a more in-depth useful example scenario. Maybe that would open my eyes a bit more so I could see the benefit of these new capabilities.

7 sameer June 8th, 2010 at 9:04 pm
I didn't say lambda functions were easier to understand at a first look. These concepts come from a functional paradigm (Lisp, Scheme, Haskell) which most people find hard to grasp because of years of training in the imperative languages (Java, C, C++, Pascal, PHP). I'll try to post some long examples in a coming post.

8 Pascal December 5th, 2012 at 10:52 am
The ability to create anonymous functions has existed for some time (as you mentioned in the first part) via create_function. I never used it and thought it was more or less really ugly, as it is the same story with eval. What is more interesting for me is the memory consumption & execution time, as this was a major problem in previous PHP versions. Do you have any news with the release of PHP 5.3 and real closure/lambda support (and no hacking anymore)?

9 December 21st, 2012 at 9:51 pm
Many thanks for creating Anonymous functions in PHP, I had been seeking for something comparable and was glad to come across the info as a result of this particular content.
http://www.codediesel.com/php/anonymous-functions-in-php/
#include <iostream>
#include <conio.h>
#include <time.h>
#include <windows.h>
#include <math.h>
#include <string.h>
#include <sstream>

using namespace std;

int main()
{
    int result = 1;
    float checker;        // Holds the full array
    char char_holder;
    int loop = 0;
    float checker2 = 0;   // gets the letter
    float checker3 = 0;   // Holds the previous array
    int counter;
    int x = 0;
    char name = 'A';
    double Dresult;
    char Cresult[10] = {'\0','\0','\0','\0','\0','\0','\0','\0','\0','\0'};

    do {
        printf("\n Enter value for %c ", name);
        cin >> Cresult;

        for (x = 0; Cresult[x] != '\0'; x++)   // Cresult[x] != Null Terminator
        {
            if (x != 0)
            { checker3 = checker; }

            char_holder = (int)Cresult[x];
            checker2 = atof(&char_holder);

            if (x == 0)   // if its the first one start adding onto it
            {
                checker3 = checker2;
                checker = checker2;
            }
            else
            {
                checker2 /= 10;
                checker = (checker + checker2) * 10;
                Sleep(1000);
                if (checker > 2147483647)
                {
                    cout << "Overflow error....2\n";
                    Sleep(2000);
                    result = 0;
                    Dresult = 0;
                    for (x = 0; Cresult[x] != '\0'; x++)   // Cresult[x] != Null Terminator
                    { Cresult[x] = '\0'; }
                    break;
                }
                else if (checker < checker3)
                {
                    cout << "Overflow error....1";
                    Sleep(2000);
                    Dresult = 0;
                    result = 0;
                    break;
                }
                //else { checker3 = checker; }
            }

            cout << "Checker: " << checker << "\t";
            cout << "Checker2: " << checker2 << "\t";
            cout << "Checker3: " << checker3 << "\t" << '\n';
            Sleep(10);
        }

        if (Dresult != 0)
        { Dresult = atoi(Cresult); }

        cout << "\n\n";
        cout << "Cresult: " << Cresult << '\n';
        cout << "Dresult: " << Dresult << '\n';
        Sleep(1000);

        for (x = 0; Cresult[x] != 0; x++)
        { Cresult[x] = '\0'; }

        cout << Cresult;
        checker = 0;
        checker2 = 0;
        checker3 = 0;
        Dresult = 0;
        Sleep(1000);
    } while (loop == 0);

    return 0;
}

Attention: This is something I was messing around with for hours, I don't expect it to look picture perfect; right now I just want the idea working before I clean it up. This is an isolated part of code from a bigger calculator I'm working on.
My teacher enjoys breaking our programs in any way possible, so I'm trying to show him up by making mine unbreakable. In my original project I have it set up to use a process similar to this in order to receive the number inputted by the user. In the original project I use isdigit to check if they're inputting characters where they're inputting numbers, so that's why I have the input originally saved as a string.

In order to check for overflow I have it set up so that it manually adds on each number until it reaches overflow (overflows an int), then it breaks out and resets. If you input 21 it will output 21. If you input 999999999999, it'll overflow and work properly. But once I start having 999999999999999999999999999 (enough to fill up 2-3 lines of my console), it outputs 1.#INF without adding up the numbers, outputs overflow (which is good), and resets everything (which is good), but once it reaches the end of the program it crashes. I have no idea how to stop it from doing so, besides it just being unnecessary for me to input so much.

A friend of mine doesn't have the fix for characters, so if you input fsdfsdf21 his will break; but for his, if you spam 9's it outputs 1.#INF yet continues to run (he accepts the input as an int). I don't know if anyone will know what the problem is =/.
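For what it's worth, the C runtime can do this overflow and garbage detection directly. A sketch of that approach (not the poster's code: strtol with errno, which avoids the float accumulator that produces 1.#INF):

```cpp
#include <cerrno>
#include <cstdlib>
#include <cassert>

// Returns true and stores the value in 'out' only if the whole string
// is a valid long that did not overflow.
bool parse_checked(const char* text, long& out)
{
    char* end = nullptr;
    errno = 0;                          // clear any previous error
    long value = std::strtol(text, &end, 10);

    if (end == text || *end != '\0')    // no digits, or trailing garbage
        return false;
    if (errno == ERANGE)                // overflowed LONG_MIN/LONG_MAX
        return false;

    out = value;
    return true;
}
```

Here "21" parses, while "fsdfsdf21" and a screenful of 9's are rejected cleanly, with no float saturation and nothing left to crash on at exit.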
https://www.daniweb.com/programming/software-development/threads/288825/1-inf-calculator-problem
vmod_saintmode - Man Page

Saint mode backend director

Synopsis

    import saintmode [as name] [from "path"]

    VOID blacklist(DURATION expires)
    STRING status()
    new xsaintmode = saintmode.saintmode(BACKEND backend, INT threshold)
    BACKEND xsaintmode.backend()
    INT xsaintmode.blacklist_count()
    BOOL xsaintmode.is_healthy()

Description

This VMOD provides saintmode functionality for Varnish Cache 4.1 and newer. The code is in part based on Poul-Henning Kamp's saintmode implementation in Varnish 3.0.

Saintmode in Varnish 4.1 is implemented as a director VMOD. We instantiate a saintmode object and give it a backend as an argument. The resulting object can then be used in place of the backend, with the effect that it also has added saintmode capabilities. Any director will then be able to use the saintmode backends, and as backends marked sick are skipped by the director, this provides a way to have fine grained health status on the backends, and making sure that retries get a different backend than the one which failed.

Example:

    vcl 4.0;

    import saintmode;
    import directors;

    backend tile1 { .host = "192.0.2.11"; .port = "80"; }
    backend tile2 { .host = "192.0.2.12"; .port = "80"; }

    sub vcl_init {
        # Instantiate sm1, sm2 for backends tile1, tile2
        # with 10 blacklisted objects as the threshold for marking the
        # whole backend sick.
        new sm1 = saintmode.saintmode(tile1, 10);
        new sm2 = saintmode.saintmode(tile2, 10);

        # Add both to a director. Use sm1, sm2 in place of tile1, tile2.
        # Other director types can be used in place of random.
        new imagedirector = directors.random();
        imagedirector.add_backend(sm1.backend(), 1);
        imagedirector.add_backend(sm2.backend(), 1);
    }

    sub vcl_backend_fetch {
        # Get a backend from the director.
        # When returning a backend, the director will only return backends
        # saintmode says are healthy.
        set bereq.backend = imagedirector.backend();
    }

    sub vcl_backend_response {
        if (beresp.status >= 500) {
            # This marks the backend as sick for this specific
            # object for the next 20s.
            saintmode.blacklist(20s);
            # Retry the request. This will result in a different backend
            # being used.
            return (retry);
        }
    }

VOID blacklist(DURATION expires)

Marks the backend as sick for a specific object. Used in vcl_backend_response. Corresponds to the use of beresp.saintmode in Varnish 3.0. Only available in vcl_backend_response.

Example:

    sub vcl_backend_response {
        if (beresp.http.broken-app) {
            saintmode.blacklist(20s);
            return (retry);
        }
    }

STRING status()

Returns a JSON formatted status string suitable for use in vcl_synth.

    sub vcl_recv {
        if (req.url ~ "/saintmode-status") {
            return (synth(700, "OK"));
        }
    }

    sub vcl_synth {
        if (resp.status == 700) {
            synthetic(saintmode.status());
            return (deliver);
        }
    }

Example JSON output:

    {
      "saintmode" : [
        { "name": "sm1", "backend": "foo", "count": "3", "threshold": "10" },
        { "name": "sm2", "backend": "bar", "count": "2", "threshold": "5" }
      ]
    }

new xsaintmode = saintmode.saintmode(BACKEND backend, INT threshold)

Constructs a saintmode director object. The threshold argument sets the saintmode threshold, which is the maximum number of items that can be blacklisted before the whole backend is regarded as sick. Corresponds with the saintmode_threshold parameter of Varnish 3.0.

Example:

    sub vcl_init {
        new sm = saintmode.saintmode(b, 10);
    }

BACKEND xsaintmode.backend()

Used for assigning the backend from the saintmode object.

Example:

    sub vcl_backend_fetch {
        set bereq.backend = sm.backend();
    }

INT xsaintmode.blacklist_count()

Returns the number of objects currently blacklisted for a saintmode director object.
Example:

    sub vcl_deliver {
        set resp.http.troublecount = sm.blacklist_count();
    }

BOOL xsaintmode.is_healthy()

Checks if the object is currently blacklisted for a saintmode director object. If there are no valid objects available (called from vcl_hit or vcl_recv), the function will fall back to the backend's health function.
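The is_healthy() entry has no example of its own; a hedged sketch in the style of the other examples (fallback_director is a hypothetical second director, not part of this man page):

```
sub vcl_recv {
    if (!sm.is_healthy()) {
        # Hypothetical: route around sm while saintmode considers it sick.
        set req.backend_hint = fallback_director.backend();
    }
}
```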
https://www.mankier.com/3/vmod_saintmode
What are Fine-Grained Notifications?

Prior to Realm Objective-C & Swift 0.99, you could observe changes on your Results, List, or AnyRealmCollection types by adding a notification block. Any time that any of the data you were watching changed, you would get notified and could trigger an update to your UI. A lot of people asked for more precise information about what exactly changed in the underlying data, so they could implement more flexible updates to their app's UI. In 0.99, we gave them just that power, by deprecating the existing addNotificationBlock API and replacing it with a new one:

    func addNotificationBlock(block: (RealmCollectionChange<T>) -> Void) -> NotificationToken

The new API provides information not only about the change in general, but also about the precise indexes that have been inserted into the data set, deleted from it, or modified. You can read more about using this new method in our docs on Collection Notifications, or simply follow this tutorial through for a practical example!

Building a GitHub Repository List App

In this post we're going to look into creating a small app that shows all the GitHub repositories for a given user. The app will periodically ping GitHub's JSON API and fetch the latest repo data, like the number of stars and the date of the latest push. If you want to dig through the complete app's source code as you read this post, go ahead and clone the project.

The app is quite simple and consists of two main classes: one called GitHubAPI, which periodically fetches the latest data from GitHub, and the other is the app's only view controller, which displays the repos in a table view.
Naturally, we'll start by designing a Repo model class in order to be able to persist repositories in the app's Realm:

    import RealmSwift

    class Repo: Object {

        //MARK: properties
        dynamic var name = ""
        dynamic var id: Int = 0
        dynamic var stars = 0
        dynamic var pushedAt: NSTimeInterval = 0

        //MARK: meta
        override class func primaryKey() -> String? {
            return "id"
        }
    }

The class stores four properties: the repo name, the number of stars, the date of the last push, and, last but not least, the repo's id, which is the primary key for the Repo class. GitHubAPI will periodically re-fetch the user's repos from the JSON API. The code loops over all JSON objects and, for each object, checks whether the id already exists in the current Realm, updating or inserting the repo accordingly:

    if let repo = realm.objectForPrimaryKey(Repo.self, key: id) {
        //update - we'll add this later
    } else {
        //insert values fetched from JSON
        let repo = Repo()
        repo.name = name
        repo.stars = stars
        repo.id = id
        repo.pushedAt = NSDate(fromString: pushedAt, format: .ISO8601(.DateTimeSec)).timeIntervalSinceReferenceDate
        realm.add(repo)
    }

This piece of code inserts all new repos that GitHubAPI fetches from the web into the app's Realm. Next we'll need to show all Repo objects in a table view. We'll add a Results<Repo> property to ViewController:

    let repos: Results<Repo> = {
        let realm = try! Realm()
        return realm.objects(Repo).sorted("pushedAt", ascending: false)
    }()

    var token: NotificationToken?

repos defines a result set of all Repo objects sorted by their pushedAt property, effectively ordering them from the most recently updated repo to the one getting the least love.
:broken_heart:

The view controller will need to implement the basic table view data source methods, but those are straightforward so we won't go into any details:

    extension ViewController: UITableViewDataSource {
        func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
            return repos.count
        }

        func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
            let repo = repos[indexPath.row]
            let cell = tableView.dequeueReusableCellWithIdentifier("RepoCell") as! RepoCell
            cell.configureWith(repo)
            return cell
        }
    }

Inserting New Repos

Next, we'll need to react to updates. In viewDidLoad() we'll add a notification block to repos, using the new (bam! :boom:) fine-grained notifications:

    token = repos.addNotificationBlock { [weak self] (changes: RealmCollectionChange) in
        guard let tableView = self?.tableView else { return }

        switch changes {
        case .Initial:
            tableView.reloadData()
            break
        case .Update(let results, let deletions, let insertions, let modifications):
            tableView.beginUpdates()
            //re-order repos when new pushes happen
            tableView.insertRowsAtIndexPaths(insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
                withRowAnimation: .Automatic)
            tableView.endUpdates()
            break
        case .Error(let error):
            print(error)
            break
        }
    }

This is quite a long piece of code, so let's look at what's happening in there. We add a notification block to repos and create a local constant tableView that allows us to work with the controller's table view. The key to making the most of fine-grained notifications is the changes parameter that you get in your notification block. It is a RealmCollectionChange enumeration and there are three different values:

- .Initial(let result) – This is the very first time the block is called; it's the initial data you get from your Results, List, etc. It does not contain information about any updates, because you still don't have previous state – in a sense all the data has just been "inserted".
In the example above, we don't need to use the Results object itself – instead we simply call tableView.reloadData() to make sure the table view shows what we need.

- .Update(let result, let insertions, let deletions, let updates) – This is the case you get each time after the initial call. The last three parameters are [Int], arrays of integers, which tell you which indexes in the data set have been inserted, deleted, or modified.

- .Error(let error) – This is everyone's least favorite case – something went wrong when refreshing the data set.

Since we're looking into how to handle fine-grained notifications, we are interested in the line that goes over insertions and adds the corresponding rows into the table view:

    tableView.insertRowsAtIndexPaths(insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
        withRowAnimation: .Automatic)

We convert (or map, if you will) insertions from [Int] to [NSIndexPath] and pass it to insertRowsAtIndexPaths(_:withRowAnimation:). That's all it takes to have the table view update with a nice animation!

When we run the app for the very first time it will fall on the .Initial case, but since there won't be any Repo objects yet (because we haven't fetched anything yet), tableView.reloadData() will not do anything visible on screen. Each time you start the app after the very first time, there will be stored Repo objects, so initially the app will show the existing data and will update it with the latest values when it fetches the latest JSON from the web.

When GitHubAPI fetches the user's repos from the API, the notification block will be called again, and this time insertions will contain all the indexes where repos were inserted, much like so:

    [0, 1, 2, 3, 4, 5, 6, etc.]

The table view will display all inserted rows with a nice animation. That's neat, right?
And since GitHubAPI is periodically fetching the latest data, when the user creates a new repo it will pop up shortly in the table view (i.e., it comes as another insertion update when it's saved into the app's Realm).

Re-ordering the list as new data comes in

repos is ordered by pushedAt, so any time the user pushes to any of their repositories, that particular repo will move to the top of the table view. When the order of the data set's elements changes, the notification block will get called with both insertion and deletion indexes:

    insertions = [0]
    deletions = [5]

What happened in the example above is that the element that used to be at position 5 (don't forget the repos are ordered by their last push date) moved to position 0. This means we will have to update the table view code to handle both insertions and deletions:

    tableView.insertRowsAtIndexPaths(insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
        withRowAnimation: .Automatic)
    tableView.deleteRowsAtIndexPaths(deletions.map { NSIndexPath(forRow: $0, inSection: 0) },
        withRowAnimation: .Automatic)

Do you see a pattern here? The parameters you get in the .Update case suit the UITableView API perfectly. #notacoincidence

With code to handle both insertions and deletions in place, we only need to look into updating the stored repos with the latest JSON data and reflecting the changes in the UI. Back in GitHubAPI, we need the code to update or insert depending on whether a repo with the given id already exists. The initial code that we had turns into:

    if let repo = realm.objectForPrimaryKey(Repo.self, key: id) {
        //update - this is new!
        let lastPushDate = NSDate(fromString: pushedAt, format: .ISO8601(.DateTimeSec))
        if repo.pushedAt.distanceTo(lastPushDate.timeIntervalSinceReferenceDate) > 1e-16 {
            repo.pushedAt = lastPushDate.timeIntervalSinceReferenceDate
        }
        if repo.stars != stars {
            repo.stars = stars
        }
    } else {
        //insert - we had this before
        let repo = Repo()
        repo.name = name
        repo.stars = stars
        repo.id = id
        repo.pushedAt = NSDate(fromString: pushedAt, format: .ISO8601(.DateTimeSec)).timeIntervalSinceReferenceDate
        realm.add(repo)
    }

This code checks whether pushedAt is newer in the received JSON data than the date we have in Realm and, if so, updates the pushed date on the stored repo. (It also checks if the star count changed and updates the stored repo accordingly. We'll use this info in the next section.)

Now, any time the user pushes to one of their repositories on GitHub, the app will re-order the list accordingly.

You can do the re-ordering in a more interesting way in certain cases. If you are sure that a certain pair of insert and delete indexes is actually an object being moved across the data set, look into UITableView.moveRowAtIndexPath(_:toIndexPath:) for an even nicer move animation.

Refreshing table cells for updated items

If you are well-versed in the UITableView API, you probably already guessed that we could simply pass the modifications array to UITableView.reloadRowsAtIndexPaths(_:withRowAnimation:) and have the table view refresh the rows that have been updated. However… that's too easy. Let's spice it up a notch and write some custom update code!

When the star count on a repo changes, the list will not re-order, so it will be difficult for the user to notice the change. Let's add a smooth flash animation on the row that got some stargazer love, to attract the user's attention.
:sparkles:

In our custom cell class we'll need a new method:

    func flashBackground() {
        backgroundView = UIView()
        backgroundView!.backgroundColor = UIColor(red: 1.0, green: 1.0, blue: 0.7, alpha: 1.0)
        UIView.animateWithDuration(2.0, animations: {
            self.backgroundView!.backgroundColor = UIColor.whiteColor()
        })
    }

That new method replaces the cell background view with a bright yellow view and then tints it slowly back to white. Let's call that new method on any cells that need to display an updated star count. Back in the view controller we'll add, under the .Update case:

    ... //initial case up here
    case .Update(let results, let deletions, let insertions, let modifications):
        ... //insert & delete rows
        for row in modifications {
            let indexPath = NSIndexPath(forRow: row, inSection: 0)
            let cell = tableView.cellForRowAtIndexPath(indexPath) as! RepoCell
            let repo = results[indexPath.row]
            cell.configureWith(repo)
            cell.flashBackground()
        }
        break
    ... //error case down here

We simply loop over the modifications array and build the corresponding table index paths to get each cell that needs to refresh its UI. We fetch the Repo object from the updated results and pass it into the configureWith(_:) method on the cell (which just updates the text of the cell labels). Finally we call flashBackground() on the cell to trigger the tint animation.

Oh hey – somebody starred one of those repos as I was writing this post:
If you’d like to try building a table view with fine-grained notifications with more than one section you might run into complex cases when you will need to batch the notifications so that you can update the table with a single call to beginUpdates() and endUpdates() . If you want to give the app from this post a test drive, you can clone it from this repo . We are excited to have released the most demanded Realm feature of all time and we’re looking to your feedback! Tell us what you think on Twitter & a Simple Swift App With Fine-Grained Notifications 评论 抢沙发
http://www.shellsec.com/news/16074.html
CC-MAIN-2017-22
refinedweb
2,151
54.73
Latest python I2c module issue Not sure if this is the place to raise this but the latest pythonI2c module appears to have an issue. I get python Python 2.7.13 (default, Sep 18 2017, 20:32:03) [GCC 5.4.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. from OmegaExpansion import onionI2C Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: Error relocating /usr/lib/python2.7/OmegaExpansion/onionI2C.so: i2c_writeBufferRaw: symbol not found The latest i2C packages appear to. Regards Paul - Patrick Fehr Same problem here. I tried downgrading with several methods but unsuccessful. Package pyOmegaExpansion version 0.4-1 has no valid architecture, ignoring. @Paul-Austen how did you downgrade if I may ask? PS: The low version of opkg itself allows for no opkg install package=version notation and if I use --force-downgrade opkg installs 0.5-1 regardless. These tickets might be related/duplicates: Initially I reset the Omega2 to factory settings using the following from a serial console firstboot -y sync reboot After setting up the wifi I then copied the following files to flash via scp pyOmegaExpansion_0.4-1_mipsel_24kc.ipk pyOnionI2C_0.4-1_mipsel_24kc.ipk and then ran opkg install pyOmegaExpansion_0.4-1_mipsel_24kc.ipk opkg install pyOnionI2C_0.4-1_mipsel_24kc.ipk This did the trick. However opkg remove pyOmegaExpansion opkg remove pyOnionI2C Followed by opkg install pyOmegaExpansion_0.4-1_mipsel_24kc.ipk opkg install pyOnionI2C_0.4-1_mipsel_24kc.ipk should do it. I downloaded the two package files from Paul @Paul-Austen my mistake the download URL should be @Paul-Austen said in Latest python I2c module issue: from OmegaExpansion import onionI2C Just tried the following minimal workaround opkg remove pyOnionI2C opkg install pyOnionI2C_0.4-1_mipsel_24kc.ipk and this gets rid of the import error Paul - Patrick Fehr I see my mistake, I was getting the packages from the wrong repo (onion instead of onion2). 
Rookie mistake. Now everything works - except fo course for both packages 0.5-1 version remains broken. Thank you very much! I have put all steps together for future visitors For onion omega2: $ wget repo.onion.io/omega2/packages/onion/pyOnionI2C_0.4-1_mipsel_24kc.ipk $ wget repo.onion.io/omega2/packages/onion/pyOmegaExpansion_0.4-1_mipsel_24kc.ipk $ opkg list-installed | grep pyO pyOmegaExpansion - 0.5-1 pyOnionI2C - 0.5-1 $ opkg remove pyOnionI2C pyOmegaExpansion $ opkg install pyOmegaExpansion_0.4-1_mipsel_24kc.ipk Installing pyOmegaExpansion (0.4-1) to root... Configuring pyOmegaExpansion. $ opkg install pyOnionI2C_0.4-1_mipsel_24kc.ipk Installing pyOnionI2C (0.4-1) to root... Configuring pyOnionI2C. $ opkg list-installed | grep pyO pyOmegaExpansion - 0.4-1 pyOnionI2C - 0.4-1 Is it for Omega1 or Omega2 or it's for both? - Patrick Fehr @ccs-hello the problem can be reproduced on Omega2+ but I don't have an Omega1 so I am unable to make a statement for Omega1.
http://community.onion.io/topic/2403/latest-python-i2c-module-issue/8
CC-MAIN-2019-13
refinedweb
470
52.76
Very nice, Can you turn the money feature off ? for milsim and such. Great Stuff! Fantastic! This build has been the most fun to play with. thanks for the upload! is there a way to decrease the amount of time to build like instead of having to hit the build option 10 times could i set it to like 5..... or 2 build and how do i add other items to Build/buy Total comments : 4, displayed on page: 4 null = [] execVM "Loli_Defense\LD_Init.sqf"; #include "Loli_Defense\LD_GUI_Defines.hpp" #include "Loli_Defense\LD_GUI_Dialogs.hpp" this setVariable ["materials", amount_of_material, true]; this setVariable ["cash", amount_of_cash,!
https://www.armaholic.com/page.php?id=24150
CC-MAIN-2021-10
refinedweb
102
77.64
How to set up and tune the FM radio for Windows Phone 8 [ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ] This topic describes how to connect to the FM radio in a Windows Phone app that targets Windows Phone OS 7.1. This topic contains the following sections. It can take up to three seconds for the first FMRadio method call to return after the phone boots up. After the FM Radio is first initialized, if the phone is running in an active state, the methods will typically return within 100 ms. Avoid setting up the FM Radio or synchronizing the UI thread while the app is running. Delay sending further commands to the FM Radio until at least one second after the FM Radio is enabled. For more information and performance tips, see Creating High Performance Applications for Windows Phone. To set up the FM radio: Add a using directive to include the Microsoft.Devices.Radio namespace, which contains the FMRadio API. Create an instance of the FMRadio class and then set the power mode.
https://msdn.microsoft.com/library/windows/apps/ff769541
CC-MAIN-2016-36
refinedweb
188
64.41
Writing Portable HTML5 Server Side Events Applications using the Atmosphere Framework The Atmosphere Framework easily allow the creation of HTML5 Server Side Events (SSE). Better, any existing Servlet based application can add SSE support without any changes to their existing application. HTML5 Server Side Events (SSE) are getting more and more adopted and support for it starts to appear. As an example, the GlassFish Application Server recently added support for it, the upcoming release of the Jersey Framework is also adding some sort of support, and framework like jQuery-Socket has sample supporting SSE as well. Both GlassFish and Jersey suffer major issues: First, you need to use non portable API to start using SSE (will only work in GlassFish or Jersey) and second, they expose special API to support SSE, which is a major mistake in my opinion. Just take a look at how simple it can be to implement SSE using the jQuery-Socket sample. Why would you use heavyweight API like GlassFish or Jersey to achieve something that simple? Not only that, but currently Internet Explorer isn’t supporting SSE, so if you use either use GlassFish or Jersey, your application will NOT WORK with Internet Exporer. Oups!!! This is where Atmosphere comes to the rescue. With Atmosphere, you don’t have to add anything special to have SSE supported by your application. Event better, you can ask Atmosphere to fallback to another technique if SSE is not supported. As an example, you can ask for WebSocket or Long-Polling to be used when SSE is not available. Atmosphere will transparently fall back. On the server side, you don’t need to care about anything as Atmosphere will do it for you. As an example, let’s write a simple Chat application using Atmosphere and Jersey (but not the Jersey SSE implementation!). 
First, let’s define a super simple Jersey Resource: 1 @Path("/") 2 public class ResourceChat { 3 4 @Suspend(contentType = "application/json") 5 @GET 6 public String suspend() { 7 return ""; 8 } 9 10 @Broadcast(writeEntity = false) 11 @POST 12 @Produces("application/json") 13 public Response broadcast(Message message) { 14 return new Response(message.author, message.message); 15 } 16 } The important code here is line 4, where the Atmosphere Suspend annotation is used to suspend to tell Atmosphere to not commit the response, e.g leave the connection open. Under the hood it means the Atmosphere native SSE implementation will be enabled and SSE message transparently handled. With line 10, we are telling Atmosphere to broadcast the message back the client to all suspended connections, or stated differently, to all our connected Browser, supporting SSE or not. This is important to not here that if the remote browser isn’t supporting SSE, a fallback transport will be used. For that sample let’s use long-polling, but if you are using Internet Explorer 10 we could have chosen WebSockets as a fallback mechanism. You can download the server code (complete sample) here. 
No on the client side, all we need to do is to tell Atmosphere to use SSE as its main transport (complete code here): 1 var request = { url: document.location.toString() + 'chat', 2 contentType : "application/json", 3 logLevel : 'debug', 4 transport : 'sse' , 5 fallbackTransport: 'long-polling'}; 6 7 8 request.onOpen = function(response) { 9 content.html($('<p>', { text: 'Atmosphere connected using ' + response.transport })); 10 input.removeAttr('disabled').focus(); 11 status.text('Choose name:'); 12 }; 13 14 request.onMessage = function (response) { 15 var message = response.responseBody; 16 try { 17 var json = JSON.parse(message); 18 } catch (e) { 19 console.log('This doesn\'t look like a valid JSON: ', message.data); 20 return; 21 } 22 23 if (!logged) { 24 logged = true; 25 status.text(myName + ': ').css('color', 'blue'); 26 input.removeAttr('disabled').focus(); 27 } else { 28 input.removeAttr('disabled'); 29 30 var me = json.author == author; 31 var date = json.time; 32 addMessage(json.author, json.text, me ? 'blue' : 'black', new Date(date)); 33 } 34 }; 35 36 request.onError = function(response) { 37 content.html($('<p>', { text: 'Sorry, but there\'s some problem with your ' 38 + 'socket or the server is down' })); 39 }; 40 41 var subSocket = socket.subscribe(request); Important code here is line 1 where we create a request and configuring SSE as the main transport and Long-Polling as a fallback transport. Note that we could replace SSE with WebSocket and our application WILL STILL WORK, WITHOUT ANY CHANGES NEEDED ON THE SERVER SIDE!!! Line 8, 14 and 36 are used to define some function that will be invoked when the connection gets established, when messages are received and when network errors occurs. And your application will works everywhere Servlet 2.5 is supported. So the question is: Why using private/non portable API and worse, special API just to support only Server Side Events? Don’t go that route and use Atmosphere, and get all for free! 
For any questions or to download the Atmosphere Client and Server Framework, go to our main site, use our Google Group forum, follow the team or myself and tweet your questions there!
http://jfarcand.wordpress.com/2012/05/14/writing-portable-html5-server-side-evens-application-using-the-atmosphere-framework/
For my API I'm using Entity Framework Core with code-first migrations. I've created some relations which are working fine. Now I've added another relation (one-to-many) and suddenly I'm slapped around the ears with this error:

    Cannot insert explicit value for identity column in table 'Companies' when IDENTITY_INSERT is set to OFF.

Of course, I must be doing something wrong but I just can't figure out what. I've come across more questions like this where the answer was "set IDENTITY_INSERT to ON", but that doesn't work for me since EF is handling everything.

My Company class, which can belong to a Group:

    public class Company
    {
        // Primary key
        public int Id { get; set; }

        // The optional Id of a Group
        public int? GroupID { get; set; }

        ...
    }

And the Group class:

    public class Group
    {
        // Primary key
        public int Id { get; set; }

        // Name of the Group
        public string Name { get; set; }

        // List of Companies in this group
        public IEnumerable<Company> Companies { get; set; }
    }

The code used for handling the POST:

    // POST api/groups
    [HttpPost]
    public async Task<IActionResult> Post([FromBody] Group group)
    {
        try
        {
            if (ModelState.IsValid)
            {
                _context.Groups.Add(group);
                await _context.SaveChangesAsync();

                return CreatedAtRoute("GetGroup", new { id = group.Id }, group);
            }
            return BadRequest(ModelState);
        }
        catch (Exception e)
        {
            return BadRequest($"Unable to create: {e.Message}");
        }
    }

In my database, all columns, indexes and keys are created as expected, just like every other one-to-many relationship I've got in my API. But this specific case just seems to end up in misery... The class I'm trying to add to the database:

The problem is that there's no hint for EF to know whether the Company (under the Group relationship) is being explicitly inserted or whether it is supposed to use the pre-existing one from the database. Since those instances are disconnected from the DbContext, there is no indication whether they exist in the database by the time EF tries to generate its SQL command.
There is no easy way here, at least there was none by the time I played with EF Core. You should either:

- use the ID (the foreign-key property) instead of the navigation property, so you'll avoid this whenever possible, or;
- fetch the existing Company (and attach it to Group) before saving the desired data (e.g. before saving Group).

So, for instance:

    var companyDB = await context.Companies.SingleOrDefaultAsync(c => c.Id == group.Company.Id);
    group.Company = companyDB;

    context.Groups.Add(group);
    await context.SaveChangesAsync();

Yes, you're making two trips to the database. That's why I'd suggest using the first approach, so you can avoid this fetch and just save the Group entity directly into the DB. That does not, however, prohibit you from sending a navigation instance of Company to your view. Just create some entity-related classes which correlate to your database so you can load/save data using this entity type, and create a DTO object which will be your endpoint input/output whenever needed. Binding one into the other can be done by using AutoMapper, manual LINQ or other approaches.

I had a similar problem when trying to save an entity (for ex., Cat) which had a many-to-one relationship to an existing entity (property Owner, pointing at Person). In addition to a) getting the relationship (Person) from the database before saving the entity (Cat), and b) adding another property (int PersonId) to the entity (Cat), I discovered what I think is the best solution: c) stick to "navigation" properties only (do not create extra int <<Name>>Id properties), and when referencing is needed, use:

    cat.Owner = dbContext.Person.Attach(new Person { Id = 123 }).Entity;
https://entityframeworkcore.com/knowledge-base/52662734/cannot-insert-explicit-value-for-identity-column-in-table-when-identity-insert-is-set-to-off--
No. JavaScript is the current "official" language because of historical reasons. I just wanted to make it clear that you can use other languages, but you just have to be aware of the tradeoffs.

8 Jan 2017 · answer

You can write in most languages in the browser right now, and it's getting easier and more popular as time goes by. There are many serious downsides to using a different language, as well as compelling advantages. Of course, most of the benefits only show up with large, long-lived applications. There are three main categories of alternate languages to be aware of.

JavaScript is popular enough that most major languages have tools to "compile" them to JavaScript. If you already "speak" one of these languages then they can be a great option to bring your existing code and expertise to the web.

Each of these languages was designed by very smart people with a JavaScript background to solve real-world problems. They are definitely not a good idea if you're making small apps, since ES2016 is pretty good anyway and you'll lose compatibility with the existing ecosystem. However, if you're making a large webapp that's going to be used in production and have features added over time, then you should definitely consider these languages. There are more, but I'll stop there since those are the biggest.

TL;DR: If you want to make video games, write them with Unity and they'll run everywhere, including the web. One big problem with all these "compile-to-JS" languages is that JavaScript is kinda slow. It is weakly typed, and so the JavaScript engines have to do crazy on-the-fly recompilations as the data changes for maximum performance. If only there were some kind of low-level language which we could run in the browser but which didn't need any kind of optimizing engine. Like an assembly language for the web.
Enter ASM.js: ASM.js is a very ugly subset of JavaScript that looks like this:

    function strlen(ptr) {
        ptr = ptr|0;
        var curr = 0;
        curr = ptr;
        while ((MEM8[curr]|0) != 0) {
            curr = (curr + 1)|0;
        }
        return (curr - ptr)|0;
    }

Notice all the |0? Those are valid JavaScript which cast the values to integers. A smart JavaScript engine will pick up on that and produce much faster code. ASM.js also does some other things, like manually managing its own memory in a TypedArray, which means no pauses for Garbage Collection. With these optimizations, paired with a browser that's also been optimized for ASM.js, you can usually get within 1/2 the performance of native code. Another important tool is Emscripten, which compiles C/C++ code to ASM.js. (I'm simplifying here. Go check it out if you want the full story.) Thanks to the great performance of ASM.js, people have been able to compile entire video games for the browser. In fact, Unity, which is one of the most popular video game engines, can export to asm.js as one of its official targets!

ASM.js is just about as far as you can push regular JavaScript. But thankfully, the browser vendors have come together to give us something even better: WebAssembly. WebAssembly is a cross-platform assembly language that's safe to run with untrusted code. This means that, someday soon, almost any language will be able to compile for the browser without sacrificing speed or having subtle quirks due to JavaScript's limitations. The future is bright for assembly languages on the web. ASM.js already works in all modern browsers and WebAssembly works in Firefox Nightly and Chrome Canary. If you want to have a taste of the future, go check out this video game demo which has both ASM.js and WebAssembly versions.

If you're looking to get into front end development, then JavaScript is still the way to go. JavaScript is one of the few languages that actually has a job deficit, so it's a very good career move.
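As an aside, the |0 coercion described in the asm.js example is plain JavaScript you can try in any console today. The snippet below is my own illustration (a Uint8Array stands in for the MEM8 heap typed array), not code from the asm.js spec.

```javascript
// `x | 0` truncates to a 32-bit signed integer; this is the annotation
// an asm.js-aware engine keys on to emit integer machine code.
function toInt32(x) {
  return x | 0;
}

// The strlen example, runnable against a simulated heap:
// scan bytes until a 0 terminator is found.
function strlen(MEM8, ptr) {
  ptr = ptr | 0;
  var curr = ptr;
  while ((MEM8[curr] | 0) != 0) {
    curr = (curr + 1) | 0;
  }
  return (curr - ptr) | 0;
}

var heap = new Uint8Array([104, 105, 0]); // the bytes of "hi" plus a terminator
console.log(toInt32(3.7));    // 3 (fractional part dropped)
console.log(strlen(heap, 0)); // 2
```

The explicit parentheses around (MEM8[curr] | 0) matter: != binds tighter than |, so without them the condition parses as MEM8[curr] | (0 != 0) rather than the intended integer comparison.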
Whatever company you get a job with will probably already have decided whether to use vanilla JS or some other language, and won't be interested in changing. If they use some alternate language, then their strategy will probably be to hire JS developers and train them in.

TL;DR: Because, standards. One of the biggest pain points among front-end developers is cross-browser compatibility, and nothing would get them more enraged than seeing a browser deviate from a standard. If I had a nickel for every time I heard @alkshendra and @fazlerocks curse IE <9, I would have amassed quite a fortune. :) As @jiyinyiyong pointed out, a browser doing things "uniquely" is a big no; it fragments the concept of the "open" web. If one browser does it, all browsers should fall in line, and it would mean the creation of a new standard, which is a long and painful process. On a related note, see @leaverou's answer to Why does it take so long for new specs to get browser adoption?

Google did propose adding support for multiple VMs to WebKit. One of the comments in the thread hits it spot on:

Previous branches have been used to bring up interestingly complicated features, or features that had the potential to cause dramatic stability issues during their early work (such as the old svg-experimental branch). This project appear to be largely a make work project as it's already possible to have bindings for multiple languages (as the C++, GLib, ObjC, V8 and JSC bindings demonstrate). It seems an academic exercise to see if we can create a general architecture to make more bindings, as is exporting support for proprietary extensions like vbscript, python or dart to the web. As the 90s demonstrated such "features" are bad for developers, and bad for the open web.
This may not be apparent to people who have not spent years working in the environment but random additions frequently cause significant pain down the road, even in the cases where the overall result was a "good" thing -- such as canvas - for the subsequent standardisation caused us quite a bit of compatibility problems, even though it was a very compact and contained api. — Oliver 9 Jan 2017 · answer A lot of answers are pretty good but miss the core of the question: why did we end up with JS specifically? Originally there was only HTML. The arrival of JS even pre-dates that of CSS, to the extent that at some point there were actual plans to create a styling language based on JS (JSSS = JavaScript Style Sheets, not to be confused with JSS which is a recent styling library for JS). The history of JS has been explored in great detail elsewhere but the important takeaways are that back when "the web" was split between Netscape and Microsoft, Netscape wanted to create a scripting language for web pages around the same time Sun tried to establish Java as the universal language for writing real applications. At the time "build cross-platform applications" and "breathe life into web pages" were two different problem sets and the licensing deal that resulted in JS being marketed as "JavaScript" so Sun would let it live alongside the Java plugin for Netscape is an example of how deep the divide ran. Web pages are HTML documents, no sane person would try to create applications with that, right? At the time the primary use cases for JS were doing mouseover effects on images, creating popup windows and animating the status bar (before someone found a way to exploit that feature for nefarious purposes and browsers no longer supported it). But Java (and Shockwave/Flash) were inherently limited by living inside their opaque sandboxes where JS could access the entire page. Meddling with JS was also very easy because there was no need for compilation or expensive software. 
It was also "not a real programming language", so non-programmers had fewer barriers to dip their toes in or copy code snippets from their peers. Let's emphasise that: Java and Shockwave and Flash were not competitors to JS. "Real programmers" didn't like JS. It was just a "scripting language" for "scripting web pages" so designers could add some interactivity to their "pretty pictures". The only direct competitor was VBScript, and VBScript was only supported by Microsoft (who also supported a reverse-engineered clone of JavaScript called JScript). Ultimately standards are what helped JS survive and flourish. Standards helped nip the threat of fragmentation (e.g. via JScript) in the bud. But what gave JS the initial edge was simply the contempt "real programmers" had for "scripting languages", and the inclusive environment this created for beginners, as well as the ability to instantly see the effect of your experiments.

8 Jan 2017 · answer

Technical reasons? Not really. But human/political reasons definitely. The history of JavaScript is a great yarn; both Eich and Crockford tell it pretty well, and it should give an idea of the sort of games that were being played. From an author's point of view, the web was created in the early days by a ton of people, only a handful of whom were Serious Programmers(tm) who liked Java. You can write a few lines of HTML and JavaScript in a text file, run it in a browser and it works. That was never the case with Java, and there may not be much reason beyond that. Barring the obvious one, that most web people I know hate writing Java as much as Java people hate writing JavaScript. As a sidenote, I think ease of use also played a huge part in PHP's success, because it was so commonly installed on web servers you could just rename your .html files .php and suddenly you could do new and awesome tricks! ;) People didn't care about the purity of language, they were too busy making stuff in the crazy wild west of the web.
The funny thing of course is that JavaScript gets more like Java all the time, in ways both superficial and substantial. Most React projects don't run in the browser as written; the source code has to be converted into something that runs in the browser. Which is basically where WebAssembly comes in: if you're transpiling into a target language you basically never see, why not make that language faster? I don't see the average author writing things directly in wasm, although I will probably be wrong ;) I think we're entering a period of time where people will write whatever language they like and transpile it into JavaScript (and eventually wasm) during deployment. To put it another way: nobody who hates JavaScript has to write it any more, so it barely matters why Java didn't "win". Anyone who wishes Java won can still write Java for the server, or find some flavour of x-to-JavaScript that suits them.

8 Jan 2017 · answer

The primary "technical" reason is that the web platform is in a transitional phase as far as the foundational technologies required for cross-language support are concerned. NPAPI (plagued by security issues) is well on its way towards deprecation, and WebAssembly is still in its infancy. Once WebAssembly matures and is more widely supported, it is inevitable that ecosystems will evolve around frameworks and libraries targeting the web in various languages. In fact, the Rust community is actively working on preliminary support for WebAssembly. An alternative route is to compile languages to a JavaScript target. There are of course a lot of languages that do this today, e.g. Bridge.NET for C#, JSweet and GWT for Java, Opal for Ruby, Transcrypt for Python, etc.
However, the primary issue is that none of these languages were originally intended to be compiled to JavaScript, and hence when they are ported to JavaScript the implementations end up being subtly different from the original implementations and/or the runtime characteristics end up being quite different from the original language. And porting the libraries in the ecosystem is almost always a very cumbersome endeavour. The major problematic aspect, which is always present when you write interop layers to target JavaScript libraries, is that you invariably introduce a second layer of abstraction that beginners have to struggle with. Sure, you can use React with Opal and the integration layer works really well (I have used it myself), but if a beginner is trying to learn React through Opal it is almost double the effort, because he/she will first have to understand the original React concepts (which are explained through JavaScript API references) and then would have to learn how to apply these concepts through the Opal interface, which has a lot of subtly different characteristics (underscored names, a different way to associate event handlers, a block-oriented DSL rather than JSX, etc.). To be fair, none of these subtleties are bad per se, because without them the usage will not be idiomatic Ruby, but it is undeniable that they add a cost to the learning curve. The above is explicitly not a criticism of the immense work that has gone into libraries like hyperloop. To their credit, getting started with hyperloop requires adding just a few dependencies to a Gemfile, and your asset processing pipeline is up and running in seconds. When I first tried it I was amazed at how frictionless the initial experience was compared to things like webpack. But having said that, the ground reality is that unless there is a strong commercial incentive, no one puts in the effort to maintain exhaustive parallel documentation for interop layers that mirrors each release of the wrapped libraries.
This is precisely the reason why it is extremely unlikely that these ecosystems will ever catch on with the popularity and momentum that the JavaScript ecosystem has.

8 Jan 2017 · answer

Thing is, every browser, the big ones, like Chrome, Firefox, Internet Explorer, as well as the small ones, like Lynx, Midori, etc., all have to be able to display your site. So, people went ahead and thought about it. What kind of information should be important enough that every browser should be able to handle it? And those smart people decided that HTML (not Jade or JSON or...) and CSS (not Sass, Less, etc.) and JS (not Ruby, C#, etc.) should be that minimum standard which has to be supported by all browsers so a website can be displayed correctly. While that is one decision on standards, nothing stops you from using a different language for the web. Go ahead and use C#! But you will see that unless you get C# standardized, normal people with mainstream browsers will not be able to execute your scripts. Standardizing a new language is a very painful procedure. It includes explaining why the language in question is better suited to the web than JavaScript. Is it because you prefer that language? Then screw you! No one cares about your opinion! That might sound harsh, but the only thing which is relevant to the world is facts. If you present facts to the W3C which prove that your language is good enough for the web, they might accept it (a thing at which Google and Microsoft and many others failed...). So, the thing is: JavaScript was chosen as a standard, because of reasons. You either accept the standard and build a site which everyone can view, or you do lots of research and explain to the world why an interpreter for a different language should be included in every major browser.
That's quite hard, and you will have to prove definite shortcomings of JavaScript, or rather ECMAScript (which cannot be overcome by a standard update, like ES8), which make the existence of a different programming language in a browser plausible for normal conditions (no special cases allowed). I found a really good write-up of a person's experience with Java vs other standard web languages.

13 Jan 2017 · answer

Not sure I can compete with some of the excellent detail already provided, but I think, for me, it boils down to compiled languages versus interpreted/scripted ones. Websites, needing to be platform agnostic (and the evolving list of browsers ever growing), were always going to favour the latter to stand any chance of adoption. JavaScript briefly contested this title with VBScript, but has long been king on the client side, hence its increasingly broad adoption. Server-side scripting is, of course, a slightly different battle, but results in pure HTML (and JS) arriving at the client browser. Interestingly, new kids on the block like TypeScript, a lot closer to traditional high-level (compiled) languages, still rely heavily upon ECMAScript standards to produce bundled, i.e. compiled, JS.

7 Jan 2017 · answer

Well, for Java you have applets, I think, if you need some extra functionality, or Flash too. But I think the point of the browser is to keep things clear.

7 Jan 2017 · answer

I guess one reason may be that web browsers are not provided by only one company, so no one is allowed to push a closed language for their own profit.
https://hashnode.com/post/is-there-a-technical-reason-on-why-we-couldnt-have-a-language-like-javac-integrated-into-the-browser-instead-of-javascript-cixn984yd000thd5370cvuzac
I'm running VS Code Version 1.5.3 with TypeScript 2 and I can't get my 'os' import to work. I managed to resolve other dependencies, such as Express, by running typings install express --save:

    import { os } from 'os';

You need to add a typings file. Usually, you install typing files (ending in .d.ts) from typings, which you install using npm install -g typings on the command line. However, I can't seem to find a typing for os, which is weird, so you can create a fake module definition to solve it in the meantime:

    // file: os.d.ts
    declare namespace os {
        interface OsStatic {
            // ... everything os has ...
        }
    }
    declare var os: os.OsStatic;

    declare module "os" {
        export = os;
    }

What we do here can be separated into three parts:

- namespace defines the interfaces that compose the library
- var is the exported / main object of the library
- module is an ambient module, since its name is a string. Using that string, Visual Studio can locate the module and allow you to import it.

In general, you should read about typings to manage typing files, but this should work.
https://codedump.io/share/ZbspYg3V3viU/1/how-can-i-import-os-in-typescript
Why isn't my session state working in ASP.NET Core? Session state, GDPR, and non-essential cookies!

- "Gah, this stupid framework doesn't work" (╯°□°)╯︵ ┻━┻

The cause of this "issue" is the interaction between the GDPR features introduced in ASP.NET Core 2.1 and the fact that session state uses cookies. In this post I describe why you see this behaviour, as well as some ways to handle it. The GDPR features were introduced in ASP.NET Core 2.1, so if you're using 2.0 or 1.x you won't see these problems. Bear in mind that 1.x is falling out of support on June 27 2019, and 2.0 is already unsupported, so you should consider upgrading your apps to 2.1 where possible.

Session State in ASP.NET Core

As I stated above, if you're using ASP.NET Core 2.0 or earlier, you won't see this problem. I'll demonstrate the old "expected" behaviour using an ASP.NET Core 2.0 app, to show how people experiencing the issue typically expect session state to behave. Then I'll create the equivalent app in ASP.NET Core 2.1, and show that session state no longer seems to work.

What is session state?

Session state is a feature that harks back to ASP.NET (non-Core) in which you can store and retrieve values server-side for a user browsing your site. Session state was often used quite extensively in ASP.NET apps, but was problematic for various reasons, primarily performance and scalability. Session state in ASP.NET Core is somewhat dialled back. You should think of it more like a per-user cache. From a technical point of view, session state in ASP.NET Core requires two separate parts:

- A cookie. Used to assign a unique identifier (the session ID) to a user.
- A distributed cache. Used to store items associated with a given session ID.

Where possible, I would avoid using session state if you can get away without it. There are a lot of caveats around its usage that can easily bite you if you're not aware of them.
For example:

- Sessions are per-browser, not per logged-in user
- Session cookies (and so sessions) should be deleted when the browser session ends, but might not be.
- If a session doesn't have any values in it, it will be deleted, generating a new session ID
- The GDPR issue described in this post!

That covers what session state is and how it works. In the next section I'll create a small app that tracks which pages you've visited by storing a list in session state, and then displays the list on the home page.

Using session state in an ASP.NET Core 2.0 app

To demonstrate the change in behaviour related to the 2.0 to 2.1 upgrade, I'm going to start by building an ASP.NET Core 2.0 app. As I apparently still have a bazillion .NET Core SDKs installed on my machine, I'm going to use the 2.0 SDK (version number 2.1.202) to scaffold a 2.0 project template. Start by creating a global.json to pin the SDK version in your app directory:

    dotnet new globaljson --sdk-version 2.1.202

Then scaffold a new ASP.NET Core 2.0 MVC app using dotnet new:

    dotnet new mvc --framework netcoreapp2.0

Session state is not configured by default, so you need to add the required services. Update ConfigureServices in Startup.cs to add the session services. By default, ASP.NET Core will use an in-memory session store, which is fine for testing purposes, but will need to be updated for a production environment:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSession(); // add session
    }

You also need to add the session middleware to your pipeline.
Only middleware added after the session middleware will have a access to session state, so you typically add it just before the MVC middleware in Startup.Configure: public void Configure(IApplicationBuilder app, IHostingEnvironment env) { // ...other config app.UseSession(); app.UseMvc(routes => { routes.MapRoute( name: "default", template: "{controller=Home}/{action=Index}/{id?}"); }); } For this simple (toy) example I'm going to retrieve and set a string session value with each page view, using the session-key "actions". As you browse between various pages, the session value will expand into a semicolon-separated list of action names. Update your HomeController to use the RecordInSession function, as shown below: public class HomeController : Controller { public IActionResult Index() { RecordInSession("Home"); return View(); } public IActionResult About() { RecordInSession("About"); return View(); } private void RecordInSession(string action) { var paths = HttpContext.Session.GetString("actions") ?? string.Empty; HttpContext.Session.SetString("actions", paths + ";" + action); } } Note: Session.GetString(key)is an extension method in the Microsoft.AspNetCore.Httpnamespace Finally, we'll display the current value of the "actions" key in the homepage, Index.chstml: @using Microsoft.AspNetCore.Http @{ ViewData["Title"] = "Home Page"; } <div> @Context.Session.GetString("actions") </div> If you run the application and browse around a few times, you'll see the session action-list build up. In the example below I visited the Home page three times, the About page twice, and the Contact page once: If you view the cookies associated with the page, you will see the .AspNetCore.Session cookie that holds an encrypted session ID. If you delete this cookie, you'll see the "actions" value is reset, and the list is lost. This is the behaviour most people expect with session state, so no problems there. 
The difficulties arise when you try the same thing using an ASP.NET Core 2.1 / 2.2 app. Session state problems in ASP.NET Core 2.1/2.2 To create the ASP.NET Core 2.2 app, I used pretty much the same behaviour, but this time I did not pin the SDK. I have the ASP.NET Core 2.2 SDK installed (2.2.102) so the following generates an ASP.NET Core 2.2 MVC app: dotnet new mvc You still need to explicitly install session state, so update Startup as before, by adding services.AddSession() to ConfigureServices and app.UseSession() to Configure. The newer 2.2 templates have been simplified compared to previous versions, so for consistency I copied across the HomeController from the 2.0 app. I also copied across the Index.chtml, About.chtml, and Contact.cshtml view files. Finally, I updated Layout.cshtml to add links for the About and Contact pages in the header. These two apps, while running different versions of ASP.NET Core, are pretty much the same. Yet this time, if you click around the app, the home page always shows just a single visit to the home page, and doesn't record any visits to other pages: Don't click the privacy policy banner - you'll see why shortly! Also, if you check your cookies, you'll find there aren't any! The .AspNetCore.Session cookie is noticeably absent: Everything is apparently configured correctly, and the session itself appears to be working (as the value set in the HomeController.Index action can be successfully retrieved in Index.cshtml). But it seems like session state isn't being saved between page reloads and navigations. So why does this happen in ASP.NET Core 2.1 / 2.2 where it worked in ASP.NET Core 2.0? Why does this happen? GDPR The answer is due to some new features added in ASP.NET Core 2.1. In order to help developers conform to GDPR regulations that came into force in 2018, ASP.NET Core 2.1 introduced some additional extension points, as well as updates to the templates. 
The documentation for these changes is excellent so I'll just summarise the pertinent changes here: - A cookie consent dialog. By default, ASP.NET Core won't write "non-essential" cookies to the response until a user clicks the consent dialog - Cookies can be marked essential or non-essential. Essential cookies are sent to the browser regardless of whether consent is provided, non-essential cookies require consent. - Session cookies are considered non-essential, so sessions can't be tracked across navigations or page reloads until the user provides their consent. - Temp data is non-essential. The TempData provider stores values in cookies in ASP.NET Core 2.0+, so TempData will not work until the user provides their consent. So the problem is that we require consent to store cookies from the user. If you click "Accept" on the privacy banner, then ASP.NET Core is able to write the session cookie, and the expected functionality is restored: So if you need to use session state in your 2.1/2.2 app, what should you do? Working with session state in ASP.NET Core 2.1+ apps Depending on the app you're building, you have several options available to you. Which one is best for you will depend on your use case, but be aware that these features were added for a reason - to help developers comply with GDPR. If you're not in the EU, and so you think "GDPR doesn't apply to me", be sure to read this great post from Troy Hunt - it's likely GDPR still applies to you! The main options I see are: - Accept that session state may not be available until users provide consent. - Disable features that require session state until consent is provided. - Disable the cookie consent requirement. - Mark the session cookie as essential. I'll go into a bit more detail for each option below, just remember to consider that your choice might affect your conformance to GDPR! 1. Accept the existing behaviour The "easiest" option is to just to accept the existing behaviour. 
Session state in ASP.NET Core should typically only be used for ephemeral data, so your application should be able to handle the case that session state is not available. Depending on what you're using it for, that may or may not be possible, but it's the easiest way to work with the existing templates, and the least risky in terms of your exposure to GDPR issues. 2. Disable optional features The second option is very similar to the first, in that you keep the existing behaviour in place. The difference is that option 1 treats session state simply as a cache, so you always assume session values can come and go. Option 2 takes a slightly different approach, in that it segments off areas of your application that require session state, and makes them explicitly unavailable until consent is given. For example, you might require session state to drive a "theme-chooser" that stores whether a user prefers a "light" or "dark" theme. If consent is not given, then you simply hide the "theme-chooser" until they have given consent. This feels like an improvement over option 1, primarily because of the improved user experience. If you don't account for features requiring session state, it could be confusing for the user. For example, if you implemented option 1 and so always show the "theme-chooser", the user could keep choosing the dark theme, but it would never remember their choice. That sounds frustrating! There's just one big caveat for this approach. Always remember that session state could go away at any moment. You should treat it like a cache, so you shouldn't build features assuming a) it'll always be there (even if the user has given consent), or b) that you'll have one session per real user (different browsers will have different session IDs for the same logical user). 3. Disable the cookie consent requirement If you're sure that you don't need the cookie consent feature, you can easily disable the requirement in your apps. 
The default template even calls this out explicitly in Startup.ConfigureServices where you configure the CookiePolicyOptions:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.AddSession(); // added to enable session
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}
```

The CheckConsentNeeded property is a predicate that is called by the framework to check whether non-essential cookies should be written to the response. If the function returns true (as above, the default in the template), then non-essential cookies are skipped. Change this to false and session state will work without requiring cookies to be explicitly accepted.

4. Mark session cookies as essential

Disabling the cookie consent feature entirely may be a bit heavy handed for your applications. If that's the case, but you want the session state to be written even before the user accepts cookies, you can mark the session cookie as essential. Just to reiterate, session state was considered non-essential for good reason. Make sure you can justify your decisions and potentially seek advice before making changes like this or option 3.

There is an overload of services.AddSession() that allows you to configure SessionOptions in your Startup file. You can change various settings such as session timeout, and you can also customise the session cookie. To mark the cookie as essential, set IsEssential to true:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.CheckConsentNeeded = context => true; // consent required
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    services.AddSession(opts =>
    {
        opts.Cookie.IsEssential = true; // make the session cookie Essential
    });

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}
```

With this approach, the cookie consent banner will still be shown, and non-essential cookies will not be written until it's clicked.
But session state will function immediately, before consent is given, as it's considered essential:

Summary

Finally, I described four ways to handle this behaviour: do nothing and accept it; disable features that rely on session state until consent is given; remove the consent requirement; and make the session cookie an essential cookie. Which option is best for you will depend on the app you're building, as well as your exposure to GDPR and similar legislation.
https://andrewlock.net/session-state-gdpr-and-non-essential-cookies/amp/
I'm trying to get Xamarin set up on my Windows 10 PC but keep running into issues. I've got the Android SDK installed and installed Xamarin using the setup provided from here, but when I try to start I get: "The type or namespace name 'App' could not be found (are you missing a using directive or an assembly reference?)". I've looked into it and I don't seem to have the target versions for Android installed (targets in the properties for the main project). Any idea how to resolve this?

Answers

@SamAllen Do update Xamarin.Forms from NuGet, and for installation look into
https://forums.xamarin.com/discussion/comment/212959
I have some python code for generating some sine wave audio in pygame:

```python
import pygame
from pygame.locals import *
import math
import numpy

def generate_sound(freq):
    # setup our numpy array to handle 16 bit ints, which is what we set our
    # mixer to expect with "bits" up above
    buf = numpy.zeros((n_samples, 2), dtype=numpy.int16)
    max_sample = 2**(bits - 1) - 1
    for s in range(n_samples):
        t = float(s) / sample_rate  # time in seconds
        buf[s][0] = int(round(max_sample * math.sin(2 * math.pi * freq * t)))  # left
        buf[s][1] = int(round(max_sample * math.sin(2 * math.pi * freq * t)))  # right
    return pygame.sndarray.make_sound(buf)

bits = 16
pygame.mixer.pre_init(44100, -bits, 2)
pygame.init()
pygame.mixer.fadeout(4000)  # fade out time in milliseconds

size = (1200, 720)
screen = pygame.display.set_mode(size, pygame.HWSURFACE | pygame.DOUBLEBUF)

duration = 1.0  # in seconds
sample_rate = 44100
n_samples = int(round(duration * sample_rate))

# lower
sound_130_81 = generate_sound(130.81)  # C
sound_146_83 = generate_sound(146.83)  # D
sound_164_81 = generate_sound(164.81)  # E
sound_174_61 = generate_sound(174.61)  # F
sound_196_00 = generate_sound(196.00)  # G
sound_220_00 = generate_sound(220.00)  # A
sound_246_94 = generate_sound(246.94)  # B

# middle
sound_261_63 = generate_sound(261.63)  # C
sound_293_66 = generate_sound(293.66)  # D
sound_329_63 = generate_sound(329.63)  # E
sound_349_23 = generate_sound(349.23)  # F
sound_392_00 = generate_sound(392.00)  # G
sound_440_00 = generate_sound(440.00)  # A
sound_493_88 = generate_sound(493.88)  # B

# higher
sound_523_25 = generate_sound(523.25)   # C
sound_587_33 = generate_sound(587.33)   # D
sound_659_25 = generate_sound(659.25)   # E
sound_698_46 = generate_sound(698.46)   # F
sound_783_99 = generate_sound(783.99)   # G
sound_880_00 = generate_sound(880.00)   # A
sound_987_77 = generate_sound(987.77)   # B
sound_1046_50 = generate_sound(1046.50) # C

# This will keep the sound playing forever; the quit event handling allows
# the pygame window to close without crashing
_running = True
while _running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            _running = False
        if event.type == KEYDOWN:
            if event.key == K_ESCAPE:
                _running = False
            # lower notes DOWN
            elif event.key == K_z:
                sound_130_81.play(loops=-1)
            elif event.key == K_x:
                sound_146_83.play(loops=-1)
            elif event.key == K_c:
                sound_164_81.play(loops=-1)
            elif event.key == K_v:
                sound_174_61.play(loops=-1)
            elif event.key == K_b:
                sound_196_00.play(loops=-1)
            elif event.key == K_n:
                sound_220_00.play(loops=-1)
            elif event.key == K_m:
                sound_246_94.play(loops=-1)
            elif event.key == K_COMMA:
                sound_261_63.play(loops=-1)
            # middle notes DOWN
            elif event.key == K_a:
                sound_261_63.play(loops=-1)
            elif event.key == K_s:
                sound_293_66.play(loops=-1)
            elif event.key == K_d:
                sound_329_63.play(loops=-1)
            elif event.key == K_f:
                sound_349_23.play(loops=-1)
            elif event.key == K_g:
                sound_392_00.play(loops=-1)
            elif event.key == K_h:
                sound_440_00.play(loops=-1)
            elif event.key == K_j:
                sound_493_88.play(loops=-1)
            elif event.key == K_k:
                sound_523_25.play(loops=-1)
            # higher notes DOWN
            elif event.key == K_q:
                sound_523_25.play(loops=-1)
            elif event.key == K_w:
                sound_587_33.play(loops=-1)
            elif event.key == K_e:
                sound_659_25.play(loops=-1)
            elif event.key == K_r:
                sound_698_46.play(loops=-1)
            elif event.key == K_t:
                sound_783_99.play(loops=-1)
            elif event.key == K_y:
                sound_880_00.play(loops=-1)
            elif event.key == K_u:
                sound_987_77.play(loops=-1)
            elif event.key == K_i:
                sound_1046_50.play(loops=-1)
        if event.type == KEYUP:
            # lower notes UP
            if event.key == K_z:
                sound_130_81.stop()
            elif event.key == K_x:
                sound_146_83.stop()
            elif event.key == K_c:
                sound_164_81.stop()
            elif event.key == K_v:
                sound_174_61.stop()
            elif event.key == K_b:
                sound_196_00.stop()
            elif event.key == K_n:
                sound_220_00.stop()
            elif event.key == K_m:
                sound_246_94.stop()
            elif event.key == K_COMMA:
                sound_261_63.stop()
            # middle notes UP
            elif event.key == K_a:
                sound_261_63.stop()
            elif event.key == K_s:
                sound_293_66.stop()
            elif event.key == K_d:
                sound_329_63.stop()
            elif event.key == K_f:
                sound_349_23.stop()
            elif event.key == K_g:
                sound_392_00.stop()
            elif event.key == K_h:
                sound_440_00.stop()
            elif event.key == K_j:
                sound_493_88.stop()
            elif event.key == K_k:
                sound_523_25.stop()
            # higher notes UP
            elif event.key == K_q:
                sound_523_25.stop()
            elif event.key == K_w:
                sound_587_33.stop()
            elif event.key == K_e:
                sound_659_25.stop()
            elif event.key == K_r:
                sound_698_46.stop()
            elif event.key == K_t:
                sound_783_99.stop()
            elif event.key == K_y:
                sound_880_00.stop()
            elif event.key == K_u:
                sound_987_77.stop()
            elif event.key == K_i:
                sound_1046_50.stop()

pygame.quit()
```

I am getting a delay when the pygame window initially opens. Can anyone help to optimise this code to remove this delay? Also, how can I enable the sound to play as one continuous note, rather than repeated intervals?

I modified your code a bit to show the time delays in the initiation sequence (after pygame.init(), which I moved to the start) by getting the start time with pygame.time.get_ticks() and comparing at each notable point. This is the output (the number is time in milliseconds):

```
right at the start          0
post mixer preinit          6
post mixer fadeout         21
screen made               373
before generating sounds  377
after generating sounds  3686
after full init          3689
```

Basically this delay is all in the block of generate_sound() calls, and you are never going to get rid of this delay as written: you are generating loads of big arrays. What you should do is handle the loading gracefully with a separate loading screen in your program (look into threading to keep a live window and load at the same time).
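The per-sample loop in generate_sound() is the expensive part; since numpy is already imported, the whole buffer can instead be built with vectorised operations. A rough sketch (the function and parameter names are mine, not from the original code):

```python
import numpy as np

def generate_buffer(freq, sample_rate=44100, duration=1.0, bits=16):
    """Build a stereo int16 sine buffer without a Python-level loop."""
    n_samples = int(round(duration * sample_rate))
    max_sample = 2 ** (bits - 1) - 1
    t = np.arange(n_samples) / sample_rate  # time of each sample in seconds
    wave = np.rint(max_sample * np.sin(2 * np.pi * freq * t)).astype(np.int16)
    # Duplicate the mono wave into identical left and right channels.
    return np.column_stack((wave, wave))
```

The result can be passed to pygame.sndarray.make_sound() exactly like the loop-built buffer; generating all 23 notes this way should take a small fraction of the several seconds the per-sample version needs.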
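On the second question (one continuous note): clicks between repeats usually come from the buffer ending mid-cycle, so play(loops=-1) wraps with a discontinuity. One hedged fix is to round the buffer length to a whole number of sine periods; the helper below is my own sketch, not part of the original answer:

```python
def samples_for_whole_periods(freq, sample_rate=44100, target_duration=1.0):
    """Return a buffer length (in samples) covering a whole number of cycles.

    A buffer that ends exactly at a zero crossing of the sine wave loops
    seamlessly, so looped playback sounds like one continuous note.
    """
    n_periods = max(1, round(target_duration * freq))  # whole cycles to keep
    return int(round(n_periods * sample_rate / freq))
```

Using this value for n_samples (computed per note, since it depends on freq) keeps the loop point continuous.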
http://www.dlxedu.com/askdetail/3/467485c3b1260a9014fb0c9611bf9f16.html
21 January 2010 17:14 [Source: ICIS news]

By Stephen Burns

HOUSTON

That's the question that oil company shareholders and some nervous employees will be asking in the next few weeks as the firms report fourth-quarter results that are certain to paint a bleak outlook for the refining business in the

The picture could be so dark, in fact, that some financial reports may even be accompanied by announcements of refinery closures. That would sound alarm bells for chemical producers, many of which rely on feedstocks from adjoining refineries.

The situation is a stunning turnaround from just a couple of years ago when the corporate and political focus was on how to enable US refiners to expand capacity or even build greenfield assets - something that had not happened since 1976.

Refiners are even still feeling the political heat from their salad days in 2007, as evidenced by the recently revived proposal to force them to disclose their turnaround plans months in advance.

Economic recession has obviously not helped business conditions, but the

First and foremost, there is too much capacity compared with a demand pool that is going to grow slowly at best for the next few years before going into a longer-term decline. Analysts see three factors behind the downtrend in demand that will become evident from around 2014 on.

First, demand destruction will occur due to relatively high prices, reflecting expectations that crude oil values will stay high in the long term. US motorists do enjoy by far the cheapest gasoline in the industrialised world at around $2.50/gal (€0.46/litre) but they consume far more than counterparts elsewhere.

The Earth Policy Institute, a Washington-based environmental think-tank, published research this month that shows the US fleet declined by 4m vehicles in 2009, as 14m were scrapped and only 10m new automobiles were sold.
That was the first decline since World War II, and the downward trend could continue through 2020, according to the research.

The second factor is that government regulations requiring greater fuel efficiency in vehicles will also shrink the natural level of consumption. Notably, the virtual collapse of the

The third factor is the entrenched place that ethanol has grabbed in the

According to the most recent data available from the Renewable Fuels Association (RFA),

That represents 8.5% of the 8.978m bbl/day of finished gasoline demand seen in the same month, as reported by the Energy Information Administration (EIA). As the overall demand for gasoline declines in years to come, ethanol's relative share of the shrinking pool is bound to grow.

Taking the

Capacity utilisation briefly dipped below 80% in November, according to four-week averages reported by the EIA; apart from hurricane-related slowdowns in 2005 and 2008, that was the lowest operating level since the recession in 1991-1992. In contrast, operating rates reached highs in the 90%-plus range in 2006 and 2007, and just below 90% in 2008.

But reduced rates are ultimately a delaying action rather than a solution. Some refineries will simply have to be taken out of the game.

Stephen Jones, a consultant with Purvin & Gertz in Houston, said that US refineries will account for part of the 2.4m bbl/day of global refining capacity that needs to be shut down to bring balance to supply and demand. The shutdowns to come would be on top of the 1.4m bbl/day of worldwide capacity that has already been mothballed since the sharp downturn in commodity markets in the fourth quarter of 2008, Jones said.

The crush on margins "has been an amazing rug-pull from under the feet of the refiners", said Jones.
This year has already seen Shell's decision to convert its 130,000 bbl/day

In the

A particular plant may not be the smallest or least efficient in the

With analysts seeing future shutdowns as a foregone conclusion, US chemical producers will be watching the looming corporate results season closely to see which of their neighbours might be heading for mothballs.
http://www.icis.com/Articles/2010/01/21/9327901/insight-pressure-mounts-for-us-refinery-closures.html
Question:

Solution 1:

You can check if a variable is a string or unicode string with

```python
isinstance(some_object, basestring)
```

This will return True for both strings and unicode strings.

Edit:

Solution 2:

Type checking:

```python
def func(arg):
    if not isinstance(arg, (list, tuple)):
        arg = [arg]
    # process

func('abc')
func(['abc', '123'])
```

Varargs:

```python
def func(*arg):
    # process
    pass

func('abc')
func('abc', '123')
func(*['abc', '123'])
```

Solution 3:

isinstance is an option:

```python
In [2]: isinstance("a", str)
Out[2]: True

In [3]: isinstance([], str)
Out[3]: False

In [4]: isinstance([], list)
Out[4]: True

In [5]: isinstance("", list)
Out[5]: False
```

Solution 4:

Solution 5:

Check the type with

```python
isinstance(arg, basestring)
```

Solution 6:

```python
>>> type('abc') is str
True
>>> type(['abc']) is str
False
```

This code is compatible with Python 2 and 3.

Solution 7:

Have you considered varargs syntax? I'm not really sure if this is what you're asking, but would something like this question be along your lines?

Solution 8:

Can't you do:

```python
(i == list(i) or i == tuple(i))
```

It would reply if the input is tuple or list. The only issue is that it doesn't work properly with a tuple holding only one variable.
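Pulling solutions 1 and 2 together: a small normalising helper (my own sketch, not from any answer above) accepts either a single string or a list/tuple of strings and always hands back a list. On Python 3, check against str rather than the Python 2-only basestring:

```python
def as_list(arg):
    """Return arg as a list: wrap a bare string, copy a list or tuple."""
    if isinstance(arg, str):        # a single string -> one-element list
        return [arg]
    if isinstance(arg, (list, tuple)):
        return list(arg)
    raise TypeError("expected str, list or tuple, got %s" % type(arg).__name__)
```

Callers can then iterate the result uniformly without caring which form they were given.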
http://www.toontricks.com/2018/05/tutorial-check-if-input-is-listtuple-of.html
As I said earlier this month:

While Java applets are dead for games and animations and whatever else they were used for back in the nineties, they still have their use when you have to access the local machine from your web application in some way.

There are other possibilities of course, but they all are limited:

- Flash loads quickly and is available in most browsers, but you can only access the hardware Adobe has created an API for. That's upload of files the user has to manually select, webcams and microphones.
- ActiveX doesn't work in browsers, but only in IE.
- .NET ditto.
- Silverlight is neither commonly installed on your users' machines, nor does it provide the native hardware access.

So if you need to, say, access a bar code scanner. Or access a specific file on the user's computer - maybe stored in a place that is inconvenient for the user to get to (%Localappdata% for example is hidden in Explorer). In this case, a signed Java applet is the only way to go.

You might tell me that a website has no business accessing that kind of data and generally, I would agree, but what if your requirements are to read data from a bar code scanner without altering the target machine at all and without requiring the user to perform any steps but to plug in the scanner and click a button?

But Java applets have that certain 1996 look to them, so even if you access the data somehow, the applet still feels foreign to your cool Web 2.0 application: it doesn't quite fit the tight coupling between browser and server that AJAX gets us, and even if you use Swing, the GUI will never look as good (and customized) as something you could do in HTML and CSS.

But did you know that Java applets are fully scriptable? Per default, any JavaScript function on a page can call any public method of any applet on the site.
So let's say your applet implements

```java
public String sayHello(String name) {
    return "Hello " + name;
}
```

Then you can use JavaScript to call that method (using jQuery here):

```javascript
$('#some-div').html(
    $('#id_of_the_applet').get(0).sayHello(
        $('#some-form-field').val()));
```

If you do that, you have to remember though that any applet method called this way will run inside the sandbox regardless of whether the applet is signed or not.

So how do you access the hardware then? Simple: tell the JRE that you are sure (you are, aren't you?) that it's OK for a script to call a certain method. To do that, you use AccessController.doPrivileged().

So if, for example, you want to check whether some specific file is on the user's machine, and you further assume that you have a singleton RuntimeSettings that provides a method to check the existence of the file and then return its name, you could do something like this:

```java
public String getInterfaceDirectory() {
    return (String) AccessController.doPrivileged(
        new PrivilegedAction() {
            public Object run() {
                return RuntimeSettings.getInstance().getInterfaceDirectory();
            }
        }
    );
}
```

Now it's safe to call this method from JavaScript despite the fact that RuntimeSettings.getInterfaceDirectory() directly accesses the underlying system. Whatever is in PrivilegedAction.run() will have full hardware access (provided the applet in question is signed and the user has given permission).

Just keep one thing in mind: your applet is fully scriptable, and if you are not very careful where that script comes from, your applet may be abused and thus the security of the client browser might be at risk. Keeping this in mind, try to:

- Make these elevated methods do one and only one thing.
- Keep the interface between the page and the applet as simple as possible.
- In elevated methods, do not call into JavaScript (see below) and certainly do not eval() any code coming from the outside.
- Make sure that your pages are sufficiently secured against XSS: don't allow any user-generated content to reach the page unescaped.

The explicit and cumbersome declaration of elevated actions was put in place to make sure that the developer keeps the associated security risk in mind. So be a good developer and do so.

Using this technology, you can even pass around Java objects from the applet to the page. Also, if you need your applet to call into the page, you can do that too, of course, but you'll need a bit of additional work:

- You need to import JSObject from netscape.javascript (yes - that's how it's called; it works in all browsers though), so to compile the applet, you'll have to add plugin.jar (or netscape.jar - depending on the version of the JRE) from somewhere below your JRE/JDK installation to the build classpath. On a Mac, you'll find it below /System/Library/Frameworks/JavaVM.framework/Versions/<your version>/Home/lib.
- You need to tell the Java plugin that you want the applet to be able to call into the page. Use the mayscript attribute of the java applet for that (interestingly, it's just mayscript - without value, thus making your nice XHTML page invalid the moment you add it - mayscript="true" or the correct mayscript="mayscript" don't work consistently on all browsers).
- In your applet, call the static JSObject.getWindow() and pass it a reference to your applet to acquire a reference to the current page's window object.
- On that reference you can call eval() or getMember() or just call() to call into the JavaScript on the page.

This tool set allows you to add the applet to the page with 1 pixel size in diameter placed somewhere way out of the viewport distance and with visibility: hidden, while writing the actual GUI code in HTML and CSS, using normal JS/AJAX calls to communicate with the server.
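Outside the applet sandbox, the doPrivileged() pattern itself can be exercised standalone. This is a hypothetical, self-contained sketch of mine; the system-property read stands in for the article's RuntimeSettings lookup:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegedDemo {

    // Wrap exactly one sensitive operation, mirroring the advice above:
    // elevated methods should do one and only one thing.
    static String readUserHome() {
        return AccessController.doPrivileged(new PrivilegedAction<String>() {
            public String run() {
                return System.getProperty("user.home");
            }
        });
    }

    public static void main(String[] args) {
        System.out.println(readUserHome());
    }
}
```

Note that AccessController is deprecated in recent JDKs (the Security Manager is being phased out), but this is the API applets of that era used.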
If you need access to specific system components, this (together with JNA and applet-launcher) is the way to go, IMHO, as it solves the anachronism that is Java GUIs in applets. There is still the long launch time of the JRE, but that's getting better and better with every JRE release.

I was having so much fun last week discovering all that stuff.
https://blog.pilif.me/2009/04/
ReSharper provides the following code analysis and coding assistance features for ASP.NET and ASP.NET MVC projects.

Syntax highlighting

ReSharper highlights various symbols in ASP.NET markup, so that it is easy to distinguish them. It also provides syntax highlighting for C# and VB.NET code blocks.

Code highlighting

Various code inspections are available for ASP.NET, including detecting unused import namespaces, unknown symbols and entities, etc. You can set a severity level for each inspection. For more information, see Configuring Code Inspection Settings. To navigate between code issues that ReSharper discovers, use the marker bar and status indicator.

Auto-importing namespaces

During code analysis, ReSharper detects and highlights symbols that can't be resolved because the appropriate Import or Register directives are missing. It also offers an easy way to add such directives and fix the problem.

ReSharper adds a necessary Import directive for .NET symbols:

Or a necessary Register directive for ASP.NET controls:

For more information, see Importing Namespaces.

Examples of context actions

More than 25 context actions for ASP.NET are available to replace, collapse or remove tags, convert HTML entities, create events and properties, add code-behind files, etc. See Context Actions for the full list of context actions.

Examples of quick-fixes

Import type

If the corresponding Import directive is missing, ReSharper suggests this quick-fix. After applying the quick-fix, the necessary directive is added.

Create method

ReSharper informs you that the ChangePasswordPushButton_Click method doesn't exist and offers to create one. The method declaration is inserted into the code-behind file or into the current file, depending on the web page code model (single-file page model or code-behind page model).

Change signature

The signature of the ChangePasswordPushButton_Click method doesn't match the signature of the OnClick event.
ReSharper offers a quick-fix to change the signature of the method:

Remove unused directives in file

ReSharper detects unused import namespace directives. As they are unnecessary, ReSharper suggests a quick-fix that removes all of them from the current file.

Create ContentPlaceholder

If there is a Content control on a content page that is mapped to a missing ContentPlaceholder on the master page, ReSharper suggests creating the corresponding ContentPlaceholder control on the corresponding master page. The ContentPlaceholder control with the corresponding ID attribute is added to the master page:

Rearrange code

Using this feature you can easily move code structures or parts of them. For example, you can move tags up or down, or attributes to the left or to the right.
http://www.jetbrains.com/resharper/webhelp80/Web_Development__Code_Analysis_and_Coding_Assistance.html
Existential Types in Scala

This deep dive into existential types in Scala will cover what problems they solve, particularly when switching from Java, and how to properly use them.

In the first blog of the Scala Type System series, I had put a lot of emphasis on the fact that "Type variables make a very powerful piece of type-level programming." They appear in a variety of forms and variations. One of the important forms is "existential types." In today's blog, I would like to get you familiar with existential types.

Given below is a typical way of defining a variable:

```scala
type F[A] = SomeClass[A]
```

Where A is said to be a type variable. Notice that A has appeared on both sides of the above equation. This implies that F is fully dependent on A. The type of A is essential for instantiation of the SomeClass. Let us take a few examples:

```scala
sealed trait F[A]
final case class SomeClass[A](a: A) extends F[A]

SomeClass("hello"): F[String]
SomeClass(1: Int): F[Int]
SomeClass(true): F[Boolean]
SomeClass("hello"): F[Int]
```

Notice that the last example would not compile, as the type A on the LHS and RHS of the equation conflict.

Let us understand the need for existential types. Consider the following definition:

```scala
def getLength(x: Array[Any]): Int = x.length

val stringArray = Array[String]("foo", "bar", "baz")
getLength(stringArray)
```

The above line gives us the following error:

```
error: type mismatch;
 found   : Array[String]
 required: Array[Any]
```

We could fix the above error in the following manner:

```scala
def getLength[T](x: Array[T]): Int = x.length

getLength(stringArray)
```

Do you realize the potential overhead of the above fix? We've parameterized the method, but now we have a superfluous type parameter in our method. Speaking precisely, I mean I want an Array, and I don't care what type of things it contains. This verbosity of providing a type parameter is a hassle sometimes.
Having this basic understanding of the normal variable declaration, let us try to understand existential types.

By Definition

"Existential types are a way of abstracting over types. They let you 'acknowledge' that there is a type involved without specifying exactly what it is, usually because you don't know what it is and you don't need that knowledge in the current context."

Existential types are normal type variables, except that the variable only shows up on the RHS of the declaration.

```scala
type F = SomeClass[A] forSome { type A }
```

Let us take an example:

```scala
sealed trait F
final case class SomeClass[A](a: A) extends F

case class User(name: String, age: Int, contact: String)

SomeClass("SomeString"): F
SomeClass(1: Int): F
SomeClass(User): F
```

Notice that A appears only on the right in the case of an existential type. The advantage of this is that the final type, F, will not change regardless of the type of A.

Let us try to fit the existential type into the getLength() method defined above as well:

```scala
def getLength(x: Array[T] forSome { type T }): Int = x.length
```

Let us expand a little more with another example:

```scala
sealed trait Existential {
  type Inner
  val value: Inner
}

final case class PrepareExistential[A](value: A) extends Existential {
  type Inner = A
}

PrepareExistential("SomeText"): Existential
PrepareExistential(1: Int): Existential
PrepareExistential(User): Existential
```

Notice that PrepareExistential is a type eraser. It is irrelevant which data type we put into PrepareExistential - it will erase the type and always return Existential.

Let us take this a step further. The next logical question is, "If we have some function that returns an existential, then what shall be the type of Inner?" The simplest answer would be "We don't know." I would, in my defense, say that the original type defined in PrepareExistential has been erased from the compilation. But that is incorrect. This doesn't imply that we have lost all the information.
We know for a fact it exists. Hence this is called an existential type.

Another inevitable question that follows is "Where should we use existential types then?" But before I answer this question, there is another question: "What is the use of type erasures?" To answer that, let me quote an answer from Martin Odersky himself:

Existential types have an important property of unifying different types into a single one with shared restrictions. These restrictions could be anything: from upper bounds to type classes, or even a combination. A very common use case is the creation of type-safe wrappers around unsafe libraries. Consider the following:

```scala
def write(objs: Object*): String
```

The signature of the above function is not special; it is very generic. Also, since the Object will probably take everything, the program shall crash at runtime rather than failing at compile time. In the context of Scala specifically, a lot of these Java libraries have generic methods and interfaces wherein certain Scala types don't work, like BigInt, Int, List etc. So if we can think of an Object as an existential type, we could resolve the problem of the type being too wide, encompassing all types. Herein, we need to restrict the set of types that are workable with write. Let us improvise:

```scala
def safeWrite(columns: AnyAllowedType*): String = {
  write(columns.map(_.toObject): _*)
}
```

Let us extend the getLength example:

```scala
def getLength(x: Array[T] forSome { type T <: CharSequence }) =
  x.foreach(y => println(y.length))
```

Notice the additional restriction we have imposed. Here, we wanted to act on a more specific type, but did not care exactly what type it was. The type arguments for an existential type can declare upper and lower bounds just like normal type declarations. By adding bounds, all we have done is restrict the range of T. An Array[Int] is not an Array[T] forSome { type T <: CharSequence }.
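Java's bounded wildcards are the closest analogue to the bounded forSome above. As a rough sketch in Java (the class and method names are mine, not from the article):

```java
import java.util.Arrays;
import java.util.List;

public class BoundedWildcardDemo {

    // List<? extends CharSequence> is Java's spelling of "a list of some
    // unknown T that is a subtype of CharSequence" - we can call
    // CharSequence methods without naming the concrete element type.
    static int totalLength(List<? extends CharSequence> xs) {
        int sum = 0;
        for (CharSequence cs : xs) {
            sum += cs.length();
        }
        return sum;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("foo", "bar", "baz");
        System.out.println(totalLength(words));
    }
}
```

Just as with the Scala version, a List<Integer> would be rejected at compile time because Integer is not a CharSequence.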
Before concluding the blog, it is essential to address one last question: "Are existential types the same as raw types?" A simple answer is NO. Existential types are not raw types. Another important thing to understand is that existentials are safe; raw types are not. In Scala, type arguments are always required. The common problem Scala developers often face is that they are unable to implement some Java method with missing type arguments in its signature, e.g. one that takes a raw List as an argument.

Example: Consider using the following Java code:

```java
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;

abstract class TestExistential {
    static List arrayOfWords() {
        return new ArrayList<>(Arrays.asList("hi", "Welcome", "to", "Knoldus"));
    }
}

class UnsafeTypes extends TestExistential {
    public static final List wordsRaw = arrayOfWords();
}
```

```
warning: [rawtypes] found raw type: List
  missing type arguments for generic class List
  where E is a type-variable:
    E extends Object declared in interface List
```

```java
class ExistentialTypes extends TestExistential {
    public static final List<?> wordsExistentialType = arrayOfWords();
}
```

Notice the warning here in UnsafeTypes, whereas while using the same in Scala, there is no warning for the equivalent to wordsRaw, as types are inferred and are type safe:

```scala
val stringArray: util.List[String] = TestExistential.arrayOfWords()
val genericArray: util.List[_] = TestExistential.arrayOfWords()
```

In essence, the reason that existentials are safe is that the rules in place for values of existential type are consistent with the rest of the generic system, whereas raw types contradict those rules, resulting in code that does not type-check. From a developer's perspective, existential types are quite powerful when mixed with the correct restrictions.

They prove to be potent in the following scenarios:

- We need to make some sense of Java's wildcards, and existential types are the sense we make of them.
- We need to make some sense of Java's raw types because they are also still in the libraries, the un-generic types.
- We need existential types as a way to explain what goes on in the VM at the high level of Scala.

Published at DZone with permission of Pallavi Singh, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/existential-types-in-scala
Hi folks!

Python, data visualization, and programming are the topics I'm profoundly devoted to. That's why I'd like to share with you my ideas as well as my enthusiasm for discovering new ways to present data in a meaningful way.

The case I'm going to cover is quite common: you have data on the back end of your app and want to give it shape on the front end. If such a situation sounds familiar to you, then this tutorial may come in handy. After you complete it, you'll have a Django-powered app with interactive pivot tables & charts.

Prerequisites

To confidently walk through the steps, you need a basic knowledge of the Django framework and a bit of creativity. ✨ To follow along, you can download the GitHub sample. Here's a brief list of tools we're going to use:

- Python 3.7.4
- Django
- Virtualenv
- Flexmonster Pivot Table & Charts (JavaScript library)
- SQLite

If you have already set up a Django project and feel confident about the basic flow of creating apps, you can jump straight to the Connecting data to Flexmonster section that explains how to add data visualization components to it. Let's start!

Getting started with Django

First things first, let's make sure you've installed Django on your machine. The rule of thumb is to install it in your previously set up virtual environment - a powerful tool to isolate your projects from one another. Also, make sure you've activated it in a newly-created directory.

Open your console and bootstrap a Django project with this command:

```
django-admin startproject analytics_project
```

Now there's a new directory called analytics_project. Let's check if we did everything right. Go to analytics_project and start the server with a console command:

```
python manage.py runserver
```

Open in your browser. If you see this awesome rocket, then everything is fine:

Next, create a new app in your project.
Let's name it dashboard:

python manage.py startapp dashboard

Here's a tip: if you're not sure about the difference between the concepts of apps and projects in Django, take some time to learn about it to have a clear picture of how Django projects are organized. Here we go. Now we see a new directory within the project. It contains the following files:

- __init__.py - makes Python treat it as a package
- admin.py - settings for the Django admin pages
- apps.py - settings for the app's configs
- models.py - classes that will be converted to database tables by Django's ORM
- tests.py - test classes
- views.py - functions & classes that define how the data is displayed in the templates

Afterward, it's necessary to register the app in the project. Go to analytics_project/settings.py and append the app's name to the INSTALLED_APPS list:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'dashboard',
]

Now our project is aware of the app's existence.

Views

In dashboard/views.py, we'll create a function that directs a user to the specific templates defined in the dashboard/templates folder. Views can contain classes as well. Here's how we define it:

from django.http import JsonResponse
from django.shortcuts import render
from dashboard.models import Order
from django.core import serializers

def dashboard_with_pivot(request):
    return render(request, 'dashboard_with_pivot.html', {})

Once called, this function will render dashboard_with_pivot.html - a template we'll define soon. It will contain the pivot table and pivot charts components. A few more words about this function. Its request argument, an instance of HttpRequest, contains information about the request, e.g., the used HTTP method (GET or POST). The method render searches for HTML templates in a templates directory located inside the app's directory.
We also need to create an auxiliary method that sends the response with data to the pivot table on the app's front-end. Let's call it pivot_data:

def pivot_data(request):
    dataset = Order.objects.all()
    data = serializers.serialize('json', dataset)
    return JsonResponse(data, safe=False)

Likely, your IDE is telling you that it can't find a reference Order in models.py. No problem - we'll deal with it later.

Templates

For now, we'll take advantage of the Django template system. Let's create a new directory templates inside dashboard and create the first HTML template called dashboard_with_pivot.html. It will be displayed to the user upon request. Here we also add the scripts and containers for data visualization components:

<head>
    <meta charset="UTF-8">
    <title>Dashboard with Flexmonster</title>
    <script src=""></script>
    <script src=""></script>
    <link rel="stylesheet" href="">
</head>
<body>
    <div id="pivot-table-container" data-url="{% url 'pivot_data' %}"></div>
    <div id="pivot-chart-container"></div>
</body>

Mapping views functions to URLs

To call the views and display rendered HTML templates to the user, we need to map the views to the corresponding URLs. Here's a tip: one of Django's URL design principles is loose coupling: we shouldn't make URLs with the same names as Python functions. Go to analytics_project/urls.py and add relevant configurations for the dashboard app at the project's level.

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('dashboard/', include('dashboard.urls')),
]

Now the URLs from the dashboard app can be accessed, but only if they are prefixed by dashboard. After that, go to dashboard/urls.py (create this file if it doesn't exist) and add a list of URL patterns that are mapped to the view functions:

from django.urls import path
from . import views

urlpatterns = [
    path('', views.dashboard_with_pivot, name='dashboard_with_pivot'),
    path('data', views.pivot_data, name='pivot_data'),
]

Model

And, at last, we've gotten to data modeling. This is my favorite part. As you might know, a data model is a conceptual representation of the data stored in a database. Since the purpose of this tutorial is to show how to build interactive data visualization inside the app, we won't be worrying much about the database choice. We'll be using SQLite - a lightweight database that ships with the Django web development server. But keep in mind that this database is not the appropriate choice for production development. With the Django ORM, you can use other databases that use the SQL language, such as PostgreSQL or MySQL. For the sake of simplicity, our model will consist of one class. You can create more classes and define relationships between them, complex or simple ones. Imagine we're designing a dashboard for the sales department. So, let's create an Order class and define its attributes in dashboard/models.py:

from django.db import models

class Order(models.Model):
    product_category = models.CharField(max_length=20)
    payment_method = models.CharField(max_length=50)
    shipping_cost = models.CharField(max_length=50)
    unit_price = models.DecimalField(max_digits=5, decimal_places=2)

Working with a database

Now we need to create a database and populate it with records. But how can we translate our model class into a database table? This is where the concept of migration comes in handy. A migration is simply a file that describes which changes must be applied to the database. Every time we need to create a database based on the model described by Python classes, we use a migration. The data may come as Python objects, dictionaries, or lists. This time we'll represent the entities from the database using Python classes that are located in the models directory.
Create a migration for the app with one command:

python manage.py makemigrations dashboard

Here we told Django to create a migration for the dashboard app's models. After creating the migration file, apply the migrations described in it and create the database:

python manage.py migrate dashboard

If you see a new file db.sqlite3 in the project's directory, we are ready to work with the database. Let's create instances of our Order class. For this, we'll use the Django shell - it's similar to the Python shell but allows accessing the database and creating new entries. So, start the Django shell:

python manage.py shell

And write the following code in the interactive console:

>>> from dashboard.models import Order
>>> o1 = Order(
...     product_category='Books',
...     payment_method='Credit Card',
...     shipping_cost=39,
...     unit_price=59
... )
>>> o1.save()

Similarly, you can create and save as many objects as you need.

Connecting data to Flexmonster

And here's what I promised to explain. Let's figure out how to pass the data from your model to the data visualization tool on the front end. To make the back end and Flexmonster communicate, we can follow two different approaches:

- Using the request-response cycle. We can use Python and the Django template engine to write JavaScript code directly in the template.
- Using an async request (AJAX) that returns the data in JSON.

In my mind, the second one is the most convenient because of a number of reasons. First of all, Flexmonster understands JSON. To be precise, it can accept an array of JSON objects as input data. Another benefit of using async requests is the better page loading speed and more maintainable code. Let's see how it works. Go to templates/dashboard_with_pivot.html. Here we've created two div containers where the pivot grid and pivot charts will be rendered. Within the ajax call, we make a request based on the URL contained in the data-url attribute.
Then we tell the ajax request that we expect a JSON object to be returned (defined by dataType). Once the request is completed, the JSON response returned by our server is set to the data parameter, and the pivot table, filled with this data, is rendered. The query result (the instance of JsonResponse) returns a string that contains an array of objects with extra meta information, so we should add a tiny function for data processing on the front end. It will extract only those nested objects we need and put them into a single array. This is because Flexmonster accepts an array of JSON objects without nested levels.

function processData(dataset) {
    var result = [];
    dataset = JSON.parse(dataset);
    dataset.forEach(item => result.push(item.fields));
    return result;
}

After processing the data, the component receives it in the right format and performs all the hard work of data visualization. A huge plus is that there's no need to group or aggregate the values of objects manually. Here's how the entire script in the template looks:

function processData(dataset) {
    var result = [];
    dataset = JSON.parse(dataset);
    dataset.forEach(item => result.push(item.fields));
    return result;
}

$.ajax({
    url: $("#pivot-table-container").attr("data-url"),
    dataType: 'json',
    success: function(data) {
        new Flexmonster({
            container: "#pivot-table-container",
            componentFolder: "",
            width: "100%",
            height: 430,
            toolbar: true,
            report: {
                dataSource: {
                    type: "json",
                    data: processData(data)
                },
                slice: {}
            }
        });
        new Flexmonster({
            container: "#pivot-chart-container",
            componentFolder: "",
            width: "100%",
            height: 430,
            //toolbar: true,
            report: {
                dataSource: {
                    type: "json",
                    data: processData(data)
                },
                slice: {},
                "options": {
                    "viewType": "charts",
                    "chart": {
                        "type": "pie"
                    }
                }
            }
        });
    }
});

Don't forget to enclose this JavaScript code in <script> tags. Phew! We're nearly there with this app.
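To make the shape of that server response concrete, here is a plain-Python mirror of the same flattening. The sample rows below are invented for illustration; the outer structure (a list of objects with model, pk and fields keys) is the standard output of Django's JSON serializer:

```python
import json

# serializers.serialize('json', queryset) returns a string shaped like this
# (the rows themselves are made up for this illustration):
raw = json.dumps([
    {"model": "dashboard.order", "pk": 1,
     "fields": {"product_category": "Books", "payment_method": "Credit Card",
                "shipping_cost": "39", "unit_price": "59.00"}},
    {"model": "dashboard.order", "pk": 2,
     "fields": {"product_category": "Games", "payment_method": "PayPal",
                "shipping_cost": "25", "unit_price": "24.99"}},
])

def process_data(dataset):
    # Python twin of the processData() JavaScript helper:
    # parse the string and keep only the nested "fields" objects.
    return [item["fields"] for item in json.loads(dataset)]

flat = process_data(raw)
print(flat[0]["product_category"])  # Books
```

The result is exactly the flat array of records Flexmonster expects, with the model/pk metadata stripped away.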
Fields customization

Flexmonster provides a special property of the data source that allows setting field data types, custom captions, and defining multi-level hierarchies. This is a nice feature to have - we can elegantly separate data and its presentation right in the report's configuration. Add it to the dataSource property of the report:

mapping: {
    "product_category": {
        "caption": "Product Category",
        "type": "string"
    },
    "payment_method": {
        "caption": "Payment Method",
        "type": "string"
    },
    "shipping_cost": {
        "caption": "Shipping Cost",
        "type": "number"
    },
    "unit_price": {
        "caption": "Unit Price",
        "type": "number"
    }
}

Dashboard's design

To make the dashboard, we've rendered two instances of Flexmonster (you can create as many as you want, depending on the data visualization goals you want to reach). One is for the pivot table with summarized data, and the other is for the pivot charts. Both instances share the same data source from our model. I encourage you to try making them work in sync: with the reportchange event, you can make one instance react to the changes in another one. You can also redefine the 'Export' button's functionality on the Toolbar to make it save your reports to the server.

Results

Let's start the Django development server and open http://127.0.0.1:8000/dashboard/ to see the resulting dashboard:

Looks nice, doesn't it?

Feedback

This time we learned how to create a simple Django app and display the data on the client side in the form of an analytics dashboard. I do hope you enjoyed the tutorial! Please leave your comments below - any feedback on the code's improvement is highly appreciated.

References

The source code for the tutorial can be found on GitHub. And here's the project with Flexmonster & Django integration that inspired me for this tutorial. Further, I recommend walking through important concepts in the documentation to master Django:
https://www.freecodecamp.org/news/how-to-create-an-analytics-dashboard-in-django-app/
by Neil Kakkar

How not to be afraid of Python anymore

A dive into the language reference documentation

For the first year or two when I started coding, I thought learning a language was all about learning the syntax. So, that's all I did. Needless to say, I didn't turn into a great developer. I was stuck. Then, one fine day, it just clicked. I realised I was doing this wrong. Learning the syntax should be the least of my concerns. What matters is everything else about the language. What exactly is all that? Read on. This article is divided into three main subparts: The Data Model, the Execution model and the Lexical analysis. This article is more an insight into how things work in Pythonland — in contrast to how to learn Python. You'll find many how-to learning sources online. What I didn't find online was a single source of common 'gotchas' in Python. A source explaining how the language works. This attempts to solve that problem. I think I've come up short, there's so much to it! Everything here comes from the official documentation. I've condensed it — to the important points, reordered stuff and added my examples. All links point to the documentation. Without further ado, here we go.

Data Model

Objects, values and types

Objects are Python's abstraction for data. Every object has its unique fixed identity, a fixed type and a value. 'Fixed' means the identity and type of an Object can never change. The value may change. Objects whose value can change are called mutable while objects whose value can't change are called immutable. The mutability is determined by type:

- Numbers, Strings and Tuples are immutable
- Lists and Dictionaries are mutable

The identity of objects can be compared via the is operator. id() returns the identity. type() returns the type.

Note: "The value of an immutable container object that contains a reference to a mutable object can change when the latter's value is changed; however the container is still considered immutable, because the collection of objects it contains cannot be changed. So, immutability is not strictly the same as having an unchangeable value, it is more subtle."

This note made my head spin the first two times I read it. Simple translation: Immutability is not the same as unchangeable value.
In the example below, the tuple is immutable, while its value keeps changing (as the list changes). Example:

>>> t = ("a", [1]) # a tuple of string and list
>>> id(t)
4372661064
>>> t
('a', [1])
>>> type(t)
<class 'tuple'>
>>> t[1]
[1]
>>> t[1].append(2)
>>> t
('a', [1, 2])
>>> id(t)
4372661064
>>> type(t)
<class 'tuple'>

The tuple is immutable, even though it contains a mutable object, a list. Compare this to a string: since strings are immutable, "changing" one actually creates a new object.

>>> x = "abc"
>>> id(x)
4371437472
>>> x += "d"
>>> x
'abcd'
>>> id(x)
4373053712

Here, the name x is bound to another object of type string. This changes its id as well. The original object, being immutable, stays immutable. The binding is explained in further detail below, which should make things clearer.

Built-in types

Python comes with several built-in types:

None

The type is represented by a single object, hence a single value. The sole object with type = NoneType

>>> type(None)
<class 'NoneType'>

Numbers

This is a collection of abstract base classes used to represent numbers. They can't be instantiated, and int, float inherit from numbers.Number. They are created by numeric literals and arithmetic operations. The returned objects are immutable, as we have seen. The following list of examples will make this clear:

>>> import numbers
>>> a = 3 + 4
>>> type(a)
<class 'int'>
>>> isinstance(a, numbers.Number)
True
>>> isinstance(a, numbers.Integral)
True

>>> isinstance(3.14 + 2j, numbers.Real)
False
>>> isinstance(3.14 + 2j, numbers.Complex)
True

Sequences

These represent finite ordered sets indexed by non negative integers. Just like an array from other languages. len() returns the length of sequences. When length is n, the index set has elements from 0...n-1. Then the ith element is selected by seq[i-1]. For a sequence l, you can select elements in between indexes using slicing: l[i:j]. There are two types of sequences: mutable and immutable.

- Immutable sequences include: strings, tuples and bytes.
- Mutable sequences include: lists and byte arrays

Sets

These represent unordered, finite sets of unique, immutable objects. They can't be indexed, but can be iterated over. len() still returns the number of items in the set. There are two types of sets: mutable and immutable.

- A mutable set is created by set().
- An immutable set is created by frozenset().

Mappings

Dictionary

These represent finite sets of objects indexed by nearly arbitrary values. Keys can't be mutable objects. That includes lists, other dictionaries and other objects that are compared by value, and not by object identity. This means a frozenset can be a dictionary key too!

Modules

A module object is a basic organisational unit in Python. The namespace is implemented as a dictionary. Attribute references are lookups in this dictionary. For a module m, the dictionary is read-only, accessed by m.__dict__. It's a regular dictionary so you can add keys to it! Here's an example, with the Zen of Python: We are adding our custom function, figure() to the module this.

>>> import this as t
>>> t.__dict__
{...}   # the module's namespace dictionary (long output omitted)
>>> def figure():
...     print("Can you figure out the Zen of Python?")
...
>>> t.fig = figure
>>> t.fig()
Can you figure out the Zen of Python?
>>> t.__dict__
{..., 'fig': <function figure at 0x109872620>}
>>> print("".join([t.d.get(c, c) for c in t.s]))
The Zen of Python, by Tim Peters
...

Not very useful either, but good to know.

Operator Overloading

Python allows for operator overloading. Classes have special function names — methods they can implement to use Python's defined operators. This includes slicing, arithmetic operations and subscripting. For example, __getitem__() refers to subscripting. Hence, x[i] is equivalent to type(x).__getitem__(x, i). Hence, to use the operator [] on a class someClass, you need to define __getitem__() in someClass.

>>> class operatorTest(object):
...     vals = [1,2,3,4]
...     def __getitem__(self, i):
...         return self.vals[i]
...
>>> x = operatorTest()
>>> x[2]
3
>>> x.__getitem__(2)
3
>>> type(x)
<class '__main__.operatorTest'>
>>> type(x).__getitem__(x, 2)
3
>>> operatorTest.__getitem__(x, 2)
3

Confused about why all of them are equivalent? That's for next part — where we cover class and function definitions. Likewise, the __str__() function determines the output when the str() method is called on an object of your class. For comparison operations, the special function names are:

- object.__lt__(self, other) for < ("less than")
- object.__le__(self, other) for <= ("less than or equal to")
- object.__eq__(self, other) for == ("equal to")
- object.__ne__(self, other) for != ("not equal to")
- object.__gt__(self, other) for > ("greater than")
- object.__ge__(self, other) for >= ("greater than or equal to")

So for example, x<y is called as x.__lt__(y). There are also special functions for arithmetic operations, like object.__add__(self, other). As an example, x+y is called as x.__add__(y). Another interesting function is __iter__(). You call this method when you need an iterator for a container. It returns a new iterator object that can iterate over all the objects in the container. For mappings, it should iterate over the keys of the container. The iterator object itself supports two methods:

- iterator.__iter__(): Returns the object itself. This makes iterators and the containers equivalent. This allows the iterator and containers both to be used in for and in statements.
- iterator.__next__(): Returns the next item from the container. If there are no further items, raises the StopIteration exception.
class IterableObject(object):  # The iterator object class
    vals = []
    it = 0

    def __init__(self, val):
        self.vals = val
        self.it = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.it < len(self.vals):
            index = self.it
            self.it += 1
            return self.vals[index]
        raise StopIteration

class IterableClass(object):  # The container class
    vals = [1,2,3,4]

    def __iter__(self):
        return IterableObject(self.vals)

>>> iter_object_example = IterableObject([1,2,3])
>>> for val in iter_object_example:
...     print(val)
...
1
2
3
>>> iter_container_example = IterableClass()
>>> for val in iter_container_example:
...     print(val)
...
1
2
3
4

Cool stuff, right? There's also a direct equivalent in Javascript. Context Managers are also implemented via operator overloading.

with open(filename, 'r') as f

open(filename, 'r') is a context manager object which implements object.__enter__(self) and object.__exit__(self, exc_type, exc_value, traceback). All three of those parameters are None when the block exits without an error.

class MyContextManager(object):
    def __init__(self, some_stuff):
        self.object_to_manage = some_stuff

    def __enter__(self):
        return self.object_to_manage  # can do some transforms too

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            print("Successfully exited")
        # Other stuff to close

>>> with MyContextManager("file") as f:
...     print(f)
...
file
Successfully exited

This isn't useful — but gets the point across. Does that make it useful anyway?

Execution Model

A block is a piece of code executed as a unit in an execution frame. Examples of blocks include:

- Modules, which are top level blocks
- Function body
- Class definition
- But NOT for loops and other control structures

Remember how everything is an object in Python? Well, you have names bound to these objects. These names are what you think of as variables.
>>> x
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined

Name binding, or assignment occurs in a block. Examples of name binding — these are intuitive:

- Parameters to functions are bound to the names defined in the function
- Import statements bind the name of the module
- Class and function definitions bind the name to class / function objects
- Context managers: in with ... as f:, f is the name bound to the ... object

Names bound to a block are local to that block. That means global variables are simply names bound to the module. Variables used in a block without being defined there are free variables. Scopes define visibility of a name in a block. The scope of a variable includes the block it is defined in, as well as all blocks contained inside the defining block. Remember how for loops aren't blocks? That's why iteration variables defined in the loop are accessible after the loop, unlike in C++ and JavaScript.

>>> for i in range(5):
...     x = 2*i
...     print(x, i)
...
0 0
2 1
4 2
6 3
8 4
>>> print(x, i) # outside the loop! x was defined inside.
8 4

When a name is used in a block, it is resolved using the nearest enclosing scope. Note: If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. For example:

>>> def foo():
...     name = "inner_function" if name == "outer_scope" \
...            else "not_inner_function"
...
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'name' referenced before assignment

This is a wonderful traceback, which should make sense now. We have the top level block, the module — in which there's another block, the function. Every binding inside the function has the function as its top level scope!
Hence, when you're binding the name name to the object "inner_function", you're checking its value before the binding happens. The rule says you can't reference it before the binding. Exactly the reason for the UnboundLocalError.

Lexical Analysis

Python lets you use line joinings. To explicitly continue lines, use a backslash.

if a < 10 and b < 10 \ # Comment results in SyntaxError
   and c < 10:         # Comment okay
    return True
else:
    return False

Implicitly, line joining occurs on its own when elements are inside braces. Comments here are allowed.

month_names = ['Januari', 'Februari', 'Maart',     # These are the
               'April',   'Mei',      'Juni',      # Dutch names
               'Juli',    'Augustus', 'September', # for the months
               'Oktober', 'November', 'December']  # of the year

Indentation

The number of spaces / tabs in the indentation doesn't matter, as long as it's increasing for things that should be indented. The first line shouldn't be indented. The four spaces rule is a convention defined by PEP 8: Style Guide. It's good practice to follow it.

# Compute the list of all permutations of l.
def perm(l):          # Comment indentation is ignored
    if len(l) <= 1:
        return [l]
    r = []
    for i in range(len(l)):
        s = l[:i] + l[i+1:]  # Indentation level chosen
        p = perm(s)          # Must be same level as above
        for x in p:
            r.append(l[i:i+1] + x)  # One space okay
    return r

There are a few reserved identifiers as well.

- _ for import: functions / variables starting with _ aren't imported.
- __*__ for system defined names, defined by the implementation: we've seen a few of these (__str__(), __iter__(), __add__()).

Python also offers Implicit String Literal concatenation

>>> def name():
...     return "Neil " "Kakkar"
...
>>> name()
'Neil Kakkar'

Format Strings

String formatting is a useful tool in Python. Strings can have { expr } in the string literal where expr is an expression. The expression evaluation is substituted in place. Conversions can be specified to convert the result before formatting.
!r calls repr(), !s calls str() and !a calls ascii().

>>> name = "Fred"
>>> f"He said his name is {name!r}."
"He said his name is 'Fred'."

>>> f"He said his name is {repr(name)}." # repr() is equiv. to !r
"He said his name is 'Fred'."

>>> import decimal
>>> width = 10
>>> precision = 4
>>> value = decimal.Decimal("12.34567")
>>> f"result: {value:{width}.{precision}}" # nested fields
'result:      12.35'
# This is the same as "result: {decf:10.4}".format(decf=float(value))

>>> from datetime import datetime
>>> today = datetime(year=2017, month=1, day=27)
>>> f"{today:%B %d, %Y}" # using date format specifier
'January 27, 2017'
>>> number = 1024
>>> f"{number:#0x}" # using integer format specifier
'0x400'

It's a cleaner syntax than str.format().

Summary

With this, we've covered the major pillars of Python: the object data model, the execution model with its scopes and blocks, and some bits on strings. Knowing all this puts you ahead of every developer who only knows the syntax. That's a higher number than you think. In Part 2, we'll look at object based classes and functions. To learn more, here's a great book — Effective Python. [Affiliate link — thanks for supporting!] Other stories in this series: Enjoyed this? Don't miss a post again — subscribe to my mailing list!
https://www.freecodecamp.org/news/how-not-to-be-afraid-of-python-anymore-b37b58871795/
DBFunctor: Functional Data Management ETL/ELT[^1] Data Processing in Haskell

DBFunctor is a Haskell library for ETL/ELT[^1] data processing of tabular data. What does this mean? It simply means that whenever you have a data analysis, data preparation, or data transformation task and you want to do it with Haskell type-safe code, that you enjoy, love and trust so much, now you can!

Main Features

- Julius: An Embedded Domain Specific (EDSL) Language for ETL. Provides an intuitive type-level Embedded Domain Specific (EDSL) Language called Julius for expressing complex data flows (i.e., ETL flows) but also for performing SQL-like data analysis. For more info check this Julius tutorial.
- Supports all known relational operations. Julius supports all known relational operations (selection, projection, inner/outer join, grouping, ordering, aggregation, set operations etc.)
- Provides the ETL Mapping and other typical ETL constructs and operations. Julius implements typical ETL constructs such as the Column Mapping and the ETL Mapping.
- Applicable to all kinds of tabular data. It is applicable to all kinds of "tabular data" (see explanation below)
- In-memory, database-less data processing. Data transformations or queries can run in-memory, within your Haskell code, without the need for a database to process your data.
- Offloading to a database for heavy queries/data transformations. In addition, a query or data transformation can be offloaded to a Database, when data don't fit in memory, or heavy data processing over large volumes of data is required. The result can be fetched into the client's memory (i.e., where your Haskell code runs) in the RTable data structure (see below), or stored in a database staging table.
- Workflow Operations Julius provides common workflow operations. Workflows provide the ability to combine the evaluation of several different Julius Expressions (i.e., data pipelines) in an arbitrary logic. Examples of such operations include: - Ability to handle a failure of some operation in a Julius expression: - retry the failed operation (after corrective actions have taken place) and continue the evaluation of the Julius expression from this point onward. - skip the failed operation and move on with the rest operations in the pipeline. - restart the Julius expression from the beginning - terminate the Julius expression and skip all pending operations - Ability to start a Julius expression based on the success or failure result of another one - Ability to fork several different Julius expressions that will run concurrently - Conditional execution of Julius expressions and iteration functionality - Workflow hierarchy (i.e., flows, subflows etc.) - “Declarative ETL” Enables declarative ETL implementation in the same sense that SQL is declarative for querying data (see more below). Typical examples of DBFunctor use-cases - Build database-less Haskell apps. Build your data processing haskell apps without the need to import your data in a database for querying functionality or any for executing any data transformations. Analyze your CSV files in-place with plain haskell code (for Haskellers!). - Data Preparation. I.e., clean-up data, calculate derived fields and variables, group by and aggregate etc., in order to feed some machine learning algorithm (for Data Scientists). - Data Transformation. in order to transform data from Data Model A to Data Model B (typical use-case for Data Engineers who perform ETL/ELT[^1] tasks for feeding Data Warehouses or Data Marts) - Data Exploration. 
Ad hoc data analysis tasks, in order to explore a data set for several purposes such as to find business insights and solve a specific business problem, or maybe to do data profiling in order to evaluate the quality of the data coming from a data source, etc. (for Data Analysts).
- Business Intelligence. Build reports or dashboards in order to share business insights with others and drive the decision-making process (for BI power-users).

[^1]: ETL stands for Extract Transform and Load and is the standard technology for accomplishing data management tasks in Data Warehouses / Data Marts and in general for preparing data for any analytic purposes (ad hoc queries, data exploration/data analysis, Reporting and Business Intelligence, feeding Machine Learning algorithms, etc.). ELT is a newer variation of ETL and means that the data are first Loaded into their final destination and then the data transformation runs in-place (as opposed to running at a separate staging area on possibly a different server).

When to Use it?

DBFunctor should be used whenever a data analysis, or data manipulation, or data transformation task, over tabular data, must be performed and we wish to perform it with Haskell code -yielding all the well-known (to Haskellers) benefits from doing that- without the need to use a database query engine for this task. DBFunctor provides an in-memory data structure called RTable, which implements the concept of a Relational Table (which -simply put- is a set of tuples) and all relevant relational algebra operations (Selection, Projection, Inner Join, Outer Joins, aggregations, Group By, Set Operations etc.).
Moreover, it implements the concept of Column Mapping (for deriving new columns based on existing ones - by splitting, merging, or with any other possible combination using a lambda expression or a function to define the new value) and that of the ETL Mapping, which is the equivalent of a "mapping" in an ETL tool (like Informatica, Talend, Oracle Data Integrator, SSIS, Pentaho, etc.). With this powerful construct, one can build arbitrarily complex data pipelines, which can enable any type of data transformations and all these by writing Haskell code.

What Kinds of Data?

With the term "tabular data" we mean any type of data that can be mapped to an RTable (e.g., CSV (or any other delimiter), DB Table/Query, JSON etc). Essentially, for a Haskell data type a to be "tabular", one must implement the following functions:

toRTable :: RTableMData -> a -> RTable
fromRTable :: RTableMData -> RTable -> a

These two functions implement the "logic" of transforming data type a to/from an RTable based on specific RTable Metadata, which specify the column names and data types of the RTable, as well as (optionally) the primary key constraint, and/or alternative unique constraints (i.e., similar information provided with a CREATE TABLE statement in SQL). By implementing these two functions, data type a essentially becomes an instance of the type class RTabular and thus can be transformed with the DBFunctor package. Currently, we have implemented a CSV data type (any delimiter allowed), based on the Cassava library, in order to enable data transformations over CSV files.

Current Status and Roadmap

Currently (version DBFunctor-0.1.0.0), the DBFunctor package is stable for in-memory data transformation and queries of CSV files (any delimiter allowed), with the Julius EDSL (module Etl.Julius), or directly via RTable functions (module RTable.Core). The use of the Julius language is strongly recommended because it simplifies greatly and standardizes the creation of complex ETL flows.
All in all, main features #1 to #5 (from the list above) have currently been implemented, and main features beyond #5 are future work that will be released in later versions.

## Future Vision -> Declarative ETL

Our ultimate goal is eventually to make DBFunctor the first declarative library for ETL/ELT, or data processing in general, by exploiting the virtues of functional programming and of Haskell's strong type system in particular. Here we use "declarative" in the same sense that SQL is a declarative language for querying data. (You only have to state what data you want to be returned, and you don't care about how this will be accomplished - the DBMS engine does this for you behind the scenes.) In the same manner, ideally, one should only need to code the desired data transformation from a source schema to a target schema, as well as all the data integrity constraints and business rules that should hold after the transformation, and not have to define all the individual steps for implementing the transformation, as is the common practice today. This would yield tremendous benefits compared to common ETL challenges faced today and change the way we build data transformation flows. Just to name a few:

- Automated ETL coding driven by Source-to-Target mapping and business rules
- ETL code correctness out-of-the-box
- Data Integrity / Business Rules controls automatically embedded in your ETL code
- Self-documented ETL code (your documentation, i.e., the Source-to-Target mapping and the business rules, is also the only code you need to write!)
- Drastically minimized time-to-market for delivering Data Marts and Data Warehouses, or simply implementing Data Analysis tasks.

The above is inspired by the theoretical work on Categorical Databases by David Spivak.

## Available Modules

DBFunctor consists of the following set of Haskell modules:

- RTable.Core: Implements the relational Table concept.
  Defines all necessary data types, like `RTable` and `RTuple`, as well as basic relational algebra operations on RTables.
- Etl.Julius: A simple Embedded DSL for ETL/ELT data processing in Haskell
- RTable.Data.CSV: Implements the `RTable` over CSV (TSV, or any other delimiter) files logic. It is based on the Cassava library.

## A Very Simple Example

In this example, we will load a CSV file, turn it into an RTable, and then issue a very simple query on it and print the result, just to show the whole concept. So let's say we have a CSV file called test-data.csv. The file stores table metadata from an Oracle database. Each row represents a table stored in the database. Here is a small extract from the CSV file:

```
$ head test-data.csv
OWNER,TABLE_NAME,TABLESPACE_NAME,STATUS,NUM_ROWS,BLOCKS,LAST_ANALYZED
APEX_030200,SYS_IOT_OVER_71833,SYSAUX,VALID,0,0,06/08/2012 16:22:36
APEX_030200,WWV_COLUMN_EXCEPTIONS,SYSAUX,VALID,3,3,06/08/2012 16:22:33
APEX_030200,WWV_FLOWS,SYSAUX,VALID,10,3,06/08/2012 22:01:21
APEX_030200,WWV_FLOWS_RESERVED,SYSAUX,VALID,0,0,06/08/2012 16:22:33
APEX_030200,WWV_FLOW_ACTIVITY_LOG1$,SYSAUX,VALID,1,29,07/20/2012 19:07:57
APEX_030200,WWV_FLOW_ACTIVITY_LOG2$,SYSAUX,VALID,14,29,07/20/2012 19:07:57
APEX_030200,WWV_FLOW_ACTIVITY_LOG_NUMBER$,SYSAUX,VALID,1,3,07/20/2012 19:08:00
APEX_030200,WWV_FLOW_ALTERNATE_CONFIG,SYSAUX,VALID,0,0,06/08/2012 16:22:33
APEX_030200,WWV_FLOW_ALT_CONFIG_DETAIL,SYSAUX,VALID,0,0,06/08/2012 16:22:33
```

### 1. Turn the CSV file into an RTable

The first thing we want to do is to read the file and turn it into an RTable. In order to do this, we need to define the RTable metadata, which is the same information one can provide in an SQL CREATE TABLE statement, i.e., column names, column data types, and integrity constraints (Primary Key and Unique Key only - no Foreign Keys).
So let's see how this is done:

```haskell
-- Define table metadata

main :: IO ()
main = do
    -- read source csv file
    srcCSV <- readCSV "./app/test-data.csv"

    let -- turn source csv to an RTable
        src_DBTab = toRTable src_DBTab_MData srcCSV
    ...
```

We have used the following functions:

```haskell
-- | createRTableMData : creates RTableMData from input given in the form of a list
--   We assume that the column order of the input list defines the fixed column order of the RTuple.
createRTableMData ::
       (RTableName, [(ColumnName, ColumnDType)])
    -> [ColumnName]    -- ^ Primary Key. [] if no PK exists
    -> [[ColumnName]]  -- ^ list of unique keys. [] if no unique keys exist
    -> RTableMData
```

in order to define the RTable metadata. For reading the CSV file, we have used:

```haskell
-- | readCSV: reads a CSV file and returns a CSV data type (treating CSV data as opaque byte strings)
readCSV :: FilePath  -- ^ the CSV file
        -> IO CSV    -- ^ the output CSV type
```

Finally, in order to turn the CSV data type into an RTable, we have used the function:

```haskell
toRTable :: RTableMData -> CSV -> RTable
```

which comes from the RTabular type class instance of the CSV data type.

### 2. Query the RTable

Once we have created an RTable, we can issue queries on it, or apply any type of data transformation. Note that due to immutability, each query or data transformation creates a new RTable. We will now issue the following query: we return all the rows which satisfy some filter predicate - in particular, all rows where the table name starts with a 'B'. For this we use the Julius EDSL to express the query, and then, with the function `juliusToRTable :: ETLMappingExpr -> RTable`, we evaluate the expression into an RTable.

```haskell
tabs_with_B = juliusToRTable $
       EtlMapStart
       :-> (EtlR $
               ROpStart
               -- apply a filter on RTable "src_DBTab" based on a predicate,
               -- expressed with a lambda expression
               :. (Filter (From $ Tab src_DBTab) $
                       FilterBy (\t ->
                           let fstChar = Data.Text.take 1 $ fromJust $ toText (t <!> "TABLE_NAME")
                           in  fstChar == (pack "B"))
                  )
               -- a simple column projection applied on the previous result
               :. (Select ["OWNER", "TABLE_NAME"] $ From Previous)
           )
```

A Julius expression is a data processing chain consisting of various Relational Algebra operations (`EtlR $ ...`) and/or column mappings (`EtlC $ ...`) connected together via the `:->` data constructor, of the form (Julius expressions are read from top-to-bottom or from left-to-right):

```haskell
myJulExpression =
    EtlMapStart
    :-> (EtlC $ ...)        -- this is a Column Mapping
    :-> (EtlR $             -- this is a series of Relational Algebra Operations
            ROpStart
            :. (Rel Operation 1)  -- a relational algebra operation
            :. (Rel Operation 2))
    :-> (EtlC $ ...)        -- This is another Column Mapping
    :-> (EtlR $             -- more relational algebra operations
            ROpStart
            :. (Rel Operation 3)
            :. (Rel Operation 4)
            :. (Rel Operation 5))
    :-> (EtlC $ ...)        -- This is Column Mapping 3
    :-> (EtlC $ ...)        -- This is Column Mapping 4
    ...
```

In our example, the Julius expression consists of only two relational algebra operations: a Filter operation, which uses an RTuple predicate of the form `RTuple -> Bool` to filter out RTuples (i.e., rows) that don't satisfy this predicate.
The predicate is expressed as the lambda expression:

```haskell
FilterBy (\t ->
    let fstChar = Data.Text.take 1 $ fromJust $ toText (t <!> "TABLE_NAME")
    in  fstChar == (pack "B"))
```

The second relational operation is a simple Projection, expressed with the node:

```haskell
(Select ["OWNER", "TABLE_NAME"] $ From Previous)
```

Finally, in order to print the result of the query on the screen, we use the `printfRTable :: RTupleFormat -> RTable -> IO ()` function, which brings printf-like functionality to the printing of RTables. And here is the output:

```
$ stack exec -- dbfunctor-example

These are the tables that start with a "B":

-------------------------------------
OWNER   TABLE_NAME
~~~~~   ~~~~~~~~~~
DBSNMP  BSLN_BASELINES
DBSNMP  BSLN_METRIC_DEFAULTS
DBSNMP  BSLN_STATISTICS
DBSNMP  BSLN_THRESHOLD_PARAMS
SYS     BOOTSTRAP$

5 rows returned
-------------------------------------
```

Here is the complete example:

```haskell
module Main where

import RTable.Core (RTableMData
                   ,ColumnDType (..)
                   ,createRTableMData, printfRTable, genRTupleFormat
                   ,genDefaultColFormatMap, toText, (<!>))
import RTable.Data.CSV (CSV, readCSV, toRTable)
import Etl.Julius
import Data.Text (take, pack)
import Data.Maybe (fromJust)

-- Define Source Schema (i.e., a set of tables)
-- | This is the basic source table
--   It includes the tables of an imaginary database

-- | Define Target Schema (i.e., a set of tables)

main :: IO ()
main = do
    -- read source csv file
    srcCSV <- readCSV "./app/test-data.csv"

    let -- create source RTable from source csv
        src_DBTab = toRTable src_DBTab_MData srcCSV

        -- select all tables starting with a B
        tabs_with_B = juliusToRTable $
               EtlMapStart
               :-> (EtlR $
                       ROpStart
                       :. (Filter (From $ Tab src_DBTab) $
                               FilterBy (\t ->
                                   let fstChar = Data.Text.take 1 $ fromJust $ toText (t <!> "TABLE_NAME")
                                   in  fstChar == (pack "B"))
                          )
                       :. (Select ["OWNER", "TABLE_NAME"] $ From Previous)
                   )

    putStrLn "\nThese are the tables that start with a \"B\":\n"

    -- print source RTable first 100 rows
    printfRTable (
        -- this is the equivalent, when printing on the screen, to a list of
        -- columns in a SELECT clause in SQL
        genRTupleFormat ["OWNER", "TABLE_NAME"] genDefaultColFormatMap
        ) $ tabs_with_B
```

## Julius Tutorial

We have written a Julius tutorial to help you get started with the Julius DSL.

## How to run

Download or clone DBFunctor in a new directory:

```
$ git clone
$ cd haskell-DBFunctor/
```

then run

```
$ stack build --haddock
```

in order to build the code and generate the documentation with the stack tool.

In order to use it in your own Haskell app, you only need to import the Julius module (you don't have to import RTable.Core, because it is exported by Julius):

```haskell
import Etl.Julius
```

Finally, for a successful build of your app, you must direct stack to treat it as a local package. (Soon DBFunctor will be added to Hackage and you will not need to use it as a local package.) So you have to include it in your stack.yaml file as a local package that you want to link your code to:

```yaml
packages:
- location: .
- location: <path where DBFunctor package has been cloned>
  extra-dep: true
```

And of course, you must not forget to add the dependency of your app on the DBFunctor package in your .cabal file:

```
build-depends: ...
             , DBFunctor
```

## Changes

Changelog for DBFunctor 0.1.0.0

- Initial Version. Includes a full-working version of:
  - Julius: A type-level Embedded Domain Specific Language (EDSL) for ETL
  - all common Relational Algebra operations,
  - the ETL Mapping and other typical ETL constructs and operations
  - operations applicable to all kinds of tabular data
  - In-memory, database-less data processing.
https://www.stackage.org/nightly-2018-12-04/package/DBFunctor-0.1.0.0
Difference between revisions of "Xmonad/Notable changes since 0.8"

Revision as of 23:29, 25 October 2009

This page is for keeping a record of significant changes in darcs xmonad and xmonad-contrib since the 0.8.* releases.

Contents
- 1 Updates that require changes in xmonad.hs
- 2 Changes to the xmonad core
- 3 Changes to xmonad-contrib
- 4 Related Projects

See also the haddock documentation for darcs modules mentioned here.

Updates that require changes in xmonad.hs

Modules formerly using Hooks.EventHook now use Event from core.

Note: EwmhDesktops users must change configuration by removing the obsolete ewmhDesktopsLayout from layoutHook (it no longer exists) and updating to the current ewmh support.

WindowGo or safeSpawn users may need to change command lines due to safeSpawn changes.

Changes to the xmonad core

- Supports using local modules in xmonad.hs. For example, to use definitions from ~/.xmonad/lib/XMonad/Etc/Foo.hs: import XMonad.Etc.Foo
- (With focus follows mouse enabled.)

Hooks

- Hooks.EwmhDesktops: ewmhDesktopsLayout must be removed from layoutHook; it no longer exists. Its tasks are now handled by handleEventHook = ewmhDesktopsEventHook and startupHook = ewmhDesktopsStartup.
- Your config will be automatically upgraded if you're using ewmh via Config.Desktop, Config.Gnome, etc., but people using defaultConfig with explicit EwmhDesktops hooks and the ewmhDesktopsLayout modifier should remove them and instead use the new ewmh function to add all the proper EWMH support at once. The ewmh config modifier is provided by the EwmhDesktops module. (Panel users should keep avoidStruts and manageDocks, or use one of the desktop configs as a base config in place of defaultConfig.) Until current documentation is available, see above for a brief example. For further details, see the Hooks.EwmhDesktops and Config.Desktop source.
- Hooks.DynamicLog module has a new 'statusBar' function to simplify status bar configuration.
Similar dzen and xmobar quick bar functions have changed type for easier composition with other XConfig modifiers. dynamicLogDzen and dynamicLogXmobar have been removed. Format stripping functions for xmobar and dzen have been added to allow independent formatting for ppHidden and ppUrgent.

- ..., allowing the use of gnome-terminal or any other app capable of setting its resource
- UTF-8 handling has been improved throughout core and contrib.

New contrib modules

Deleted modules

- Config.PlainConfig is now a separate project, shepheb's xmonad-light.
- Hooks.EventHook is superseded by handleEventHook from core.

Related Projects

- dzen-utils provides combinators for creating and processing dzen2 input strings.
- hlint parses haskell source and offers suggestions on how to improve it. (Requires >=ghc-6.10)
- xmonad and dbus integration
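As a minimal illustration of the ewmh change described above, a sketch of an updated xmonad.hs (assuming a plain defaultConfig; adapt to your own configuration):

```haskell
import XMonad
import XMonad.Hooks.EwmhDesktops (ewmh)

-- The ewmh modifier adds the EWMH startup, event, and log hooks in one
-- step, replacing the removed ewmhDesktopsLayout layout modifier.
main :: IO ()
main = xmonad $ ewmh defaultConfig
```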
https://wiki.haskell.org/index.php?title=Xmonad/Notable_changes_since_0.8&diff=prev&oldid=31138
(Using Windows and Allegro 5.0.6 here.) I have some images I'd like to use in my program. While I can put them in a different folder from my program for organization, I'd really like to put them in the executable itself and read from it. Unfortunately, after hours of fruitless searching, I'm at a loss. I've found tools like dat2c and exedat, but they're both for Allegro 4. Is there any way to read images from the EXE file's resources into Allegro? I don't care too badly if it's platform specific, as long as it can be done.

1. Base64 encode the files you need
2. Put the resultant strings into your program
3. Decode said strings into memory during program initialization
4. Use the memfile addon to map that memory as files
5. Use the _f prefixed function (`al_load_bitmap_f`) to load the images

You can skip the Base64 step... but it'll probably be best if you don't in the long run.

EDIT: Actually nevermind, that was stupid. Just put the contents of your file into this kind of form:

const char my_bitmap[] = {0xff, 0x0f ...};

(you can write a simple program that fopen's the file and outputs a .c file with those contents). Then you'd mount that my_bitmap array with the memfile addon and then use al_load_bitmap_f to load the actual bitmap.

"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18 [SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]

The downside is that it will dramatically inflate the size of the file, so it's mostly only practical for smaller image files.

--AllegroFlare • allegro.cc markdown • Allegro logo

I wrote some code for extracting BMP files from .exe files into Allegro bitmaps. There's an A4.0 version that shouldn't be too difficult to update to A5.0. Unfortunately, my motherboard has died, so I can't upload it here, but the uploaded version in this thread should still work.
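The "simple program that fopen's the file" mentioned above isn't reproduced in the thread; a minimal sketch of such a file-to-C-array generator might look like this (the function name dump_as_c_array and the output format are illustrative assumptions, not the original tool):

```c
#include <stdio.h>

/* Read a binary file and write a C source file containing its bytes as
 * an array plus a length constant, so it can be compiled into the EXE. */
int dump_as_c_array(const char *in_path, const char *out_path,
                    const char *array_name)
{
    FILE *in = fopen(in_path, "rb");
    FILE *out = fopen(out_path, "w");
    int c;
    unsigned long count = 0;

    if (!in || !out) {
        if (in) fclose(in);
        if (out) fclose(out);
        return 1;
    }

    fprintf(out, "const unsigned char %s[] = {", array_name);
    while ((c = fgetc(in)) != EOF) {
        if (count % 12 == 0)
            fprintf(out, "\n    ");   /* wrap lines for readability */
        fprintf(out, "0x%02x,", c);
        count++;
    }
    fprintf(out, "\n};\nconst unsigned long %s_len = %lu;\n",
            array_name, count);

    fclose(in);
    fclose(out);
    return 0;
}
```

The generated .c file can then be compiled into the program and the array handed to the memfile addon (al_open_memfile / al_load_bitmap_f) as discussed above.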
"The downside is that it will dramatically inflate the size of the file, so it's mostly only practical for smaller image files."

Are you sure? For me the exe size increases exactly by the amount of the file I encode in the way I suggested in my post. Here's the generator I used (haven't actually tested to see if it actually loads the files though):

If he was referring to the size of the C source file vs the image file, then his observation is both true and irrelevant.

--RTFM | Follow Me on Google+ | I know 10 people

Wow, SiegeLord's method seems rather straight-forward now that I see what's going on. But it requires a separate program and only works with bitmaps and simple data/text. As Mark pointed out, it requires a huge C file, and if you want to change it, you have to recompile the entire program connected to it (because you have to #include it to reference it). However, while browsing the Allegro source code (I can't even remember why), I came up with another idea. What if I just made an extension, in a sense, that created ALLEGRO_FILE instances that could read from the EXE's resources themselves? I made some code that should do just that. I hope. I'll test it out in the morning and let you know how it goes, but if it works it should let you include any kind of data file in the resources of the EXE and read them through Allegro's normal functions without anything more complicated than setting an RC definition. Plus, if you make changes, you're only recompiling the RC script, so it's faster. (right? I think this is how the C++ linker works)

(I'd post the code here, but it'd be 200 lines. I think that's a little much.)

NASM can include binary blobs; you'd have to relink whenever it changed. I did this just last week with a 4+Mb text file to avoid having to get the file size, malloc(), read it in, etc. It was to test a word wrap function on the King James Bible.

"But, it requires a separate program"

And what do you think dat2c was?
"only works with bitmaps and simple data/text"

It works for any file type. The only issue is that you have to use it through the Allegro file routines. This means it can be any file Allegro can read, but also any file other libraries can read, as long as you can override their file interface.

"(because you have to #include it to reference it)."

Ever heard of header files? It'll work for this method too, I just didn't include header generation code in my code. It'll just contain the ALLEGRO_FILE* load_foo(); function.

"read them through Allegro's normal functions without anything more complicated than setting an RC definition."

So it'll have the same limitations as my method.

EDIT: Also, are these resource scripts even available under Linux? The method I presented is 100% cross-platform as far as I can tell. I also looked around a bit, and it turns out there is a Linux program that does more or less what my little code snippet did:

xxd -i some_file out.c

It'll produce the same static array (and a length variable) as my code, but without the nice loading function. Lastly, it turns out you can directly encode a file into an object file, bypassing the compilation step. This time the exact procedure is compiler specific, so if you're not using GCC it might be a bit too much: linky.
So, conclusion (for easy searches): To pack an image/other file into an EXE/other executable and still use it in Allegro 5, include the file as a byte[] constant in a C file in your code and use the memfile addon to pretend it's a file. Continue with normal Allegro functions. P.S., I tested my program this morning and it worked! I could include any kind of data file in the resources and read it out perfectly fine. I didn't try to break it, but it works fine for normal operations. Accessing EXE resources may be useful in any case, so if anyone wants the code I'll be glad to hand it out. It's just a C file and a header. If he was referring to the size of the C source file vs the image file, then his observation is both true and irrelevant. Unless you're distributing the source. But yes, that's what I was saying. There is a method with GCC, it works on Windows, Linux etc:ld -r -b binary -o myimage.o myimage.pngAdd myimage.o when linking, and in your program, the data will be stored at the following address:extern char * binary_myimage_png_start; ld -r -b binary -o myimage.o myimage.png Thanks! ld -r -b binary -o myimage.o myimage.png boom. there it is. I was thinking the ld method couldn't give information on the size of the binary blob, but this SO page says it can. It could be inconvenient with namespaces though, seeing as how the variable names depend on the filename. If I understand correctly, the data will be loaded into memory as if it were a global variable. It might be nifty, if somebody would know a way to access a tar or (uncompressed) zip archive this way (only one file gets linked in; file system like It might be nifty, if somebody would know a way to access a tar or (uncompressed) zip archive this way (only one file gets linked in; file system like access). PhysFS has a function for this, PHYSFS_mountMemory. You'd mount that memory, setup the PhysFS interface via the PhysFS addon and every function in Allegro will work transparently with that archive. 
Thanks for posting that. I had trouble navigating their docs.
https://www.allegro.cc/forums/thread/610216/954770
I encountered the following problem when working with the built-in SQLite database and using TLF TextFields in Flash CS5.

When I tried to use TLF TextFields alone, I didn't face any problem, but when I start using database connectivity code, the TLF TextFields placed on the stage are not shown; instead, the SWF file shows the built-in preloader with five dots looping. I tried changing the default Linkage in ActionScript 3 Settings to Merge Mode, but in this case nothing is shown - not the text fields, nor the preloader. I think the problem is related to loading the TLF Text Engine, but I couldn't figure out what to do.

The following is my code, placed in the first frame:

==========================================

import flash.data.SQLConnection;
import flash.events.SQLErrorEvent;
import flash.events.SQLEvent;
import flash.filesystem.File;

var conn:SQLConnection = new SQLConnection();

conn.addEventListener(SQLEvent.OPEN, openHandler);
conn.addEventListener(SQLErrorEvent.ERROR, errorHandler);

// The database file is in the application directory
var folder:File = File.applicationDirectory;
var dbFile:File = folder.resolvePath("DBSample.db");

conn.openAsync(dbFile);

function openHandler(event:SQLEvent):void
{
    trace("the database was created successfully");
}

function errorHandler(event:SQLErrorEvent):void
{
    trace("Error message:", event.error.message);
    trace("Details:", event.error.details);
}

stop();

==========================================

and I am using one TLF TextField on the stage for later use.

Publish Settings >> Player: AIR 2.6

And not to forget: when I test the file using Control Panel >> Test in AIR Debug Launcher (Desktop), the file works correctly, but when I open the generated SWF file, the problem appears.

I hope that I find some help. Thanks.
http://forums.adobe.com/message/4367968
In the last post, we added a small collectible system that will allow us to quickly prototype some ideas. In this article, we'll create a collectible prefab that will change the player's movement speed for a short duration, and then reset back to the initial speed.

## The Prefab

I'll base the Boost Powerup on our Collectable prefab by duplicating it and renaming it to Boost Prefab. Now, I want to set the Collectable Type of this prefab to Boost, which we currently don't have. Let's update the enums from our last post.

```csharp
public enum CollectableType
{
    Boost,
}
```

Now I'll set the type property in the inspector.

## Speed Boost

Inside the Player script, I'll check against the collectible type and implement a speed boost. I'll create a separate private method that will activate all future collectibles, to isolate the behavior from OnTriggerEnter, as a personal preference to keep the message method clean and simple. I made sure that the prefab had a tag set to Collectable to identify it. I prefer to create constant strings, so you'll notice the use of my static Constants.cs class to access Tag.Collectable, which simply holds the string value of the tag name "Collectable".

```csharp
private void OnTriggerEnter(Collider other)
{
    if (other.CompareTag(Constants.Tag.Collectable))
    {
        ActivateCollectable(other.GetComponent<Collectable>());
    }
}
```

The ActivateCollectable method takes the collectible object that the player collided with and applies the effects to the player. For now, we just have a boost, which scales the speed by the multiplier Constants.Collectable.BoostMultiplier from the constants file.

```csharp
private void ActivateCollectable(Collectable collectable)
{
    Destroy(collectable.gameObject);

    switch (collectable.Type)
    {
        case CollectableType.Boost:
            speed *= Constants.Collectable.BoostMultiplier;
            StartCoroutine(CollectablePowerdownRoutine(collectable.Duration, UndoBoost));
            break;
        default:
            break;
    }
}
```

## Power Down

This accomplishes the goal to boost speed, but we also need a way to undo or power down the effect.
To do this, I'll pass a method for the coroutine to execute once the wait time has elapsed. First, we need to add a duration for the effect. I'll do that by adding a field and a public get accessor to Collectable.cs. If you're unfamiliar with the read-only property syntax, see expression-bodied members.

```csharp
public class Collectable : MonoBehaviour
{
    [SerializeField]
    private float speed = 5;

    [SerializeField]
    private float duration = 5;

    [SerializeField]
    private CollectableType type;

    public float Duration => duration;
    public CollectableType Type => type;

    ...
}
```

Now I'll add the coroutine to power down the effects based on this duration.

```csharp
private IEnumerator CollectablePowerdownRoutine(float duration, Action resetDelegate)
{
    yield return new WaitForSeconds(duration);
    resetDelegate?.Invoke();
}
```

Here's the result.

## Summary

We now have a boost powerup the player can collect, with a duration that can be easily set by designers.

Take care. Stay awesome.
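One piece the post references but never defines is UndoBoost, the reset delegate handed to the coroutine. A minimal version consistent with the boost logic (assuming the speed was multiplied by Constants.Collectable.BoostMultiplier, as shown earlier) might look like this:

```csharp
// Hypothetical reset method: divides out the multiplier
// applied in ActivateCollectable, restoring the original speed.
private void UndoBoost()
{
    speed /= Constants.Collectable.BoostMultiplier;
}
```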
https://blog.justinhhorner.com/add-a-speed-boost-pickup
09 October 2009 16:37 [Source: ICIS news]

By John Richardson

But even the most bullish of chemicals traders have been consistently putting this recovery into worrying context.

"I have done reasonable business this year and made quite good returns, but volumes are way down," said one trader, who deals in toluene and mixed xylenes (MX). "Cracker-based aromatics producers are being exceptionally cautious and are very unwilling to risk building inventory.

"Whereas I used to get, say, 5,000 tonnes a month from a particular company, it's a maximum of 2,000-3,000 tonnes and sometimes none at all."

"The end-user demand is simply not there. All we've really seen is some re-stocking, the cost-push from higher crude and a lot of speculation by Chinese traders."

A second Singapore-located trader - this time in polyolefins - added: "We are facing a lot of indigestion.

"We have to wait for end-November, when pricing should pick up. Manufacturing usually increases ahead of the next Chinese New Year (February 2010).

"If it doesn't, this is a sign of some big supply imbalances."

But even if there was a brief rally at the end of November, he predicted that afterwards there would be a prolonged trough on new capacities and a fall in Chinese bank lending.

These views are being expressed at a time when a long bull-run in operating rates is coming to an end. Asian naphtha cracker operators have started cutting production in October after five months of high output, ICIS news reported earlier this week.

Some 50,000-60,000 tonnes of spot ethylene are due to be loaded this month at a time of weak demand.

Cracker rate cuts are in response to a 64.5% ($109/tonne) drop in Northeast Asian (NEA) ethylene margins, with high-density PE (HDPE) margins down by 23.8% ($93/tonne) at the start of the fourth quarter. This is according to data from the ICIS weekly Asian ethylene and PE margin reports.
It's becoming increasingly difficult to make a convincing case for better news in the fourth quarter because of broad-based overstocking in China.

It is not just the well-documented huge increase in bank loans that could have overheated the economy.

Government subsidies/loans for imported raw material purchases might have been used to keep factories running and minimise unemployment. Commodity stockpiling may also have taken place as a hedge against a weaker US dollar. And export tax rebates have been increased from 11% to 16% for many products.

False confidence might have been created by what several sources have said were low chemical and polymer stock levels in bonded warehouses until at least July-August - the likely result of extra incentives to import raw materials and manufacture finished goods.

But there are reports of high levels of intermediate and finished goods inventories, resulting in sharp cuts in operating rates in some manufacturing sectors. This has been the case in the textiles chain, contributing to recent falls in pricing - as reported by ICIS pricing - all the way up the chain to aromatics.

The debate now is whether export-focused manufacturers have over-produced for the Christmas season in the West. Year-on-year sales of finished goods to retailers in October-November will surely look good after the disastrous same two months in 2008. The real measure should be against October-November 2007.

Take the seven to eight straight months of increased chemical and polymer imports by China.

Low-density polyethylene (LDPE) imports increased by 96% to 690,000 tonnes for the 12 months up until June this year, according to New York State-based International Trader Publications (ITP) Inc. But still, total global trade in the polymer declined by 3% to 8.2m tonnes for the same period, added ITP - a provider of trade data and analysis on chemicals and polymers.
Other official government statements have made it clear, though, that less bank lending will be available for speculative purposes. Perhaps this is a factor behind the 63% drop in September volumes on the Dalian Commodity Exchange's linear-low density PE (LLDPE) futures contract as against April - the peak month so far this year. Pricing has also fallen.

"This is a sign of weak overall sentiment. Traders have suffered heavy losses and so they have less cash to spend in the physical markets," added the Singapore-based polyolefins trader.

And macro-economic data continue to suggest deep-seated problems with Western demand.

So the next time you bump into a chief executive officer or some other senior official from a chemicals company, it might be worth asking the following questions:

1. "How much of your improvement over the last few months has been the result of cost-cutting and restocking?"

2. "When both come to an end (and this may well have already happened for re-stocking), how confident are you, on a scale of 1-10, that you'll be able to continue delivering quarter-on-quarter improvements in your financial performance in 2010-11? In other words, can you grow volumes and increase profitability?"

The answers could be telling.

Reach John Richardson's Asian Chemical Connections blog
http://www.icis.com/Articles/2009/10/09/9254285/insight-weak-sentiments-in-china-markets.html
// Hybrid.java
// To run applet under Netscape Communicator 4.7 browser or Internet
// Explorer 5.0 internal JVM, compile using javac -target 1.1 Hybrid.java

import java.applet.*;
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;
import java.util.*;

public class Hybrid extends Applet implements ActionListener
{
   static Hybrid h;
   static Stub s;

   Image im;

   public void init ()
   {
      System.out.println ("init () called");
      System.out.println ("isActive () returns " + isActive ());

      // Create simple GUI.

      Button b = new Button ("Visit Javasoft");
      b.addActionListener (this);
      add (b);

      b = new Button ("Giggle");
      b.addActionListener (this);
      add (b);

      // Obtain an Image object in preparation for loading.

      String imageName = getParameter ("image");
      im = getImage (getCodeBase (), imageName);
   }

   public void start ()
   {
      System.out.println ("start () called");
      System.out.println ("isActive () returns " + isActive ());
   }

   public void paint (Graphics g)
   {
      // Load and draw an image.

      if (im != null)
          g.drawImage (im, 0, 0, this);
   }

   public void actionPerformed (ActionEvent e)
   {
      if (e.getActionCommand ().equals ("Giggle"))
      {
          String soundName = getParameter ("sound");
          if (soundName != null)
          {
              AudioClip ac = getAudioClip (getDocumentBase (), soundName);
              ac.play ();
          }
          return;
      }

      try
      {
          URL u = new URL ("");
          getAppletContext ().showDocument (u);
      }
      catch (MalformedURLException exc)
      {
          System.out.println (exc);
      }
   }

   public void stop ()
   {
      System.out.println ("stop () called");
      System.out.println ("isActive () returns " + isActive ());
   }

   public void destroy ()
   {
      System.out.println ("destroy () called");
      System.out.println ("isActive () returns " + isActive ());
   }

   public static void main (String [] args)
   {
      Frame frame = new Frame ("Hybrid as an Application");

      h = new Hybrid ();
      frame.add (new Panel ().add (h));

      // Create the frame's peer. Peer is not visible.
frame.addNotify (); h.setStub (s = new Stub (args)); h.init (); frame.setSize (300, 200); frame.setVisible (true); s.setActive (true); h.start (); frame.addWindowListener (new WindowAdapter () { public void windowClosing (WindowEvent w) { s.setActive (false); h.stop (); h.destroy (); System.exit (0); } }); } } /* The Stub class provides a mechanism for obtaining information from the run-time environment. Typically, this environment is maintained by a Web browser. For this program, a Web browser environment is being simulated. */ class Stub implements AppletStub { private boolean active = false; private Hashtable ht = new Hashtable (); private Context c; // Create a new Stub object. The application's array of command // arguments are passed to this constructor, where they are saved // in a Hashtable object, for later retrieval by the getParameter // method. Stub (String [] args) { c = new Context (); // Create an applet context. // Make sure an even number of arguments has been passed. if ((args.length & 1) != 0) return; for (int i = 0; i < args.length; i += 2) ht.put (args [i], args [i + 1]); } // Return the current state of an applet. During initialization, // the applet is not active (and this method returns false). The // applet's active state is set to true just before the start // method is called. public boolean isActive () { return active; } // Return the complete URL of the HTML document containing the // applet. This URL includes the name of the document's file. public URL getDocumentBase () { URL u = null; try { u = new URL ("" + (new File ("").getAbsolutePath ()) + "/x.html"); // Use a fake document. } catch (MalformedURLException e) {} return u; } // Return the complete URL of the applet's .class file(s). This // method is often used with the getImage and getAudioClip // methods to load image/audio files relative to the .class files. 
public URL getCodeBase () { URL u = null; try { u = new URL ("" + new File ("").getAbsolutePath () + "/"); } catch (MalformedURLException e) {} return u; } // Return the value of the applet parameter, identified by the // name argument. If not present, null is returned. The Applet // class contains a getParameter method that calls this method. public String getParameter (String name) { return (String) ht.get (name); } // Return a reference to the applet's context. The Applet class // contains a getAppletContext method that calls this method. public AppletContext getAppletContext () { return c; // Return current applet context. } // Resize the applet. The Applet class contains a pair of resize // methods that call this method. Note: Web browsers don't permit // applets from being resized. public void appletResize (int width, int height) { } // The following method is an extra method that is called to set // the value of the private active variable. public void setActive (boolean active) { this.active = active; } } /* The Context class provides a mechanism to control the environment in which the program is running. Typically, this environment is maintained by a Web browser. For this program, a Web browser environment is being simulated. */ class Context implements AppletContext { // Load the file located by the url argument. The Applet // class contains a pair of getAudioClip methods that call // this method. public AudioClip getAudioClip (URL url) { return Applet.newAudioClip (url); } // Prepare to load the image located by the url argument. The // image is loaded when needed (by one of Graphics' drawImage // methods). The Applet class contains a pair of getImage // methods that call this method. public Image getImage (URL url) { Toolkit tk = Toolkit.getDefaultToolkit (); return tk.getImage (url); } // Fetch the Applet (identified by name) from the current HTML // document. 
public Applet getApplet (String name) { return null; } // Return an enumeration to all Applets located on the current HTML // page. public Enumeration getApplets () { return null; } // Show the HTML document, located by the url argument, in the // current Web browser window. public void showDocument (URL url) { System.out.println ("Showing document " + url); } // Show the HTML document, located by the url argument, in the // Web browser window, identified by the frame argument. public void showDocument (URL url, String frame) { try { showDocument (new URL (url.toString () + frame)); } catch (MalformedURLException e) {} } // Show a status message, identified by the message argument, in // the Web browser's status bar. The Applet class contains a // showStatus method that calls this method. public void showStatus (String message) { System.out.println (message); } // The following three methods are required by SDK 1.4. To learn // about those methods, please refer to the SDK 1.4 documentation. public InputStream getStream (String key) { return null; } public Iterator getStreamKeys () { return null; } public void setStream (String key, InputStream stream) { } } As you study the source code, you'll encounter methods for loading images ( getImage()), loading audio clips ( getAudioClip() and newAudioClip()), showing a Webpage ( showDocument()), and showing status information ( showStatus()). I'll talk about those methods in a future Java 101 article. For now, you might want to check out the Java 2 SDK documentation to learn more about them. To compile Hybrid, type javac Hybrid.java. Compiling that source file results in the creation of several class files. However, you only need to focus on the Hybrid.class file. After compiling successfully, we'll first run Hybrid as an application. The Hybrid program requires an image file and a sound file. Any image and sound file can be specified as command arguments to Hybrid. 
I've chosen an image of a cartoon character that serves as Sun's Java mascot. I've stored that character, known as Duke (who looks like a drunken penguin), and its image in duke.gif. In addition, I've selected a giggling sound, which I imagine sounds like Duke after a few cocktails. Type either of the following lines to run Hybrid as an application. Make sure the image and sound files are located in the same directory as Hybrid.class and the other class files.

java Hybrid image duke.gif sound giggle.wav
java Hybrid sound giggle.wav image duke.gif

When run, Hybrid displays a GUI consisting of an image of Duke and a pair of buttons. If you press the Giggle button, Duke will giggle. If you press the Visit Javasoft button, you won't jump to Javasoft's Webpage because showDocument has not been written to do that. Figure 3 illustrates Hybrid's GUI.

Now run Hybrid as an applet. To do that, you'll need an HTML file containing code similar to that shown in Listing 6.

Listing 6. Hybrid.html

<applet code="Hybrid.class" width=300 height=200>
<param name="image" value="duke.gif">
<param name="sound" value="giggle.wav">
</applet>

If you choose to run this applet by using appletviewer, you still will not advance to Javasoft's Webpage. The appletviewer program is not a real browser; it only recognizes <applet> (and a few other tags). However, if you choose to run this applet via a Web browser, you will arrive at that Webpage, from which you can obtain the latest information on Java. (Note: If you run Hybrid in a Web browser, you will probably not hear Duke giggle. That is because many browsers contain older JVMs that do not recognize the WAV file format -- so they will not recognize giggle.wav. You can rectify that problem by using Java Plug-in.)

Review

Java 101 is changing. This article has set the tone for a column that will completely explore Java 2 Standard Edition Version 1.4 over the next few years.
The sidebar, A Road Map, details where we'll go and what we'll discover. In this article, I presented applications, applets, and hybrid programs, and described their architectures and how to run them. Next month, I start exploring Java's non-object-oriented language basics.

Learn more about this topic

- Download this article's source code and resource files
- For a glossary specific to this article, homework, tips, and more, see the Java 101 study guide that accompanies this article
- "Learn Java from the Ground Up," Jacob Weintraub (JavaWorld, March 2000)
- Ready for the next Java 101 lesson? Read "Non-Object-Oriented Language Basics, Part 1," Jeff Friesen (JavaWorld, December 2000)
- "Plug into Java with Java Plug-in," Jeff Friesen (JavaWorld, 1999) to learn more about Java Plug-in
https://www.javaworld.com/article/2076224/core-java/applications--applets--and-hybrids.html?page=3
Mantis covers very well the area of bug tracking; however, when developers are implementing features that require some sort of upfront documentation / design, the notes approach may become a bit confusing. I find that notes are a good way for people to provide their thoughts. However, eventually the developer needs to put this all together into one artifact that will reflect what exactly will be implemented. For agile environments a wiki will be a good option for putting together such documentation. This can include a description of the feature, some implementation notes, and even some testing notes.

Click the "Wiki" link next to the "Send Reminder" to see a prototype of this feature. Please provide feedback as notes to this issue.

tyler 2006-05-10 10:14 reporter ~0012814

Are there any plans on implementing a PDF export, etc, feature for the wiki (are there wikis that support this?) I don't let non-developers any access to my tracker, but the ability to export such a design wiki page to PDF would be nice to send to clients, etc. (After developers have collaborated on the feature/bug/etc)

deboutv 2006-05-10 10:30 reporter ~0012815

The Wiki needs a link to return to the bug description page. It is a good idea.

isa21493 2006-05-12 04:09 reporter ~0012818

will the wiki be implemented in the next version of mantis?

stevemagruder 2006-05-12 11:48 reporter ~0012822

I like this idea a lot, but it needs to be able to hook into the predominant wiki engine, MediaWiki.

vboctor 2006-05-16 10:40 manager ~0012847

This is a mod to DokuWiki to allow template selection. Templates are stored as normal wiki pages within the "Templates" namespace. Following is a link to the page which includes a list of DokuWiki plug-ins:

jreb 2006-05-16 19:40 reporter ~0012851

I see the working [Wiki] link next to the [Send a reminder] link. I also see the label that indicates this site is Mantis 1.0.3. How is this done? Is there a customization feature that I missed in 1.0 to do this? Is this site running some patches to 1.0.3? Are they listed somewhere? Thanks.

2006-05-17 11:08 manager ~0012857

jreb, the Wiki feature is a patch and is not part of Mantis 1.0.3. The Wiki integration is not planned to be implemented in 1.0.x releases. However, it may be part of the Mantis 1.1 release. See the attached Wiki for both the vision for this feature as well as detailed steps on how to set up a similar integration. Note that the patch is "experimental" and won't be supported. I would like feedback from anyone who uses this patch. Also ideas for the vision will be most welcome.

2006-05-17 18:55 manager ~0012862

I've attached a PDF snapshot of the Wiki associated with this issue. This was done using CutePDF Writer ( ) which is a free printer driver that generates PDF files from any Windows application. This approach will work well if the PDF is to be generated from a single Wiki page. For Wikis that can generate a single PDF from multiple pages (if any), then it will make sense to use the Wiki implementation. If there is a working implementation in the Wiki, it will also be cross-platform (although I'm sure there are tools that are similar to CutePDF for Linux).

jugg 2006-05-24 13:50 reporter ~0012896

From wiki: mantisbt:7075:integration_with_dokuwiki#configuration1

$g_wiki_engine_url = $t_protocol . '://' . $t_host . '/%wiki_engine%/';

This will not allow one to use different subdomains for the bug tracker and wiki. For example, we use dokuwiki as our main webpage, and mantisbt is located at bugs.ourdomain.com. Of course I've seen instances where it is bugs.domain.com and wiki.domain.com as well.

2006-05-24 17:19 manager ~0012897

The $g_wiki_engine_url defined is just the default value; it can be overwritten the same way $g_path can be overwritten in config_inc.php. This will all work as long as both Mantis and the Wiki are installed on the same server.

2006-05-24 17:57 reporter ~0012898

Will this integration allow a user to login via the wiki, and as a result also be logged in to mantis? Or is the authentication one way only? Only mantis to wiki? It'd be nice if it were both ways. (from the quick testing I did on the setup here, it is only mantis to wiki, and not the reverse). Obviously, regardless of which interface is used to login, it should use the mantisbt authentication.

Also, allow the wiki namespace to be configured per project via the mantis project admin web interface. Our wiki already has individual project namespaces, and I'd like to use the format of "<project>:issues:<bugid>" instead of the flat "mantis:<bugid>". That also leads to being able to make the wiki feature enabled/disabled per project. So, have a global config file option to enable the wiki support with all of the relevant backend config, but by default no projects have wiki support enabled unless done through the web interface.

And one more thing: in wiki_dokuwiki_get_url_for_page_id() you hardcode the use of 'doku.php?id='. And while that should basically always be valid (unless some weird custom install), some installations use mod_rewrite () to clean up the url that is used. It'd be nice to be able to use a 'clean' url for the wiki when linked from mantis. So, perhaps this could be made part of the configuration: $g_wiki_engine_url_link = 'doku.php?id=' or just set to empty if using mod_rewrite.

tandler 2006-06-12 12:03 reporter ~0012956

I really like this idea! One issue for integration not mentioned yet is files attached to notes vs. files attached to wiki pages (or uploaded to the wiki in general) - I think files linked to mantis issues should refer to the same files that belong to the wiki page (or the other way round).

Without having used your integration (much) yet, I could imagine that there need to be clear rules of what is written in the wiki and what in mantis, e.g. where should I add comments - directly to the page or as a note to the issue. (In both cases you need a good notification of changes.) At least I was just confused about this myself right now: should I add the new requirement (attached files) directly in the wiki page or post it here? (I guess the notification about changes in the wiki is not yet integrated: users watching an issue should also receive notification about related wiki pages. Maybe you could specify the level of the distance from the main issue page.)

Well, I think in fact that the description of the issue could actually /be/ the corresponding wiki page - what do you think? Currently, the wiki page attached to this issue only contains links to other pages -- these links should be better placed directly in the description! I think it should be avoided that you need the description again on the wiki page. (As a side effect, you would also be able to access the history of the description.)

DGtlRift 2006-07-13 17:35 reporter ~0013100

Is there any progress on an actual design that will integrate into a release of mantis? I get the impression that Victor intended this as a proof of concept, but it seems fairly clean to me even with the points that Peter brought up. Any idea where this fits into the road map? It would be nice at the very least to attach a relationship for this to a future release so at the very least there can be a goal for it.

kraades 2006-07-24 08:56 reporter ~0013133

Using Mantis 0.19.1. I followed the steps in the PDF attachment but have a few problems: (BTW: a few lines in the PHP code in the PDF output are a bit too long and are therefore incomplete) I do have a Wiki-link next to the Docs-link but it links to wiki.php which does not exist!? (example:) And I don't have a Wiki-link next to the "Send a reminder"-link!?

2006-08-09 04:00 manager ~0013239

I've updated the associated Wiki page to include an updated version of the code as well as missing pieces like wiki.php, updates to bug_view_page.php, bug_view_advanced_page.php, etc. I've also committed the current implementation to CVS. Hence, this stuff will be part of Mantis 1.1. Now that it is there, I'll be working on implementing improvements that I've included in the vision or that are recommended here.

2006-08-09 04:02 manager ~0013240

Note that I didn't update the attached PDF. The master is always the Wiki. The PDF is just an example that the Wiki contents can be exported/printed to PDF.

2006-09-12 02:29 manager ~0013362

The first version of the integration was released as part of Mantis 1.1.0a1.
https://mantisbt.org/bugs/view.php?id=7075
EDT:Discussion topics from the language meetings

What the heck is this page?

I've pasted in my notes from the EDT language meetings, where we discuss what should and should not be part of EGL in EDT. In some cases I've gone back and made updates, but beware! this might be very out-of-date. Topics that need more discussion are italicized.

June 16 Meeting

Tim's list of differences between RBD and EDT:
- not all types supported?
- math rules, overflow, etc. will be native to the target language
- nullable is not a value of a variable (no AS/ISA nullable type)
- not all part types (datatable)
- not all statements (openui, forward, move)
- operators (new bit operators, all ops semantic is defined by the types)
- types are defined in egl, including operators
- system libraries...e.g. System not SysLib...they won't all be there, they're just types
- would rather put functions in types, in place of StrLib, DateTimeLib, etc.
- stereotypes from RBD may or may not be there

Core vs. Everything else

Core is the language minus all the data types. Core is a grammar, syntax and semantics of execution. Core includes the statements, order of execution, and so on. The test framework *could* define its own types, just for the purposes of the test.

The core spec will talk about "architypes" for primitive types without being specific; for example there should be a type to represent true and false (otherwise can't talk about if/while/etc.) but the spec won't say exactly what Boolean has to look like. Also have literals for numbers and string, but that doesn't mean those are in core. Core includes the stereotypes because they're necessary for defining your types.

We don't have to support all of the conversions that RBD supports (don't like string to numeric, date/time, etc.) (don't like conversions controlled by variables).

Tim will start writing the core spec very soon.

Parts are really just types. Parts in core: everything...program, library, handler, record, datatable, dataitem, etc. But we don't have to support all of them in all of the implementations.

Take the case of int divided by int. Java's native / yields int, but in JS the fractional part is kept. The return type of the / operator comes from the int.egl file. We could have one int.egl for Java and another for JS, or we could pick one and make the generators deal with it. Which is better? This has a big impact on our test framework. Tim says take what's supported by JS in RBD and use it as the basis for EDT Java & JS.

June 17 Meeting

We're starting to go over each part of EGL now...most of that's in the EGL Language page.

We should decide what can be thrown, and when. Tim might have some new ideas about array initializers.

Should the arg list in main take string[] or string...? What if I don't want any args? Allow more than one main?

The parser can't tell if foo(3) is a type or a function invocation, so Tim considers a change to the declaration syntax, maybe by adding a colon as in "x : foo(3);" -- no, it looks like a label on a function call. Have to change the parser, allow types & function calls in both places & then do a semantic check later.

Should we allow ? on reference types? If it's there they can be null, otherwise they can't be null.

We'll talk about subtypes (SQL, CSV, Exception, etc.) when we discuss stereotypes.

Since we're supporting top-level functions, we should support IncludeReferencedFunctions.

July 5 Meeting

We mentioned the desire to keep things "pretty much" in line with RBD & CE. One option is to leave stuff out of "base EDT" but also create a separate build/package which includes many of the other features.

Revisiting some decisions from last meeting...
- still no structured records, we will have a way to call native cobol/rpg/etc without them
- still no date, time, char, etc.
- do support some variations of move: between two references, move-byname, move-for ...move-byname is crazy for structured records, the way it's done now isn't very clean
- rather than set empty and set initial, have setEmpty and setInitial functions on records and handlers and arrays and anything else...this needs more discussion!

Literals: null, bool, numbers, string, string w/ux prefix; the bytes literal is 0x followed by a hex string, for example 0x03AB is bytes(2).

Operators and expressions:
- <<, >>, >>> bitwise shifting on bigint, smallint, int...and shift=
- ~ for bitwise not on bigint, smallint, int...and ~=
- add ternary ?: hurray
- some way to put an annotation on something without putting it in curlies: normally you wrote x int { myAnnotation = 3 }; another way to do this is x int {@myAnnotation{3}}; in EDT we will allow that to be outside of curlies and before the declaration, for example @myAnnotation{3} x int;
- don't allow substring as an L-value or an inout parameter
- as and isa: can't include ? in the type
- nullable types: it's not a flag, the thing can really be null, so if you say myNullRecord.field it'll throw a NullValueException
- operators will behave differently with nullable types than in RBD: they'll throw a NullValueException if an operand is null
- no support for is/not
- like and matches are replaced by functions on string (for future discussion: regular expression matching on string)
- the in operator is replaced by a function on Array, but can't do "value in myRecordArray.field"

July 6 Meeting

We're talking about conversions.
- result of conversion errors? -- document them, as we said for operators and expressions
- implicit conversions? (e.g. what RBD does for int = boolean, string = any) they're not in the model until the generator or tooling calls makeCompatible()
- support conversions between non-primitive types? (record to record?)
- Tim sez: no defaultNumericFormat, defaultMoneyFormat, defaultTimestampFormat, defaultDateFormat, defaultTimeFormat
- Tim sez: don't use them, can call a library function called "format" to do this, we could have an RBDString type which does this under the covers

SUPPORTED CONVERSIONS
- all to any, any to all
- boolean to string: gives true/false
- string to timestamp: the string is parsed, fields come from the timestamp's declaration, we'll pick a format...no conversion to ts w/o pattern allowed because it's ambiguous
- string to numbers: follow our literal syntax for numbers
- string to bytes: use the underlying bit pattern to make a bytes value
- timestamp to string: we'll pick a format
- all numbers to all numbers
- all numbers to bytes...use the number's bit pattern...if the bytes has a length it must match the size of the number in bytes
- bytes to numbers...same in reverse

BYTES = BYTES...valid even when only one has a length, and when both have a length & they're different...truncate longer values on the right...if the source is shorter then don't pad, just don't update what was there before...so if your bytes(3) is 0x123456 and you assign it a bytes(1) of 0x99 then the bytes(3) ends up with 0x993456

BYTES COMPARED-TO BYTES...they have to be the same size, compare bytes from left to right, until you find a difference, the operand with a one instead of a zero is greater (UNSIGNED!)

OVERFLOWS...normally a conversion doesn't have any special check for overflow, we do whatever the underlying language does...there will be special syntax for overflow checking...what is the syntax, and what happens on overflow when you're checking for it? our philosophy is that we can't make edt behave the same in every environment when there are edge cases...so if you're really concerned about overflows you have to tell us specifically to check for it...places where we truncate are an overflow?...support the RBD option to round instead of truncate?...syntax for a checked assignment looks like a function call, but it's a builtin thing, an operation

It's OK to use the substring operator on a bytes.

Converting xml, json, format function for ts->string, numbers->string, and so on...we'll have libraries for this kind of thing (needs to be defined in the future).

A timestamp declared without a pattern means it can hold any timestamp; there's no implicit pattern. Date/time math will be done with functions defined on timestamp. Lots of RBD system functions will move to the types; for example DateTimeLib stuff should be functions on timestamps. We need to discuss all libraries we need, and the functions in all the types we haven't talked about yet.
our philosophy is that we can't make edt behave the same in every environment when there are edge cases...so if you're really concerned about overflows you have to tell us specifically to check for it...places where we truncate are an overflow?...support the RBD option to round instead of truncate?...syntax for a checked assignment looks like a function call, but it's a builtin thing, an operation it's OK to use substring operator on a bytes converting xml, json, format function for ts->string, numbers->string, and so on...we'll have libraries for this kind of thing (needs to be defined in the future) a timestamp declared without a pattern means it can hold any timestamp, there's no implicit pattern date/time math will be done with functions defined on timestamp lots of RBD system functions will move to the types, for example DateTimeLib stuff should be functions on timestamps we need to discuss all libraries we need, and the functions in all the types we haven't talked about yet July 7 Meeting We're talking about stereotypes and annotations, we'll say no to most of them now, and add them back (possibly changed) as we decide that we need their features Program stereotypes: only support BasicProgram, but remove msgTablePrefix Library stereotypes: none supported (we thought about using native libraries for DLLs, but decided ExternalTypes should be used instead) Record stereotypes: support Exception Handler stereotypes: support RUIHandler, RUIWidget, BirtHandler ExternalType stereotypes: support NativeType, JavaObject, JavaScriptObject, HostProgram Since we're supporting top-level functions, we should support IncludeReferencedFunctions. It's an annotation in RBD but should be a field of the stereotypes in EDT. Some interesting ideas and questions from Tim. 1. Should we add class as a part type? It's in the model already. It lets you have single inheritance and implement interfaces. It could replace handler altogether. 
Example: class Square extends Rectangle implements Shape, TwoDimensional private lengthOfSide int; function area() returns( int ) return ( lengthOfSide * 2 ); end end 2. Should we expand the way you can use the embed keyword? Ways to make it more powerful: embed things that aren't records, use it to embed fields from other parts, etc. 3. Dynamic access lets you look up fieds, why not functions too? 4. Types as values, such as having a field of type Record. Similar to the Class class in Java. Helpful in defining annotations and stereotypes, since they refer generically to records and fields. Soon, very soon, we need to go through all of the types defined in EDT and RBD, come up with all of the functions & fields & libraries & types for EDT. Now talking about Language Compliance Testing. What exactly should we test? EDT has a core of EGL, and our own extensions. Just about everything we're doing here falls into the category of extensions. The core is always the same. Compliance testing of core stuff, for example the if statement always runs the "then" block when the condition is true. How variables are initialized. The order of execution. Can't actually test these things without using a particular extension. Only testing core isn't very interesting or useful. But maybe Core needs to be specific about the architypes so real tests can be written. The tests shouldn't be coded to run in a particular environment or expect results which only come from a particular environment. We decided to start writing tests based on the API/extension stuff, not Core. Base our expectations on what RBD does. We can't expect to write tests for somebody else's extensions. But we can test that the implementation of the architypes works properly, for example all booleans must do boolean things. Tests can be generated from the MOF model. 
July 8 Meeting egl.lang is EDT CORE, includes definitions of only ~4 types (string, int, boolean, ...), keep these as compact as possible and then we extend them in eglx.edt (which is NOT CORE), and put the other types there too the extensions will include much of what we really want the type to have (more conversions, functions, operations, etc.) RBD types and libraries (FUTURE) can go in eglx.rbd each type should be in its own file fully commented in egldoc including what is thrown & why move RBD system functions from Libs into types when it makes sense to do so we need *Libs for things that don't belong in types continue to use SysLib but verify it only contains "system" things August 23 Meeting 1. Reference-type variables support nullability Yes! Reference variables declared with a question mark can be null, and their initial value is null. Reference variables declared without a question mark are never null, and their initial value is an object of the appropriate type. Curly braces on the declaration of a nullable reference variable will cause it to be initialized to a non-null value. A validation check will be added to ensure that things which can't be instantiated (interfaces & things with private default constructors) must be declared nullable. Constructors can be in ETs and (soon) handlers. This change means people have to add ? on their reference type variables when moving code from RBD. 2. Changes to arrays 2a.. 2b.. 2c. Initializing arrays with set-values blocks In EDT, };". 3. Equality operators In the past == and != on reference variables have always tested if the two variables point to the same underlying value. We made string, number, timestamp (with no pattern), and decimal (with no length) be references in EDT, but we want == and != to do a string/numeric comparison. We discussed adding === and !== operators for testing if two variables point to the same thing. 
It wouldn't be possible to redefine them, just as you can't redefine the meaning of = for assignment. We decided not to add these operators right now. We might add them in the future.
http://wiki.eclipse.org/EDT:Discussion_topics_from_the_language_meetings
One More Example: Escape Sequences

Let's run one more printing example, one that makes use of some of C's special escape sequences for characters. In particular, the program in Listing 3.10 shows how the backspace (\b), tab (\t), and carriage return (\r) work. These concepts date from when computers used teletype machines for output, and they don't always translate successfully to contemporary graphical interfaces. For example, Listing 3.10 doesn't work as described on some Macintosh implementations.

Listing 3.10 The escape.c Program

/* escape.c -- uses escape characters */
#include <stdio.h>
int main(void)
{
    float salary;

    printf("\aEnter your desired monthly salary:");  /* 1 */
    printf(" $_______\b\b\b\b\b\b\b");               /* 2 */
    scanf("%f", &salary);
    printf("\n\t$%.2f a month is $%.2f a year.",
           salary, salary * 12.0);                   /* 3 */
    printf("\rGee!\n");                              /* 4 */
    return 0;
}

What Happens When the Program Runs

Let's walk through this program step by step as it would work under an ANSI C implementation. The first printf() statement (the one numbered 1) sounds the alert signal (prompted by the \a) and then prints the following:

Enter your desired monthly salary:

Because there is no \n at the end of the string, the cursor is left positioned after the colon. The second printf() statement picks up where the first one stops, so after it is finished, the screen looks as follows:

Enter your desired monthly salary: $_______

The space between the colon and the dollar sign is there because the string in the second printf() statement starts with a space. The effect of the seven backspace characters is to move the cursor seven positions to the left. This backs the cursor over the seven underscore characters, placing the cursor directly after the dollar sign. Usually, backspacing does not erase the characters that are backed over, but some implementations may use destructive backspacing, negating the point of this little exercise. At this point, you type your response, say 2000.00.
Now the line looks like this:

Enter your desired monthly salary: $2000.00

The characters you type replace the underscore characters, and when you press Enter (or Return) to enter your response, the cursor moves to the beginning of the next line. The third printf() statement output begins with \n\t. The newline character moves the cursor to the beginning of the next line. The tab character moves the cursor to the next tab stop on that line, typically, but not necessarily, to column 9. Then the rest of the string is printed. After this statement, the screen looks like this:

Enter your desired monthly salary: $2000.00
        $2000.00 a month is $24000.00 a year.

Because the printf() statement doesn't use the newline character, the cursor remains just after the final period. The fourth printf() statement begins with \r. This positions the cursor at the beginning of the current line. Then Gee! is displayed there, and the \n moves the cursor to the next line. Here is the final appearance of the screen:

Enter your desired monthly salary: $2000.00
Gee!    $2000.00 a month is $24000.00 a year.

Flushing the Output

When does printf() actually send output to the screen? Initially, printf() statements send output to an intermediate storage area called a buffer. Every now and then, the material in the buffer is sent to the screen. The standard C rules for when output is sent from the buffer to the screen are clear: it is sent when the buffer gets full, when a newline character is encountered, or when there is impending input. (Sending the output from the buffer to the screen or file is called flushing the buffer.) For instance, the first two printf() statements don't fill the buffer and don't contain a newline, but they are immediately followed by a scanf() statement asking for input. That forces the printf() output to be sent to the screen.
You may encounter an older implementation for which scanf() doesn't force a flush, which would result in the program looking for your input without having yet displayed the prompt onscreen. In that case, you can use a newline character to flush the buffer. The code can be changed to look like this: printf("Enter your desired monthly salary:\n"); scanf("%f", &salary); This code works whether or not impending input flushes the buffer. However, it also puts the cursor on the next line, preventing you from entering data on the same line as the prompting string. Another solution is to use the fflush() function described in Chapter 13, "File Input/Output."
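The buffer-then-flush behavior described above is not unique to C's stdio. As a cross-language illustration (a sketch of the same idea, not part of the original C listing), Python's text streams also hold written characters in a buffer until something forces them out, and they expose an explicit flush() that plays the role of C's fflush():

```python
import io

# A buffered text stream over an in-memory "device". With
# write_through=False (the default), short writes stay in the
# wrapper's internal buffer instead of reaching the device.
raw = io.BytesIO()
stream = io.TextIOWrapper(raw, encoding="utf-8", write_through=False)

stream.write("Enter your desired monthly salary: $")

# Nothing has reached the underlying "device" yet...
before = raw.getvalue()

stream.flush()  # ...until we flush, like fflush(stdout) in C
after = raw.getvalue()
```

Here `before` is empty and `after` contains the prompt text, mirroring how the C prompt could sit unseen in the buffer until a flush occurs.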
https://www.informit.com/articles/article.aspx?p=350580&seqNum=7
python tutorial - Python interview questions and answers - learn python - python programming

python interview question 111: What is PEP8?
- PEP8 is a set of coding guidelines for the Python language, so that programmers can write readable code, making it easy for any other person to use later on.

python interview question 112: Is all the memory freed when Python exits?
- No, it is not, because objects that are referenced from the global namespaces of Python modules are not always de-allocated when Python exits.

python interview question 113: What does __init__.py do?
- An __init__.py file marks a directory as a Python package so that its modules can be imported; it may be empty, or it can contain package initialization code.

python interview question 114: What is the difference between the range() and xrange() functions in Python?
- range() returns a list, whereas xrange() returns an object that acts like an iterator, generating numbers on demand.

python interview question 115: How can you randomize the items of a list in place in Python?
- random.shuffle(lst) can be used for randomizing the items of a list in place in Python.

python interview question 116: What is a pass in Python?
- pass in Python is a no-operation statement, indicating that nothing is to be done.

python interview question 117: If you are given the first and last names of employees, which data type in Python will you use to store them?
- You can use a list that has first name and last name included in an element, or use a dictionary.
- There are various data types in Python. Some of the important types are listed below:
  - Python Numbers
  - Python List
  - Python Tuple
  - Python Strings
  - Python Set
  - Python Dictionary

python interview question 118: What happens when you execute the statement mango=banana in Python?
- A NameError will occur when this statement is executed in Python, because the name banana has not been defined.
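The in-place shuffle answer above can be sketched concretely. Note that random.shuffle() reorders the existing list and returns None, which is a common interview follow-up:

```python
import random

lst = [1, 2, 3, 4, 5]

# shuffle works in place and returns None
result = random.shuffle(lst)

# same items remain, only the order may change
assert result is None
assert sorted(lst) == [1, 2, 3, 4, 5]

# seeding the generator makes a demonstration reproducible
random.seed(42)
deck = list(range(5))
random.shuffle(deck)
print(deck)  # some permutation of [0, 1, 2, 3, 4]
```

Forgetting that shuffle() returns None (e.g. writing lst = random.shuffle(lst)) silently replaces the list with None, so the in-place design is worth remembering.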
python interview question 120: What is monkey patching in Python?
- Monkey patching is a technique that helps the programmer modify or extend other code at runtime.
- Monkey patching comes in handy in testing, but it is not a good practice to use it in a production environment, as debugging the code could become difficult.
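The monkey-patching answer can be illustrated with a small sketch. The class and method names below are hypothetical, invented purely for this demonstration, not taken from any real library:

```python
# Hypothetical class standing in for "other code" we do not own.
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
assert g.greet() == "hello"

# Monkey patch: rebind the method on the class at runtime.
def loud_greet(self):
    return "HELLO!"

Greeter.greet = loud_greet

# Both existing and new instances now see the patched behavior,
# which is exactly why this confuses debugging in production.
assert g.greet() == "HELLO!"
assert Greeter().greet() == "HELLO!"
```

In tests, such a patch is usually applied temporarily (and restored afterward) so the modified behavior cannot leak into other code.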
https://www.wikitechy.com/tutorials/python/python-interview-questions-and-answers
Volume One
PROGRAM IN PYTHON
Full Circle Magazine Specials

Full Circle Magazine is neither affiliated with, nor endorsed by, Canonical Ltd.

About Full Circle
Find Us
Website:
Forums: forumdisplay.php?f=270
IRC: #fullcirclemagazine on chat.freenode.net

Welcome to another 'single-topic special'

In response to reader requests, we are assembling the content of some of our serialised articles into dedicated editions. For now, this is a straight reprint of the series 'Programming in Python', Parts 1-8 from issues #27 through #34; nothing fancy, just the facts. Please bear in mind the original publication date; current versions of hardware and software may differ from those illustrated, so check your hardware and software versions before attempting to emulate the tutorials in these special editions. You may have later versions of software installed or available in your distributions' repositories.

Editorial Team
Editor: Ronnie Tucker (aka: RonnieTucker) ronnie@fullcirclemagazine.org
Webmaster: Rob Kerfia (aka: admin / linuxgeekery) admin@fullcirclemagazine.org
Podcaster: Robin Catling (aka RobinCatling) podcast@fullcirclemagazine.org
Communications Manager: Robert Clipsham (aka: mrmonday) mrmonday@fullcirclemagazine.org

Please note: this Special Edition is provided with absolutely no warranty whatsoever; neither the contributors nor Full Circle Magazine accept any responsibility or liability for loss or damage resulting from readers choosing to apply this content to theirs or others computers and equipment. Enjoy!

Ubuntu projects and the views and opinions in the magazine should in no way be assumed to have Canonical endorsement.

HOW-TO
Program In Python - Part 1

Among the many programming languages currently available, Python is one of the easiest to learn. Python was created in the late 1980's, and has matured greatly since then. It comes preinstalled with most Linux distributions, and is often one of the most overlooked when picking a language to learn. We'll deal with command-line programming in this article. In a future one, we'll play with GUI (Graphical User Interface) programming. Let's jump right in, creating a simple application.

Our First Program

Using a text editor such as gedit, let's type some code. Then we'll see what each line does and go from there. Type the following 4 lines:

#!/usr/bin/env python
print 'Hello. I am a python program.'
name = raw_input("What is your name? ")
print "Hello there, " + name + "!"

That's all there is to it. Save the file as hello.py wherever you would like. I'd suggest putting it in your home directory in a folder named python_examples. This simple example shows how easy it is to code in Python. Before we can run the program, we need to set it to be executable. Do this by typing chmod +x hello.py in the folder where you saved your python file. Now let's run the program.

greg@earth:~/python_examples$ ./hello.py
Hello. I am a python program.
What is your name? Ferd Burphel
Hello there, Ferd Burphel!
greg@earth:~/python_examples$

That was simple. Now, let's look at what each line of the program does.

#!/usr/bin/env python

This line tells the system that this is a python program, and to use the default python interpreter to run the program.

print 'Hello. I am a python program.'

Simply put, this prints the first line "Hello. I am a python program." on the terminal.

name = raw_input("What is your name? ")

This one is a bit more complex. There are two parts to this line. The first is name =, and the second is raw_input("What is your name? "). We'll look at the second part first. The command raw_input will print out the prompt in the terminal ("What is your name? "), and then will wait for the user (you) to type something (followed by {Enter}). Now let's look at the first part: name =.
This part of the command assigns a variable named "name". What's a variable? Think of a variable as a shoebox. You can use a shoe-box to store things -- shoes, computer parts, papers, whatever. To the shoe-box, it doesn't really matter what's in there -- it's just stored there. In this case, it stores whatever you type. In the case of my entry, I typed Ferd Burphel. Python, in this instance, simply takes the input and stores it in the "name" shoe-box for use later in the program.

print "Hello there, " + name + "!"

Once again, we are using the print command to display something on the screen -- in this case, "Hello there, ", plus whatever is in the variable "name", and an exclamation point at the end. Here we are concatenating, or putting together, three pieces of information: "Hello there", the information in the variable "name", and the exclamation point.

Now, let's take a moment to discuss things a bit more deeply before we work on our next example. Open a terminal window and type: python

You are now in the python shell. You should get something like this:

greg@earth:~/python_examples$ python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

From here, you can do a number of things, but let's see what we got before we go on. The first thing you should notice is the python version -- mine is 2.5.2. Next, you should notice a statement indicating that, for help, you should type "help" at the prompt. I'll let you do that on your own.

Now type: print 2+2 and press enter. You'll get back

>>> print 2+2
4
>>>

Notice that we typed the word "print" in lower case. What would happen if we typed "Print 2+2"? The response from the interpreter is this:

>>> Print 2+2
  File "<stdin>", line 1
    Print 2+2
            ^
SyntaxError: invalid syntax
>>>

That's because the word "print" is a known command, while "Print" is not. Case is very important in Python.

Now let's play with variables a bit more. Type: var = 2+2

You'll see that nothing much happens except Python returns the ">>>" prompt. Nothing is wrong. What we told Python to do is create a variable (shoebox) called var, and to stick into it the sum of "2+2". To see what var now holds, type print var and press enter.

>>> print var
4
>>>

Now we can use var over and over again as the number 4, like this:

>>> print var * 2
8
>>>

If we type "print var" again we'll get this:

>>> print var
4
>>>

var hasn't changed. It's still the sum of 2+2, or 4. This is, of course, simple programming for this beginner's tutorial. Complexity will increase in subsequent tutorials.

But now let's look at some more examples of variables. In the interpreter type:

>>> strng = 'The time has come for all good men to come to the aid of the party!'
>>> print strng
The time has come for all good men to come to the aid of the party!
>>>

You've created a variable named "strng" (short for string) containing the value 'The time has come for all good men to come to the aid of the party!'. From now on (as long as we are
greg@earth:~/python_examples$ ./for_loop.py 0 1 contents ^ 2 3 4 5 6 7 8 9 greg@earth:~/python_examples$ PROGRAM IN PYTHON - PART 1 Well, that seems to have worked, but why does it count up to only 9 and then stop. Look at the output again. There are 10 numbers printed, starting with 0 and ending with 9. That's what we asked it to do -- print the value of cntr 10 times, adding one to the variable each time, and quit as soon as the value is 10. Now you can see that, while programming can be simple, it can also be complex, and you have to be sure of what you ask the system to do. If you changed the range statement to be "range(1,10)", it would start counting at 1, but end at 9, since as soon as cntr is 10, the loop quits. So to get it to print "1,2,3,4,5,6,7,8,9,10", we should use range(1,11) - since the for loop quits as soon as the upper range number is reached. Also notice the syntax of the statement. It is "for variable in range(start value,end value):" The ":" says, we are starting a block of code below that should be indented. It is very important that you remember the colon ":", and to indent the code until the block is finished. If we modified our program to be like this: #! /usr/bin/env python for cntr in range(1,11): print cntr print 'All Done' indentation shows the block formatting. We will get into more block indentation thoughts in our next tutorial. That's about all for this time. Next time we'll recap and move forward with more python programming instructions. In the meantime, you might want to consider installing a python specific editor like Dr. Python, or SPE (Stani's Python Editor), both of which are available through Synaptic. We would get an output of... greg@earth:~/python_examples$ ./for_loop.py 1 2 3 4 5 6 7 8 9 10 All Done greg@earth:~/python_examples$ Make sure your indentation is correct. Remember, full circle magazine #27 is owner of ,a consulting company in Aurora, Colorado, and has been programming since 1972. 
He enjoys cooking, hiking, music, and spending time with his family. 10 contents ^ HOW-TO FCM#27 - Python Part 1 Program In Python - Part 2 t','Nov','Dec'] n the last installment, we looked at a simple program using raw_input to get a response from the user, some simple variable types, and a simple loop using the "for" statement. In this installment, we will delve more into variables, and write a few more programs. I Dev Graphics Internet M/media System To create the list, we bracket all the values with square brackets ( '[' and ']' ). We have named our list 'months'. To use it, we would say something like print months[0] or months[1] (which would print 'Jan' or 'Feb'). Remember that we always count from zero. To find the length of the list, we can use: print len(months) Up to now, we have created a list using strings as the information. You can also create a list using integers. Looking back at our months list, we could create a list containing the number of days in each one: DaysInMonth = [31,28,31,30,31,30,31,31,30,3 1,30,31] CD/DVD HDD USB Drive Laptop Wireless I received an email from David Turner who suggested that using the Tab-key for indentation of code is somewhat misleading as some editors may use more, or less, than four spaces per indent. This is correct. Many Python programmers (myself included) save time by setting the tab key in their editor to four spaces. The problem is, however, that someone else's editor may not have the same setting as yours, which could lead to ugly code and other problems. So, get into the habit of using spaces rather than the Tab-key. Let's look at another type of variable called lists. In other languages, a list would be considered an array. Going back to the analogy of shoeboxes, an array (or list) would be a number of boxes all glued side-by-side holding like items. For example, we could store forks in one box, knives in another, and spoons in another. Let's look at a simple list. 
An easy one to picture would be a list of month names. We would code it like this... months = ['Jan','Feb','Mar','Apr','May ','Jun','Jul','Aug','Sep','Oc which returns 12. Another example of a list would be categories in a cookbook. For example... categories = ['Main dish','Meat','Fish','Soup','C ookies'] Then categories[0] would be 'Main dish', and categories[4] would be 'Cookies'. Pretty simple again. I'm sure you can think of many things that you can use a list for. If we were to print DaysInMonth[1] (for February) we would get back 28, which is an integer. Notice that I made the list name DaysInMonth. Just as easily, I could have used 'daysinmonth' or just 'X'... but that is not quite so easy to read. Good programming practices suggest (and this is subject to interpretation) that the variable names are easy to understand. We'll get into the whys of this later on. We'll play with lists some more in a little while. Before we get to our next sample program, let's look at a few other things about Python. contents ^ full circle magazine #28 7 PROGRAM IN PYTHON - PART 2 be the space after 'time'. We briefly discussed strings in Part 1. Let's look at string a bit closer. A string is a series of characters. Not much more than that. In fact, you can look at a string as an array of characters. For example if we assign the string 'The time has come' to a variable named strng, and then wanted to know what the second character would be, we could type: strng = 'The time has come' print strng[1] We can find out how long our string is by using the len() function: print len(strng) ['The', 'time', 'has', 'come']. This is very powerful stuff. There are many other built-in string functions, which we'll be using later on. 1,30,31] for cntr in range(0,12): print '%s has %d days.' % (Months[cntr],DaysInMonth[cnt r]) The result from this code is: There is one other thing that I will introduce before we get to our next programming example. 
When we want to print something that includes literal text as well as variable text, we can use what's called Variable Substitution. To do this is rather simple. If we want to substitute a string, we use '%s' and then tell Python what to substitute. For example, to print a month from our list above, we can use: print 'Month = %s' % month[0] Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec has has has has has has has has has has has has 31 28 31 30 31 30 31 31 30 31 30 31 days. days. days. days. days. days. days. days. days. days. days. days. which returns 17. If we want to find out where in our string the word 'time' is, we could use pos = strng.find('time') The result would be 'h'. Remember we always count from 0, so the first character would be [0], the second would be [1], the third would be [2], and so on. If we want to find the characters starting at position 4 and going through position 8, we could say: print strng[4:8] Now, the variable pos (short for position) contains 4, saying that 'time' starts at position 4 in our string. If we asked the find function to find a word or sequence that doesn't exist in the string like this: pos = strng.find('apples') Something important to understand here is the use of single quotes and double quotes. If you assign a variable to a string like this: st = 'The time has come' the returned value in pos would be -1. We can also get each separate word in the string by using the split command. We will split (or break) the string at each space character by using: print strng.split(' ') This would print 'Month = Jan'. If we want to substitute an integer, we use '%d'. Look at the example below: Months = ['Jan','Feb','Mar','Apr','May ','Jun','Jul','Aug','Sep','Oc t','Nov','Dec'] DaysInMonth = [31,28,31,30,31,30,31,31,30,3 or like this: st = “The time has come” which returns 'time'. Like our for loop in part 1, the counting stops at 8, but does not return the 8th character, which would the result is the same. 
However, if you need to include a single quote in the string like this: which returns a list containing full circle magazine #28 8 contents ^ PROGRAM IN PYTHON - PART 2 st = 'He said he's on his way' code. you will get a syntax error. You need to assign it like this: st = “He said he's on his way” Think of it this way. To define a string, you must enclose it in some kind of quotes ‒ one at the beginning, and one at the end ‒ and they must match. If you need to mix quotes, use the outer quotes to be the ones that aren't in the string as above. You might ask, what if I need to define a string like “She said “Don't Worry””? In this case, you could define it this way: st = 'She said “Don\'t Worry”' We need to learn a few more things to be able to do our next example. First is the difference between assignment and equate. We've used the assignment many times in our samples. When we want to assign a value to a variable, we use the assignment operator or the '=' (equal sign): variable = value Don't worry about the if and the colon shown in the example above yet. Just remember we have to use the double-equal sign to do evaluation. Now we will return to the "if" statement we showed briefly above. When we want to make a decision based on values of things, we can use the if statement: if loop == 12: Notice the backslash before the single quote in 'Don't'. This is called an escape character, and tells Python to print the (in this case) single-quote ‒ without considering it as a string delimiter. Other escape character sequences (to show just a few) would be '\n' for new line, and '\t' for tab. We'll deal with these in later sample However, when we want to evaluate a variable to a value, we must use a comparison operator. Let's say we want to check to see if a variable is equal to a specific value. We would use the '==' (two equal signs): variable == value The next thing we need to discuss is comments. Comments are important for many things. 
Not only do they give you or someone else an idea of what you are trying to do, but when you come back to your code, say 6 months from now, you can be reminded of what you were trying to do. When you start writing many programs, this will become important. Comments also allow you to make Python ignore certain lines of code. To comment a line you use the '#' sign. For example: # This is a comment This will check the variable 'loop', and, if the value is 12, then we do whatever is in the indented block below. Many times this will be sufficient, but, what if we want to say If a variable is something, then do this, otherwise do that. In pseudo code you could say: if x == y then do something else do something else and in Python we would say: if x == y: do something else: do something else more things to do So, if we have a variable named loop and we want to see if it is equal to, say, 12, we would use: if loop == 12: You can put comments anywhere on a code line, but remember when you do, Python will ignore anything after the '#'. The main things to remember here are: 1. End the if or else statements full circle magazine #28 9 contents ^ with a colon. PROGRAM IN PYTHON - PART 2 2. INDENT your code lines. Assuming you have more than one thing to check, you can use the if/elif/else format. For example: x = 5 if x == 1: print 'X elif x < 6: print 'X 6' elif x < 10: print 'X 10' else: print 'X greater' is 1' is less than is less than is 10 or a loop doing a series of loop = 1 steps over and over, while loop == 1: until a specific response = raw_input("Enter something or 'quit' to end => ") if response == 'quit': threshold has been print 'quitting' reached. A simple loop = 0 example would be else: assigning a variable print 'You typed %s' % response “loop” to 1. 
Then while the loop variable is less => quit quitting than or equal to 10, print the In this example, we are value of loop, add one to it and combining the if statement, Notice that when we typed continue, until, when loop is while loop, raw_input 'QUIT', the program did not greater than 10, quit: statement, newline escape stop. That's because we are sequence, assignment loop = 1 evaluating the value of the operator, and comparison while loop <= 10: response variable to 'quit' operator ‒ all in one 8 line print loop (response == 'quit'). 'QUIT' loop = loop + 1 program. does NOT equal 'quit'. run in a terminal would produce the following output: 1 2 3 4 5 6 7 8 9 10 Notice that we are using the '<' operator to see if x is LESS THAN certain values - in this case 6 or 10. Other common comparison operators would be greater than '>', less than or equal to '<=', greater than or equal to '>=', and not equal '!='. Running this example would produce: Enter something end => FROG You typed FROG Enter something end => bird You typed bird Enter something end => 42 You typed 42 Enter something end => QUIT You typed QUIT Enter something end or 'quit' to or 'quit' to or 'quit' to Finally, we'll look at a simple example of the while statement. The while statement allows you to create This is exactly what we wanted to see. Fig.1 (above right) is a similar example that is a bit more complicated, but still simple. full circle magazine #28 or 'quit' to or 'quit' to One more quick example before we leave for this month. Let's say you want to check to see if a user is allowed to access your program. While this example is not the best way to do this task, it's a good way to show some things that we've already learned. Basically, we will ask the user for their name and a password, compare them with information that we coded inside the program, and then make a decision based on what we find. 
We will use two lists ‒ one to hold the allowed users and contents ^ 10 PROGRAM IN PYTHON - PART 2 one to hold the passwords. Then we'll use raw_input to get the information from the user, and finally the if/elif/else statements to check and decide if the user is allowed. Remember, this is not the best way to do this. We'll examine other ways in later articles. Our code is shown in the box to the right. Save this as 'password_test.py' and run it with various inputs. The only thing that we haven't discussed yet is in the list checking routine starting with 'if usrname in users:'. What we are doing is checking to see if the user's name that was entered is in the list. If it is, we get the position of the user's name in the list users. Then we use users.index(usrname) to get the position in the users list so we can pull the password, stored at the same position in the passwords list. For example, John is at position 1 in the users list. His password, 'dog' is at position 1 of the passwords list. That way we can match the two. Should be #----------------------------------------------#password_test.py # example of if/else, lists, assignments,raw_input, # comments and evaluations #----------------------------------------------# Assign the users and passwords users = ['Fred','John','Steve','Ann','Mary'] passwords = ['access','dog','12345','kids','qwerty'] #----------------------------------------------# Get username and password usrname = raw_input('Enter your username => ') pwd = raw_input('Enter your password => ') #----------------------------------------------# Check to see if user is in the list if usrname in users: position = users.index(usrname) #Get the position in the list of the users if pwd == passwords[position]: #Find the password at position print 'Hi there, %s. Access granted.' % usrname else: print 'Password incorrect. Access denied.' else: print "Sorry...I don't recognize you. Access denied." pretty easy to understand at this point. 
is owner of ,a consulting company in Aurora, Colorado, and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family. full circle magazine #28 11 contents ^ HOW-TO FCM#27-28 - Python Parts 1-2 Program In Python - Part 3 tell Python to do this. Line seven sets up a 'for' loop to print 14 random numbers. Line eight uses the randint() function to print a random integer between 1 and 10. Notice we must tell Python what module the function comes from. We do this by saying (in this case) random.randint. Why even create modules? Well, if every possible function were included directly into Python, not only would Python become absolutely huge and slow, but bug fixing would be a nightmare. By using modules, we can segment the code into groups that are specific to a certain need. If, for example, you have no need to use database functionality, you don't need to know that there is a module for SQLite. However, when you need it, it's already there. (In fact, we'll be 7 #======================================= # random_example.py # Module example using the random module #======================================= import random # print 14 random integers for cntr in range(1,15): print random.randint(1,10) Dev Graphics Internet M/media System CD/DVD HDD USB Drive Laptop Wireless n the last article, we learned about lists, literal substitution, comments, equate versus assignment, if statements and while statements. I promised you that in this part we would learn about modules and functions. So let's get started. I those that come with Python, or use modules that others have created. Python itself comes with hundreds of various modules that make your programming easier. A list of the global modules that come with Python can be found at x.html. Some modules are operating system specific, but most are totally cross platform (can be used the same way in Linux, Mac and Microsoft Windows). 
Modules are a way to extend your Python programming. You can create your own, or use modules that others have created. To be able to use an external module, you must import it into your program. One of the modules that comes with Python is called 'random'. This module allows you to generate pseudo-random numbers, and it is the one we used in the example shown earlier.

Let's examine each line of code. The first four lines are comments. We discussed them in the last article. Line five tells Python to use the random module. We have to explicitly tell Python to do this.

Once you really get started in Python programming, you will probably make your own modules, so you can use the code you've already written over and over again without having to re-type it. If you need to change something in that group of code, you can, with very little risk of breaking the code in your main program. There are limits to this, and we will delve into them later on.

Now, when we used the 'import random' statement earlier, we were telling Python to give us access to every function within the random module. If, however, we only needed to use the randint() function, we can re-work the import statement like this:

from random import randint

Now when we call our function, we don't have to use the 'random.' identifier. So, our code changes to:

from random import randint

# print 14 random integers
for cntr in range(1,15):
    print randint(1,10)

Now let's say we wanted to take two numbers: add them, then multiply them, and then subtract them, displaying the numbers and results each time. To make matters worse, we have to do that three times with three sets of numbers. Our silly example would then look like the code shown below.
Functions

When we imported the random module, we used its randint() function. A function is a block of code that is designed to be called, usually more than once, which makes code easier to maintain and keeps us from typing the same thing over and over and over. As a very general and gross statement, any time you have to write the same code more than once or twice, that code is a good candidate for a function. While the following two examples are silly, they make good statements about using functions.

#silly example
print 'Adding the two numbers %d and %d = %d' % (1,2,1+2)
print 'Multiplying the two numbers %d and %d = %d' % (1,2,1*2)
print 'Subtracting the two numbers %d and %d = %d' % (1,2,1-2)
print 'Adding the two numbers %d and %d = %d' % (1,4,1+4)
print 'Multiplying the two numbers %d and %d = %d' % (1,4,1*4)
print 'Subtracting the two numbers %d and %d = %d' % (1,4,1-4)
print 'Adding the two numbers %d and %d = %d' % (10,5,10+5)
print 'Multiplying the two numbers %d and %d = %d' % (10,5,10*5)
print 'Subtracting the two numbers %d and %d = %d' % (10,5,10-5)

Not only is this a lot of typing, it lends itself to errors, either from typing or from having to change something later on. Instead, we are going to create a function called 'DoTwo' that takes the two numbers and does the math, printing the output each time. We start with the 'def' keyword, which says that we are going to define the function. After 'def', we add the name we select for the function, and then a list of parameters (if any) in parentheses. This line is then closed by a colon (:). The code in the function is indented. We call our function by using the function name and putting the parameters after it. Our improved silly example (#2) is shown below; as you can see, there's a lot less typing involved, 8 lines instead of 12, and if we need to change something in our function, we can do it without causing too many issues to our main program.

#silly example 2...still silly, but better
def DoTwo(num1,num2):
    print 'Adding the two numbers %d and %d = %d' % (num1,num2,num1+num2)
    print 'Multiplying the two numbers %d and %d = %d' % (num1,num2,num1*num2)
    print 'Subtracting the two numbers %d and %d = %d' % (num1,num2,num1-num2)
    print '\n'

DoTwo(1,2)
DoTwo(1,4)
DoTwo(10,5)

Here is another example of a function. Consider the following requirements: we want to create a program that will print out a list of purchased items in a pretty format. The cost of each item, and the total of all items, will be formatted as dollars and cents. The width of the printout must be variable, and the widths of the left and right values must be variable as well. We will use 3 functions to do this task: one prints the top and bottom line, one prints the item detail lines (including the total line), and one prints the separator line.

Luckily, there are a number of things in Python that will make this very simple. If you recall, we once printed a string multiplied by 4, and it returned four copies of the same string. We can use that to our benefit. To print our top or bottom line, we can take the desired width, subtract two for the two '+' characters, and use '=' * (width-2). To make things even easier, we will use variable substitution to put all these items on one line, so our string to print would be coded as '%s%s%s' % ('+',('=' * (width-2)),'+'). Now, we could have the routine print this directly, but instead we will use the return keyword to send the generated string back to the calling line. We'll call our function 'TopOrBottom', and the code for this function looks like this:

def TopOrBottom(width):
    # width is total width of returned line
    return '%s%s%s' % ('+',('=' * (width-2)),'+')

We could also modify the function we just made to include a parameter for the character to use in the middle of the pluses. Let's do that. We can still call it TopOrBottom.
def TopOrBottom(character,width):
    # width is total width of returned line
    # character is the character to be placed between the '+' characters
    return '%s%s%s' % ('+',(character * (width-2)),'+')

We could leave out the comments, but it's nice to be able to tell at a glance what each parameter is. To call the earlier version, we would say print TopOrBottom(40), or whatever width we wish the line to be. With the character parameter added, this one function takes care of two of the lines in our target output and, using a '-' character, the separator line as well:

+===============================+
| Item 1                   X.XX |
| Item 2                   X.XX |
|-------------------------------|
| Total                    X.XX |
+===============================+

So we only need one new function, for the item detail lines. Let's call the new function 'Fmt'. We'll pass it 4 parameter values, as follows: the value to print on the left; the width of this "column"; the value to print on the right (which should be a floating-point value); and the width of that "column".

The first task is to format the information for the right side. Since we want the value to represent dollars and cents, we can use a feature of variable substitution that prints the value as a floating-point number with a fixed number of places to the right of the decimal point. The format is '%.2f', and we will assign the result to a variable called part2, so our code line would be part2 = '%.2f' % val2. We can also use a pair of functions built into Python strings, called ljust and rjust. Ljust will left-justify the string, padding the right side with whatever character you want. Rjust does the same thing, except the padding goes on the left side.

Now, you can see where comments come in handy. Remember, we are returning the generated string, so we have to have something to receive it back when we make the call. Instead of assigning it to another string, we'll just print it. Here's the calling line.
print TopOrBottom('=',40)

So now, not only have we taken care of three of the lines, we've reduced the number of routines that we need from 3 down to 2, and only the center part of the printout is left to deal with. Now for the neat bit. Using substitutions, we throw together one big string and return it to the calling code. Here is our next line:

return '%s%s%s%s' % ('| ',val1.ljust(leftbit-2,' '),part2.rjust(rightbit-2,' '),' |')

While we should really do some error checking, you can use that as something to play with on your own. So... our Fmt function is really only two lines of code outside of the definition line and any comments. We can call it like this:

print Fmt('Item 1',30,item1,10)

While this looks rather daunting at first, let's dissect it and see just how easy it is:

- return sends our created string back to the calling code.
- '%s%s%s%s' sticks 4 values into the string; each %s is a place holder.
- % starts the variable list.
- '| ' prints that literal.
- val1.ljust(leftbit-2,' ') takes the variable val1 that we were passed, and left-justifies it, padded with spaces, to (leftbit-2) characters. We subtract 2 to allow for the '| ' on the left side.
- part2.rjust(rightbit-2,' ') right-justifies the formatted price string in (rightbit-2) spaces.
- ' |' finishes the string.

That's all there is to it. Again, we could assign the return value to another string, but we can just print it. Notice that we are sending 30 for the width of the left bit and 10 for the width of the right. That equals the 40 that we sent to our TopOrBottom routine earlier. While this is a very simple example, it should give you a good idea of why and how to use functions. So, fire up your editor and type in the code below. Your output should look something like this:

+======================================+
| Item 1                          3.00 |
| Item 2                         15.00 |
+--------------------------------------+
| Total                          18.00 |
+======================================+
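For reference, the two helpers discussed above, plus the item-list totals idea that comes a little later in this part, can be sketched in Python 3 (the article's own listing, which follows, is Python 2). The lowercased names and the receipt() wrapper are mine, added so the whole printout can be built and inspected as a list of lines.

```python
# Python 3 sketch of the article's TopOrBottom and Fmt helpers,
# plus a wrapper that builds the whole receipt from an item list.

def top_or_bottom(character, width):
    # '+' at each end, with the fill character repeated between them
    return '%s%s%s' % ('+', character * (width - 2), '+')

def fmt(val1, leftbit, val2, rightbit):
    # left-justify the item name, right-justify the dollars-and-cents price
    part2 = '%.2f' % val2
    return '%s%s%s%s' % ('| ', val1.ljust(leftbit - 2), part2.rjust(rightbit - 2), ' |')

def receipt(items, leftbit=30, rightbit=10):
    # build the whole printout as a list of lines and return it
    width = leftbit + rightbit
    lines = [top_or_bottom('=', width)]
    total = 0
    for name, cost in items:
        lines.append(fmt(name, leftbit, cost, rightbit))
        total += cost
    lines.append(top_or_bottom('-', width))
    lines.append(fmt('Total', leftbit, total, rightbit))
    lines.append(top_or_bottom('=', width))
    return lines

for line in receipt([('Item 1', 3.00), ('Item 2', 15.00)]):
    print(line)
```

Every line comes out exactly leftbit + rightbit characters wide, which is why the article keeps passing 30 and 10 alongside the 40 given to TopOrBottom.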
#pprint1.py
#Example of semi-useful functions

def TopOrBottom(character,width):
    # width is total width of returned line
    return '%s%s%s' % ('+',(character * (width-2)),'+')

def Fmt(val1,leftbit,val2,rightbit):
    # prints two values padded with spaces
    # val1 is thing to print on left, val2 is thing to print on right
    # leftbit is width of left portion, rightbit is width of right portion
    part2 = '%.2f' % val2
    return '%s%s%s%s' % ('| ',val1.ljust(leftbit-2,' '),part2.rjust(rightbit-2,' '),' |')

# Define the prices of each item
item1 = 3.00
item2 = 15.00

# Now print everything out...
print TopOrBottom('=',40)
print Fmt('Item 1',30,item1,10)
print Fmt('Item 2',30,item2,10)
print TopOrBottom('-',40)
print Fmt('Total',30,item1+item2,10)
print TopOrBottom('=',40)

Save the code as 'pprint1.py' and run it.

Now, let's extend this out a bit and learn more about lists. Remember back in Part 2 when we first discussed lists? Well, one thing that I didn't tell you is that a list can contain just about anything, including other lists. Let's define a new list in our program, called itms, and fill it like this:

itms = [['Soda',1.45],['Candy',.75],['Bread',1.95],['Milk',2.59]]

If we were to access this as a normal list, we would use print itms[0]. However, what we would get back is ['Soda',1.45], which is not really what we were looking for under normal circumstances. We want to access each item in that first list. So we would use print itms[0][0] to get 'Soda', and itms[0][1] to get the cost, 1.45.

So, now we have 4 items that have been purchased, and we want to use that information in our pretty-print routine. The only thing we have to change is at the bottom of the program. Save the last program as 'pprint2.py', then comment out the two itemx definitions and insert the list we had above. It should look
So we would use 'print itms[0][0]' to get 'Soda' and [0][1] to get the cost or 1.45. So, now we have 4 items that have been purchased and we want to use that information in our pretty print routine. The only thing we have to change is at the bottom of the program. Save the last program as 'pprint2.py', then comment out the two itemx definitions and insert the list we had above. It should look Next, remove all the lines that call Fmt(). Next add the following lines (with #NEW LINE at the end) to make your code look like the text shown right. wild and crazy, you could add a line for tax as well. Handle it close to the same way we did the total line, but use (total * .086) as the cost. I set up a counter variable for loop that cycles through the print Fmt('Tax:',30,total*.086,10) list for each item there. Notice that I've also added a variable If you would like to, you can called total. We set the total to add more items to the list and 0 before we go into our for see how it works. loop. Then as we print each item sold, we add the cost to That's it for this time. Next our total. Finally, we print the time we'll concentrate on total out right after the classes. separator line. Save your program and run it. You +======================================+ should see | Soda 1.45 | something like | Candy 0.75 | | Bread 1.95 | the text shown | Milk 2.59 | below. If you wanted to get +--------------------------------------+ | Total 6.74 | +======================================+ is owner of ,a consulting company in Aurora, Colorado, and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family. contents ^ full circle magazine #29 11 HOW-TO FCM#27-29 - Python Parts 1-3 Program In Python - Part 4 Dev Graphics Internet M/media System class is a method we use to implement this. For example, we have three dogs at home. A Beagle, a Lab and a German CD/DVD HDD USB Drive Laptop Wireless Shepherd/Blue Heeler mix. 
All three are dogs, but all are different. There are common attributes among the three of them, but each dog has separate attributes as well. For example, the Beagle is short, chubby, brown, and grumpy. The Lab is medium-sized, black, and very laid back. The Shepherd/Heeler mix is tall, skinny, black, and more than a bit crazy. Right away, some attributes are obvious: short, medium-sized, and tall are all attributes of height; grumpy, laid back, and crazy are all attributes of mood. On the behavior side of things, we can consider eating, sleeping, playing, and other actions.

I promised last time that we would discuss classes, so that's what we'll concentrate on. What are classes, and what good are they? A class is a way of constructing objects. An object is simply a way of handling attributes and behaviors as a group. I know this sounds confusing, but think of it this way: an object is a way to model something in the real world.

All three of our dogs are of the class 'Dog'. Going back to the attributes that we used to describe each above, we have things such as Dog.Name, Dog.Height, Dog.Build (skinny, chubby, etc.), and Dog.Color. We also have behaviors such as Dog.Bark, Dog.Eat, Dog.Sleep, and so on. As I said before, each of the dogs is a different breed. Each breed would be a sub-class of the class Dog. In a diagram, it would look like this:

      /-- Beagle
Dog --|-- Lab
      \-- Shepherd/Heeler

Each sub-class inherits all of the attributes of the Dog class. Therefore, if we create an instance of Beagle, it gets all of the attributes from its parent class, Dog:

Beagle = Dog()
Beagle.Name = 'Archie'
Beagle.Height = 'Short'
Beagle.Build = 'Chubby'
Beagle.Color = 'Brown'

Starting to make sense? So, let's create our gross Dog class. We'll start with the keyword "class" and the name of our class. Before we go any further in our code, notice the function that we have defined here.
The function __init__ (two underscores + 'init' + two underscores) is an initialization function that works with any class. As soon as we call our class in code, this routine is run. In this case, we have set up a number of parameters to hold some basic information about our class: we have a name, color, height, build, mood, age, and a couple of variables, Hungry and Tired. We'll revisit these in a little bit. Now let's add some more code.

The first line creates an instance of our dog class called Beagle. This is called instantiation. When we did this, we also passed certain information to the instance of the class, such as the Beagle's name, color, and so on. The next four lines simply query the Beagle object and get the information back in return. This is UNINDENTED code that resides outside of our class, the code that uses our class:

print Beagle.name
print Beagle.color
print Beagle.mood
print Beagle.Hungry

Time for more code. Add the Eat and Sleep methods into the class, after the __init__ function. Now we can call them with Beagle.Eat() or Beagle.Sleep(). Let's add one more method; we'll call it Bark. This one I've made more flexible: depending on the mood of the dog, the bark will change. So, when we run this, we'll get:

My name is Archie
My color is Brown
My mood is Grumpy
I am hungry = False
Sniff Sniff...Not Hungry
Yum Yum...Num Num
GRRRRR...Woof Woof

Because we coded the class carefully, all we have to do to handle the other dogs is create two more instances of our dog class:

Lab = Dog('Nina','Black','Medium','Heavy','Laid Back',7)
Heeler = Dog('Bear','Black','Tall','Skinny','Crazy',9)

print 'My Name is %s' % Lab.name
print 'My color is %s' % Lab.color
print 'My Mood is %s' % Lab.mood
print 'I am hungry = %s' % Lab.Hungry
Lab.Bark()
Heeler.Bark()

Notice that I created the instances of both of the dogs before I did the print statements.
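The class built up over these steps can be condensed into a Python 3 sketch. The article's code boxes use Python 2 print statements, so the behaviours below return their strings instead of printing them; also, the Beagle's age is not given in this excerpt, so the 12 used here is a placeholder.

```python
# Python 3 condensation of the article's Dog class. Eat() and Bark()
# RETURN their strings (the article prints them); Archie's age of 12
# is a placeholder, not from the article.
class Dog:
    def __init__(self, name, color, height, build, mood, age):
        self.name = name
        self.color = color
        self.height = height
        self.build = build
        self.mood = mood
        self.age = age
        self.Hungry = False
        self.Tired = False

    def Eat(self):
        # eating satisfies hunger, matching the article's output lines
        if self.Hungry:
            self.Hungry = False
            return 'Yum Yum...Num Num'
        return 'Sniff Sniff...Not Hungry'

    def Bark(self):
        # the bark changes with the dog's mood
        if self.mood == 'Grumpy':
            return 'GRRRRR...Woof Woof'
        elif self.mood == 'Laid Back':
            return 'Yawn...ok...Woof'
        elif self.mood == 'Crazy':
            return 'Bark Bark Bark Bark Bark Bark Bark'
        return 'Woof'

Beagle = Dog('Archie', 'Brown', 'Short', 'Chubby', 'Grumpy', 12)
print('My name is %s' % Beagle.name)
print(Beagle.Eat())
Beagle.Hungry = True
print(Beagle.Eat())
print(Beagle.Bark())
```

Returning the strings rather than printing them makes the methods easy to test, while keeping the mood-dependent behaviour the article describes.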
That's not a problem, since I "defined" the instances before I called any of the methods. That takes care of the grumpy old Beagle, and, as I said earlier, I have 3 dogs. Here is the full output of our dog class program:

My name is Archie
My color is Brown
My mood is Grumpy
I am hungry = False
Sniff Sniff...Not Hungry
Yum Yum...Num Num
GRRRRR...Woof Woof
My Name is Nina
While there are many options available here, we will discuss only two of them. They are the 5th and 6th buttons from the left: wx.App and wx.Frame. Wx.App allows us to create a complete application beginning with two autogenerated files. One is a frame file and the other is an application file. This is the method I prefer to use. The wx.Frame is used to add more frames to our application and/or create a standalone app from a single source file. We'll discuss this later. Now look at the Containers/Layout tab. Many goodies here. The ones you'll use most are the wx.Panel (first on the left) and the sizers (2,3,4,5 and 6 from the right). Under Basic Controls, you'll find static text controls (labels), text boxes, check boxes, radio buttons, and more. Under Buttons, you'll find various forms of buttons. List Controls has data grids and other list boxes. Let's jump to Utilities where you'll find timers and menu items. Here are a few things to remember as we are getting ready for our first app. There are a few bugs in the Linux version. One is that SOME controls won't allow you to move them in the designer. Use the <Ctrl>+Arrow keys to contents ^ need to get some ground work covered before we can really talk about trying to program. FIRST you need to install Boa Constructor and wxPython. Use Synaptic and select both wxPython and Boa Constructor. Once installed, you should find Boa under Applications|Programming\Boa Constructor. Go ahead and start it up. It will make things a bit easier. Once the application starts, you will see three different windows (or frames): one across the top, and two across the bottom. You might have to resize and move them a bit, but get things to a point where it looks something like this: Dev Graphics Internet M/media System CD/DVD HDD USB Drive Laptop Wireless I f you are like me, you will HATE the first part of this installation. 
Another one you'll find when you try the tutorials that come with Boa Constructor: when placing a panel control, it's hard to see. Look for the little boxes (I'll show you this soon). You can also use the Objs tab on the Inspector frame and select it that way. Now, I HATE it when an author tells me that I have to double-read every word in their book/chapter/article, because I just KNOW it will be a snore, even when I know it's for my own good, and I will end up doing it anyway. So consider yourself warned: PLEASE read the boring stuff carefully. We'll get to the fun stuff soon.
Click the 'Buttons' tab on the Tool frame and then click the first button on the left, the wx.Button. Then add it somewhere close to the middle of your frame. You'll have something that looks close to this: Okay, here we go. Under the 'New' tab of the tool frame, select wx.App (5th button from the left). This will create two new tabs in the editor frame: one named “*(App1)*”, the other named “*(Frame1)*”. Believe it or not, the VERY first thing we want to do is save our two new files, starting with the Frame1 file. The save button is the 5th button from the left in the Editor Frame. A “Save As” frame will pop up asking you where you want to save the file and what you want to call it. Create a folder in your home folder called GuiTests, and save the file as “Frame1.py”. Notice that the “*(Frame1)*” tab now shows as “Frame1”. (The “*(“ says that the file needs to be saved.) Now do the same thing with the App1 tab. This is a blank canvas for you to put whatever controls you need to (within reason). The first thing we want to do is place a wx.panel control. Almost everything I have read full circle magazine #31 Remember when I warned you about the bugs? Well, this is one of them. Don't worry. See the 8 little black squares? That's the limits of the panel. If you wanted, you could click and drag one of them to resize the panel, but for this project what we want is to make the panel cover the entire frame. Simply resize the FRAME just a little bit at this point. Now we have a panel to put our other controls on. Move the frame you are working on until you contents ^ PROGRAM IN PYTHON - PART 5 Notice that there are 8 small squares around it just like the panel. These are resize handles. It also shows us what control is currently selected. In order to move this closer to the center of the frame, hold down the Control key (Ctrl) and while that's being pressed, use the arrow keys to move it where you want it. Now, let's look at the Inspector frame. There are four tabs. 
Click on the 'Constr' tab. Here we can change the label, name, position, size and style. For now, let's change the name to 'btnShowDialog' and the Label property to 'Click Me'. tell the button to do anything. For that, we need to set up an event to happen, or fire, when the user clicks our button. Click on the X in the upper-right corner to finish running the frame. Next, go back to the designer, select the button and go into the 'Evts' tab in the inspector frame. Click on ButtonEvent and then double click on the wx.EVT_BUTTON text that shows up, and notice that in the window below we get a button event called 'OnBtnShowDialogButton'. Post and save. this is a boa file. It's ignored by the Python compiler, but not by Boa. The next line imports wxPython. Now jump down to the class definition. At the top, there's the __init_ctrls method. Notice the comment just under the definition line. Don't edit the code in this section. If you do, you will be sorry. Any place BELOW that routine should be safe. In this routine, you will find the definitions of each control on our frame. Next, look at the __init__ routine. Here you can put any calls to initializing code. Finally, the OnBtnShowDialogButton routine. This is where we will put our code that will do the work when the user clicks the button. Notice that there is currently an event.Skip() line there. Simply stated, this says just exit when this event fires. Now, what we are going to do is call a message box to pop up with some text. This is a common thing for programmers to do to allow the user to know about something an error, or the fact that a contents ^ Now, let's skip over all the rest of that tab and go to the Objs tab. This tab shows all the controls you have and their parent/child relationships. As you can see, the button is a child of panel1, which is a child of Frame1. Post (check button) and save your changes. 
Go back to the designer once again, and notice that (assuming you still have the 'Objs' tab in the inspector frame selected), Frame1 is now selected. This is good because it's what we want. Go back to the 'Constr' tab, and change the title from 'Frame1' to 'Our First GUI'. Post and save one more time. Now let's run our app. Click the yellow Run button on the Editor frame. Click all you want on the button, but nothing will happen. Why? Well, we didn't full circle magazine #31 Before we go any further, let's see what we've got in the way of code (page 11). The first line is a comment that tells Boa Constructor that PROGRAM IN PYTHON - PART 5 process has finished. In this case, we will be calling the wx.MessageBox built in routine. The routine is called with two parameters. The first is the text we wish to send in the message box and the second is the title for the message box. Comment out the line event.Skip() and put in the following line. wx.MessageBox('You Clicked the button', 'Info') Understand here that this is just about the simplest way to call the messagebox routine. You can have more parameters as well. Here's a quick rundown on how to change the way the icons work on the message box (more next time). 
#Boa:Frame:Frame1 import wx def create(parent): return Frame1(parent) [wxID_FRAME1, wxID_FRAME1BTNSHOWDIALOG, wxID_FRAME1PANEL1, ] = [wx.NewId() for _init_ctrls in range(3)] class Frame1(wx.Frame): def _init_ctrls(self, prnt): # generated method, don't edit wx.Frame.__init__(self, id=wxID_FRAME1, name='', parent=prnt, pos=wx.Point(543, 330), size=wx.Size(458, 253), style=wx.DEFAULT_FRAME_STYLE, title=u'Our First GUI') self.SetClientSize(wx.Size(458, 253)) self.panel1 = wx.Panel(id=wxID_FRAME1PANEL1, name='panel1', parent=self, pos=wx.Point(0, 0), size=wx.Size(458, 253), style=wx.TAB_TRAVERSAL) self.btnShowDialog = wx.Button(id=wxID_FRAME1BTNSHOWDIALOG, label=u'Click Me', name=u'btnShowDialog', parent=self.panel1, pos=wx.Point(185, 99), size=wx.Size(85, 32), style=0) self.btnShowDialog.Bind(wx.EVT_BUTTON, self.OnBtnShowDialogButton, id=wxID_FRAME1BTNSHOWDIALOG) def __init__(self, parent): self._init_ctrls(parent) def OnBtnShowDialogButton(self, event): event.Skip() Save and click the Run button (yellow arrow). You should see something like this: - Show a question icon - Show an alert icon And when you click the button you should see something like this: - Show an error icon Show an info icon to use that suited the situation. There are also various button arrangement assignments which we'll talk about next time. is owner of ,a consulting company in Aurora, Colorado, and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family. The way to write this would be wx.MessageBox('You Clicked the button', 'Info', wx.ICON_INFORMATION) or whatever icon you wanted full circle magazine #31 contents ^ HOW-TO FCM#27-31 - Python Parts 1-5 Program In Python - Part 6 Now click on the Props tab. Click on the Centered property and set it to wx.BOTH. Click the post check-mark and save your work. Now run your application by clicking on the button with the yellow arrow. 
PROGRAM IN PYTHON - PART 6
full circle magazine #32

I hope you've been playing with Boa Constructor since our last meeting. First we will have a very simple program that will show one frame, then allow you to click on a button that will pop up another frame. Last time we did a message box. This time we will do a totally separate frame. This can be helpful when doing an application with multiple frames or windows. So... here we go...

Start up Boa Constructor and close all tabs in the Editor frame, with the exception of Shell and Explorer, by using the Ctrl+W key combination. This ensures that we will be starting totally fresh. Now create a new project by clicking on the wx.App button (see last time's article if needed). Before you do anything else, save Frame1 as "FrameMain.py" and then save App1 as "Gui2.py". This is important. With the Gui2 tab selected in the Editor frame, move to the Toolbar frame, go back to the New tab, and add another frame to our project by clicking on wx.Frame (which is right next to the wx.App button). Make sure that the Application tab shows both frames under the Module column. Now go back to the new frame and save it as "FrameSecond.py".

Next, open FrameMain in the designer. Add a wx.Panel to the frame. Resize it a bit to make the panel cover the frame. Next we are going to change some properties - we didn't do this last time. In the Inspector frame, make sure that the Constr tab is selected, and set the title to "Main Frame" and the name to "FrameMain". We'll discuss naming conventions in a bit. Set the size to 400x340 by clicking on the Size check box. This drops down to show height and width. Height should be 400 and width should be 340. Post, save, and run to see your changes.

Our application shows up in the center of the screen with the title of "Main Frame". Now close it by clicking on the "X" in the upper right corner of the app. Bring FrameMain back into the designer. Add two wx.Buttons to the frame, one above the other, and close to the center of the frame. Select the top button, name it "btnShowNew", and set the label to "Show the other frame" in the Constr tab of the Inspector frame. Use the Shift+Arrow combination to resize the button so that all the text is visible, and then use the Ctrl+Arrow combination to move it back to the center of the frame. Select the bottom button, name it "btnExit", and set the label to "Exit". Post, save, and run to see your changes.

Exit our app and go back to the designer. We are going to add button click events. Select the top button, and, in the Inspector frame, select the Evts tab. Click on ButtonEvent, then double click on wx.EVT_BUTTON. Notice you should have "OnBtnShowNewButton" below. Next, select the btnExit button. Do the same thing, making sure it shows "OnBtnExitButton". Post and save. Next go to the Editor frame and scroll down to the bottom. Make sure you have the two event methods that we just created. Here's what the frame should look like so far.

Now it's time to deal with our other frame. Open FrameSecond in the designer. Set the name to "FrameSecond", and the title to "Second Frame". Set centering to wx.BOTH. Add a wx.Button, and center it towards the lower part of the frame. Set the name to "btnFSExit", and change the label to "Exit". Set up a button event for it. Next add a wx.StaticText control in the upper portion of the frame, close to the middle. Name it "stHiThere", set the label to "Hi there...I'm the second form!", and set the font to Sans, 14 point, with the weight set to wxBOLD. Now reset the position so the text is centered in the form right and left. You can do this by unchecking the Position attribute, and using the X position for right and left, and Y for up and down, until you are happy. Post and save.

Now that we have designed our forms, we are going to create the "glue" that will tie all this together. In the Editor frame, click on the Gui2 tab, then, below that, click on the Source tab. Under the line that says "import FrameMain", add "import FrameSecond". Save your changes. Next, select the "FrameMain" tab. Under the line that says "import wx", add a line that says "import FrameSecond". Next scroll down, and find the line that says "def __init__(self, parent):". Add a line after the "self._init_ctrls(parent)" line that says "self.Fs = FrameSecond.FrameSecond(self)". Now under the "def OnBtnShowNewButton(self, event):" event, comment out "event.Skip()" and add the following two lines:

    self.Fs.Show()
    self.Hide()

Finally, under the "OnBtnExitButton" method, comment out "event.Skip()", and add a line that says "self.Close()".

What does all this do? The first thing we did was to make sure that the application knew we were going to have two forms in our app. That's why we imported both FrameMain and FrameSecond in the Gui2 file. Next we imported a reference to FrameSecond into FrameMain so we can call it later. We initialized it in the __init__ method. And in the OnBtnShowNewButton event we told it that, when the button is clicked, we want to first show the second frame, and then hide the main frame. Finally, we have the statement to close the application when the Exit button is clicked.

Now, switch to the code for FrameSecond. The changes here are relatively small. Under the __init__ method, add a line that says "self.parent = parent", which adds a variable self.parent. Finally, under the click event for btnFSExit, comment out the "event.Skip()" line, and add the following two lines:

    self.parent.Show()
    self.Hide()

Remember we hid the main frame when we showed the second frame, so we have to reshow it. Finally we hide the second frame. Save your changes. Here is all the code for you to verify everything.

Gui2 code:

#!/usr/bin/env python
#Boa:App:BoaApp
import wx
import FrameMain
import FrameSecond

modules ={u'FrameMain': [1, 'Main frame of Application', u'FrameMain.py'],
 u'FrameSecond': [0, '', u'FrameSecond.py']}

class BoaApp(wx.App):
    def OnInit(self):
        self.main = FrameMain.create(None)
        self.main.Show()
        self.SetTopWindow(self.main)
        return True

def main():
    application = BoaApp(0)
    application.MainLoop()

if __name__ == '__main__':
    main()

FrameMain code:

#Boa:Frame:FrameMain
import wx
import FrameSecond

def create(parent):
    return FrameMain(parent)

[wxID_FRAMEMAIN, wxID_FRAMEMAINBTNEXIT, wxID_FRAMEMAINBTNSHOWNEW,
 wxID_FRAMEMAINPANEL1,
] = [wx.NewId() for _init_ctrls in range(4)]

class FrameMain(wx.Frame):
    def _init_ctrls(self, prnt):
        # generated method, don't edit
        wx.Frame.__init__(self, id=wxID_FRAMEMAIN, name=u'FrameMain',
              parent=prnt, pos=wx.Point(846, 177), size=wx.Size(400, 340),
              style=wx.DEFAULT_FRAME_STYLE, title=u'Main Frame')
        self.SetClientSize(wx.Size(400, 340))
        self.Center(wx.BOTH)

        self.panel1 = wx.Panel(id=wxID_FRAMEMAINPANEL1, name='panel1',
              parent=self, pos=wx.Point(0, 0), size=wx.Size(400, 340),
              style=wx.TAB_TRAVERSAL)

        self.btnShowNew = wx.Button(id=wxID_FRAMEMAINBTNSHOWNEW,
              label=u'Show the other frame', name=u'btnShowNew',
              parent=self.panel1, pos=wx.Point(120, 103),
              size=wx.Size(168, 29), style=0)
        self.btnShowNew.SetBackgroundColour(wx.Colour(25, 175, 23))
        self.btnShowNew.Bind(wx.EVT_BUTTON, self.OnBtnShowNewButton,
              id=wxID_FRAMEMAINBTNSHOWNEW)

        self.btnExit = wx.Button(id=wxID_FRAMEMAINBTNEXIT, label=u'Exit',
              name=u'btnExit', parent=self.panel1, pos=wx.Point(162, 191),
              size=wx.Size(85, 29), style=0)
        self.btnExit.SetBackgroundColour(wx.Colour(225, 218, 91))
        self.btnExit.Bind(wx.EVT_BUTTON, self.OnBtnExitButton,
              id=wxID_FRAMEMAINBTNEXIT)

    def __init__(self, parent):
        self._init_ctrls(parent)
        self.Fs = FrameSecond.FrameSecond(self)

    def OnBtnShowNewButton(self, event):
        #event.Skip()
        self.Fs.Show()
        self.Hide()

    def OnBtnExitButton(self, event):
        #event.Skip()
        self.Close()

FrameSecond code:

#Boa:Frame:FrameSecond
import wx

def create(parent):
    return FrameSecond(parent)

[wxID_FRAMESECOND, wxID_FRAMESECONDBTNFSEXIT, wxID_FRAMESECONDPANEL1,
 wxID_FRAMESECONDSTATICTEXT1,
] = [wx.NewId() for _init_ctrls in range(4)]

class FrameSecond(wx.Frame):
    def _init_ctrls(self, prnt):
        # generated method, don't edit
        wx.Frame.__init__(self, id=wxID_FRAMESECOND, name=u'FrameSecond',
              parent=prnt, pos=wx.Point(849, 457), size=wx.Size(419, 236),
              style=wx.DEFAULT_FRAME_STYLE, title=u'Second Frame')
        self.SetClientSize(wx.Size(419, 236))
        self.Center(wx.BOTH)
        self.SetBackgroundStyle(wx.BG_STYLE_COLOUR)

        self.panel1 = wx.Panel(id=wxID_FRAMESECONDPANEL1, name='panel1',
              parent=self, pos=wx.Point(0, 0), size=wx.Size(419, 236),
              style=wx.TAB_TRAVERSAL)

        self.btnFSExit = wx.Button(id=wxID_FRAMESECONDBTNFSEXIT,
              label=u'Exit', name=u'btnFSExit', parent=self.panel1,
              pos=wx.Point(174, 180), size=wx.Size(85, 29), style=0)
        self.btnFSExit.Bind(wx.EVT_BUTTON, self.OnBtnFSExitButton,
              id=wxID_FRAMESECONDBTNFSEXIT)

        self.staticText1 = wx.StaticText(id=wxID_FRAMESECONDSTATICTEXT1,
              label=u"Hi there...I'm the second form!", name='staticText1',
              parent=self.panel1, pos=wx.Point(45, 49),
              size=wx.Size(336, 23), style=0)
        self.staticText1.SetFont(wx.Font(14, wx.SWISS, wx.NORMAL, wx.BOLD,
              False, u'Sans'))

    def __init__(self, parent):
        self._init_ctrls(parent)
        self.parent = parent

    def OnBtnFSExitButton(self, event):
        #event.Skip()
        self.parent.Show()
        self.Hide()

Now you can run your application. If everything went right, you will be able to click on btnShowNew, and see the first frame disappear and the second frame appear. Clicking on the Exit button on the second frame will cause that frame to disappear and the main frame to re-appear. Clicking on the Exit button on the main frame will close the application.
I promised you we'd discuss naming conventions. Remember way back, we discussed commenting your code? Well, by using well-formed names for GUI controls, your code is fairly self-documenting. If you just left control names as staticText1 or button1 or whatever, when you are creating a complex frame with many controls - especially if there are a lot of text boxes or buttons - then naming them something that is meaningful is very important. It might not be too important if you are the only one who will ever see the code, but to someone coming behind you later on, the good control names will help them out considerably. Therefore, use something like the following:

Control type - Name prefix
Static text - st_
Button - btn_
Text Box - txt_
Check Box - chk_
Radio Button - rb_
Frame - Frm_ or Frame_

You can come up with your own ideas for naming conventions as you grow as a programmer, and in some instances your employer might have conventions already in place.

Next time, we will leave GUI programming aside for a bit and concentrate on database programming. Meanwhile, get and loaded on your system. You will also need and for SQLite. If you want to experiment with MySql as well, that's a good idea. All are available via Synaptic.

is owner of , a consulting company in Aurora, Colorado, and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family.

by Richard Redei

HOW-TO FCM#27-32 - Python Parts 1 - 6
Program In Python - Part 7
full circle magazine #33

Good morning Boys and Girls. It's story time. Everyone get settled and comfy. Ready? Good!

Once upon a time, the world was ruled by paper. Paper, paper everywhere. They had to make special homes for all that paper. They were called filing cabinets, and were big metal things that would take rooms and rooms and rooms at businesses to house all the paper. In each filing cabinet was something called a file folder, which attempted to organize relevant papers together. But after time, they would get over-stuffed, and fall apart when they got old or opened too many times. Using these filing cabinets properly required a college degree. It could take days to find all the papers that were in the various cabinets. Businesses suffered horribly. It was a very dark time in the history of man- and womankind.

Then one day, from the top of a mountain somewhere (I personally think it was Colorado, but I'm not sure), came a lovely fairy. This fairy was blue and silver - with beautiful wings and white hair - and was about 1 foot tall. Her name, believe it or not, was See-Quill. Isn't that a funny name? Anyway, See-Quill said that she could fix everything having to do with all the paper and filing cabinets and wasted time, if only people would believe in computers and her. She called this power a "Database". She said that the "Database" could replace the entire filing system. Some people did, and soon their lives were very happy. Some didn't, and their lives stayed the same, lost in mountains of paper.

All fairy promises, however, come with some sort of requirement. That requirement was that whoever wanted to use the power of See-Quill needed to learn a bit of a different language. It wouldn't be too difficult a language to learn. In fact, it was much like the one the people already used. It just had a different way of saying things, and you had to think about things very carefully BEFORE you said them - to use the power of See-Quill.

One day, a young boy named, curiously enough, User, came to see See-Quill. He was very impressed with her beauty, and said "See-Quill, please teach me to use your power." See-Quill said that she would. She said, "First, you have to know how your information is laid out. Show me your papers." Being a young boy, User had only a few pieces of paper. See-Quill said, "User, right now you could live with papers and file folders. However, I can get glimpses of the future, and you will someday have so many papers that they would, if placed on top of each other, be taller than you by 15 times. We should use my power." So, working together, User and See-Quill created a "database thingie" (a fairy technical term), and User lived happily ever after.

Of course, the story is not completely true. However, using databases and SQL can make our lives easier. This time, we will learn about some simple SQL queries, and how to use them in a program. Some people might think that this might not be the "correct" way or the "best" way, but it is a reasonable way. So let's begin.

Databases are like the filing cabinets in our story above. Data tables are like the file folders. The individual records in the tables are like the sheets of paper. Each piece of information is called a field. It falls together very nicely, doesn't it? You use SQL (pronounced See-Quill) statements to do things with the data. SQL stands for Structured Query Language, and is basically designed to be an easy way to use databases. In practice, however, it can become very complicated. We will keep things pretty simple for this installment.

We need to create a plan, like starting any construction project. So, think of a recipe card, which is a good thing to think about, since we are going to create a recipe database program. Around my house, recipes come in various forms: 3x5 card, 8x10 pieces of paper, napkins with the recipe scribbled on it, pages from magazines, and even stranger forms. They can be found in books, boxes, binders, and other things. However, they all pretty much have one thing in common: the format.
In almost every case, at the top you have the recipe title, and maybe how many servings it makes and where it came from. The middle contains the list of ingredients, and the bottom contains the instructions, dealing with the order that things are done in, the cooking time, and so on. We will use this general format as the template of our database project. We will break this up into two parts. We'll create the database this time, and the application to read and update the database next time. Here's an example. Let's say we have the recipe below. Notice the order we just discussed.

Spanish Rice
Serves: 4
Source: Greg Walters

1 cup parboiled Rice (uncooked)
1 pound Hamburger
2 cups Water
1 8 oz can Tomato Sauce
1 small Onion chopped
1 clove Garlic chopped
1 tablespoon Ground Cumin
1 teaspoon Ground Oregano
Salt and Pepper to taste
Salsa to taste

Brown hamburger. Add all other ingredients. Bring to boil. Stir, lower to simmer and cover. Cook for 20 minutes. Do not look, do not touch. Stir and serve.

Now, when we design our database, we could make it very large and have one record for everything in the recipe. That, however, would be clumsy and hard to deal with. Instead, we are going to use the recipe card as a template. One table will handle the top of the card, or the gross information about the recipe; one table will handle the middle of the card, or the ingredients information; and one table will handle the bottom, or the instructions.

Make sure you have installed SQLite and APSW. SQLite is a small database engine that doesn't require you to have a separate database server, which makes it ideal for our little application. Everything you learn here can be used with larger database systems like MySQL and others. The other good thing about SQLite is that it uses limited data types. These types are Text, Numeric, Blob, and Integer Primary Key. As you have learned already, text is pretty much anything. Our ingredients, instructions, and the title of our recipe are all text types - even though they have numbers in them. Numeric datatypes store numbers. These can be integer values or floating point or real values. Blobs are binary data, and can include things like pictures and other things. Integer Primary Key values are special. The SQLite database engine automatically puts in a guaranteed unique integer value for us. This will be important later on. APSW stands for Another Python SQLite Wrapper, and is a quick way to communicate with SQLite.

Now let's go over some of the ways to create our SQL statements. To obtain records from a database, you would use the SELECT statement. The format would be:

SELECT [what] FROM [which table(s)] WHERE [constraints]

So, if we want to get all the fields from the Recipes table, we would use:

SELECT * FROM Recipes

If you wish to obtain just a record by its primary key, you have to know what that value is (pkID in this instance), and we have to include a WHERE clause in the statement. We could use:

SELECT * FROM Recipes WHERE pkID = 2

Now, suppose we want to get just the name of the recipe and the number of servings it makes - for all recipes. It's easy. All you have to do is include a list of the fields that you want in the SELECT statement:

SELECT name, servings FROM Recipes

To insert records, we use the INSERT INTO command. The syntax is:

INSERT INTO [table name] (field list) VALUES (values to insert)

So, to insert a recipe into the recipe table, the command would be:

INSERT INTO Recipes (name,servings,source) VALUES ("Tacos",4,"Greg")

To delete a record, we can use:

DELETE FROM Recipes WHERE pkID = 10

Simple enough... right? Pretty much plain language. There's also an UPDATE statement, but we'll leave that for another time.

More on SELECT

In the case of our database, we have three tables, each of which can be related together by using recipeID pointing to the pkID of the recipe table. Let's say we want to get all the instructions for a given recipe. Written out with full table names, the query looks like this:

SELECT Recipes.name, Recipes.servings, Recipes.source, Instructions.instructions FROM Recipes LEFT JOIN Instructions ON (Recipes.pkid = Instructions.recipeid) WHERE Recipes.pkid = 1

However, that is a lot of typing and very redundant. We can use a method called aliasing. We can do it like this:

SELECT r.name, r.servings, r.source, i.instructions FROM Recipes r LEFT JOIN Instructions i ON (r.pkid = i.recipeid) WHERE r.pkid = 1

It's shorter and still readable.

Now we will write a small program that will create our database, create our tables, and put some simple data into the tables to have something to work with. We COULD write this into our full program, but, for this example, we will make a separate program. This is a run-once program - if you try to run it a second time, it will fail at the table creation statements. Again, we could wrap it with a try...except handler, but we'll do that another time. We start by importing the APSW wrapper:

import apsw

The next thing we need to do is create a connection to our database. It will be located in the same directory where we
These are created like this: # Opening/creating database connection=apsw.Connection("c ookbook1.db3") cursor=connection.cursor() RECIPES -----------pkID (Integer Primary Key) name (Text) source (Text) serves (Text) INSTRUCTIONS ---------------------pkID(Integer Primary Key) recipeID (Integer) instructions (Text) INGREDIENTS -------------------pkID (Integer Primary Key) recipeID (Integer) ingredients (Text) structure like this. Each column is a separate table as shown above right. Each table has a field called pkID. This is the primary key that will be unique within the table. This is important so that the data tables never have a completely duplicated record. This is an integer data type, and is automatically assigned by the database engine. Can you do without it? Yes, but you run the risk of accidentally creating a duplicated record id. In the case of the Recipes table, we will use this number as a reference for which instruction and which set of ingredients go with that recipe. We would first put the information into the database so that the name, source and number served goes into the recipe table. The pkID is full circle magazine #33 Okay - we have our connection and our cursor. Now we need to create our tables. There will be three tables in our application. One to hold the gross recipe information, one for the instructions for each recipe, and one to hold the list of the ingredients. Couldn't we do it with just one table? Well, yes we could, but, as you will see, it will make that one table very large, and will include a bunch of duplicate information. We can look at the table automatically assigned. Let's pretend that this is the very first record in our table, so the database engine would assign the value 1 to the pkID. We will use this value to relate the information in the other tables to this recipe. The instructions table is simple. It just holds the long text of the instructions, its own pkID and then a pointer to the recipe in the recipe table. 
The ingredients table is a bit more complicated in that we have one record for each ingredient, as well as its own pkID and the pointer back to our recipe table record.

So, in order to create the recipe table, we define a string variable called sql, and assign it the command to create the table:

sql = 'CREATE TABLE Recipes (pkID INTEGER PRIMARY KEY, name TEXT, servings TEXT, source TEXT)'

Next we have to tell APSW to actually do the sql command:

cursor.execute(sql)

Now we create the other tables:

sql = 'CREATE TABLE Instructions (pkID INTEGER PRIMARY KEY, instructions TEXT, recipeID NUMERIC)'
cursor.execute(sql)
sql = 'CREATE TABLE Ingredients (pkID INTEGER PRIMARY KEY, ingredients TEXT, recipeID NUMERIC)'
cursor.execute(sql)

Once we have the tables created, we will use the INSERT INTO command to enter each set of data into its proper table. Remember, the pkID is automatically entered for us, so we don't include that in the list of fields in our insert statement. Since we will be using the field names, they can be in any order, not just the order they were created in. As long as we know the names of the fields, everything will work correctly. The insert statement for our recipe table entry becomes:

INSERT INTO Recipes (name, servings, source) VALUES ("Spanish Rice",4,"Greg Walters")

Next we need to find out the value that was assigned to the pkID in the recipe table. We can do this with a simple command:

SELECT last_insert_rowid()

Why is this? Well, when we get data back from APSW, it comes back as a tuple. This is something we haven't talked about yet. The quick explanation is that a tuple is like a list, but it can't be changed. Many people use tuples rarely; others use them often; it's up to you. The bottom line is that we want to use the first value returned. We use a 'for' loop to get the value out of the tuple. Make sense? OK. Let's continue...
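The last_insert_rowid() pattern can be checked with a quick sketch - again using the standard library's sqlite3 in place of APSW. As a side note, sqlite3 also exposes the same number directly as cursor.lastrowid, so the SELECT is not strictly required there.

```python
import sqlite3

connection = sqlite3.connect(":memory:")  # throwaway database for the demo
cursor = connection.cursor()
cursor.execute("CREATE TABLE Recipes (pkID INTEGER PRIMARY KEY, name TEXT, servings TEXT, source TEXT)")

cursor.execute("INSERT INTO Recipes (name, servings, source) VALUES ('Spanish Rice', '4', 'Greg Walters')")
rowid_shortcut = cursor.lastrowid   # sqlite3's direct route to the same value

# The article's approach: query the id, loop over the returned rows
# (each row is a tuple), and keep the first field.
sql = "SELECT last_insert_rowid()"
for x in cursor.execute(sql):
    lastid = x[0]
```

Since this is the first row ever inserted, both routes report the id as 1.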
However, it doesn't just come out as something we can really use. We need to use a series of statements like this:

sql = "SELECT last_insert_rowid()"
for x in cursor.execute(sql):
    lastid = x[0]

Next, we would create the insert statement for the instructions:

sql = 'INSERT INTO Instructions (recipeID,instructions) VALUES (%s,"Brown hamburger. Stir in all other ingredients. Bring to a boil. Stir. Lower to simmer. Cover and cook for 20 minutes or until all liquid is absorbed.")' % lastid
cursor.execute(sql)

Notice that we are using the variable substitution (%s) to place the pkID of the recipe (lastid) into the sql statement. Finally, we need to put each ingredient into the ingredient table. I'll show you just one for now:

sql = 'INSERT INTO Ingredients (recipeID,ingredients) VALUES (%s,"1 cup parboiled Rice (uncooked)")' % lastid
cursor.execute(sql)

I suggest that you spend some time reading up on SQL programming. You'll be happy you did. It's not too hard to understand at this point. Next time it will get a bit more complicated. If you would like the full source code, I've placed it on my website. Go to to download it. Next time, we will use what we've learned over the series to create a menu-driven front end for our recipe program - it will allow viewing all recipes in a list format, viewing a single recipe, searching for a recipe, and adding and deleting recipes.
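One caution on the %s substitution used above: if the text being inserted ever contains a quote character, pasting it into the SQL string will break the statement. Both APSW and sqlite3 also accept ? placeholders, which side-step the quoting problem entirely. A small sketch (using sqlite3, with a made-up ingredient chosen to contain an apostrophe):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE Ingredients (pkID INTEGER PRIMARY KEY, ingredients TEXT, recipeID NUMERIC)")

lastid = 1
# The apostrophe in this hypothetical ingredient would end the string early
# if it were pasted in with %s; a ? placeholder handles it safely.
cursor.execute("INSERT INTO Ingredients (recipeID, ingredients) VALUES (?, ?)",
               (lastid, "1 jar Grandma's salsa"))
stored = cursor.execute("SELECT ingredients FROM Ingredients WHERE recipeID = ?",
                        (lastid,)).fetchone()[0]
```

The value arrives in the table exactly as typed, apostrophe and all.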
HOW-TO FCM#27-33 - Python Parts 1 - 7
Program In Python - Part 8
full circle magazine #34

We will continue programming our recipe database that we started in Part 7. This will be a long one, with a lot of code, so grab on with all your might and don't let go. But remember, keep your hands and feet inside the car at all times. We have already created our database. Now we want to display the contents, add to it, and delete from it. So how do we do that? We will start with an application that runs in a terminal, so we need to create a menu. We will also create a class that will hold our database routines. Let's start with a stub of our program:

#!/usr/bin/python
#------------------------------------------------------
# Cookbook.py
# Created for Beginning Programming Using Python #8
# and Full Circle Magazine
#------------------------------------------------------
import apsw
import string
import webbrowser

class Cookbook:
    pass

def Menu():
    cbk = Cookbook() # Initialize the class

Now we will lay out our menu. We do that so we can stub our class. Our menu will be a rather big loop that will display a list of options that the user can perform. We'll use a while loop. Change the menu routine to look like this:

def Menu():
    cbk = Cookbook() # Initialize the class
    loop = True
    while loop == True:
        print '==================================================='
        print '             RECIPE DATABASE'
        print '==================================================='
        print ' 1 - Show All Recipes'
        print ' 2 - Search for a recipe'
        print ' 3 - Show a Recipe'
        print ' 4 - Delete a recipe'
        print ' 5 - Add a recipe'
        print ' 6 - Print a recipe'
        print ' 0 - Exit'
        print '==================================================='
        response = raw_input('Enter a selection -> ')

Next we stub the menu with an if|elif|else structure:

        if response == '1': # Show all recipes
            pass
        elif response == '2': # Search for a recipe
            pass
        elif response == '3': # Show a single recipe
            pass
        elif response == '4': # Delete Recipe
            pass
        elif response == '5': # Add a recipe
            pass
        elif response == '6': # Print a recipe
            pass
        elif response == '0': # Exit the program
            print 'Goodbye'
            loop = False
        else:
            print 'Unrecognized command. Try again.'

Let's take a quick look at our menu routine. We start off by printing the prompts that the user can perform. We set a variable (loop) to True, and then use the while loop to continue looping until loop = False. We use the raw_input() command to wait for the user to select an option, and then use our if routine to handle whichever option the user selected. Before we can run this for a test, we need to create a stub inside our class for the __init__ routine:

def __init__(self):
    pass

Now, save your program where you saved the database you created last time, and run it. You should see something like this:

/usr/bin/python -u "/home/greg/python_examples/APSW/cookbook/cookbook_stub.py"
===================================================
             RECIPE DATABASE
===================================================
 1 - Show All Recipes
 2 - Search for a recipe
 3 - Show a Recipe
 4 - Delete a recipe
 5 - Add a recipe
 6 - Print a recipe
 0 - Exit
===================================================
Enter a selection ->

It should simply print the menu over and over, until you type "0", and then print "Goodbye" and exit.

At this point, we can now start stubs of our routines in the Cookbook class. We will need a routine that will display all the information out of the Recipes data table, one that will allow you to search for a recipe, one that will show the data for a single recipe from all three tables, one that will delete a recipe, one that will allow you to add a recipe, and one that will print the recipe to the default printer. The PrintAllRecipes routine doesn't need a parameter other than the (self) parameter; neither do the SearchForRecipe and EnterNew routines. The PrintSingleRecipe, DeleteRecipe and PrintOut routines all need to know what recipe to deal with, so they will each take a parameter that we'll call "which". Use the pass command to finish each stub. Under the Cookbook class, create the routine stubs:

def PrintAllRecipes(self):
    pass
def SearchForRecipe(self):
    pass
def PrintSingleRecipe(self,which):
    pass
def DeleteRecipe(self,which):
    pass
def EnterNew(self):
    pass
def PrintOut(self,which):
    pass

For a number of the menu items, we will want to print out all of the recipes from the Recipes table - so the user can pick from that list. These will be options 1, 3, 4 and 6. So, modify the menu routine for those options, replacing the pass command with cbk.PrintAllRecipes(). Our response check routine will now look like the code shown below. One more thing to do is to set up the __init__ routine. Replace the stub with the following lines:

def __init__(self):
    global connection
    global cursor
    self.totalcount = 0
    connection=apsw.Connection("cookbook.db3")
    cursor=connection.cursor()
Replace the stub with the following lines: def __init__(self): global connection global cursor self.totalcount = 0 connection=apsw.Connection( "cookbook.db3") cursor=connection.cursor() contents ^ For a number of the menu PROGRAM IN PYTHON - PART 8 if response == '1': # Show all recipes cbk.PrintAllRecipes() elif response == '2': # Search for a recipe pass elif response == '3': # Show a single recipe cbk.PrintAllRecipes() elif response == '4': # Delete Recipe cbk.PrintAllRecipes() elif response == '5': # Add a recipe pass elif response == '6': # Print a recipe cbk.PrintAllRecipes() elif response == '0': # Exit the program print 'Goodbye' loop = False else: print 'Unrecognized command. Try again.' Recipes' cntr = 0 for x in cursor.execute(sql): cntr += 1 print '%s %s %s %s' %(str(x[0]).rjust(5),x[1].lju st(30),x[2].ljust(20),x[3].lj ust(30)) print '-------------' self.totalcount = cntr as the item for each recipe. This will allow us to select the correct recipe later on. When you run your program, you should see the menu, and when you select option 1, you'll get what's shown at the top of the next page. That's what we wanted, except if you are running the app in Dr.Python or the like, the program doesn't pause. Let's add a pause until the user presses a key so they can look at the output for a second or two. While we are at it, let's print out the total number of recipes from the variable we set up a moment ago. Add to the bottom of option 1 of the menu: First we create two global variables for our connection and cursor. We can access them from anywhere within the cookbook class. Next, we create a variable self.totalcount which we use to count the number of recipes. We'll be using this variable later on. Finally we create the connection and the cursor. The next step will be to flesh out the PrintAllRecipes() routine in the Cookbook class. Since we have the global variables for connection and cursor, we don't need to recreate them in each routine. 
Next, we will want to do a “pretty print” to the screen for headers for our recipe list. We'll use the “%s” formatting command, and the left justify command, to space out our screen output. We want it to look like this: Item Name Serves Source The cntr variable will count the number of recipes we display to the user. Now our routine is done. Shown below is the full code for the routine, just in case you missed something. Notice that we are using the tuple that is returned from the cursor.execute routine from ASPW. We are printing the pkID --------------------------- Finally, we need to create our SQL statement, query the database, and display the results. Most of this was covered in the article last time. sql = 'SELECT * FROM def PrintAllRecipes(self): print '%s %s %s %s' %('Item'.ljust(5),'Name'.ljust(30),'Serves'.ljust(20), 'Source'.ljust(30)) print '---------------------------------' sql = 'SELECT * FROM Recipes' cntr = 0 for x in cursor.execute(sql): cntr += 1 print '%s %s %s %s' %(str(x[0]).rjust(5),x[1].ljust(30),x[2].ljust(20),x[3 ].ljust(30)) print '---------------------------------' self.totalcount = cntr contents ^ full circle magazine #34 PROGRAM IN PYTHON - PART 8 Enter a selection -> 1 Item Name Serves Source -------------------------------------------------------------------------------------1 Spanish Rice 4 Greg 2 Pickled Pepper-Onion Relish 9 half pints Complete Guide to Home Canning -------------------------------------------------------------------------------------=================================================== RECIPE DATABASE =================================================== 1 - Show All Recipes 2 - Search for a recipe 3 - Show a Recipe 4 - Delete a recipe 5 - Add a recipe 6 - Print a recipe 0 - Exit =================================================== Enter a selection -> print 'Total Recipes - %s' %cbk.totalcount print '--------------------------------------------------' res = raw_input('Press A Key -> ') We'll skip option #2 (Search for 
a recipe) for a moment, and deal with #3 (Show a single recipe). Let's deal with the menu portion first. We'll show the list of recipes, as for option 1, and then ask the user to select one. To make sure we don't get errors due to a bad user input, we'll use the Try|Except structure. We will print the prompt to the user (Select a recipe → ), then, if they enter a correct response, we'll call the PrintSingleRecipe() routine in our Cookbook class with the pkID from our Recipe table. If the entry is not a number, it will raise a ValueError exception, which we handle with the except ValueError: catch shown right. Next, we'll work on our PrintSingleRecipe routine in the Cookbook class. We start with full circle magazine #34 the connection and cursor again, then create our SQL statement. In this case, we use 'SELECT * FROM Recipes WHERE pkID = %s” % str(which)' where which is the value we want to find. Then we “pretty print” the output, again try: from the tuple returned by ASPW. In this case, we use x as the gross variable, and then each one with bracketed index into the tuple. Since the table layout is pkID/name/servings/source, we can use x[0],x[1],x[2] and x[3] as the detail. Then, we want to select everything from the ingredients table where the recipeID (our key into the recipes data table) is equal to the pkID we just used. We loop through the tuple returned, printing each ingredient, and then finally we get the instructions from the instructions table – just like we did for the ingredients table. Finally, we wait for the user to press a key so they can see the recipe on the screen. The code is shown on the next page. Now, we have two routines res = int(raw_input('Select a Recipe -> ')) if res <= cbk.totalcount: cbk.PrintSingleRecipe(res) elif res == cbk.totalcount + 1: print 'Back To Menu...' else: print 'Unrecognized command. Returning to menu.' except ValueError: print 'Not a number...back to menu.' contents ^ PROGRAM IN PYTHON - PART 8 out of the six finished. 
So, let's deal with the search routine, again starting with the menu. Luckily this time, we just call the search routine in the class, so replace the pass command with:

cbk.SearchForRecipe()

Now to flesh out our search code. In the Cookbook class, replace our stub for the SearchForRecipe with the code shown on the next page. There's a lot going on there. After we create our connection and cursor, we display our search menu. We are going to give the user three ways to search, and a way to exit the routine. We can let the user search by a word in the recipe name, a word in the recipe source, or a word in the ingredient list. Because of this, we can't just use the display routine we just created, and will need to create custom printout routines.

The first two options use simple SELECT statements with an added twist. We are using the "like" qualifier. If we were using a query browser like SQLite Database Browser, our like statement would use a wildcard character of "%". So, to look for a recipe containing "rice" in the recipe name, our query would be:

SELECT * FROM Recipes WHERE name like '%rice%'

However, since the "%" character is also a substitution character in our strings, we have to use %% in our text. To make it worse, we are using the substitution character to insert the word the user is searching for. Therefore, we must make it '%%%s%%'. Sorry if this is as clear as mud. The third query is called a Join statement.
Let's look at it a bit closer:

sql = "SELECT r.pkid,r.name,r.servings,r.source,i.ingredients FROM Recipes r Left Join ingredients i on (r.pkid = i.recipeid) WHERE i.ingredients like '%%%s%%' GROUP BY r.pkid" % response

We are selecting everything from the recipe table, and the ingredients from the ingredients table, joining or relating the ingredient table ON the recipeID being equal to the pkID in the recipe table, then searching for our ingredient using the like statement, and, finally, grouping the result by the pkID in the recipe table to keep duplicates from being shown. If you remember, we have peppers twice in the second recipe (Onion and pepper relish), one green and one red. That could create confusion in our user's mind.

Here is the PrintSingleRecipe routine from the previous section:

def PrintSingleRecipe(self,which):
    sql = 'SELECT * FROM Recipes WHERE pkID = %s' % str(which)
    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    for x in cursor.execute(sql):
        recipeid = x[0]
        print "Title: " + x[1]
        print "Serves: " + x[2]
        print "Source: " + x[3]
    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    sql = 'SELECT * FROM Ingredients WHERE RecipeID = %s' % recipeid
    print 'Ingredient List:'
    for x in cursor.execute(sql):
        print x[1]
    print ''
    print 'Instructions:'
    sql = 'SELECT * FROM Instructions WHERE RecipeID = %s' % recipeid
    for x in cursor.execute(sql):
        print x[1]
    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    resp = raw_input('Press A Key -> ')
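The '%%' escaping used in these queries can be watched in isolation. The like_clause helper below is my own illustration, not from the article; it only exercises Python's % formatting:

```python
def like_clause(word):
    # Each '%%' collapses to a literal '%', and '%s' is replaced by the
    # search word, yielding the SQL wildcard pattern '%word%'.
    return "WHERE name like '%%%s%%'" % word

print(like_clause('rice'))  # WHERE name like '%rice%'
```

So the three-percent run '%%%s' is really two tokens: '%%' (a literal percent sign) followed by '%s' (the substitution slot).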
Our menu uses:

searchin = raw_input('Enter Search Type -> ')
if searchin != '4':

which says: if searchin (the value the user entered) is NOT equal to 4, then do the options; if it is 4, then don't do anything, just fall through. Notice that I used "!=" as Not Equal To instead of "<>". Either will work under Python 2.x. However, in Python 3.x, "<>" will give a syntax error. We'll cover more Python 3.x changes in a future article. For now, start using "!=" to make your life easier to move to Python 3.x in the future.

def SearchForRecipe(self):
    # print the search menu
    print '-------------------------------'
    print ' Search in'
    print '-------------------------------'
    print ' 1 - Recipe Name'
    print ' 2 - Recipe Source'
    print ' 3 - Ingredients'
    print ' 4 - Exit'
    searchin = raw_input('Enter Search Type -> ')
    if searchin != '4':
        if searchin == '1':
            search = 'Recipe Name'
        elif searchin == '2':
            search = 'Recipe Source'
        elif searchin == '3':
            search = 'Ingredients'
        parm = searchin
        response = raw_input('Search for what in %s (blank to exit) -> ' % search)
        if parm == '1':   # Recipe Name
            sql = "SELECT pkid,name,source,servings FROM Recipes WHERE name like '%%%s%%'" % response
        elif parm == '2': # Recipe Source
            sql = "SELECT pkid,name,source,servings FROM Recipes WHERE source like '%%%s%%'" % response
        elif parm == '3': # Ingredients
            sql = "SELECT r.pkid,r.name,r.servings,r.source,i.ingredients FROM Recipes r Left Join ingredients i on (r.pkid = i.recipeid) WHERE i.ingredients like '%%%s%%' GROUP BY r.pkid" % response
        try:
            if parm == '3':
                print '%s %s %s %s %s' % ('Item'.ljust(5),'Name'.ljust(30),'Serves'.ljust(20),'Source'.ljust(30),'Ingredient'.ljust(30))
                print '--------------------------------------------------------------------------------------'
            else:
                print '%s %s %s %s' % ('Item'.ljust(5),'Name'.ljust(30),'Serves'.ljust(20),'Source'.ljust(30))
                print '--------------------------------------------------------------------------------------'
            for x in cursor.execute(sql):
                if parm == '3':
                    print '%s %s %s %s %s' % (str(x[0]).rjust(5),x[1].ljust(30),x[2].ljust(20),x[3].ljust(30),x[4].ljust(30))
                else:
                    print '%s %s %s %s' % (str(x[0]).rjust(5),x[1].ljust(30),x[3].ljust(20),x[2].ljust(30))
        except:
            print 'An Error Occured'
        print '--------------------------------------------------------------------------------------'
        inkey = raw_input('Press a key')

Finally, we "pretty print" again our output. Let's look at what the user will see, shown below:

Enter a selection -> 2
-------------------------------
 Search in
-------------------------------
 1 - Recipe Name
 2 - Recipe Source
 3 - Ingredients
 4 - Exit
Enter Search Type -> 1
Search for what in Recipe Name (blank to exit) -> rice
Item  Name          Serves  Source
--------------------------------------------------------------------------------------
1     Spanish Rice  4       Greg
--------------------------------------------------------------------------------------
Press a key

You can see how nicely the program prints the output. Now, the user can go back to the menu and use option #3 to print whichever recipe they want to see. Now for the ingredient search...

Enter a selection -> 2
-------------------------------
 Search in
-------------------------------
 1 - Recipe Name
 2 - Recipe Source
 3 - Ingredients
 4 - Exit
Enter Search Type -> 3
Search for what in Ingredients (blank to exit) -> onion
Item  Name                         Serves        Source                          Ingredient
--------------------------------------------------------------------------------------
1     Spanish Rice                 4             Greg                            1 small Onion chopped
2     Pickled Pepper-Onion Relish  9 half pints  Complete Guide to Home Canning  6 cups finely chopped Onions
--------------------------------------------------------------------------------------
Press a key

Next we will add recipes to our database. Again, we just have to add one line to our menu routine, the call to the EnterNew routine:

cbk.EnterNew()

Easy enough. The code that needs to replace the stub in the Cookbook class for EnterNew() is at:. We start by defining a list named "ings" – which stands for ingredients. We then ask the user to enter the title, source, and servings. We then enter a loop, asking for each ingredient, appending to the ings list. If the user enters 0, we exit the loop and continue on asking for the instructions. We then show the recipe contents and ask the user to verify before saving the data. We use INSERT INTO statements, like we did last time, and return to the menu. One thing we have to be careful of is the single quote in our entries. USUALLY, this won't be a problem in the ingredient list or the instructions, but in our title or source fields, it could come up. We need to add an escape character to any single quotes. We do this with the string.replace routine, which is why we imported the string library.

In the menu routine, put the code shown below under option #4:

cbk.PrintAllRecipes()
print '0 - Return To Menu'
try:
    res = int(raw_input('Select a Recipe to DELETE or 0 to exit -> '))
    if res != 0:
        cbk.DeleteRecipe(res)
    elif res == 0:
        print 'Back To Menu...'
    else:
        print 'Unrecognized command. Returning to menu.'
except ValueError:
    print 'Not a number...back to menu.'

Then, in the Cookbook class, use the code below for the DeleteRecipe() routine:

def DeleteRecipe(self,which):
    resp = raw_input('Are You SURE you want to Delete this record? (Y/n) -> ')
    if string.upper(resp) == 'Y':
        sql = "DELETE FROM Recipes WHERE pkID = %s" % str(which)
        cursor.execute(sql)
        sql = "DELETE FROM Instructions WHERE recipeID = %s" % str(which)
        cursor.execute(sql)
        sql = "DELETE FROM Ingredients WHERE recipeID = %s" % str(which)
        cursor.execute(sql)
        print "Recipe information DELETED"
        resp = raw_input('Press A Key -> ')
    else:
        print "Delete Aborted - Returning to menu"

Quickly, we'll go through the delete routine. We first ask the user which recipe to delete (back in the menu), and pass that pkID number into our delete routine. Next, we ask the user 'are they SURE' they want to delete the recipe. If the response is "Y" (string.upper(resp) == 'Y'), then we create the sql delete statements. Notice that this time we have to delete records from all three tables. We certainly could just delete the record from the recipes table, but then we'd have orphan records in the other two, and that wouldn't be good. When we delete the record from the recipe table, we use the pkID field. In the other two tables, we use the recipeID field.

Finally, we will deal with the routine to print the recipes. We'll be creating a VERY simple HTML file, opening the default browser and allowing the user to print from there. This is why we are importing the webbrowser library. In the menu routine for option #6, insert the code shown at the top of the next page. Again, we display a list of all the recipes, and allow the user to select the one that they wish to print. We call the PrintOut routine in the Cookbook class. That code is shown at the top right of the next page. We start with the fi = open([filename],'w') command which creates the file. We then pull the information from the recipe table, and write it to the file with the fi.write command. We use the <H1></H1> header 1 tag for the title, the <H2> tag for servings and source. We then use the <li></li> list tags for our ingredient list, and then write the instructions. Other than that it's simple queries we've already learned. Finally, we close the file with the fi.close() command, and use webbrowser.open([filename]) with the file we just created.
The user can then print from their web browser – if required.

cbk.PrintAllRecipes()
print '0 - Return To Menu'
try:
    res = int(raw_input('Select a Recipe to Print or 0 to exit -> '))
    if res != 0:
        cbk.PrintOut(res)
    elif res == 0:
        print 'Back To Menu...'
    else:
        print 'Unrecognized command. Returning to menu.'
except ValueError:
    print 'Not a number...back to menu.'

def PrintOut(self,which):
    fi = open('recipeprint.html','w')
    sql = "SELECT * FROM Recipes WHERE pkID = %s" % which
    for x in cursor.execute(sql):
        RecipeName = x[1]
        RecipeSource = x[3]
        RecipeServings = x[2]
    fi.write("<H1>%s</H1>" % RecipeName)
    fi.write("<H2>Source: %s</H2>" % RecipeSource)
    fi.write("<H2>Servings: %s</H2>" % RecipeServings)
    fi.write("<H3> Ingredient List: </H3>")
    sql = 'SELECT * FROM Ingredients WHERE RecipeID = %s' % which
    for x in cursor.execute(sql):
        fi.write("<li>%s</li>" % x[1])
    fi.write("<H3>Instructions:</H3>")
    sql = 'SELECT * FROM Instructions WHERE RecipeID = %s' % which
    for x in cursor.execute(sql):
        fi.write(x[1])
    fi.close()
    webbrowser.open('recipeprint.html')
    print "Done"

This was our biggest application to date. I've posted the full source code (and the sample database if you missed last month) on my website. If you don't want to type it all in or have any problems, then hop over to my web site to get the code.

is owner of , a consulting company in Aurora, Colorado, and has been programming since 1972. He enjoys cooking, hiking, music, and spending time with his family.
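One design note to close on: the article builds all of its SQL by % string substitution, which is why the '%%' escaping and the single-quote handling are needed at all. A common alternative is parameter binding, where the database driver quotes values for you. The sketch below uses the standard library's sqlite3 module rather than the APSW wrapper the article uses, and the table here is a cut-down, in-memory stand-in for the cookbook database:

```python
import sqlite3

# In-memory database standing in for the article's cookbook file.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE Recipes (pkID INTEGER PRIMARY KEY, name TEXT)")

# The '?' placeholder quotes the value safely -- the apostrophe in
# "Shepherd's Pie" needs no manual string.replace escaping.
cur.execute("INSERT INTO Recipes (name) VALUES (?)", ("Shepherd's Pie",))

# Wildcards go in the bound value, not the SQL text, so no '%%' dance.
cur.execute("SELECT name FROM Recipes WHERE name LIKE ?", ('%pie%',))
print(cur.fetchone()[0])
```

Binding also protects against malformed (or malicious) input breaking the query, which matters once user-typed search words and titles are being spliced into SQL.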
The road less taken Manveen's Weblog 2015-02-23T12:31:52+00:00 Apache Roller Announcing Project Socialsite manveen 2008-05-09T09:57:51+00:00 2008-05-09T17:06:08+00:00 <a href="" title="Social Site"> <img src=""/> </a> <p> The <a href="">SocialSite</a>. </p> <p> Check out <a href=""> this cool video. </a> </p> Blob vs file system storage manveen 2008-03-19T14:38:10+00:00 2008-03-19T21:38:10+00:00 <p> Images can be stored as a database blob or in the file system. How do you decide which to choose? What are the performance impacts of each? </p> <p> Well, there are several reasons why you should not store binary data in your database: </p> <p> <ul> <li> The whole point of storing data in a SQL database is to put some kind of ordering and structure on your data, as well as being able to search on these data. But how do you search in the binary data of a picture? </li> <li> For large data sets, storing binary data will quickly run up the size of your database files, making it harder to control the size of the database. </li> <li> In order to store binary data in the database, you must continually escape and unescape the data to ensure that nothing breaks. </li> <li> Storing images on the file system has a marginally faster retrieval rate. </li> </ul> </p> <p> Now here are some reasons why you should: </p> <p> <ul> <li> There is one good reason why you might want to store the binary data in the database: Replication. Storing images in a database allows all of your data to be centrally stored, which is more portable and easier to replicate. </li> </ul> </p> <p> Here's one solution that takes into account the points above: </p> <p> Store a link (e.g. a file path) to the image file in the database. Whenever you need the image, use the link in whatever program you use to retrieve the file containing the image.
</p> <p> Or you could think of storing your images in the database to gain the benefits there (preferable for smaller images and for a limited number of images), but also use file system caching of these to obtain the performance benefits. </p> <p> Some tips for getting the best performance out of the file system: </p> <p> <ul> <li> Limit the number of images in any one directory. </li> <li> Include not only an image identifier in the filename, but also a secret code. </li> </ul> </p> Boxing and unboxing manveen 2008-03-13T23:47:49+00:00 2008-03-14T06:47:49+00:00 <h2> What's boxing anyway? </h2> <p> Every type in Java is either a reference type or a primitive type. A reference type is any class, instance, or array type. All reference types are subtypes of class Object, and any variable of reference type may be set to the value null. There are eight primitive types, and each of these has a corresponding library class of reference type. The library classes are located in the package <a href="">java.lang.</a> </p> <pre>
Primitive : Reference Mapping
byte      : Byte
short     : Short
int       : Integer
long      : Long
float     : Float
double    : Double
boolean   : Boolean
char      : Character
</pre> <p> Conversion of a primitive type to the corresponding reference type is called <i>boxing</i> and conversion of the reference type to the corresponding primitive type is called <i>unboxing</i>. </p> <h2>But what's autoboxing? </h2> <p> Autoboxing / unboxing is the automated <i>under the covers</i> conversion between primitive types and their equivalent object types. For example, the conversion between an int primitive and an Integer object or between a boolean primitive and a Boolean object. This was introduced in Java 5.
</p> <pre>
// Assigning primitive type to wrapper type : Boxing
Integer iWrapper = 10;

// Assigning object to primitive : Unboxing
public void intMethod(Integer iWrapper){
    int iPrimitive = iWrapper;
}
</pre> <h2> The basic differences between primitive and reference types</h2> <p> <ul> <li> Primitive types are typically fast and lightweight - they have a small memory footprint. Object equivalents are heavyweight and can burden the VM if created in large numbers. </li> <li> Primitive types are not objects - they are not polymorphic and cannot exhibit any object behavior. Object equivalents behave polymorphically (the numeric ones form an inheritance hierarchy derived from java.lang.Number) and exhibit useful behaviour - e.g. Integer.compareTo(Integer).</li> <li> Primitive types are passed by value into methods. With object equivalents, it is a copy of the reference that is passed by value - Java never passes the object itself. </li> <li> Primitive types are not reference counted or garbage collected. Object equivalents are.</li> <li> Primitive variables can be freely reassigned, but the wrapper classes themselves are all immutable. </li> </ul> </p> On Generics manveen 2008-03-09T00:40:42+00:00 2008-03-09T08:40:42+00:00 <h2> What is Generics? </h2> <p> <a href="">Generics</a>. </p> <h2> When was Java Generics introduced? </h2> <p> JDK 1.5 </p> <h2> How is this different from C++'s template mechanism? </h2> <p> You might think that generics are similar, but the similarity is superficial. Generics do not generate a new class for each specialization, nor do they permit "template metaprogramming." </p> <h2> Generics and subtyping </h2> <p> Is the following code snippet legal? </p> <pre>
List<String> ls = new ArrayList<String>(); //1
List<Object> lo = ls;                      //2
</pre> <p> Line 1 is definitely legal. But line 2 will give a compile time error. </p> <p> Well, take a look at the next few lines: <pre>
lo.add(new Object());   // 3
String s = ls.get(0);   // 4: attempts to assign an Object to a String!
</pre> </p> <p>.
</p> <h2> What is the supertype of all kinds of collections? </h2> <p> It’s written as <pre> Collection<?> </pre> (pronounced “collection of unknown”) , that is, a collection whose element type matches anything. </p> <h3> Is this valid? </h3> Is the following code valid? <pre> Collection<?> c = new ArrayList<String>(); //1 c.add(new Object()); //2 </pre> Line 2 will give a compile time error since we don't know what element type of c stands for, we cannot add arbitrary data to it. Java Serialization manveen 2008-03-04T11:51:19+00:00 2008-03-04T19:53:56+00:00 <h2> What is <a href="">serialization? </a></h2> <p> The process of saving an object's state to a <b>sequence of bytes</b>, as well as the process of rebuilding those bytes into a live object at some future time. </p> <h2> What can I use? </h2> <p> <a href="">The Java Serialization API.</a> </p> <h2> The Rules </h2> <h4>Rule #1: The object to be persisted must implement the Serializable interface or inherit that implementation from its object hierarchy.</h4> <p> To persist an object in Java, we must have a persistent object. An object is marked serializable by implementing the <i>java.io.Serializable</i> interface, which signifies to the underlying API that the object can be flattened into bytes (and subsequently inflated in the future). </p> <pre> public class PersistedClass implements Serializable </pre> <h4> Rule #2: The object to be persisted must mark all nonserializable fields transient </h4> <p> On the other hand, certain system-level classes such as Thread, OutputStream and its subclasses, and Socket are not serializable. These should be marked <b>transient</b> since it doesn't make sense to serialize them. </p> <pre> transient private Thread notserialthread; </pre> <h2> Version control Gotcha! </h2> <p> Imagine you have a serialized flattened object sitting in your file system for sometime. Meanwhile, you update the class file, perhaps adding a new field. 
What happens when you try to read in the flattened object? </p> <p> A <i>java.io.InvalidClassException</i> will be thrown! -- because all persistent-capable classes are automatically given a unique identifier. If the identifier of the class does not equal the identifier of the flattened object, the exception will be thrown. </p> <p> If you wish to control versioning, you simply have to provide the serialVersionUID field manually and ensure it is always the same, no matter what changes you make to the classfile. </p> <b>How? </b> The JDK distribution comes with a utility called <i>serialver</i> which returns the generated <a href="">serialVersionUID</a> (by default it is computed from a hash of the class's structure). </p> <br> <img src=""/> <p> Simply copy the returned line with the version ID and paste it into your code. </p> <p> The version control works great as long as the changes are compatible. Compatible changes include adding or removing a method or a field. Incompatible changes include changing an object's hierarchy or removing the implementation of the Serializable interface. </p> Final keyword in Java manveen 2008-03-04T11:06:46+00:00 2008-03-04T19:06:46+00:00 <p> The <b>final</b> <i>keyword</i> is used in Java on an entity that cannot be changed. </p> <p> Here are a few different contexts in which final can be used: </p> <p> <h2>Final class:</h2> </p> <p> <b>Meaning</b>: A final class cannot be subclassed. <b>Advantage</b>: Security or efficiency. <b>Example</b>: java.lang.System </p> <p> <h2>Final method:</h2> </p> <p> <b>Meaning</b>: The method cannot be overridden in a subclass. <b>Advantage</b>: Preventing unexpected behavior crucial to the functionality of the class. </p> <p> <h2>Final variable:</h2> </p> <p> <b>Meaning</b>: The variable is immutable, meaning once assigned, it cannot be reassigned. This is different from a compile-time constant, whose value must be known at compile time; a <b>final</b> variable's value need not be. <b>Advantage</b>: Optimization.
</p> <img src=""/> This leap year: hop to it! manveen 2008-02-29T13:58:33+00:00 2008-02-29T21:58:33+00:00 <p> <img src=""/> </p> <p> Today is what makes this year a <a href=""> leap year! </a> </p> <p> <img src=""/> </p> JPA Query with wildcards manveen 2008-02-26T11:01:28+00:00 2008-02-26T19:01:28+00:00 <p> Here is a short tip on using <a href=""> LIKE expression with JPA </a>. </p> <p> What we are trying to do is get all the items that matches a pattern anywhere in their name. </p> <p> In simple SQL, what I want to do is: </p> <p> <pre> SELECT userName FROM Profile p WHERE p.userName LIKE %pattern%; </pre> </p> <p> I'm using annotations to create a named query. </p> <pre> @NamedQuery( <p> <i>Here's how:</i> </p> <p> <pre> <s:password </pre> </p> <p> <i>How does this work?</i> </p> <p> The framework takes care of this. <br> In the text filed, the expression <b>%{getText('password')}</b> tells the framework to lookup "password" in the message resources. </p> On code monkeys and project schedules manveen 2008-02-20T15:08:12+00:00 2008-02-20T23:08:12+00:00 <p> On project scheduling ... </p> <p> <img src=""/> </p> <p> and deliverables ... </p> <p> <img src=""/> </p> <p> and implementation ... </p> <p> <img src=""/> </p> Ramblings on Struts 2 manveen 2008-02-20T14:51:22+00:00 2008-02-20T22:51:22+00:00 <p> <a href="">Apache Struts</a> is a free open-source framework for creating Java web applications. </p> <img src=""/> <p> Web applications differ from conventional websites in that web applications can create a dynamic response. Many websites deliver only static pages. A web application can interact with databases and business logic engines to customize a response. </p> </p>. 
</p> <p> The framework provides three key components: </p> <ul> <li>A "request" handler provided by the application developer that is mapped to a standard URI.</li> <li>A "response" handler that transfers control to another resource which completes the response.</li> <li>A tag library that helps developers create interactive form-based applications with server pages.</li> </ul> <p> The framework's architecture and tags are buzzword compliant. Struts works well with conventional REST applications. </p> <p> Here's a really comprehensive set of the <a href=""> underlying and related technologies</a>. </p> Using JDBCRealm with self-registration manveen 2008-02-15T15:56:15+00:00 2008-02-15T23:56:15+00:00 <p> My <a href="">last blog</a> talked about a pattern to implement self-registration. As a follow up, in this blog I talk about how to use a JDBCRealm in this context. </p> <p> First we need to create a <a href="">data realm</a> in glassfish. Here is how you can do it using an ant task. (You need to populate the variables appropriately, ofcourse). 
</p> <pre> <exec executable="${ASADMIN_SCRIPT}"> <arg line="create-auth-realm" /> <arg line="--user ${AS_ADMIN_USER}" /> <arg line="--passwordfile ${PASSFILE}" /> <arg line="--host ${AS_SERVER_NAME}" /> <arg line="--port ${AS_ADMIN_PORT}" /> <arg line="--classname com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm" /> <arg line='--property digest-algorithm=SHA:encoding=Hex:user-name-column=USERNAME :password-column=PASSPHRASE:group-name-column=ROLENAME :jaas-context=jdbcRealm: </exec> </pre> <p> Then, your persistence.xml would have an entry for the persistence unit that maps the PU name to the data source: </p> <pre> <persistence-unit <provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider> <non-jta-data-source>java:comp/env/jdbc/CommonDB</non-jta-data-source> <class>com.x.y.User</class> <class>com.x.y.UserRole</class> </persistence-unit> </pre> <p> The User management implementation should talk to this realm. So the EntityManagerFactory should be created looking up this JNDI. </p> <pre> String <br> <p> <b>The VIEW</b> </p> <p> Or the Front end: (You need an input form that can be a JSF or a JSP, if you're a Java developer like me). </p> <p> A typical form would have inputs such as: a username, screen name, password (and a repeat password), full name, address, email, secret question for password recovery, to name a few. </p> <p> A view also forwards user input to a controller. </p> <br> <b>The CONTROLLER</b> <p> A controller defines application behavior. It dispatches user requests. A handler is an implementation of the controller. </p> <p> Form Validation: Many checks can be done at the controller, such as checking for the length of fields, (you may want password lengths>8, and consisting of a mix of numbers and alphabets etc.), making sure password and repeat passwords match, email validation etc. This is where you would add such checks. </p> <p> The controller controls the application behavior. 
If the form entries are invalid, then one would return to form entry view highlighting errors. If the form is valid, then the backend should be contacted. </p> <br> <b>The MODEL</b> <p> Or the Backend. </p> <p> A model represents business data and business logic or operations that govern access and modification of this business data. </p> <p> This is the actual data model. You could use database tables to store your user entries. </p> <p> You could expose operations on the table through a User Management API to allow creating, retrieving, updating users. As an example, the User Management API can be implemented using Persistence APIs in Java. </p> <p> The controller would process the results and display a success page or an error if a user already exists. </p> <p> At the highest level, an application should be able to control whether it wants to turn self-registration on or off. Read my other blog on <a href="">LDAP based user authentication</a>. </p> <p> I hope you've found this blog useful. Comments are welcome. </p> Weblog Server for Glassfish manveen 2007-09-20T13:56:05+00:00 2007-09-20T21:15:50+00:00 Weblog server for Glassfish now available! <p> Sun's Weblog server for <a href="">Glassfish</a> was released earlier this week. </p> <p> The bits are available through the <a href="">Glassfish update center</a>. To get to them, launch $GLASSFISH_HOME/updatecenter/bin/updatetool. </p> <p> Once the update tool is up and running choose Social Software category components and click install. The instance of GlassFish will now be weblog enabled! </p> <img src=""/> LDAP based user authentication in glassfish manveen 2007-07-26T00:52:51+00:00 2007-07-26T08:05:14+00:00 <p> If you're using glassfish and developing a new web application that needs to be authenticated against an LDAP server, this blog talks about how you can do it. 
</p> <img src=""/> <p> For a normal (default) file-realm based authentication, your web.xml would have a security-constraint that should look something like: </p> <pre> <security-constraint> <web-resource-collection> <web-resource-name>build</web-resource-name> <url-pattern>*.jsf</url-pattern> <url-pattern>/download/*</url-pattern> <url-pattern>/resource/*</url-pattern> <http-method>DELETE</http-method> <http-method>GET</http-method> <http-method>POST</http-method> <http-method>PUT</http-method> </web-resource-collection> <auth-constraint> <role-name>admin</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>FORM</auth-method> <realm-name>admin-realm</realm-name> <form-login-config> <form-login-page>/login.jsf</form-login-page> <form-error-page>/loginError.jsf</form-error-page> </form-login-config> </login-config> <security-role> <role-name>admin</role-name> </security-role> </pre> <p> Now you want to change this to authenticate against your LDAP server. You need to do the following: </p> <p> First, you should create an LDAP realm in the glassfish appserver, i.e. the domain.xml entries should look something like: </p> <pre> <auth-realm name="myLDAPRealm" classname="com.sun.enterprise.security.auth.realm.ldap.LDAPRealm"> <property name="directory" value="ldap://myldapserver:portnumber"/> <property name="base-dn" value="dc=sun,dc=com"/> <property name="jaas-context" value="ldapRealm"/> </auth-realm> </pre> <p> Now in your web.xml file configure your app to use this LDAP realm, i.e.
the web.xml entries should look like: </p> <pre> <security-constraint> <web-resource-collection> <web-resource-name>protected</web-resource-name> <url-pattern>*.jsf</url-pattern> <url-pattern>/download/*</url-pattern> <url-pattern>/resource/*</url-pattern> <http-method>DELETE</http-method> <http-method>GET</http-method> <http-method>POST</http-method> <http-method>PUT</http-method> </web-resource-collection> <auth-constraint> <role-name>USER</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>FORM</auth-method> <realm-name>myLDAPRealm</realm-name> <form-login-config> <form-login-page>/login.jsf</form-login-page> <form-error-page>/loginError.jsf</form-error-page> </form-login-config> </login-config> <security-role> <role-name>USER</role-name> </security-role> </pre> <p> Your sun-web.xml should look something like: </p> <pre> <security-role-mapping> <role-name>USER</role-name> <group-name>people</group-name> <group-name>Employee Group</group-name> </security-role-mapping> </pre> <img src=""/> <p> VOILA! </p> Getting to know JSFTemplating manveen 2007-07-10T13:57:17+00:00 2007-07-10T21:05:29+00:00 <p> <a href="">JSFTemplating</a> provides a templating mechanism for JavaServer Faces Technology that works with JavaServer Faces to make building pages and components easier. </p> <p> Since it's an open-source project at <a href="">java.net</a>, it's easy to gain access to the source code, try it out and even contribute! It sounds fun enough to <a href="">get started</a> with the setup. </p> <p> <b>You will need:</b> <br> <ol> <li><a href="">Glassfish</a></li> <li><a href="">Get the files</a></li> <li><a href="">Start creating pages!</a> </li> </ol> </p> <img src=""/> Pack200 and compression through ant manveen 2007-07-10T12:51:44+00:00 2007-07-10T20:01:56+00:00 <p> If you are interested in compressing your jar file, you can use <a href="">Pack200 for compression</a>. </p> <p> You can also do this through ant using the Pack200Task.
(Pack200Task.jar is shipped is shipped with glassFish). Here is a code snippet that does this: </p> <pre> <property name="jarpack-task.jar" value="${glassfish_home}/lib/Pack200Task.jar"/> <target name="jarpack-tasks" > <taskdef name="pack200" classname="com.sun.tools.apache.ant.pack200.Pack200Task" classpath="${jarpack-task.jar}" /> <taskdef name="unpack200" classname="com.sun.tools.apache.ant.pack200.Unpack200Task" classpath="${jarpack-task.jar}" /> </target> <!-- Target to pack the jars using the Pack200 ant optional task --> <target name="jar-pack" depends="jarpack-tasks" description="Applying the pack utility on jars"> <mkdir dir="${pack.jar.dir}/normalized" /> <pack200 src="${pack.jar.dir}/${pack.jar.name}" destfile="${pack.jar.dir}/normalized/${pack.jar.name}" repack="true" stripdebug="false" deflatehint="keep" unknownattribute="pass" keepfileorder="true" /> </target> <target name="jar-unpack" depends="jarpack-tasks"> <unpack200 src="${pack.jar.dir}/${pack.jar.name}.pack.gz" dest="${pack.jar.dir}/${pack.jar.name}" /> <delete file="${pack.jar.dir}/${pack.jar.name}.pack.gz" /> </target> <target name="pack-all"> <antcall target="jar-pack"> <param name="pack.jar.dir" value="${build}" /> <param name="pack.jar.name" value="roller_configurator_${uc_version}.jar" /> </antcall> </target> </pre> <p> In case you are running into Out Of Memory, (especially on windows), you can try changing your JVM heap size. You can do this using ant's ANT_OPTS as below, and then invoke the ant pack-all target. <pre> set ANT_OPTS=-Xmx512m </pre> </p> In case you are interested in signing your compressed file, then you may need to pack, unpack and then repack your jar. 
This extra step is needed when signing compressed jars. Here is how this repackaging can be done: </p>
<pre>
<pack200 src="${pack.jar.dir}/${pack.jar.name}"
         destfile="${pack.jar.dir}/normalized/${pack.jar.name}"
         repack="true" stripdebug="false" deflatehint="keep"
         unknownattribute="pass" keepfileorder="true" />

<unpack200 src="${pack.jar.dir}/normalized/${pack.jar.name}.pack.gz"
           dest="${pack.jar.dir}/normalized/a-${pack.jar.name}" />

<pack200 src="${pack.jar.dir}/normalized/${pack.jar.name}"
         destfile="${pack.jar.dir}/normalized/${pack.jar.name}.pack.gz"
         gzipoutput="true" stripdebug="false" deflatehint="keep"
         unknownattribute="pass" keepfileorder="true"/>
</pre>

How can I check if my updatecenter module got installed successfully? manveen 2007-07-02T14:20:13+00:00 2007-07-02T21:26:24+00:00
<p> <i>How can I check if my updatecenter module got installed successfully?</i> </p>
<p> The module gets unzipped or expanded under <i>GF_V2_HOME/updatecenter/registry/glassfish/packagename.of.your.module</i> </p>
<p> <i>If something went wrong, where can I see it?</i> </p>
<p> In the logs under <i>GF_V2_HOME/updatecenter/logs</i>. GF_V2 FCS will contain some bug fixes that promise better logging capabilities. That would be really nice! </p>

Ask Guruji: The Indian search engine manveen 2007-06-05T10:14:45+00:00 2007-06-05T17:14:45+00:00
<p> <a href="">Guruji.com</a> is India's first internet search engine which focuses on developing search products for enhancing the Indian user experience. This is your quick guide to important information about your restaurants, dentists, doctors... in your city. The cities covered are Delhi, Mumbai, Kolkatta, Chennai, Bangalore, Pune, Hyderabad, Ahmedabad, Jaipur, Indore, Noida, Mysore, Ludhiana, Mangalore, Vadodara. You can even query in some Indian languages. </p>

Strawberry picking manveen 2007-05-14T11:45:22+00:00 2007-05-14T18:45:22+00:00
<p> Did you know that strawberries grow in small bunches like this?
</p>
<p> It's the start of the strawberry picking season. It's always fun to <a href=" "> pick fresh organic strawberries yourself</a>. The taste of a fresh strawberry right from the plant into your mouth is, well... unforgettable. </p>
<b><i>Why pick strawberries yourself?</i></b>
<p> If you've never done it before, it could be an experience of a lifetime! </p>
<p> <ul> <li> The quality is much better than any store or farm stand, when you choose the fruit and get it right from the plant. It looks and tastes better. </li> <li> Many farms have organic produce. </li> <li> The costs are usually substantially less; the farmer doesn't need to pay labor to pick, and he has no packaging or shipping costs. </li> <li> And... best of all... it's fun. </li> </ul> </p>

Climate change is serious business manveen 2007-05-04T11:18:33+00:00 2007-05-05T05:14:38+00:00
<p> <a href="">Global warming</a> is a cause close to my heart. </p>
<p> <a href="">Study after study</a> has confirmed that global warming is already occurring and that it is caused primarily by human activities. It's <a href="">time</a> to take action. </p>
<p> Want to know how we can live lightly on the Earth and save money at the same time? </p>
<p> Here is a <a href="">list of 10 things</a> <b>YOU</b> can do today that will not only reduce your <a href="">ecological footprint</a>, but also save you money and help you live a happier, healthier life. </p>
<p> Let's think about our children and make a difference to their tomorrow. </p>

Kids Workshops at Home Depot manveen 2007-05-03T11:01:50+00:00 2007-05-09T19:27:37+00:00
<a href=""> The Home Depot </a> has been offering <a href=""> Kids Workshops </a> - an award-winning program that has been offered in stores since 1997. </p>
<p> Children, accompanied by an adult, use their skills to create objects that can be used in and around their homes or communities.
</p>
<p> The <a href="">workshops</a> are <b>free</b>, how-to clinics designed for children ages 5-12, available on the first Saturday of each month between 9 a.m. and noon at all The Home Depot stores. </p>

Take your daughters and sons to work day manveen 2007-04-25T14:20:56+00:00 2007-04-26T21:03:28+00:00
<p> April 26th is national <a href=""> Take Our Daughters and Sons to Work Day </a>. </p>
<p> The program was started by the <a href="">Ms. Foundation for Women</a> 15 years ago as a means of encouraging young girls to pursue fulfilling careers (especially in fields where women have traditionally been underrepresented). </p>
<p> It is a wonderful way to help our children discover the power and possibilities associated with a balanced work and family life. </p>
<p> I wish many more companies supported this program... Anyone listening? </p>

Who is reading your blog? manveen 2007-04-16T10:43:59+00:00 2007-04-16T18:14:41+00:00
The first time I saw <a href="">ClustrMaps</a> was on Masood's blog <a href="">On the margins...</a>. <p> I found the whole idea of being able to view the geographic coordinates of the people who are reading your blogs pretty fascinating - fascinating enough to want to add it to my own blog entry. </p>
<p> It was amazingly simple. I must add a disclaimer here that ClustrMaps is not paying me to endorse their product in any way. Now that the disclaimer is taken care of... <a href="">here</a> is what I did. </p>
<p> <ol> <li> To get your free account, <a href="">REGISTER HERE</a>. </li> <li> You will be sent a password by email with login instructions. </li> <li> Copy/paste a few lines of HTML from your ClustrMaps login page onto your own site. </li> </ol> Voila! <p> Do you have some cool ideas to add on to the left and right navigation bars of your blog? Please share them.... </p>
Quick Link Swift Closures Tutorial

1- What is Closure?

- Closure: A closure is a special block; it may have 0 or more parameters, and can have a return type. It's almost like a block in C or Objective-C.
- To make it simpler, you can look at the following declaration. Can you guess its meaning?
- The above declaration can be explained in the illustration below:
- This is the familiar syntax for declaring variables with a data type and assigning values to them:
- MyFirstClosure.swift

import Foundation

// Declare the variable myVar1, with data type, and assign a value.
var myVar1 : () -> () = {
    print("Hello from Closure 1");
}

// Declare the variable myVar2, with data type, and assign a value.
var myVar2 : () -> (String) = {
    () -> (String) in
    return "Hello from Closure 2"
}

// Declare the variable myVar3, with data type, and assign a value.
var myVar3 : (Int, Int) -> (Int) = {
    (a : Int, b: Int) -> (Int) in
    var c : Int = a + b
    return c
}

func test_closure() {
    // Execute Closure.
    myVar1()

    // Execute Closure, and get the return value.
    var str2 = myVar2()
    print(str2)

    // Execute Closure, pass parameters
    // and get the return value.
    var c: Int = myVar3(11, 22)
    print(c)
}

- A closure is a block; it may have parameters, and can have a return type:

Syntax of Closure:

{ (parameters) -> returntype in
    // statements
}

2- Function vs Closure

- A function is a special case of a closure: a function is a named closure, or put another way, a closure is an anonymous function.

3- Anonymous Closure

- When declaring a closure, you do not need to write the names of the parameters; the parameters can be referenced through $0, $1, ...
- AnonymousClosure.swift

import Foundation

// Declaring a Closure in the usual way.
var myClosure : (String, String) -> String = {
    (firstName: String, lastName: String) -> String in
    return firstName + " " + lastName
}

// Declare a Closure in the anonymous way.
// (Ignore parameter names).
var anonymousClosure : (String, String) -> String = {
    // Using
    // $0: For first parameter
    // $1: For second parameter.
    return $0 + " " + $1
}

- Note: $0, $1, ... are anonymous parameters; they can only be used in an anonymous closure. If used in an ordinary closure you will get an error message:

Anonymous closure arguments cannot be used inside a closure that has explicit arguments

- For example, the anonymous Closure (2)
- AnonymousClosure2.swift

import Foundation

func test_anonymousClosure() {
    // Declare a variable of Closure type.
    var mySum : ( Int, Int ) -> (Int)

    // Assign an anonymous closure.
    // $0: For first parameter.
    // $1: For second parameter.
    mySum = {
        return $0 + $1
    }

    var value = mySum(1, 2)
    print(value)
}

4- Implicit Return Values

- If the body of a closure is a single expression, you can omit the return keyword.
- ImplicitReturnValues.swift

import Foundation

// This is a Closure whose body is only an expression.
var closureWithReturn = {
    (a: Int, b: Int) -> Int in
    // Only an expression.
    return a + b
}

// Can omit the 'return' keyword.
var closureWithoutReturn = {
    (a: Int, b: Int) -> Int in
    // If only a single expression,
    // omit the 'return' keyword.
    a + b
}

5- Closure in a function

- TODO
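For readers coming from other languages, the concepts above (closures assigned to variables, implicit returns, and captured state) have close Python analogues. This is a minimal comparison sketch added for illustration; it is not part of the original Swift tutorial, and Python has no $0/$1 shorthand, so lambda parameters must be named:

```python
# A named closure assigned to a variable, like `myVar3` above.
# Python lambdas always return their single expression implicitly,
# mirroring Swift's implicit return for single-expression closures.
my_sum = lambda a, b: a + b
print(my_sum(11, 22))  # 33

# A closure capturing surrounding state, as Swift closures also do.
def make_greeter(greeting):
    def greet(name):
        # `greeting` is captured from the enclosing scope.
        return greeting + " " + name
    return greet

hello = make_greeter("Hello")
print(hello("Closure"))  # Hello Closure
```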
Geeks With Blogs: Harish Ranganathan (May 2011 Entries)

Integrating Twitter with your ASP.NET MVC 3 and Razor Web Application ...... Posted On Tuesday, May 24, 2011 11:06 AM

Using the MailDefinition class in .NET Framework 4 Console Application, Visual Studio 2010

One of my colleagues pinged me to check how to use the MailDefinition class for creating formatted mails from a .NET console application. She was referring to this article... which seems to build a nice template email that can be sent. But that article was related to ASP.NET and hence had no issues referencing the MailDefinition class, which is part of the System.Web.UI.WebControls namespace. The application ...... Posted On Thursday, May 12, 2011 1
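The MailDefinition entry above is cut off in the source, but its core idea, a mail body template with placeholders filled in per recipient, is language-independent. A minimal sketch of that idea in Python (illustrative only; the original post uses .NET's MailDefinition class, and the field names here are invented):

```python
from string import Template

# A hypothetical mail template; <%Name%>-style tokens in MailDefinition
# correspond to $-placeholders in Python's string.Template.
body_template = Template(
    "Dear $name,\n\n"
    "Your order $order_id has shipped.\n"
)

def render_mail(name, order_id):
    # substitute() raises on missing fields, which catches
    # template/recipient mismatches early.
    return body_template.substitute(name=name, order_id=order_id)

print(render_mail("Asha", "A-1042"))
```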
8.3. Recurrent Neural Networks¶

In the previous section we introduced \(n\)-gram models, where the conditional probability of word \(w_t\) at position \(t\) only depends on the \(n-1\) previous words. If we want to check the possible effect of words earlier than \(t-(n-1)\) on \(w_t\), we need to increase \(n\). However, the number of model parameters would also increase exponentially with it, as we need to store \(|V|^n\) numbers for a vocabulary \(V\). Hence, rather than modeling \(p(w_t|w_{t-1}, \ldots, w_{t-n+1})\) it is preferable to use a latent variable model in which we have

\[p(w_t|w_{t-1}, \ldots, w_1) \approx p(w_t|h_t),\]

where the latent variable \(h_t\) summarizes the history of the sequence. For a sufficiently powerful function \(h_t\) this is not an approximation. After all, \(h_t\) could simply store all the data it observed so far. We discussed this in the introduction to the current chapter. Let's see why building such models is a bit more tricky than simple autoregressive models where

\[p(w_t|w_{t-1}, \ldots, w_1) \approx p(w_t|w_{t-1}, \ldots, w_{t-n+1}).\]

As a warmup we will review the latter for discrete outputs and \(n=2\), i.e. for a Markov model of first order. To simplify things further we use a single layer in the design of the RNN. Later on we will see how to add more expressivity efficiently across items.

8.3.1. Recurrent Networks Without Hidden States¶

Let us take a look at a multilayer perceptron with a single hidden layer. Consider a mini-batch of instances \(\mathbf{X} \in \mathbb{R}^{n \times d}\) with sample size \(n\) and \(d\) inputs (features or feature vector dimensions). Let the hidden layer's activation function be \(\phi\). Hence the hidden layer's output \(\mathbf{H} \in \mathbb{R}^{n \times h}\) is calculated as

\[\mathbf{H} = \phi(\mathbf{X} \mathbf{W}_{xh} + \mathbf{b}_h).\]

Here, we have the weight parameter \(\mathbf{W}_{xh} \in \mathbb{R}^{d \times h}\), bias parameter \(\mathbf{b}_h \in \mathbb{R}^{1 \times h}\), and the number of hidden units \(h\), for the hidden layer. Recall that \(\mathbf{b}_h\) is just a vector - its values are replicated using the broadcasting mechanism to match those of the matrix-matrix product.
Also note that hidden state and hidden layer refer to two very different concepts. Hidden layers are, as explained, layers that are hidden from view on the path from input to output. Hidden states are technically speaking inputs to whatever we do at a given step. Moreover, they can only be computed by looking at data at previous iterations. In this sense they have much in common with latent variable models in statistics, such as clustering or topic models where e.g. the cluster ID affects the output but cannot be directly observed.

The hidden variable \(\mathbf{H}\) is used as the input of the output layer. For classification purposes, such as predicting the next character, the output dimensionality \(q\) might e.g. match the number of categories in the classification problem. Lastly, the output layer is given by

\[\mathbf{O} = \mathbf{H} \mathbf{W}_{hq} + \mathbf{b}_q.\]

Here, \(\mathbf{O} \in \mathbb{R}^{n \times q}\) is the output variable, \(\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}\) is the weight parameter, and \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\) is the bias parameter of the output layer. If it is a classification problem, we can use \(\text{softmax}(\mathbf{O})\) to compute the probability distribution of the output category. This is entirely analogous to the regression problem we solved previously, hence we omit details. Suffice it to say that we can pick \((w_t, w_{t-1})\) pairs at random and estimate the parameters \(\mathbf{W}\) and \(\mathbf{b}\) of our network via autograd and stochastic gradient descent.

8.3.2. Recurrent Networks with Hidden States¶

Matters are entirely different when we have hidden states. Let's look at the structure in some more detail. Assume that \(\mathbf{X}_t \in \mathbb{R}^{n \times d}\) is the mini-batch input and \(\mathbf{H}_t \in \mathbb{R}^{n \times h}\) is the hidden layer variable of time step \(t\) from the sequence.
Unlike the multilayer perceptron, here we save the hidden variable \(\mathbf{H}_{t-1}\) from the previous time step and introduce a new weight parameter \(\mathbf{W}_{hh} \in \mathbb{R}^{h \times h}\), to describe how to use the hidden variable of the previous time step in the current time step. Specifically, the calculation of the hidden variable of the current time step is determined by the input of the current time step together with the hidden variable of the previous time step:

\[\mathbf{H}_t = \phi(\mathbf{X}_t \mathbf{W}_{xh} + \mathbf{H}_{t-1} \mathbf{W}_{hh} + \mathbf{b}_h).\]

Compared with the multilayer perceptron, we added one more \(\mathbf{H}_{t-1} \mathbf{W}_{hh}\) here. From the relationship between hidden variables \(\mathbf{H}_t\) and \(\mathbf{H}_{t-1}\) of adjacent time steps, we know that those variables captured and retained the sequence's historical information up to the current time step, just like the state or memory of the neural network's current time step. Therefore, such a hidden variable is also called a hidden state. Since the hidden state uses the same definition of the previous time step in the current time step, the computation of the equation above is recurrent, hence the name recurrent neural network (RNN).

There are many different RNN construction methods. RNNs with a hidden state defined by the equation above are very common. For time step \(t\), the output of the output layer is similar to the computation in the multilayer perceptron:

\[\mathbf{O}_t = \mathbf{H}_t \mathbf{W}_{hq} + \mathbf{b}_q.\]

RNN parameters include the weight \(\mathbf{W}_{xh} \in \mathbb{R}^{d \times h}, \mathbf{W}_{hh} \in \mathbb{R}^{h \times h}\) of the hidden layer with the bias \(\mathbf{b}_h \in \mathbb{R}^{1 \times h}\), and the weight \(\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}\) of the output layer with the bias \(\mathbf{b}_q \in \mathbb{R}^{1 \times q}\). It is worth mentioning that RNNs always use these model parameters, even for different time steps. Therefore, the number of RNN model parameters does not grow as the number of time steps increases.
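The recurrence just described can be written out directly in a few lines of NumPy (a sketch for illustration, not taken from the book's code; shapes chosen with \(n=3\), \(d=1\), \(h=2\), \(q=3\)). Note that the same \(\mathbf{W}_{xh}\), \(\mathbf{W}_{hh}\) and \(\mathbf{W}_{hq}\) are reused at every time step, so the parameter count stays fixed no matter how long the sequence runs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h, q = 3, 1, 2, 3  # batch, input, hidden, output sizes

W_xh = rng.normal(size=(d, h))
W_hh = rng.normal(size=(h, h))
b_h = np.zeros((1, h))
W_hq = rng.normal(size=(h, q))
b_q = np.zeros((1, q))

def rnn_step(X_t, H_prev):
    # H_t = phi(X_t W_xh + H_{t-1} W_hh + b_h), with phi = tanh here
    H_t = np.tanh(X_t @ W_xh + H_prev @ W_hh + b_h)
    # O_t = H_t W_hq + b_q
    O_t = H_t @ W_hq + b_q
    return H_t, O_t

H = np.zeros((n, h))
for t in range(5):  # five time steps, same parameters each step
    X_t = rng.normal(size=(n, d))
    H, O = rnn_step(X_t, H)

print(H.shape, O.shape)  # (3, 2) (3, 3)
```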
The figure below shows the computational logic of an RNN at three adjacent time steps. In time step \(t\), the computation of the hidden state can be treated as an entry of a fully connected layer with the activation function \(\phi\) after concatenating the input \(\mathbf{X}_t\) with the hidden state \(\mathbf{H}_{t-1}\) of the previous time step. The output of the fully connected layer is the hidden state of the current time step \(\mathbf{H}_t\). Its model parameter is the concatenation of \(\mathbf{W}_{xh}\) and \(\mathbf{W}_{hh}\), with a bias of \(\mathbf{b}_h\). The hidden state of the current time step \(t\), \(\mathbf{H}_t\), will participate in computing the hidden state \(\mathbf{H}_{t+1}\) of the next time step \(t+1\), and will also become the input for the fully connected output layer of the current time step.

Fig. 8.2 An RNN with a hidden state.

As discussed, the computation in the hidden state uses \(\mathbf{H}_t = \mathbf{X}_t \mathbf{W}_{xh} + \mathbf{H}_{t-1} \mathbf{W}_{hh}\) to generate an object matching \(\mathbf{H}_{t-1}\) in dimensionality. Moreover, we use \(\mathbf{H}_t\) to generate the output \(\mathbf{O}_t = \mathbf{H}_t \mathbf{W}_{hq}\).

In [1]:
from mxnet import nd

# Data X and hidden state H
X = nd.random.normal(shape=(3, 1))
H = nd.random.normal(shape=(3, 2))

# Weights
W_xh = nd.random.normal(shape=(1, 2))
W_hh = nd.random.normal(shape=(2, 2))
W_hq = nd.random.normal(shape=(2, 3))

def net(X, H):
    H = nd.relu(nd.dot(X, W_xh) + nd.dot(H, W_hh))
    O = nd.relu(nd.dot(H, W_hq))
    return H, O

The recurrent network defined above takes observations X and a hidden state H as arguments and uses them to update the hidden state and emit an output O. Since this chain could go on for a very long time, training the model with backprop is out of the question (at least without some approximation).
After all, this leads to a very long chain of dependencies that would be prohibitive to solve exactly: books typically have more than 100,000 characters and it is unreasonable to assume that the later text relies indiscriminately on all occurrences that happened, say, 10,000 characters in the past. Truncation methods such as BPTT (backpropagation through time) and Long Short Term Memory are useful to address this in a more principled manner. For now, let's see how a state update works.

In [2]:
(H, O) = net(X, H)
print(H, O)

[[0. 0.]
 [0. 0.]
 [0. 0.]]
<NDArray 3x2 @cpu(0)>
[[0. 0. 0.]
 [0. 0. 0.]
 [0. 0. 0.]]
<NDArray 3x3 @cpu(0)>

8.3.3. Steps in a Language Model¶

We conclude this section by illustrating how RNNs can be used to build a language model. For simplicity of illustration we use words rather than characters, since the former are easier to comprehend. Let the number of mini-batch examples be 1, and the sequence of the text be the beginning of our dataset, i.e. "the time machine by h. g. wells". The figure below illustrates how to estimate the next word based on the present and previous words. During the training process, we run a softmax operation on the output from the output layer for each time step, and then use the cross-entropy loss function to compute the error between the result and the label. Due to the recurrent computation of the hidden state in the hidden layer, the output of time step 3, \(\mathbf{O}_3\), is determined by the text sequence "the", "time", "machine". Since the next word of the sequence in the training data is "by", the loss of time step 3 will depend on the probability distribution of the next word generated based on the sequence "the", "time", "machine" and the label "by" of this time step.
This is why quite often (such as in the subsequent sections) we will use a character-level RNN instead. In the next few sections, we will introduce its implementation. 8.3.4. Summary¶ - A network that uses recurrent computation is called a recurrent neural network (RNN). - The hidden state of the RNN can capture historical information of the sequence up to the current time step. - The number of RNN model parameters does not grow as the number of time steps increases. - We can create language models using a character-level RNN. 8.3.5. Exercises¶ - If we use an RNN to predict the next character in a text sequence, how many output dimensions do we need? - Can you design a mapping for which an RNN with hidden states is exact? Hint - what about a finite number of words? - What happens to the gradient if you backpropagate through a long sequence? - What are some of the problems associated with the simple sequence model described above?
Getting Started with Rider and Unity

We recently released Rider, a new IDE for C# and .NET developers. It runs cross platform, on Windows, Mac and Linux, and comes with built-in support for Unity – code completion for event functions, inspections and quick-fixes for Unity code, support for shader files and more. Today, we're going to take a look at how you get started, and how Rider will help with your Unity code. Here's a quick overview video that shows Rider in action with Unity code. Read on for more details.

If you haven't encountered Rider before, it's a new IDE for .NET and C#, based on the best bits of ReSharper and IntelliJ IDEA. ReSharper provides the C# language engine, with code completion, navigation, find usages, thousands of inspections, quick-fixes, refactorings and more, while IntelliJ provides the rich, cross platform user interface – editor, debugger, test runner and so on. You can download a free 30-day trial now and get started right away.

Getting started

Getting started with Rider and Unity is nice and easy. Note that you can also do this manually, through the External Tools page of Unity's *Preferences* dialog. The initial Rider 2017.1 release required this to be done manually. This has been fixed in the recently released Rider 2017.1.1. Install Rider using the Toolbox App to make it easy to stay up to date.

When Rider first opens your Unity project, it will install a small Unity Editor plugin into your project. This plugin improves integration between Unity and Rider, such as speeding up the time to open a C# script at the right line, making sure that all necessary references are added to the generated C# project files, and making debugging the Editor easier. It also adds a Rider page to the Preferences dialog with some options. This plugin should be committed to your source control, and Rider will automatically keep this file up to date when new versions are available (this behavior can be disabled in Rider's options).
More details can be found here. Furthermore, when a Unity project is opened, Rider will check its own auto-save settings, and recommend changing behavior to work better with Unity. By default, Rider will automatically save files when you switch to another application or when Rider is idle. This is usually a great way to work, but it can have a negative impact with Unity projects, as it will cause a recompile which can reset game data if you’re in play mode. To prevent this, Rider suggests disabling auto-save, so you have to be explicit about saving your files. Simply click the link in the notification, and Rider will make the changes – and even let you know what was changed. (And don’t worry, Local History keeps you covered, keeping track of all unsaved changes in the background.) Features – code completion, inspections and more Rider ships with knowledge of the Unity API, and will mark classes, methods and fields that are implicitly used by Unity directly in the editor. And of course, Rider makes it easy to generate Unity event functions, either using the Generate menu or simply by typing, with autocompletion. Hovering over a method or method parameter will display the Unity documentation as a tooltip, or in the QuickDoc popup, calling out event functions that can be coroutines. Rider’s Solution Wide Analysis will allow Rider to find (among other things) public types and type members that are not in use. They will be greyed out, and you can safely delete them, knowing they’re not used in your code. But Unity doesn’t work like that – there are classes, methods and fields that aren’t used by your code, but called and set implicitly by the Unity engine. Because they’re not used explicitly by your code, Rider would ordinarily mark them as unused, but deleting them would obviously cause issues with your game. Fortunately, Rider knows about the Unity API, and will mark these implicitly used types and type members as in use. 
Even better, it knows that a field value is set implicitly by Unity, but will still warn you if the value isn't accessed in your code. Of course, if your event function is empty, Rider will still suggest you remove it, for efficiency.

Rider adds a number of Unity specific inspections and quick-fixes. For example, Rider will warn you against using the new keyword to create an instance of a class deriving from MonoBehaviour or ScriptableObject. A quick Alt+Enter later, and Rider will fix this up to call gameObject.AddComponent<T>() or ScriptableObject.CreateInstance<T>().

Because Rider knows about the Unity API, it can verify that all of your "magic methods" have the correct signature. It will ensure that all of your event functions have the correct signature, or it will remove the marker that shows that Unity will call the method implicitly. It will show warning highlights and a quick-fix will correct the method signature. Any unused optional parameters are highlighted, ready for removal, and a context action is available to change the return type to IEnumerator for methods that can be coroutines.

There are also inspections for the [InitializeOnLoad] and [InitializeOnLoadMethod] attributes, ensuring they have the correct method or constructor signatures, and Rider will grey out a redundant [InitializeOnLoad] attribute if the class doesn't have a static constructor, with a quick-fix to either quickly remove the attribute, or create the constructor. And Rider understands Unity's color types, too, showing the color in the editor, and providing an Alt+Enter context action to pick the color from a palette.

One of the more powerful features of the ReSharper engine that powers Rider is the ability to add "references" to string literals. What this means for Unity developers is that Rider knows that the string literal arguments to methods like Invoke, InvokeRepeating, StartCoroutine and StopCoroutine are references to methods. So Rider can validate that the name is correct, and even better, offer code completion, directly in the string literal.
And that’s just the start – these string literals take part in navigation, find usages and are updated when the method is renamed. Rider also introduces initial support for .shader files. ShaderLab content has more support than Cg sections just now, but we’re working on this for future releases. Shader files get syntax highlighting and simple word completion (aka “hippie completion”). ShaderLab content also gets syntax error highlighting, code folding, commenting/uncommenting, and brace matching, as well as recognizing To Do items. Debugging. And Rider’s advanced breakpoint editor allows you to enable or disable a breakpoint, set a condition or hit count, and also decide what happens when it’s hit – suspend execution, or continue, after logging a message or value. You can even remove a breakpoint after it’s been hit, or only enable it after another breakpoint has been reached – allowing for complex breakpoints in methods that are called frequently, such as Update, or chaining breakpoints in a complex bit of game logic. And more… Of course, these are just Rider’s Unity specific features – there are plenty more features that will help you navigate, create and refactor your game logic. Rider has over 2,000 code inspections, over 1,000 quick-fixes, nearly 350 context actions and 50 refactorings. It boasts rich navigation to jump straight to a type, find usages, go to derived or base classes. There’s unit testing support, code cleanup, integrated version control, Local History to save your code between commits, NuGet support (and we’ve made it fast), database tooling and more. It also has access to IntelliJ’s rich plugin system, with over 800 plugins already available, from Markdown support to VIM keyboard bindings (the Key Promoter X plugin is fantastic for learning keyboard shortcuts). All of which adds up to a very rich and powerful IDE, aimed at C# and .NET developers, with Unity support built in. 
But the best way to find out how good Rider is, is to try it yourself. Download the fully featured, 30-day trial version for Windows, Mac or Linux, and try it with your project today. And don’t forget about our Toolbox App, as a great way to manage updates, projects and other IDEs. And finally, the Unity support is Open Source, on GitHub. If there’s a Unity specific feature you think Rider is missing, let us know! 36 Responses to Getting Started with Rider and Unity Dew Drop - August 30, 2017 (#2551) - Morning Dew says:August 30, 2017 […] Getting started with Rider and Unity (Matt Ellis) […] Mike-EEE says:August 30, 2017 Someone there @ JB please knock some sense into the folks at Unity and get them to understand the value of interfaces and well-known contracts… and general .NET design principles while you’re at it. 😉 Tobias Brohl says:March 23, 2018 Dev says:August 31, 2017 That’s just C# 2 support with extra steps! Yuri says:August 31, 2017 BS, just give us net core 2 support already!!! VsCode is tenting me! Julo says:September 1, 2017 Please make it possible to run EF6 commands (add-migration, update-database …) otherwise it’s useless for a lot of folks on ASP .Net Greg says:September 2, 2017 Is the debugger crash with Rider and Unity fixed yet? I had to uninstall Rider and go back to VS because of it. George Cook says:September 3, 2017 you got an issue number? I’ve had no crashes debugging for months, across Unity versions 5.6>2017.2 beta, and God knows how many Rider versions. Hope that helps. Andrew says:September 4, 2017 Can you attach the debugger to standalone builds of your game? This is something that visual studio makes very easy and it’s extremely useful. Matt Ellis says:September 4, 2017 Not yet, but it will be a part of Rider 2017.2 Matt Ellis says:September 4, 2017 And it’s already in the first build of the Rider 2017.2 EAP previews. Andrew says:September 7, 2017 Thanks. Downloaded it. 
So far so good, except none of the hierarchy hotkeys seem to be working (call hierarchy, type hierarchy, etc). Matt Ellis says:September 7, 2017 Rider doesn’t yet implement the views for call or type hierarchy. You can still navigate up and down the class hierarchy with the “Navigate This” menu – “Go to Derived Type” or “Go to Base Type” Andrew says:September 7, 2017 I see. When are the views for the hierarchy expected to be available? Matt Ellis says:September 8, 2017 Call/value tracking hierarchy will be in Rider 2017.2, but we’re not yet sure when the type hierarchy views will be ready. Stepan Stulov says:October 10, 2017 How does syntax highlighting granularity compare with MonoDevelop and/or Visual Studio? I see some elements of different nature having the same color. Thank you for the hints. Matt Ellis says:October 11, 2017 The ShaderLab part of the .shader file is highlighted by parsing the contents of the file and building a syntax tree. We then use this tree to know how to classify the file, as number, string literal, keyword or comment. We don’t have any other classification types yet (variable reference is one that will come in the future). As of Rider 2017.1, the Cg/HLSL blocks are highlighted by simple keyword matches, and there are known inaccuracies in the syntax highlighting here. For Rider 2017.2, we’ve written a parser for the Cg/HLSL language, and we’re now using this for syntax highlighting (it’s not yet complete, as Cg/HLSL is a complex language, so we’re not yet using it to show syntax errors, or anything other advanced features). Rider 2017.2 will be much more accurate with highlighting. However, there are bound to be differences between the implementations, as classification is somewhat open to interpretation (should the “Front” in a ShaderLab “Cull” command be treated as a value, or as a keyword?). 
That said, if there are things that are obviously wrong, or just plain weird, let us know: send us a repro or a screenshot and we'll take a look.

Stepan Stulov says: October 12, 2017
Hi, Matt. Thank you very much for the detailed explanation. I apologize for not being clearer, but I meant C# syntax highlighting granularity. That's a known problem with the Visual Studio family and one of the main reasons for me to stay with MonoDevelop despite all of its horrible bugs. I'd jump ship any moment, but from what I see Rider has the same limited granularity: method arguments, properties and methods all have the same color. Would you mind giving us a comment on that? Thank you very much!

Matt Ellis says: October 12, 2017
It depends on what color scheme you use. If you use "Visual Studio Light" or "Visual Studio Dark", then yes, many identifiers are colored the same. This scheme is designed to mimic Visual Studio, to be familiar and comfortable to new users coming from Visual Studio. Looking back at the screenshots above, I think I was using "Visual Studio Light" here.

However, you can use the "ReSharper Light" or "ReSharper Dark" color schemes. These have more semantic highlighting: method names, fields, classes, interfaces, etc. have different colors. And of course you can edit the colors as well, in Preferences → Editors → Color Scheme → Language Defaults. There are a lot of different categories here, such as "local variable" and "local variable (mutable)", or "method" and "extension method". You should be able to configure it just how you like.

Stepan Stulov says: January 24, 2018
Hi, Matt. I finally got to try Rider. Things are looking pretty good. However, I don't see any specific C# coloring, only "Language Defaults". An example of what I'd love to see colored differently is preprocessor directives (#region, #if, #elseif, #else, etc.). I prefer them to be very close to the color of the background, to make sure they are there but not distracting.
In the current color scheme logic, #region is treated as a keyword (which it isn't), and the region's own name is treated as something else too. Or am I missing something? Cheers!

Matt Ellis says: January 26, 2018
Yes, C# uses the Language Defaults section (showing its heritage, as it was originally a C# IDE). Most things are properly configurable via this page, but you're right, Rider does classify preprocessor directives as keywords. I've created an issue for this: RIDER-13163

Stepan Stulov says: January 30, 2018
Hey, Matt. Thanks for opening the issue. Another thing I had problems finding is camel/pascal case subword navigation. In MonoDevelop I can do Ctrl+arrows to jump whole words and Alt+arrows to jump between subwords in a properly camel-cased word. This is golden.

One more thing: I didn't find C#-specific templates/code snippets. Here is a desired scenario: I define a code snippet with a number of parameters, for example $type$ and $name$ (whichever special symbols are used). For each such parameter I can specify whether it's an identifier (so it's restricted in what symbols may constitute its value) as well as a default value. Additionally, there is a set of embedded parameters, for example $end$ for where the cursor will be when the snippet is completed. There are also so-called "surround" snippets, that will work with a selected text and surround it with elements, like the $region snippet. These are the things I'm automatically missing in Rider, having been (way too) long with MonoDevelop+Unity. For now Rider looks ridiculously good but just this LITTLE BIT not there for me to jump ship. Hope this helps.

Matt Ellis says: February 5, 2018
Rider already has support for camel hump navigation. Make sure you have "Use camel humps" selected in the Editor → General → Typing Assistance preferences page. This should change the behaviour of the "Move caret to next/previous word" actions to use camel humps.
There are additional "camel hump" actions if you don't want to change the default behaviour – go to Keymap in preferences and search for "camel".

Rider also has support for Live Templates, just as you describe – it exposes the same functionality from ReSharper. Unfortunately, right now we don't have a way to edit the templates in Rider. You can vote for this issue to track the feature request: RIDER-548

Jason says: November 26, 2017
Is debugging for external DLLs supported?

Matt Ellis says: December 1, 2017
Yes, as long as the external .dlls have Unity/Mono compatible .mdb debug files. On Windows, the latest nightly builds of Rider 2017.3 ship with pdb2mdb, and will automatically convert any .pdb files added as assets into .mdb files.

On Mac/Linux, the Mono compiler should be used to produce .mdb files. Modern installs of Mono will use Microsoft's CSC C# compiler and generate "portable" .pdb files, which are not compatible with either Unity's Mono or the pdb2mdb executable. You can tell Rider to use Unity's Mono toolset for compiling, by selecting the Unity versions of the Mono executable and xbuild from the drop downs in Preferences → Build, Execution and Deployment → Toolset and Build. This will compile with Unity's version of the Mono toolset and generate .mdb files. This will, however, limit you to C# 6.

George Cook says: January 20, 2018
I've tried all night to get Rider to debug a Unity 2017.3 build. It just won't do it. Built with the relevant settings and debug symbols. What am I doing wrong? I really need build debugging!

George Cook says: January 21, 2018
Resolved it. For anyone else interested: you need to go to the debug menu and select "Attach to Unity". The option from the debug profiles doesn't work. Shame it's not possible to run the editor automatically though, as you end up sweating to alt-tab and start in time.

Matt Ellis says: January 22, 2018
Hi George. I'm not sure what you've been struggling with here.
The Unity support in Rider should add an "Attach to Unity Editor" run configuration, and have it automatically selected; you should see it in the right side of the toolbar at the top of the window. Just hitting the debug button next to it should attach you to the currently running Unity Editor instance. What were you seeing that was different?

Stepan Stulov says: February 1, 2018
Hey, George. I believe the problem may be the slightly confusing name. By hitting "Attach to Unity Editor" you don't initiate the debug session, you actually just select the profile. You need to hit the bug button to the right of it. Could this be the problem? Cheers.

Stepan Stulov says: January 31, 2018
Suggestion for when renaming private fields with [SerializeField] or public fields within a Unity script: OPTIONALLY insert a [FormerlySerializedAs("")] attribute with the previous name. This will make life easier when renaming fields of a MonoBehaviour or a ScriptableObject of which there are a zillion instances. Hope this helps.

Matt Ellis says: February 5, 2018
Yes, this is a request we've had before. We're hoping to add something in the near future. See #54 for renaming fields, and also #55 for encapsulating fields.

George Cook says: February 12, 2018
How does one build for release? I.e. do the same build that Unity does from "Build and Run"? Debugging the #if UNITY_EDITOR mishaps that asset store developers have made would be much more pleasant.

Matt Ellis says: February 14, 2018
Rider will compile the project and find normal compilation errors, but a full build still requires switching to Unity. Could you describe your scenario a little more? There might be something else we can do to help.

Manu says: May 21, 2018
macOS related question: how do you set up Unity if you use the JetBrains Toolbox? The executable of Rider is within the JetBrains Toolbox package.
A workaround currently is to open all .cs files directly with Rider and use "Open by file extension" in Unity, but I dislike that, since my main editor is Sublime for quick file viewing.

Manu says: May 21, 2018
Argh, I took a look at the wrong file path. It's simply here: ~/Applications/Rider.app

Eloque says: July 12, 2018
After using Rider/Unity for quite some time, it's now become unusable. Every time I try to build I get 3 errors like this:

.NETFramework,Version=v4.5.AssemblyAttribute.cs(2, 46): [CS0234] obj/Debug/.NETFramework,Version=v4.5.AssemblyAttribute.cs(2,46): error CS0234: The type or namespace name 'TargetFrameworkAttribute' does not exist in the namespace 'System.Runtime.Versioning' (are you missing an assembly reference?)

I've tried reinstalling both Unity and Rider from scratch. No luck. I am on macOS 10.13.3 and can no longer build anything with Rider. Until this is fixed, I can't use Rider anymore.
https://blog.jetbrains.com/dotnet/2017/08/30/getting-started-rider-unity/
Reg Hibernate in MyEclipse
Hi, my table name is user... it works fine without any error. I opened the HQL editor and executed "from User". My output... com.myeclipse.hibernate.User.48ec32. I didn't understand why I got this. When running the application I got an exception about antlr..... Exception. Tell me the answer please. If you have any application, send it to me. Thank you (in advance).
Reply: Hi friend...

hibernate
Hai, this is Jagadhish. While running a Hibernate application I got an exception like this; what is the solution? Please inform me....
Reply: Hi friend, read for more information.

Java - Hibernate
Friends, please help me. When I run a Hibernate program I get this type of output:
----------------------------
Inserting Record Done
----------------------------
Hibernate... FirstExample { public static void main(String[] args) { Session session = null

Hibernate
Hi sir, I need the complete Hibernate tutorial for download.
Reply: Hi Friend, please visit the following link: Hibernate Tutorials. Thanks.

Hibernate
Hi, I have 2 doubts regarding Hibernate: 1) Can we rename hibernate.cfg.xml? 2) Can we use multiple mapping resources in the hibernate.cfg.xml file? Please let me know soon.
Reply: Hi Friend, please visit the following link:

Hibernate
Hi, good morning. Will you please send me some of the tutorials on Hibernate? I have to learn Hibernate; I am new to this.

hi - Hibernate
hi, what is the object life cycle

Hibernate
When I run the Hibernate Code Generation wizard in Eclipse I'm getting the following error. I added ojdbc14.jar to my build path; still I'm...
Reply: Hi Friend, please visit the following link:

java - Hibernate
........................................................... The above error occurred when I downloaded the code and ran it using Hibernate and annotations. Please help me...
Reply: Hi friend, read for more information. While generating the Hibernate code I got an error like org.hibernate.MappingException.

JAVA - Hibernate
... Why Hibernate? 3. Hibernate vs JDBC? Please answer me, I have a seminar on that topic.
Reply: Hi friend, Hibernate is a free, open... feature in Hibernate; it allows developers to port applications to almost all...

Hibernate
Can you show an insert example for Hibernate other than session.save(obj)?
Reply: Hi, I am sending a link where you can find lots of examples related to Hibernate...

Hi - Hibernate Interview Questions
Hi, please send me Hibernate interview questions.

about hibernate - Hibernate
Tell me the uses and advantages of Hibernate.
Reply: Hi friend, I am sending a link. This link will help you. Please visit for more information. Thanks...

Hibernate
..._* All the columns and keys are the same. How can we use Hibernate?
Reply: Hi friend, read for more information.

Hibernate - Hibernate
Hibernate application development help. Hi, can anyone help me in developing my first Java and Hibernate application?

Struts-Hibernate-Integration - Hibernate
Hi, I was executing a Struts-Hibernate...) javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
Reply: Hi Friend, please visit the following link: Hope...

Criteria Queries - Hibernate
Hibernate Criteria Queries: can I use the Hibernate Criteria Query... = session.createCriteria(TreasuryClient.class);
Reply: Hi friend, package... Configuration(); // configuring hibernate SessionFactory

problem - Hibernate
Hibernate code problem: String SQL_QUERY = "from Insurance...: " + insurance.getInsuranceName()); } In the above code, the Hibernate... this lngInsuranceId='1'. But I want to search not only the value 1 but the value...

hibernate 4.3.0
Hi, I am trying to find good tutorials for Hibernate 4.3.0. Are there any good links for the tutorial? Thanks.

hibernate...............
Good evening. I am using Hibernate in Eclipse; while connecting to an Oracle 10g database I get a driver error: WARNING: SQL Error: 0, SQLState: null (31 May, 2012 8:18:01 PM)

hibernate - Hibernate
Is there any tutorial using Hibernate and NetBeans to do a web application (add, update, delete, select)?
Reply: Hi friend, for a Hibernate tutorial visit

Hibernate application
Hi, I am using the NetBeans IDE. I need to execute a Hibernate application in the NetBeans IDE. Can anyone help me to do this?

Hibernate - Hibernate
How to call a stored procedure in a MySQL database server using Hibernate?
Reply: Hi Friend, please visit the following link: Thanks.

error in eclipse Hibernate
Hi... while running my application I got these types of errors... can you help me please? Exception in thread "main"... code........
Reply: I will try to sort out your problem.

Hibernate - Hibernate
What is lazy loading in Hibernate? I want one example with source code. Please reply with one example.
Reply: Hi Mamatha, Hibernate 3.0, the latest open source... not understandable for anybody learning Hibernate. Hibernate provides a solution to map.../hibernate/ Thanks. Amardeep.

Hibernate
What is the difference between dynamic-update and dynamic-insert?
Reply: Hi friend, it should be necessary to have both a namespace....

Hibernate SessionFactory
Can anyone please give me an example of a Hibernate SessionFactory?
Reply: Hi friend, package roseindia; import...;1.0"?><!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate

java - Hibernate
Tell me how to configure Hibernate in Eclipse: where do I have to add the jar files, and how can I get the jar files? Please tell me the clear procedure. If possible, give me a sample application also. Thank you.
Reply: Hi friend

Hi
Hi, I need some help. I've got my Java code and am having difficulty spotting what errors there are. Is someone able to help? import java.util.Scanner; public class Post { public static void main(String[] args) { Scanner sc

Hi
Hi, I have got this code but am not totally understanding what the errors are. Could someone please help? Thanks in advance! import java.util.Random; import java.util.Scanner; private static int nextInt() { public class
http://www.roseindia.net/tutorialhelp/comment/2980
Red Hat Bugzilla – Bug 239936
Review Request: oyranos - The Oyranos Colour Management System (CMS)
Last modified: 2013-01-13 06:53:51 EST

Spec URL:
SRPM URL:
Description: The Oyranos Colour Management System (CMS)

Well, same case as for cinepaint: there are still some rpath issues (fltk is guilty...). But the remaining rpath might not be caused by fltk... I think some data (especially those provided in standard_profiles/eci/) might not be bundled into the package, because the license isn't compatible for Fedora inclusion... This is the first package (that I know of) that uses elektra. I'm not sure how keys should be registered/unregistered... (get inspired by the Makefile install_usersettings section).

TODO:
- .desktop file
- remove remaining rpath
- check various licenses for compatibility

This package is a cinepaint dependency.

Spec URL:
SRPM URL:
Description: The Oyranos Colour Management System (CMS)

Some updates (remaining rpaths).

Some random notes:
* Please make the compile log more verbose
* Add 'INSTALL="%{__install} -p"' to make install
* What rpm owns %{syscolordir}? (Please check directories' ownership)
* While %syscolordir is used, %{_datadir}/color/ is also used in the spec file
* Is the definition %usercolordir needed (for this spec file)? (and there seem to be other unused macros)
* %configure already uses --libdir=%_libdir
* For make install:
--------------------------------------------------------
make DESTDIR=$RPM_BUILD_ROOT install install_gui
--------------------------------------------------------
This will be sufficient.
* Would you tell me what the %post script actually does? (especially, does the %post script change some files?)

(In reply to comment #1)
> Some updates (remaining rpaths)
* Would you tell me what rpath issues remain? (a mock build log may be useful)
* For %clean: why do you have to remove the __doc directory explicitly?
* Might requires also FLU This one seems optionnal and also not supported with fltk = 1.1.7 (or =< also, i suppose) * I don't knwo why the compile log isn't more verbose at this time... (i expect some echo "what i'm doing "@..." * OK it uses now install -p * About %{syscolordir} - diffents packages use this directory yum whatprovides /usr/share/color show theses: epic / java-1.4.2-gcj-compat-javadoc / pork / python-kiwi-docs I think this should be owned by filesystem if it became a standard directory * Removed %usercolordir others macros seems * About Make install - Actually i think usersetting shouldn't be called at buildtime but only in a %post section... using one line make install* for them. * This post script install keys in elektra registry database, this is mandatory to uses elektra with oyranos unless it will ask for this own version, with static libs... Hope to have advices from Pertusus about theses... * no more remaining rpath * Since i uses temporary dir "__doc" to put the docs in the right place, i thought i have to remove them ... I can also leave them here... Spec URL: SRPM URL: Description: The Oyranos Colour Management System (CMS) Well, I have not checked this in detail yet... * Dependency - main package should have: 'Requires: %{name}-libs = %{version}-%{release}' - Check if -devel package should require main package (if you create library subpackage) - Well, then would you tell me what files does %post scriptlet change? - Well, this is not owned by any package on my system. So for now please make the directory owned by this package explicitly. * build log - To make build log more verbose. please do: --------------------------------------------------------------------------- for f in `find . -name [mM]akefile\*` configure.sh ; do sed -i.silent -e '/.SILENT/d' $f ; done --------------------------------------------------------------------------- If makefile contains '.SILENT:' line, the make log is suppressed. 
* fltk
- Again, is fltk compatible with the GPL?

Okay, now we can restart this review request, as the fltk license issue is solved. So would you update the srpm again?

Ok, I will update it... For now, I've tested the build with cinepaint and it seems that the main oyranos package is needed at runtime... So I expect that oyranos is not multilib compatible. This means that the libs require the main package (with binaries), so I will drop the -libs subpackage and ask for adding oyranos to some exclude-multilib list...

Spec URL:
SRPM URL:
Description: The Oyranos Colour Management System (CMS)

I've dropped the elektra key registration; users should launch oyranos to set the defaults for it... I didn't drop the -libs sub-package... I'm still a little weak on multilib compatibility... Actually, only cinepaint is using oyranos, so I will move cinepaint for fc8 to multilib compatibility... (and rebuild the fc7 version that lives in testing because of that). That will be the same scheme as gimp and gimp-libs...

To come... There is a new icc_examin 0.44 version that should build externally (from cinepaint). But for now I'm only able to build it internally from cinepaint... I don't plan to work on it (externally) for Fedora 8... But I will submit a review as soon as I have some success...

Currently I meet a strange buildroot failure on koji, so I cannot rebuild your new srpm, and I am asking the fedora-devel mailing list why...

For 0.1.7-6:
* ldconfig
- Please call ldconfig for -libs.
- And the main package does not seem to require a ldconfig call
* pkgconfig .pc files: fix up the pc files
- the Name entry is the same
- oyranos_monitor.pc contains a duplicate includedir entry and has a strange Libs entry (-X11 ?)
* oyranos-config
--------------------------------------------------
[tasaka1@localhost oyranos]$ oyranos-config --version
0.1.7
Package elektra was not found in the pkg-config search path.
Perhaps you should add the directory containing `elektra.pc' to the PKG_CONFIG_PATH environment variable
No package 'elektra' found
--------------------------------------------------
- oyranos-config complains that elektra.pc is not found (however, I don't think oyranos-devel should require elektra-devel)
- And move the corresponding man file (oyranos-config.1) from the main package to the -devel subpackage.
* License
- As far as I checked the actual source code, this is licensed under GPLv2+.
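As an aside, the `.SILENT`-stripping loop suggested in the build-log note can be tried out safely before touching a real source tree. The sketch below is only a demonstration of what the quoted sed command does: it creates a scratch Makefile in a temporary directory and runs the same loop against it (the `-i.silent` suffix leaves a backup copy behind).

```shell
# Demo of the .SILENT-stripping trick from the review, run against a scratch
# Makefile in a temporary directory rather than a real oyranos checkout.
tmpdir=$(mktemp -d)
printf '.SILENT:\nall:\n\techo building\n' > "$tmpdir/Makefile"

# Same loop as in the review comment: drop every .SILENT line so that make
# echoes the commands it runs instead of suppressing them. sed's -i.silent
# edits in place and keeps the original as Makefile.silent.
for f in $(find "$tmpdir" -name '[mM]akefile*') ; do
    sed -i.silent -e '/.SILENT/d' "$f"
done

cat "$tmpdir/Makefile"   # the .SILENT: line is gone; a .silent backup remains
```

Note that the pattern `/.SILENT/` is a regular expression, so the leading dot matches any character; that is harmless here, since any line mentioning `.SILENT` is one we want removed.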
Perhaps you should add the directory containing `elektra.pc' to the PKG_CONFIG_PATH environment variable No package 'elektra' found -------------------------------------------------- - oyranos-config complains elektra.pc is not found (however I don't think oyranos-devel should require elecktra-devel) - And move the corresponding man file (oyranos-config.1) from main package to -devel subpackage. * License - As far as I checked the actual source code, this is licensed under GPLv2+. Spec URL: SRPM URL: Description: The Oyranos Colour Management System (CMS) About oyranos-config. The better fix is to use pkg-config for apps that links to oyranos, but for now cinepaint (and icc_examin, maybe others) still uses oyranos-config. So for now the quickfix is to BuildRequires elektra-devel also... I will use, this in cinepaint to prevent linking issues: # clean unused-direct-shlib-dependencies sed -i -e 's! -shared ! -Wl,--as-needed\0!g' libtool For -7: * Again pc file - This time oyranos_monitor.pc has: -------------------------------------------- Requires: x11 xinerama -------------------------------------------- This means that -devel package should have: "Requires: libX11-devel libXinerama-devel". However, are these Requires really needed? #include macro search returns: -------------------------------------------- $ grep -h '#include ' `rpm -ql oyranos-devel | grep /usr/include` | sort | uniq #include "oyranos.h" #include "oyranos_definitions.h" #include "oyranos_version.h" #include <stdlib.h> /* for linux size_t */ -------------------------------------------- and it does not seem to require those two -devel package. * Documents - IMO all files "AUTHORS COPYING ChangeLog README" must be installed into -libs subpackage, because both main and -devel package require -libs subpackage. ping? 
So, I've made a few improvements since devel failed (because of the Fedora -> Packages changes during the freeze). The above errors are corrected, but I still need to test why I get errors when I remove the commercial icc profiles... whereas if I have them installed, there is still the same error... I will do some testing with a freshly installed rawhide... Now there is a new (optional) dependency. But there is a need to figure out what to do with standard paths, as it also bundles some icc profiles. I've just remembered that oyranos is known to be in an alpha state (with 1.7), so maybe I will wait for improvements before continuing the review... Until then I will do some runtime tests to provide a default profile suitable for Fedora (meaning using non-commercial icc profiles by default)...

Again, would you update the status?

FE-LEGAL blocked: I would like to know if we are allowed to provide icc content... There are different licenses for these contents within this package (and xcalib). I expect some of them will need to be removed... But I wonder if we could leave some... Some useful links about an open source version of the Adobe icc:

As I have some standard profiles packaged in Oyranos, which further requires them to work correctly, I wonder how they should be packaged. Oyranos creates several RPMs with the make rpm target. They contain profiles sorted by licenses, plus the binary and developer RPMs.

@Kai-Uwe Behrmann: thanks for joining. I would say I will probably handle this somehow. The mandatory thing will be to remove from the oyranos source archive the icc profiles that cannot be redistributed from the Fedora point of view. Then I will probably take the "upstream" site of these profiles to package them on a well-known third-party repository (if the site allows us to redistribute them). This may require some naming rules for the new package (or I could stay with oyranos-LStarRGB-0.4-16.1.noarch.rpm; what if I choose color-icc-LStarRGB, for example?)
I haven't checked which ones could be allowed or not; maybe we could have compatibleWithAdobeRGB1998.icc, but that's all. I wonder if I will split it into another package (that will be required by oyranos). Maybe the cineon profiles can be allowed; in this case I may provide another package, color-icc-Cineon. For the LStar case, I wonder if we can allow it for Fedora (but not for EPEL), as: "Permission to use, copy, and distribute this profile and its documentation for any other than commercial purpose (including bundling with products, enhancing a product's performance or value)..." I don't know the status of the Heidelberg license; it is probably allowed to redistribute but not Free (to modify, even if that does not make sense), so it will not be allowed into Fedora.

The LStar profile will be replaced with ECIv2. It will become an ISO. Not sure when. The license is more liberal regarding distribution, but modification is not allowed. I can relicense my profiles, the cineons, to be BSD. Would that help? Recreation of any of the profiles, possibly except the cineons, should be easily possible with packages like ArgyllCMS or Scarse. So the ECI licenses should not hurt anyone. The base characterisation data to create the profiles is always open to use. We could even go as far as to include the data files, to make it convenient to create one's own profiles and let users do whatever they like with them.

For naming, I'd like to stay with oyranos in the package name, to make it clear it's the reference. Some features like user policy checking and colour conversions rely on this. The standard profiles involved should be the same on every supported platform. The naming would help with this.

@Patrice Dumas: are we expecting the new elektra 7 for F-9? There is a rebuild failure that we need to fix for gcc43. I may use oyranos 1.8 depending on whether oyranos is rebased on elektra 7 or the current 6.10. Kai-Uwe, any tips on this question?
Indeed, I am more or less waiting for 0.7, or even a release candidate, since I remember a post on the mailing list showing good progress, but I cannot find one. I am not too concerned with the failing rebuild; I would prefer not to spend time on the older release. But if needed, I'll do it.

Well, I will work on oyranos 1.7 (with elektra 6.10) and update to 1.8 (along with elektra 7, if possible). About oyranos_version.h: I will rip out OY_SRCDIR and OY_SRC_LOCALEDIR and keep the timestamps from a reference file. (I wonder if this file should be used anyway.)

(In reply to comment #22)
Why don't you package the profiles that are acceptable in Fedora in oyranos itself? Because they may be needed by other applications? Because they have separate upstreams? It should certainly require all the profiles that can be shipped in Fedora.

That seems quite complicated. In any case, documentation, even minimal, of oyranos-monitor-nvidia and oyranos-monitor is missing. On the same subject, the oyranos-config-fltk man page should be in the main package.

> About oyranos_version.h, i will rip out OY_SRCDIR and OY_SRC_LOCALEDIR and keep
> the timestamps from a reference file. (I wonder if this file should be used
> anyway).

It is not clear, indeed. It is a bit dubious to have all these symbols defined in the API. The API should be platform independent, and that is clearly not the case here. There should be no file name separator in the API, for example.
There are 2 rpmlint warnings relevant for -devel:

oyranos-devel.i386: E: zero-length /usr/share/doc/oyranos-devel-0.1.7/html/structoyComp__s____coll__graph.map

This one is a bit strange; maybe doxygen is doing wrong things with maps, since there is a reference in the html to oyComp__s____coll__map.

oyranos-devel.i386: W: file-not-utf8 /usr/share/doc/oyranos-devel-0.1.7/ChangeLog
* From build.log: ------------------------------------------------------------------ 901 /builddir/build/BUILD/oyranos-0.1.7/oyranos_monitor.c:1386: Warning: The following parameters of oyGetDisplayNameFromPosition(const char *display_name, int x, int y, oyAllocFunc_t allocate_func) are not documented: 902 parameter display_name 903 sh: dot: command not found 904 Problems running dot: exit code=127, command='dot', arguments='"structoyComp__s____coll__graph.dot" -Tpng -o "structoyComp__s____coll__graph.png"' 905 /builddir/build/BUILD/oyranos-0.1.7/oyranos_config.h:41: Warning: Found unknown command `\autor' 906 /builddir/build/BUILD/oyranos-0.1.7/oyranos_config.h:65: Warning: Found unknown command `\autor' 907 sh: dot: command not found 908 Problems running dot: exit code=127, command='dot', arguments='"graph_legend.dot" -Tpng -o "graph_legend.png"' ------------------------------------------------------------------ - Perhaps graphviz is missing from BuildRequires (as you create document files by doxygen). * Mandir - From spec file: ------------------------------------------------------------------ mv $RPM_BUILD_ROOT%{_mandir}/man1/oyranos-config.1 $RPM_BUILD_ROOT%{_mandir}/man3/oyranos-config.3 ------------------------------------------------------------------ Well, moving -config man file to section 3 is correct, however this also requires to fix man file itself. Currently "man oyranos-config" shows the section is 1. * Comment on %scriptlet part ------------------------------------------------------------------ #if [ "`elektra-kdb ls system/sw/oyranos 2>/dev/zero | wc -l`" -eq 0 ]; then # oyranos-policy %{_settingscolordir}/office.policy.xml #fi || : ------------------------------------------------------------------ - Then "rpm -q --scripts oyranos" shows this, which is not desirable because this actually executes a /bin/sh script file (with all comments) needlessly. 
The correct method is to put this part in %if macro like: ------------------------------------------------------------------ %if 0 ...... %endif ------------------------------------------------------------------ * Directory ownership issue - On my system: ------------------------------------------------------------------ [tasaka1@localhost ~]$ LANG=C rpm -qf /usr/share/color/settings/office.policy.xml oyranos-0.1.7-9.fc9.i386 [tasaka1@localhost ~]$ LANG=C rpm -qf /usr/share/color/settings/ file /usr/share/color/settings is not owned by any package ------------------------------------------------------------------ ! Multilib conflict - From configure: ------------------------------------------------------------------ 584 test -n "$ECHO" && $ECHO "sbindir=$sbindir" >> $CONF_SH 585 test -n "$ECHO" && $ECHO "libdir=$libdir" >> $CONF_SH 586 test -n "$ECHO" && $ECHO "includedir=$includedir" >> $CONF_SH ------------------------------------------------------------------ (here $CONF_SH is oyranos-config) This configure part creates oyranos-config different between 32 bits arch vs 64 bits arch. For scripts installed under %_bindir and packaged in -devel package, this multilib conflict is not allowed. * Please check and try to fix this multilib conflict. * Or if you feel fixing configure is not easy, you can - move oyranos-config to oyranos-config-{32,64} according to the architecture - Then install oyranos-config as: ------------------------------------------------------------------- #!/bin/sh ARCH=$(uname -s) case $ARCH in x86_64 | ia64 | s390 ) exec oyranos-config-64 $* ;; * ) exec oyranos-config-32 $* ;; esac ------------------------------------------------------------------- for example (I guess this work). 
Spec URL: SRPM URL: Description: The Oyranos Colour Management System (CMS)
Changelog
- Comment out %%post with %%if 0
- Add BR graphviz
- Update the Doxygen config before generation (-u) (set CLASS_GRAPH and COLLABORATION_GRAPH to NO)
- Remove Requires color-filesystem for -devel
- Rename man1 to man3 within man pages.

The directory ownership issue has been fixed within the color-filesystem package. I think the name is generic enough to be commonly shared (even if, for now, only oyranos will use this directory). Also, this concerns "default data settings" not targeted to be modified from this directory (once imported into the elektra registry, they can be modified system-wide or by users).

I have also improved the fix_bash patch to remove the libdir value within oyranos-config, so the files should be identical (with identical timestamps) for both arches. A rebuild of cinepaint went fine with this.

Well, almost okay, however (In reply to comment #27)
> I have also improved the fix_bash patch to remove the libdir value within
> oyranos-config, so the files should be identical (with identical timestamps)
> for both arches. A rebuild of cinepaint went fine with this.
The problem with this is that $libdir is still used in oyranos-config like (on i386, for example)
--------------------------------------------------------------------------
33 if [ -n "$PKG_CONFIG_PATH" ]; then
34   PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$libdir/pkgconfig
35 else
36   PKG_CONFIG_PATH=$libdir/pkgconfig
37 fi
(and some other places)
---------------------------------------------------------------------------
and removing all these $libdir references does not seem feasible.

Well, I don't know. It remains dirty, and the only clean way to do this would be to use a pkg-config file generated from autotools... But the next oyranos 0.1.8 will have a different scheme, so it would be better to have full autotools support written from scratch... (In reply to comment .
(Note: -L/usr/lib64 is always unneeded because /usr/lib64 is in the default library search path.) This didn't fail, fortunately, because this file didn't actually need liboyranos.so (-loyranos). Try TEMP.c as:
------------------------------------------------------------------------
#include <png.h>
int main(){
    png_sig_cmp(0, 0, 0);
    return 0;
}
-------------------------------------------------------------------------
and
-------------------------------------------------------------------------
[tasaka1@localhost TEMP]$ LANG=C gcc -o TEMP__ TEMP.c -lpng
[tasaka1@localhost TEMP]$ LANG=C gcc -o TEMP__ TEMP.c -L -lpng
/tmp/ccuLIQFO.o: In function `main':
TEMP.c:(.text+0x29): undefined reference to `png_sig_cmp'
collect2: ld returned 1 exit status
--------------------------------------------------------------------------
(In the second command the bare -L swallows -lpng as its argument, so libpng is never linked.) So would you try tricks written on or what I said in comment 26?

Spec URL: SRPM URL: Description: The Oyranos Colour Management System (CMS)
Changelog
- Make oyranos-config a wrapper for pkg-config

As oyranos-config is now the same for lib and lib64 (and has the same timestamps), I would prefer to use it as a wrapper for pkg-config until oyranos.pc can support all the functions provided by oyranos-config.

For 0.1.7-11:
* -libs dependency
- Oops, I don't know why I did not notice this before, but the correct dependency is that the main (oyranos) rpm should have "Requires: %{name}-libs = %{version}-%{release}" and the -libs subpackage should not have "Requires: %{name} = ...."
Other things are okay.
----------------------------------------------------------------
This package (oyranos) is APPROVED by me
----------------------------------------------------------------

I can do that, but cinepaint will then need to require oyranos (it uses /usr/bin/oyranos-config-fltk). Well, I will do that before importing. It will make the oyranos requirement easier to find, as I expect.
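A minimal sketch of what such a wrapper could look like. The flag names and forwarding rules below are assumptions based on common *-config conventions, not the actual Fedora patch; map_flag only prints the pkg-config command it would run, so the mapping can be inspected without pkg-config installed:

```shell
#!/bin/sh
# Hypothetical sketch of oyranos-config forwarding to pkg-config.
map_flag() {
  case "$1" in
    --cflags)          echo "pkg-config --cflags oyranos" ;;
    --ldflags|--libs)  echo "pkg-config --libs oyranos" ;;
    --version)         echo "pkg-config --modversion oyranos" ;;
    *)                 echo "usage: oyranos-config [--cflags|--libs|--version]" ;;
  esac
}

map_flag "$1"
```

A real wrapper would exec the printed command instead of echoing it; the point is that none of the output depends on %_libdir, so the script is byte-identical on both arches.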
Thx

New Package CVS Request
=======================
Package Name: oyranos
Short Description: The Oyranos Colour Management System (CMS)
Owners: kwizart
Branches: F-8
Cvsextras Commits: yes

cvs done.
https://bugzilla.redhat.com/show_bug.cgi?id=239936
CC-MAIN-2017-26
refinedweb
3,293
57.06
Introduction: Raspberry Pi Photocell Log and Alert System

This is a simple instructable for making a RasPi-based photosensor-triggered alert and logging system which can easily be extended to show possible entries into a room like a closet or equipment locker that's usually dark, or to monitor light levels for any other reason. In this tutorial I use LEDs as the outputs, but they can really be anything.

This tutorial pretty much only requires the Adafruit RPi Starter Kit. It's a great kit and is pretty cheap, too. If you don't want that or already own the components, here's the bill of materials:

3x 10mm diffused LED (red, blue, and green)
1x 1uF capacitor
1x photocell resistor
3x 560 Ohm resistors
1x Pi Cobbler breakout
10x breadboard wires
1x full-length solderless breadboard

Now there's one problem with this: the Raspberry Pi has no onboard ADC. But if all we want is a basic (and actually fairly accurate) photosensor, we can run the current through a small capacitor and time how long it takes to charge.

Step 1: Prerequisites

The program is written using Python. I highly recommend you use Adafruit's distro, Occidentalis, but if you don't want to do that, just make sure you have the RPi.GPIO library installed. This particular project is pretty low power, so a good power supply isn't really needed. You should make sure you have a good light source to test this with, preferably one whose brightness you can change. OK, let's get started.

Step 2: Wiring and Testing the Photocell's RC Circuit

Plug the Cobbler into one end of the breadboard. Make sure that no pins are on the same rail; if they are you could SERIOUSLY damage your Pi! Take a breadboard wire and connect the 3v3 pin to the positive rail of your breadboard, and connect the ground (the one next to the 5v0 pins) to the ground rail on the other side of the breadboard. Place the photocell across the gap between the two halves of the breadboard.
On one side, connect another lead from one side of the photocell to the breadboard. On the other side, connect a wire from pin 18 to the photocell and the 1uF capacitor to ground. We're now ready to set up the calibration assistant to test the circuit. Enter this code as a python script and run it. You should see a long list of numbers appear, which will become lower when you shine a light on the photocell. This code is based on Adafruit's tutorial on this technique.

#!/usr/bin/env python
# Example for RC timing reading for Raspberry Pi
# Must be used with GPIO 0.3.1a or later - earlier versions
# are not fast enough!
# Set for resistive input on pin 18

import RPi.GPIO as GPIO, time

GPIO.setmode(GPIO.BCM)

def RCtime(RCpin):
    reading = 0
    # Discharge the capacitor by driving the pin low briefly
    GPIO.setup(RCpin, GPIO.OUT)
    GPIO.output(RCpin, GPIO.LOW)
    time.sleep(0.1)
    # Then count loop passes until the charging capacitor reads high
    GPIO.setup(RCpin, GPIO.IN)
    while GPIO.input(RCpin) == GPIO.LOW:
        reading += 1
    return reading

while True:
    print RCtime(18)

Step 3: Wiring the LEDs

Now we're going to wire the LEDs. Place 3 LEDs on the breadboard. We will use blue as ACT, red as HI, and green as LO. For each LED, connect one of the resistors to the positive pin. From the other end of that resistor, connect a wire to:

22 for ACT
25 for HI
24 for LO

Then, on the negative pin of each LED, connect a wire to ground.

Step 4: The Final Code

This code has a few parameters. They are near the top, written as:

#Settings
IN_RC = 18      #Input pin
OUT_LOW = 24    #Low-light output
OUT_HIGH = 25   #High-light output
OUT_STATE = 22  #Program state output

Using these, you can change the GPIO pins that the program is getting input from and sending output to. Here's the full code:

#!/usr/bin/env python
# Photocell input and parsing for Ras Pi
# Must be used with GPIO 0.3.1a or later - earlier versions
# are not fast enough!
# Set for photocell input on pin 18 by default

import RPi.GPIO as GPIO, time, os, sys

#Settings
IN_RC = 18      #Input pin
OUT_LOW = 24    #Low-light output
OUT_HIGH = 25   #High-light output
OUT_STATE = 22  #Program state output

DEBUG = 1

GPIO.setmode(GPIO.BCM)
GPIO.setup(OUT_LOW, GPIO.OUT)
GPIO.setup(OUT_HIGH, GPIO.OUT)
GPIO.setup(OUT_STATE, GPIO.OUT)

def RCtime(RCpin):
    # Same RC-timing read as in Step 2: discharge the capacitor,
    # then count how long the pin takes to read high again
    reading = 0
    GPIO.setup(RCpin, GPIO.OUT)
    GPIO.output(RCpin, GPIO.LOW)
    time.sleep(0.1)
    GPIO.setup(RCpin, GPIO.IN)
    while GPIO.input(RCpin) == GPIO.LOW:
        reading += 1
    return reading

def ledOut(state):
    if state == 0:
        GPIO.output(OUT_LOW, True)
        GPIO.output(OUT_HIGH, True)
        return 0
    if state == 1:
        GPIO.output(OUT_LOW, False)
        GPIO.output(OUT_HIGH, True)
        return 0
    if state == 2:
        GPIO.output(OUT_LOW, True)
        GPIO.output(OUT_HIGH, False)
        return 0
    if state == 3:
        GPIO.output(OUT_LOW, False)
        GPIO.output(OUT_HIGH, False)
        return 0
    return 1

def photocellParse(reading):
    out = ""
    if reading <= 65:
        out = "0"
        ledOut(0)
        return out
    if reading <= 150:
        out = "o"
        ledOut(1)
        return out
    if reading <= 350:
        out = "."
        ledOut(2)
        return out
    out = " "
    ledOut(3)
    return out

while True:
    GPIO.output(OUT_STATE, True)
    # Read RC timing using the IN_RC pin, parse it, and spit it into stdout
    sys.stdout.write(photocellParse(RCtime(IN_RC)))
    #print photocellParse(RCtime(IN_RC)),
    sys.stdout.flush()
    GPIO.output(OUT_STATE, False)

Step 5: Usage

How the display works:

The ACT LED shows that the program is running.
The LO LED turns on when the light is at a low but noticeable level.
The HI LED turns on when the light is at a moderate level.
Both HI and LO LEDs turn on when the light is at a high level.

On the console: a space means no light, . means low light, o means moderate light level, and 0 means bright light. The output looks something like this, if I were to operate in a softly-lit room and then shine a light on the cell:

.........................o0000000000000000000000ooooo.........................

NOTE: you MUST run the programs as superuser or using sudo! Have fun.
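The threshold logic can be exercised off-Pi by splitting the parsing from the GPIO calls. Here is a small standalone sketch of that idea (the thresholds are the ones from the listing above; the function name is mine): low counts mean the capacitor charged quickly, i.e. bright light.

```python
def parse_reading(reading):
    """Map an RC-timing count to the console symbol used above."""
    if reading <= 65:
        return "0"   # bright
    if reading <= 150:
        return "o"   # moderate
    if reading <= 350:
        return "."   # low
    return " "       # dark

print("".join(parse_reading(r) for r in [40, 100, 300, 900]))  # prints "0o. "
```

Keeping the mapping pure like this makes it easy to tune the thresholds at a desk before re-deploying to the Pi.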
http://www.instructables.com/id/Raspberry-Pi-Photocell-log-and-alert-system/
Amazon Inspector Construct Library

All classes with the Cfn prefix in this module (CFN Resources) are always stable and safe to use.

This module is part of the AWS Cloud Development Kit project.

import * as inspector from '@aws-cdk/aws-inspector';

There are no official hand-written (L2) constructs for this service yet. Here are some suggestions on how to proceed:

- Search Construct Hub for Inspector construct libraries
- Use the automatically generated L1 constructs, in the same way you would use the CloudFormation AWS::Inspector resources.

(Read the CDK Contributing Guide and submit an RFC if you are interested in contributing to this construct library.)
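As a sketch of the second option: the construct and property names below follow the generated L1 pattern (CfnAssessmentTarget taking an assessmentTargetName), but treat the details as assumptions and check the generated API docs for your CDK version.

```typescript
import * as cdk from '@aws-cdk/core';
import * as inspector from '@aws-cdk/aws-inspector';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'InspectorDemoStack');

// L1 constructs map 1:1 onto the CloudFormation resource properties.
new inspector.CfnAssessmentTarget(stack, 'AssessmentTarget', {
  assessmentTargetName: 'demo-target', // hypothetical name
});
```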
https://www.npmtrends.com/@aws-cdk/aws-inspector
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUES | ERRORS | USAGE | SEE ALSO

#include <sys/resource.h>

int getrlimit(int resource, struct rlimit *rlp);

int setrlimit(int resource, const struct rlimit *rlp);

Limits on the consumption of a variety of system resources by a process and each process it creates may be obtained with the getrlimit() and set with the setrlimit() function.

Each call to either getrlimit() or setrlimit() identifies a specific resource to be operated upon as well as a resource limit. A resource limit is a pair of values: one specifying the current (soft) limit, the other a maximum (hard) limit. Soft limits may be changed by a process to any value that is less than or equal to the hard limit. A process may (irreversibly) lower its hard limit to any value that is greater than or equal to the soft limit. Only a process with an effective user ID of super-user can raise a hard limit. Both hard and soft limits can be changed in a single call to setrlimit() subject to the constraints described above. Limits may have an "infinite" value of RLIM_INFINITY.

The rlp argument is a pointer to struct rlimit that includes the following members:

rlim_t rlim_cur; /* current (soft) limit */
rlim_t rlim_max; /* hard limit */

The type rlim_t is an arithmetic data type to which objects of type int, size_t, and off_t can be cast without loss of information.

The possible resources, their descriptions, and the actions taken when the current limit is exceeded are summarized as follows:

RLIMIT_CORE
The maximum size of a core file in bytes that may be created by a process. A limit of 0 will prevent the creation of a core file. The writing of a core file will terminate at this size.

RLIMIT_CPU
The maximum amount of CPU time in seconds used by a process. This is a soft limit only. The SIGXCPU signal is sent to the process. If the process is holding or ignoring SIGXCPU, the behavior is scheduling class defined.

RLIMIT_DATA
The maximum size of a process's heap in bytes. The brk(2) function will fail with errno set to ENOMEM.

RLIMIT_FSIZE
The maximum size of a file in bytes that may be created by a process. A limit of 0 will prevent the creation of a file. The SIGXFSZ signal is sent to the process.
If the process is holding or ignoring SIGXFSZ, continued attempts to increase the size of a file beyond the limit will fail with errno set to EFBIG.

RLIMIT_NOFILE
One more than the maximum value that the system may assign to a newly created descriptor. This limit constrains the number of file descriptors that a process may create.

RLIMIT_STACK
The maximum size of a process's stack in bytes. The system will not automatically grow the stack beyond this limit. Within a process, setrlimit() will increase the limit on the size of your stack, but will not move current memory segments to allow for that growth. To guarantee that the process stack can grow to the limit, the limit must be altered prior to the execution of the process in which the new stack size is to be used. Within a multithreaded process, setrlimit() has no impact on the stack size limit for the calling thread if the calling thread is not the main thread. A call to setrlimit() for RLIMIT_STACK impacts only the main thread's stack, and should be made only from the main thread, if at all. The SIGSEGV signal is sent to the process. If the process is holding or ignoring SIGSEGV, or is catching SIGSEGV and has not made arrangements to use an alternate stack (see sigaltstack(2)), the disposition of SIGSEGV will be set to SIG_DFL before it is sent.

RLIMIT_VMEM
The maximum size of a process's mapped address space in bytes. If this limit is exceeded, the brk(2) and mmap(2) functions will fail with errno set to ENOMEM. In addition, the automatic stack growth will fail with the effects outlined above.

RLIMIT_AS
This is the maximum size of a process's total available memory, in bytes. If this limit is exceeded, the brk(2), malloc(3C), mmap(2) and sbrk(2) functions will fail with errno set to ENOMEM. In addition, the automatic stack growth will fail with the effects outlined above.
Because limit information is stored in the per-process information, the shell builtin ulimit command must directly execute this system call if it is to affect all future processes created by the shell. The value of the current limit of the following resources affects these implementation-defined parameters. A limit whose value is greater than RLIM_INFINITY is permitted. The exec family of functions also causes resource limits to be saved. See exec(2).

Upon successful completion, getrlimit() and setrlimit() return 0. Otherwise, these functions return -1 and set errno to indicate the error.

The getrlimit() and setrlimit() functions will fail if:

EFAULT
The rlp argument points to an illegal address.

EINVAL
An invalid resource was specified; or in a setrlimit() call, the new rlim_cur exceeds the new rlim_max.

EPERM
The limit specified to setrlimit() would have raised the maximum limit value, and the effective user of the calling process is not super-user.

The setrlimit() function may fail if:

EINVAL
The limit specified cannot be lowered because current usage is already higher than the limit.

The getrlimit() and setrlimit() functions have transitional interfaces for 64-bit file offsets. See lf64(5).

brk(2), exec(2), fork(2), open(2), sigaltstack(2), ulimit(2), getdtablesize(3C), malloc(3C), signal(3C), sysconf(3C), lf64(5), signal(3HEAD)
https://docs.oracle.com/cd/E19455-01/817-5439/6mkt6nsgs/index.html
Building Your First Spring Boot Web Application

Want to learn how to build your first Spring Boot web application using Maven and a web application controller? Check out this tutorial to learn how with Spring Boot!

In my previous article, I wrote about the Spring Boot fundamentals. Now, it's time to show you a Spring Boot web application example. In this module, you will learn how to create a simple Spring Boot application and which dependencies and technologies you need to get started. Furthermore, we will go a little deeper into the basic fundamentals. I will explain some of the most crucial behind-the-scenes mechanisms, which you need to understand if you would like to be a professional developer.

How to Build Your First Spring Boot Web Application

Prerequisites

To create a new Spring Boot application, we will use the following in our example. You can simply download and install them. If you are not familiar with them:

- Spring STS: The Spring Tool Suite (STS) is an Eclipse-based development environment that is customized for developing Spring applications.
- Maven: A tool that you can use for building and managing any Java-based project.

You do not need to know much about them for now; we will just use their basic functionality. In one of my upcoming articles, we will talk about Spring STS and Maven in greater detail.

Creating Your Project

After you have started STS, right-click the package explorer and select New -> Maven project. Then, we can use our default Workspace location. Select the default maven-archetype-quickstart artifact. Give any name for your Spring Boot application.
Set up Your pom.xml

After you have clicked the finish button, open the pom.xml and add the spring-boot-starter-parent and the spring-boot-starter-web dependency as below:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>springboot</groupId>
  <artifactId>my-first-application</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>my-first-application</name>
  <url></url>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.3.RELEASE</version>
  </parent>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
  </dependencies>
</project>

Here, we use Spring Boot version 2.0.3, but you can use any valid version that you would like.

Configure Your Application as a Spring Boot Application

After that, look for your App.java class; the Maven quickstart archetype created it for us. This is where our public static void main(String[] args) method is located, which is the entry point of all Java applications. It should be in our root package. You will need to open it and add the @SpringBootApplication annotation to the class and the SpringApplication.run(App.class, args) line to our main method.

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

Set up Your Web Application Controller

If you have done all this, create a new package called controller. Add a new class to this package and update the following:

@RestController
public class TestController {
    @RequestMapping("/")
    public String home() {
        return "Spring boot is working!";
    }
}

The class and method names do not matter; you can call them anything.
As you can see, we added the @RestController annotation to the class, and we created a method with the @RequestMapping annotation. These are part of the Spring MVC framework. We will talk about that in greater detail later.

Start Your Web Application

So, that's it. Now, we can start our newly created Spring Boot web application as a simple Java application: right-click our App.java class -> Run As -> Java Application. After this, you should see something similar in the console log:

2018-08-06 20:22:53.221 INFO 9152 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-08-06 20:22:53.225 INFO 9152 --- [ main] springboot.my_first_application.App : Started App in 1.862 seconds (JVM running for 2.419)

Here, you can see that our application has been started. Furthermore, we have a running Tomcat web server, which is registered on port 8080.

Try Out Your Web Application

To check whether our application is working or not, we can simply open a web browser and type http://localhost:8080. As we saw above, 8080 was the port where our web server is registered. You should see the following:

How Does it Work Behind the Scenes?

Dependency Management

After you added your starting dependencies, Maven downloaded all of the libraries considered necessary for our project. You can check this by opening the pom.xml and clicking on the dependency hierarchy tab shown below:

It knows which dependency versions are the latest and stable for each other. This makes our life easier, because you do not need to take care of the different transitive dependencies. You only need to add one starter dependency, and all of its transitive dependencies are downloaded automatically. The spring-boot-starter-parent added to the parent section tells Spring Boot which default versions we would like to use; thus, we do not need to give version numbers later on, because everything else aligns with it.
How Does a Spring Boot Application Start?

- Firstly, the application starts with a simple Java public static main method.
- Next, Spring starts the Spring context by looking up the auto-configuration initializers, configurations, and annotations that direct how to initialize and start up the context.
- Lastly, it starts and auto-configures an embedded web server. The default application server for Spring Boot is Tomcat.

Benefits of Containerless Deployment

As you can see here, we didn't need to configure our web container. Therefore, we gain a lot of time and could get rid of some of our configuration files. The reason that Spring Boot runs an embedded web server and configures it itself is that it knows we need one from the web dependency we added to our pom.xml. Spring Boot opens up a bunch of new opportunities for us: we can run a web app simply by copying a basic .jar file anywhere Java is installed and running it. This is a big step towards cloud architecture, because we can handle our independent deployments more easily than before.

Summary

In conclusion, you just learned how to build a very basic Spring Boot application from scratch. Now, you know how to add your dependencies and how they resolve. Furthermore, we talked about how your web application starts and the benefits of moving towards containerless deployments.

Special thanks to Bunker Dan for his high-quality video training, where I learned a lot of this information.
https://dzone.com/articles/building-your-first-spring-boot-web-application-ex?fromrel=true
Hi, I'm new to this. I'm a mechanical engineering student and have to learn C++. I have this book and there's a program source code. My question is: why do we need to include ctime in this code and what is it good for?

#include <iostream>
#include <cstdlib>
[B]#include <ctime>[/B]
#include <cmath>

using namespace std;

int rand_0toN1(int n);
void draw_a_card();

char *suits[4] = { "hearts", "diamonds", "spades", "clubs"};
char *ranks[13] = {"ace", "one", /* ... */};

void draw_a_card()
{
    int r, s;
    r = rand_0toN1(13);
    s = rand_0toN1(4);
    cout << ranks[r] << " of " << suits[s] << endl;
}

int rand_0toN1(int n)
{
    return rand() % n;
}

P.S. I'm using Visual Studio 2010 (C++). Thanks for any reply.

Edited by WaltP: Added CODE Tags -- please use them
https://www.daniweb.com/programming/software-development/threads/407234/what-does-include-ctime-do
31 October 2008 13:09 [Source: ICIS news]

LONDON (ICIS news)--Equistar will temporarily idle its olefins unit in La Porte, Texas, due to declining market and economic conditions, its parent company LyondellBasell said on Friday.

The plant, which has a nameplate capacity of 1.7bn pounds/year (771,107 tonnes/year) of ethylene and 700m pounds/year (317,515 tonnes/year) of propylene, was expected to be down until early 2009.

"We have elected to take the olefin plant at [...]"

"Today's tight credit markets and low consumer confidence also are being felt by our customers and this is contributing to decreased demand for petrochemical derivatives," he added.

Dineen said the unit would be ready to restart sooner if market conditions were to improve. The company said it would use this down time to conduct opportunity maintenance.

Equistar's low density polyethylene (LDPE), linear low density polyethylene (LLDPE), acetic acid and vinyl acetate monomer (VAM) units at the site would not be affected by the shutdown.

Please visit the complete ICIS plants and projects database
http://www.icis.com/Articles/2008/10/31/9168104/equistar-temporarily-idles-texas-olefins-plant.html
Your Kubernetes YAML files need validation. Jack Wallen shows you a very easy tool that can drastically simplify that task.

If you've been navigating the waters of Kubernetes, you know how challenging it is. Not only are there a lot of moving parts, your pod and container configuration files can become quite complicated. When those manifests grow in size, you could easily overlook a configuration option, which could be costly. Think about it this way: a poorly configured Kubernetes manifest could lead to security issues or could even cost you money--especially when you're deploying your pods on a cloud-hosted service like AWS or Google Cloud, where you pay for services used. Misconfigure a pod and it might use too much of one or more resources--there goes your monthly budget.

Why not take the time to lint your configuration files? Because that can be time consuming. However, there's an easier way. With the help of the kube-score tool, you can test your YAML files for things like:

- Ingress targets a Service
- CronJobs have a configured deadline
- All pods have resource limits and requests set
- All pods have the same requests as limits on resources set
- All pods have the same CPU requests as limits set
- An explicit non-latest tag is used
- The pullPolicy is set to Always
- All StatefulSets are targeted by a PDB

The full list of checks can be found in the kube-score documentation. The tool is incredibly easy to use and the output will help you tighten up your YAML files so there aren't gaping security holes or malformed resources. How do you use this handy tool? Let me show you.

SEE: Kubernetes security guide (free PDF) (TechRepublic)

What you'll need

- A running instance of Kubernetes
- A user with sudo privileges

How to install kube-score

This is quite easy, because kube-score comes as a simple binary file. I'll be demonstrating on Ubuntu Server 20.04.
To install kube-score on this platform, log in to the server and download the necessary file with the command:

wget

Note: Make sure to check the kube-score release page to ensure you're downloading the latest version.

Unpack the tar file with the command:

tar xvzf kube-score_1.10.1_linux_amd64.tar.gz

You should now see the kube-score file in the current working directory. Let's move that with the command:

sudo mv kube-score /usr/local/bin

You're ready to score your manifests.

How to use kube-score

Using kube-score is incredibly easy. Let's say you have the file test.yaml you want to check. Change into the directory housing the test.yaml file and issue the command:

kube-score score test.yaml

The output will list WARNING or CRITICAL for any problems it finds (Figure A).

Figure A: The output of kube-score against my test YAML file.

At this point, you can clearly see which configurations need attention in your YAML file. Make sure you address those issues before deploying. If you have running containers or pods, you can run kube-score against them with the command:

kubectl api-resources --verbs=list --namespaced -o name | xargs -n1 -I{} bash -c "kubectl get {} --all-namespaces -oyaml && echo ---" | kube-score score -

You'll probably find considerably more output this way (Figure B).

Figure B: Using kube-score against running containers within a Kubernetes cluster.

Of course, kube-score isn't perfect and it might not run the specific checks you need--make sure to look through the full checklist to see if it's complete enough for you. Even if it doesn't check for everything you need, kube-score will be much better at validating your YAML files than a manual check, especially if you have complex and numerous manifests. Give kube-score a try and see if it doesn't make your Kubernetes deployments a bit more secure and reliable.
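To make the checks concrete, here is a hypothetical test.yaml of the kind kube-score complains about--a Deployment with no resource requests or limits and an implicit :latest image tag:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: web
          # No tag means :latest, and no resources block is set -
          # both show up as findings in the kube-score output
          image: nginx
```

Adding an explicit image tag and a resources section with requests and limits clears the corresponding findings.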
https://www.techrepublic.com/article/how-to-quickly-validate-your-kubernetes-configuration-files/
At the thoughtbot Boston office, when nature calls, there is an arduous, uphill both ways and often snow covered trek to the bathroom area. The bathrooms are also out of sight from anywhere in the office. Many times someone will turn the corner to the bathrooms and see that they are all occupied. That person now must either wait around until a bathroom opens up or go back to their desk and try again later.

We wanted an indicator, visible throughout our Boston office, that informs whether a bathroom is available. We used the power of hardware to hack together a solution.

Overview

The plan calls for two microcontrollers: one at the bathroom doors sensing their state, and one out in the office reporting it. They will have to communicate wirelessly since their distance apart could be long. The bathroom door microcontroller has to run from batteries since there is not a nearby outlet. We also decided that besides using an LED to show if there was an available door, we would also post the info to a remote server so other applications could digest it.

Hardware

We'll be using Arduinos for the microcontrollers because there is a ton of support and information about how to use them available online. For similar reasons, we will use XBee V1 radios for the communication between the Arduinos. The door sensing Arduino will be an Arduino Fio because it comes with a LiPo battery charging circuit and an easy way to plug in an XBee radio. The reporting Arduino will be an Arduino Yún because of its WiFi capabilities. The door sensors are generic reed switches with magnets. We'll also use a couple of solderless breadboards to prototype extra circuitry needed to tie things together.

Door Sensors

These sensors are a combination of a magnet and a reed switch. The magnet is placed on the door and the reed switch is on the frame. When the door closes, the magnet closes the reed switch, closing the circuit. We connect one side of the switch to power and the other side goes into a GPIO port on the Arduino.
We must also connect a pull down resistor from the GPIO port to ground so when the switch is open, the port reads a low signal. Now when the door is shut, the switch is closed and there is a high signal on the port. When the door is open, the switch is open and there is a low signal on the port.

Arduino Fio (sensor module)

This module is responsible for sensing the bathroom door's state and sending that state to the reporter. First, we wire up the two door sensors to the Fio. We'll use digital pins 2 and 3 on the Fio so we can use the interrupts later for power savings. Then we connect a common 10K resistor as pull down to ground from each pin. The only thing left to do is plug the XBee into the dedicated connector on the Fio. Now that everything is connected, we need to write the code that reads the sensors and sends their state to the XBee.

void setup() {
  pinMode(2, INPUT);
  pinMode(3, INPUT);
  Serial1.begin(9600);
}

Here we initialize digital pins 2 and 3 as inputs and create a serial interface for communication with the XBee. The XBee is programmed at 9600 baud by default, so we need to create the serial connection to match. Now, in the main program loop, let's check the door sensors and send their state to the XBee.

void loop() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  Serial1.write(0xF0);
  Serial1.write(leftDoor);
  Serial1.write(rightDoor);
  Serial1.write(0xF1);

  delay(1000);
}

Here we read the status of the doors and then send it to the XBee. Let's add a delay of 1000ms, or 1 second, so we are not constantly sending data, but also update often enough that the data isn't stale. We also add a prefix and postfix to our transmission so it is easier to receive on the other end.

This implementation isn't very efficient because there could be long periods of time where neither door has changed. Let's save the state of the doors and transmit only if one has changed.
int leftDoorState = 0;
int rightDoorState = 0;
boolean hasChanged = false;

void loop() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  if (leftDoor != leftDoorState) {
    leftDoorState = leftDoor;
    hasChanged = true;
  }

  if (rightDoor != rightDoorState) {
    rightDoorState = rightDoor;
    hasChanged = true;
  }

  if (hasChanged) transmit();

  delay(1000);
}

void transmit() {
  Serial1.write(0xF0);
  Serial1.write(leftDoorState);
  Serial1.write(rightDoorState);
  Serial1.write(0xF1);
  hasChanged = false;
}

Here we create two global variables to hold the current doors' state. When we read the doors' states, we check whether they have changed since the last read. If either of the doors has changed state, we transmit the data with the XBee. This will cut us down from transmitting every second to maybe only 20 to 50 transmits per day depending on bathroom use. The XBee module uses the most power when transmitting, so reducing the number of times it has to transmit will reduce the power consumption. We can go one step further and use interrupts to notify the Arduino that a door state has changed.

void setup() {
  pinMode(2, INPUT);
  pinMode(3, INPUT);
  Serial1.begin(9600);
  attachInterrupt(0, transmitDoorState, CHANGE); // interrupt 0 is digital pin 2
  attachInterrupt(1, transmitDoorState, CHANGE); // interrupt 1 is digital pin 3
}

void transmitDoorState() {
  Serial1.write(0xF0);
  Serial1.write(digitalRead(2));
  Serial1.write(digitalRead(3));
  Serial1.write(0xF1);
}

void loop() {
}

We attach a change interrupt to each of our door sensors. Any time a door state changes, the interrupt will execute the function transmitDoorState. Since our interrupt is doing all the work now, we can remove the global variables and all functionality from the main program loop. The sensor module is now waiting for a change on the interrupt pins before it does anything. While it's waiting, the CPU is active and processing no-op commands. This is wasting power because we're keeping the CPU on when we're not doing anything. To save power let's sleep the CPU when we don't need it.

#include <avr/sleep.h>

void setup() {
  pinMode(2, INPUT);
  pinMode(3, INPUT);
  Serial1.begin(9600);
  attachInterrupt(0, transmitDoorState, CHANGE);
  attachInterrupt(1, transmitDoorState, CHANGE);
}

void transmitDoorState() {
  Serial1.write(0xF0);
  Serial1.write(digitalRead(2));
  Serial1.write(digitalRead(3));
  Serial1.write(0xF1);
  delay(100);
}

void loop() {
  set_sleep_mode(SLEEP_MODE_IDLE);
  sleep_enable();
  sleep_mode();
}

This will put the Arduino into an idle sleep while waiting for a change on the interrupts. When a change occurs, the Arduino will wake up, transmit the state, then go back to sleep.
We need to add a delay after we send data to the XBee to ensure that all the data has been sent before we try to sleep the processor again.

Further Improvements

The transmitDoorState function is an Interrupt Service Routine, or ISR. An ISR should be quick because you want to be out of it before another interrupt occurs. The transmitDoorState function is not very quick because it has to send data serially and delay for 100ms. We probably won't run into any issues since the time between interrupts (door opening and closing) will most likely be greater than the time it takes to transmit, but to be safe we could move this code into the program loop and execute it after the sleep function. We could also reduce the power consumption further by using the sleep mode SLEEP_MODE_PWR_DOWN. This sleep mode affords us the most power savings but doesn't allow us to use the CHANGE interrupt. Instead we would have to use level interrupts at LOW or HIGH and manage which one to interrupt on depending on the state of the door.

Arduino Yún (reporter module)

This module is responsible for receiving the door state data and reporting that state to the office. First, we need to wire up the XBee module. It would be easy to get an XBee shield for the Arduino and use that, but we have an XBee explorer instead, so we need to manually wire up the XBee. Connect the XBee explorer power and ground to the Arduino, then the RX and TX to digital pins 8 and 9 respectively (pins chosen at random). We also need to add a 10K pull-up resistor from the RX and TX lines to power. Now add an LED in series with a 330 ohm resistor (or similar value) to digital pin 4 (also chosen at random). The LED and resistor combo should be connected low side and we'll drive it high from the Arduino to turn it on. Time to code!
#include <Bridge.h>
#include <Process.h>
#include <SoftwareSerial.h>

SoftwareSerial xbee(8, 9); // RX, TX

void setup() {
  Bridge.begin();
  pinMode(4, OUTPUT);
  digitalWrite(4, LOW);
  xbee.begin(9600);
}

Here we set up the Bridge to the Linux processor on the Yún, set the mode of our LED pin to an output, start the LED off, and initialize the software serial connection for the XBee. Next, we need to receive data from the XBee and report the door status.

int leftDoor, rightDoor;

enum state {
  waiting_for_prefix,
  get_left_door,
  get_right_door,
  waiting_for_postfix
};

state currentState = waiting_for_prefix;

void loop() {
  if (xbee.available()) {
    int data = xbee.read();

    switch (currentState) {
      case waiting_for_prefix:
        if (data == 0xF0) currentState = get_left_door;
        break;
      case get_left_door:
        leftDoor = data;
        currentState = get_right_door;
        break;
      case get_right_door:
        rightDoor = data;
        currentState = waiting_for_postfix;
        break;
      case waiting_for_postfix:
        if (data == 0xF1) {
          currentState = waiting_for_prefix;
          reportState();
        }
        break;
    }
  }
}

We'll use a state machine to receive the data from the XBee. There are four states: one each for the prefix and postfix, marking the start and end of our data, and one for each door. The door states record the data from the XBee, and the prefix and postfix states are used for flow control. When we see the postfix byte we report the door states with reportState(). This function should turn on an LED if both doors are closed.

void reportState() {
  digitalWrite(4, leftDoor & rightDoor);
}

When the doors are closed their state is HIGH, or 1, so when they are both HIGH the bitwise & operator evaluates to HIGH and turns on the LED. Let's also report the door status online. Connect the Arduino Yún to your WiFi network. The Arduino processor can send shell commands to the Linux processor on the board. We will use the curl command to post the door states to an external API.
void reportState() {
  digitalWrite(4, leftDoor & rightDoor);

  Process curl;
  curl.begin("curl");
  curl.addParameter("--data");

  String data = "leftDoor=";
  data += leftDoor;
  data += "&rightDoor=";
  data += rightDoor;

  curl.addParameter(data);
  curl.addParameter(YOUR_EXTERNAL_API_SERVICE);
  curl.run();
}

That's all! Now you'll be able to see if there is an available bathroom via an LED in the room or with a cool application that digests the API web service.

Areas For Improvement

During the project we realized that the power consumption of the XBee module was very high and it had no way of going into a low power state. We measured the current consumption from the Fio board by placing a 0.5 ohm resistor in series with the power from the battery. The voltage drop across the resistor divided by the resistance gives us the current, which was around 50mA to 60mA. The battery we are using is a 400mAh battery, so we would get about 8 hours of use. Charging the battery every day is not an option, so more research needs to be done into a lower power wireless communication solution. Also, this solution was expensive, and more research can be done to find a lower cost implementation. thoughtbot Boston has two other bathrooms also located in remote places. We eventually want door sensors on these as well, and a low cost setup for the door sensor would make this easier.
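The framing protocol the two modules share (an 0xF0 prefix, one byte per door, an 0xF1 postfix) is simple enough to prototype and test off-device before touching firmware. Here is a minimal Python sketch of the receiver's state machine; the function name and the resynchronize-on-garbage behavior are our choices for this sketch, not part of the original firmware:

```python
# Parse a raw byte stream for 0xF0 <leftDoor> <rightDoor> 0xF1 frames.
PREFIX, POSTFIX = 0xF0, 0xF1

def parse_frames(stream):
    frames = []
    state = "prefix"            # prefix -> left -> right -> postfix
    left = right = None
    for byte in stream:
        if state == "prefix":
            if byte == PREFIX:
                state = "left"
        elif state == "left":
            left, state = byte, "right"
        elif state == "right":
            right, state = byte, "postfix"
        else:                   # expecting the postfix byte
            if byte == POSTFIX:
                frames.append((left, right))
            state = "prefix"    # resynchronize whether or not it matched
    return frames

# Two frames: both doors closed, then only the left door closed.
frames = parse_frames([0xF0, 1, 1, 0xF1, 0xF0, 1, 0, 0xF1])
print(frames)                        # → [(1, 1), (1, 0)]
print([l & r for l, r in frames])    # LED on only for the first frame → [1, 0]
```

Requiring the 0xF1 postfix before accepting a frame is what lets the receiver drop a partial or corrupted transmission instead of reporting bogus door states.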
https://robots.thoughtbot.com/arduino-bathroom-occupancy-detector
Created on 2011-03-14 16:58 by ev, last changed 2011-03-16 21:18 by python-dev. This issue is now closed.

I've expanded the coverage of the posixpath test. The following scenarios have been added:

- lexists with a non-existent file.
- ismount with binary data.
- ismount with a directory that is not a mountpoint.
- ismount with a non-existent file.
- ismount with a symlink.
- expanduser with $HOME unset.
- realpath with a relative path.
- sameopenfile with a basic test.

I have tested it on Ubuntu natty (20110311) and Mac OS X 10.6.6.

Fixed a typo in the previous patch.

Added an updated patch that includes testing whether ismount would succeed by faking the path being on a different device from its parent.

It's probably best to give the fake stats inode numbers, so if the code does fail to check the st_dev fields, it will fail the test. I've updated the patch with this.

The patch includes an unconditional "import posix" that will fail on Windows. posixpath is available on Windows (although not *all* its functionality makes sense), so the whole test should not be skipped - but it is reasonable to skip just the new tests using posix. Something like:

try:
    import posix
except ImportError:
    posix = None

Then decorate tests that use posix with:

@unittest.skipIf(posix is None, "Test requires posix module")

Note that instead of try/finally constructs you can create a cleanup function and call self.addCleanup(...). This reduces extra levels of indentation in your test code.

In the new code in "test_ismount", from "# Non-existent mountpoint" on, I would turn this into a new test. (Does os.symlink *always* exist - if it is platform dependent you should skip the test if it isn't available.)

Scratch the comment about symlink - in test_mount. I see you already protect that code with "if support.can_symlink()".

Will posixpath.sameopenfile work on Windows? That may need skipping.

Updated the patch to address Michael's concerns.
I've looked at the sameopenfile code, and can see no reason why it would not work on Windows. I've asked Brian to verify this.

I haven't used addCleanup here, but have noted it for the future. In this case, using it would require adding another function to handle the reassignment, which I think is a bit more messy than the extra bit of indentation.

Tested the patch on Windows -- all tests pass.

New changeset f11da6cecffd by Michael Foord in branch '3.2':
Closes issue 11503. Improves test coverage of posixpath.
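The two idioms suggested in the review (guarding posix-only tests with a conditional import plus skipIf, and using addCleanup instead of try/finally) combine roughly as follows. The test class and test names here are illustrative, not taken from the actual patch:

```python
import os
import unittest

try:
    import posix
except ImportError:
    posix = None  # e.g. on Windows

class PosixPathExtras(unittest.TestCase):
    @unittest.skipIf(posix is None, "Test requires posix module")
    def test_posix_specific(self):
        # Runs only where the posix module exists; skipped elsewhere.
        self.assertTrue(hasattr(posix, "stat"))

    def test_expanduser_home_unset(self):
        # addCleanup replaces a try/finally block: the registered
        # callable runs even if the assertions below fail.
        old_home = os.environ.pop("HOME", None)
        if old_home is not None:
            self.addCleanup(os.environ.__setitem__, "HOME", old_home)
        self.assertNotIn("HOME", os.environ)
```

Because the skip decision happens at decoration time, the whole test file still imports cleanly on platforms without posix, which is exactly what the reviewer asked for.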
http://bugs.python.org/issue11503
Investors in Rent-A-Center Inc. (Symbol: RCII) saw new options become available today, for the July 5th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the RCII options chain for the new July 5th contracts and identified one put and one call contract of particular interest.

The put contract at the $23.00 strike price has a current bid of $1.10. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $23.00, but will also collect the premium, putting the cost basis of the shares at $21.90 (before broker commissions). To an investor already interested in purchasing shares of RCII, that could represent an attractive alternative to paying $23.22/share today. Should the put contract expire worthless, the premium would represent a 4.78% return on the cash commitment, or 40.60% annualized — at Stock Options Channel we call this the YieldBoost.

Below is a chart showing the trailing twelve month trading history for Rent-A-Center Inc., and highlighting in green where the $23.00 strike is located relative to that history:

Turning to the calls side of the option chain, the call contract at the $23.50 strike price has a current bid of $1.05. If an investor was to purchase shares of RCII stock at the current price level of $23.22/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $23.50. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 5.73% if the stock gets called away at the July 5th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if RCII shares really soar, which is why looking at the trailing twelve month trading history for Rent-A-Center Inc., as well as studying the business fundamentals, becomes important.
Below is a chart showing RCII's trailing twelve month trading history, with the $23.50 strike highlighted in red:

Considering the fact that the $23.50 strike represents a premium to the current trading price of the stock, there is also the possibility that the covered call contract would expire worthless, in which case the premium retained would deliver a 4.52% boost of extra return to the investor, or 38.38% annualized, which we refer to as the YieldBoost.

The implied volatility in the put contract example, as well as the call contract example, are both approximately 52%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $23.22) to be.
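All of the quoted percentages fall out of simple arithmetic on the premiums, strikes, and share price. Here is a quick sketch of that arithmetic; the figures are the ones from the article, and the 43-day count (May 23 to the July 5th expiration) is our inference from the publication date:

```python
# YieldBoost-style arithmetic for the RCII July 5th contracts.
share_price = 23.22
days_to_expiration = 43                      # May 23 -> July 5

# Put side: sell-to-open the $23.00 strike at its $1.10 bid.
put_strike, put_premium = 23.00, 1.10
cost_basis = put_strike - put_premium        # 21.90 effective entry if assigned
put_return = put_premium / put_strike        # return on the cash committed
put_annualized = put_return * 365 / days_to_expiration

# Call side: covered call at the $23.50 strike for its $1.05 bid.
call_strike, call_premium = 23.50, 1.05
called_away_return = (call_strike - share_price + call_premium) / share_price
boost = call_premium / share_price           # premium kept if it expires worthless
boost_annualized = boost * 365 / days_to_expiration

print(f"put: {put_return:.2%} ({put_annualized:.2%} annualized)")
print(f"call if assigned: {called_away_return:.2%}; "
      f"boost: {boost:.2%} ({boost_annualized:.2%} annualized)")
```

Running this reproduces the article's 4.78%/40.60% put figures and the 5.73%, 4.52%, and 38.38% call figures, which confirms the annualization is a simple 365-day scaling of the premium yield.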
https://www.nasdaq.com/articles/interesting-rcii-put-and-call-options-july-5th-2019-05-23
The Desika SrI Sookthi revered as SrImath Rahasya Thraya Saaram is one of the most important ones in our KaalakshEpam tradition. This Grantham has 32 chapters and was blessed to us in the MaNipravALa (Sanskritized Tamil) format. The meanings of the individual chapters of SrImath Rahasya Thraya Saaram have been summarized by Swamy Desikan in Tamil in this Prabhandham revered as AdhikAra Sangraham. There are 56 paasurams in AdhikAra Sangraham (40-95).

The TEN topics covered by Swamy Desikan in this Prabhandham are: (1) the Ten AzhwArs, (2) Madhura Kavi Vaibhavam, (3) eight AchAryAs, (4) the importance of having firm Bhakthi to our revered AchAryAs, (5) the immeasurable help given to our SiddhAntham by AchAryAs like EmperumAnAr, AaLavanthAr and Naatha Muni, (6) brief meaning of each of the 32 chapters of SrImath Rahasya Thraya Saaram, (7) salutations through one Paasuram each for the EmperumAns of SrIrangam (Koil), Thirumalai (VeRpu) and Kaanchipuram/PerumAL Koil, (8) special tribute to the Mudhal AzhwArs, who lit the lamp of Jn~Anam with their three Prabhandhams to create this Prabhandham of AdhikAra Sangraham, (9) the unique glories of Lord RanganAthA's sacred feet, which can not be forsaken by anyone, and (10) the tribute to the AdhikAris of this Prabhandham.

Clear comprehension of Veda Manthrams with the help of AzhwArs' Prabhandhams
***************************************************************************

(40) Poygai AzhwAr, BhUthatthAzhwAr, Pey AzhwAr, Swamy NammAzhwAr, PeriyAzhwAr, KulasEkhara AzhwAr, ThiruppANAzhwAr, ThoNDardippodi AzhwAr, Thirumazhisai AzhwAr and Thirumangai AzhwAr are the ten AzhwArs who enjoyed the KalyANa guNams of the Lord and blessed us with Dhivya Prabhandhams as a result of those extraordinary anubhavams.
We learnt from our AchAryAs these sacred Prabhandhams of the AzhwArs with meanings through the route of adhyayanam, and the end result for us has been the blessing of comprehending the meanings of Veda manthrams that were not clearly understood until then. The key passage is "IvargaL (AzhwArgal) mahizhnthu paadum Tamizh mAlaikaL Naam teLiya Odhi, teLiyAtha maRai nilangaL teLihinROm".

The Path shown by Madhura Kavi is the blessed path
**************************************************

(41) The Jeevans have ten kinds of relationships with their Lord: (1) the enjoyment of the Lord, who is the embodiment of bliss, (2) seeking Him as the goal for total self-surrender, (3) accepting Him as the ultimate fruit appropriate for their Svaroopam, (4) establishing many types of personal relationships with the Lord, (5) sundering attachment to inappropriate things identified by the SaasthrAs with His help, (6) development of attachment to Him, (7) destruction of the sins, (8) becoming the object of the Lord's DayA, (9) growth of Tatthva Jn~Anam as a result of His anugraham and growing attachment to Him, and (10) gaining the boon of becoming like Him with His sankalpam.

Madhura Kavi AzhwAr was present on this earth when Lord KrishNa had His Vibhava avathAram. Thus, he was a contemporary of Lord KrishNa. Yet, Madhura Kavi did not engage his mind on the Lord but chose Swamy NammAzhwAr as his Lord and developed the ten kinds of relationships referred to above. He expressed his commitment to Swamy NammAzhwAr over Lord KrishNa as his AchAryan and blessed us with the dhivya Prabhandham of "KaNNInuNN SirutthAmpu" that deals with the ten kinds of relationships with his AchAryan. The noble path shown by Madhura Kavi (seeking surrender at the sacred feet of one's AchAryan as PurushArTam) is the best and most effective path to chase away the alpa sukhams (transient and insignificant sukhams) for the courageous aasthikAs.
The key passage here is "Thunbhu aRRa Madhura Kavi thOnRak-kAttum tholl vazhiyE thuNivArkatkku nall vazhikaL".

The Tribute to the AchArya Paramparai (GhOshti Vandhanam)
*********************************************************

Since the AchArya Paramparai branches out into different paths after Bhagavath RaamAnujA, Swamy Desikan salutes the lineage of AchAryAs common to all.

(42) My AchAryans performed sadhupadEsams (beneficial instructions) and pointed out the most important truth that my aathmA is the eternal servant of the Lord. I salute my immediate AchAryan and his AchArya Paramparai/lineage starting with BhAshyakArar and ascending from then on to Periya Nampi, AlavanthAr, MaNakkAl Nampi, UyyakkoNDAr, Naatha Muni, NammAzhwAr, VishvaksEnar, Periya PirAtti and ending up with SrIman NaarAyaNan as ParamAchAryan. adiyEn performs SaraNAgathi through the AchArya Paramparai to SrIman NaarAyaNan. The key passage of this Paasuram is: "Yenn uyir tanthu aLitthavarai SaraNam pukki--ivarai munnittu EmperumAn ThiruvadigaL adaihinREnE".

The devotion to the SrI Sookthis of BhaashyakArar
*************************************************

After saluting all the AchAryAs in the Guru Paramparai starting with his own AchAryan and ending up with the Lord as the ParamAchAryan for all, Swamy Desikan acknowledges the special upakArams of SrI BhAshyakArar, AlavanthAr and Naatha MunigaL with one paasuram to each of the three.

(43) Sri BhAshyakArar destroyed the noisy debates of the Haithukars (who ignore core pramANams like the VedAs and demand the reasons/hEthus for whatever we comment about), who interpret the VedAnthams as their wandering minds direct. To them (Haithukars), SrI BhAshyakArar was like the mighty elephant in rut that toppled the banana trees. We who are deeply rooted in AchArya RaamAnujA's SrI Sookthis will not consider even by mind any act that is prohibited by BhagavAn's SaasthrAs.
The key passage in this Paasuram is: "IrAmAnusa Muni inn urai sEr seer aNi sinthayinOm init-theevinai sinthiyOm".

The Help (upakAram) of Swamy AaLavanthAr
****************************************

(44) For aeons, we struggled and suffered in this karma bhUmi as nithya samsAris. In this birth, however, we are blessed to benefit from the essence of the upadEsams of the One who incarnated to rule us (aaLa vanthAr), and to hold firmly in our mind the cardinal tenets taught by him, such as the Jeevan being different from the body, and the other Tatthvams. We became the daasAs of Swamy AaLavanthAr. From here on, we shall never engage in the studies of Para Mathams that teach inappropriate subjects in the context of our ujjeevanam. The key passage here is: "AaLavanthAr adiyOm ini alvazhakkup-padiyOm".

The Pride and joy in worshipping Swamy NaathamunikaL
****************************************************

(45) The two devout nephews (KeezhayatthAzhwAn and MElahayathAzhwAn) were taught by their uncle, Swamy Naathamuni, the gaandharva vEdam (Naadham and TaaLam) for singing the dhivya prabhandhams with the appropriate beats. Swamy Naathamuni also blessed his other disciple with instruction on the path of yOgam (yOga Maargam) for the benefit of the world. We are redeemed by meditating daily on the sacred feet of Swamy Naathamuni of illustrious fame for our spiritual upliftment. For us who are blessed with this good fortune of links to Swamy Naathamuni, there is no one who can equal us. The key passage of this paasuram is: "Naathamuni kazhalE naaLum ThozhuthezhuvOm---naanilatthu namakku nihar aarr?".

(46) First Paasuram dealing with UpOdhgAthAdhikAram: The good fortune of knowing the meanings of the three Rahasyams
********************************************************************************************************************

Our AchAryAs have firmly held onto the Thiruvadis of SrIman NaarAyaNan and have instructed us that those sacred Thiruvadis are the sole means for our Moksham.
They are totally convinced that seeking refuge at the holy feet of our Lord will result in Him accepting full responsibility for our protection from SamsAric afflictions and the granting of Moksha Sukham. Our AchAryas, out of their infinite compassion for us, have decided to banish our ever-growing aj~nAnam and have revealed to us the profound inner meanings of the three rahasyams. We in turn are delighted to reflect always upon these rahasyams and their esoteric meanings in this Karma bhUmi and feel blessed.

The key words in this Paasuram are: "tahavu udayAr-Odhiya mUnRin uLLam ingE nALum uhakka namakku Ohr vidhi". "tahavu udayAr" are the compassionate AchAryAs. "Odhiya mUnRin uLLam" is the meanings of the three rahasyams. "inghE nALum uhakka" means that those upadEsams are for our constant enjoyment. "namakku Ohr vidhi" means that it is our bhAgya VisEsham to reflect on these upadEsams by our merciful AchAryAs.

Additional Comments on UpOdhgAtAdhikAram
****************************************

This first chapter of SrImath Rahasya Thraya Saaram is about the blessings of SadAchAryas deeply conversant in arTa panchakam to us. The AchAryAs save the bhaddha jeevans from the horror of drowning in the fierce and fast-moving river of their Karmaas through the development of Tatthva Jn~Anam, which leads to their upliftment. The infinite mercy of the SadAchAryAs thus saves the jeevans bound until then in the strong bonds of their karmAs. There are six reasons cited for gaining the grace of SadAchAryAs:

(1) The help of our Lord/EmperumAn as an anukoolan (facilitator)
(2) Accidental good fortune / unplanned sukrutham
(3) The KatAksham of EmperumAn at the womb itself / Jaayamaana KatAksham of SrIman NaarAyaNan
(4) Not developing enmity to SarvEswaran
(5) Staying in a state of anticipation of receiving His anugraham
(6) Sathsangham with SaathvikAs (righteous ones) and sambhAshaNam (conversations) with them
(47) The second Paasuram on UpOdhgAtAdhikAram
*********************************************

There is no more garbha Vaasam after AchArya KatAksham
******************************************************

Our PirAtti, arising from the milky ocean, chose Her Lord's chest as Her permanent place of residence. Similarly, the KousthubhA gem, taking birth from the same milky ocean, reached the Lord's chest and has become His cherished jewelry. We the Jeevans have the nithyasoori with KOusthubham as His SarIram as our abhimAna dEvathai and have acquired the privilege to perform eternal service (nithya kaimakryam) to our Lord. In spite of this distinction, we get pummeled in the Karma pravAham (fast-flowing river of Karma) and lose the opportunity to perform our nithya kaimakryam to Him. Our most merciful AchAryAs, competent in the jn~Anam about the esoteric meanings of the three rahasyams and arTa panchakam, performed upadEsams to us on these rahasyams to lift us up from the turmoil of our karmAs. There is no more worry for us after this magnanimous help of our SadAchAryAs. The key passage of this Paasuram is: "Kadu vinai aaRRil vizhunthu ozhuhAthu aruvudan aindhu aRivAr aruL seya amainthanar". This is a celebration of the matchless upakAram of our AchAryAs.

(48) SaaranishkarshAdhikAram: Adoption of the Essence of the three Rahasyams
****************************************************************************

Our AchAryAs know fully well that lifelong learning of all vidhyAs up to the 18 vidhyA sTAnams is a futile pursuit for us and would only be a fruitless burden when it comes to seeking moksham. Our merciful AchAryAs, knowing that the knowledge of the essence of the three rahasyams celebrated even by the nithyasooris is the only path to salvation, teach us the essence of the three rahasyams for our upliftment. They also succeed in making us accept the sureness of this path in gaining the blessing of Moksham.
Our AchAryAs follow this path and help us adopt this ageless path revealed to them by their own AchAryAs. The key passage from this Paasuram is: "ettu iraNDu yeNNiya (namakku) namm samaya aasiriyar sathirkkum tani nilai tanthanar". Our SiddhAntha Pravartaka AchAryAs, conversant with and observing the glorious rahasya thrayam themselves as the essence for our upliftment, blessed us with the Jn~Anam to accept the three rahasyams as our way for the deliverance from the terrors of these SamsAric afflictions.

(49) PradhAna PrathitantrAdhikAram: ChEthanam and achEthanam are the Lord's body
********************************************************************************

(Meaning): SrIman NaarAyaNan gives svaroopam and sustenance to all sentient and insentient beings. He is the receiver of the fruits of all karmAs. Our SarvEswaran is matchless and all chEthanams and achEthanams stay as His body (sarIram). The sathya siddhAntham of the VedAnthis is that we the sentients are the unconditional and eternal servants (niruphAdhika nithya daasAs) of our Lord and it is our destiny to perform nithya kaimakryam to Him in SrI Vaikuntam.

Additional Comments: A God who can provide DhArakathvam, Niyanthruthvam and display sEshithvam can have the distinction of having others as His body. These three sambhandhams are inherent to SrIman NaarAyaNan, and that enables Him to have the chEthanams and achEthanams as His SarIram. This sambhandham (relationship) between Him as the Lord and us as His eternal servants is unique to VisishtAdhvaitham and is celebrated as SarIrAthma BhAvam. This relationship (sambhandham) is known as PradhAna Prathitantram. DhArakathvam is the conferral of Svaroopam and sustenance for sentients and insentients by the Lord. Niyanthruthvam is the power to command the jeevans to engage in the performance of their karmAs. sEshithvam is the grace of the Lord, our Master, to treat His prayOjanam as the prayOjanams of the chEthanams and being pleased about receiving them.
Only a God who can have the above three sambhandhams can have the sentients and insentients as His SarIram. SrIman NaarAyaNan is the only God fit to qualify as SarvEswaran because He has these three attributes. This is the essence of PradhAna Prathitantram and is the doctrine that is unique to the SrI VisishtAdhvaitha SiddhAntham.

(50) Third Chapter of Rahasya Thraya Saaram: ArTapanchakAdhikAram
*****************************************************************

Isvaran, Jeevan, UpAyam, Phalan and the VirOdhi for that Phalan are the five doctrines known as arTa panchakam. Our AchAryAs performed upadEsam on arTa panchakam to destroy our ajn~Anam (plain ignorance, erroneous knowledge and perverted knowledge). They taught us that: (1) SrIman NaarAyaNan is the ultimate goal (Parama PurushArTam); (2) we the Jeevans have the rights and claims to perform eternal service to SrIman NaarAyaNan; (3) the means for gaining that ultimate goal of life is to practise Bhakthi or Prapatthi yOgam; (4) the fruit of practising (anushtAnam) one of the above two means/upAyams is the bliss/fruit of Moksham; and (5) the VirOdhis (enemies) for gaining that Moksham are the chains of KarmAs (both PuNyam and Paapam) that bind us to SamsAram. The key passage of this Paasuram is: "Aindhu aRivAr iruL onRu ilA vahai yemm manam tERa iyambinar". The AchAryAs conversant with the arTa panchakam taught us their meanings for our clear comprehension and upliftment.

(51) The Fourth AdhikAram: Tatthva Thraya ChinthanAdhikAram
***********************************************************

This paasuram is about the important conclusions on the three tatthvams. PoorvAchAryAs like Swamy AlavanthAr and BhAshyakArar, who were fully conversant with the VedAntha Saasthrams, performed upadEsams for us the chEthanams on how to banish SarIrAthma bramam and SvatantrAthma bramam to uplift us from the horrors of SamsAric sufferings.
They willed to save us and created SrI sookthis like Siddhi Thrayam and SrI BhAshyam, and taught us the individual Svaroopam and SvabhAvam of ChEthanam, achEthanam and Iswaran. (SarIrAthma bramam is the confusion that the sarIram -- which is achEthanam -- is the aathmA (Jeevan). SvatantrAthma bramam is the confusion resulting from the belief that the JeevAthmA is independent and is not the unconditional servant (sesham) of the Lord.) The key passage of this Paasuram is: "maRai nool tantha aadhiyar--viyan tatthuvam moonRum aruLAl tERa iyambinar". Our PoorvAchAryAs taught us about the three wonderful tatthvams in a way in which we can grasp them without any confusion.

ParadEvathA ParamArTyadhikAram: 6th Chapter of Srimath Rahasya Thraya Saaram
****************************************************************************

(52) The distortionists and the ill-informed were dizzy with their egos rooted in the knowledge of the Tarka Saasthrams. Driven by their egos, they argued that BrahmA or IndhrA and others bound by their KarmAs were responsible for the origin of the Universe; those arguments made the Vedams and the saadhu janams shudder. The babble of these egotists was not based on any accepted pramANams. Our AchAryAs, being fully conversant with the true inner meanings of VedAntham, countered these babblings and established firmly with the help of PramANams that SrIman NaarAyaNan is the fundamental kaaraNam for the origin of the Universe and its beings.
Swamy Desikan is stating here that any one without clear knowledge about the identity of Para Devathai as SrIman NaarAyaNan will not become a ParamaikAnthi ; any one who does not become a ParamaikAnthi can not aspire for Moksham without dealy.All the other Gods remain as SrIman NaarAyaNA's body and are Karma VasyAL (subject to the influence of their KarmAs) . Therefore , Only SrIman NaarAYaNan is the Supreme One responsible for Jagath KaaraNam, RakshaNam and Dissolution . Seventh AdhikAram of SRTS : MumukshuthvaadhikAram ************************************************* 53) Moksham is the ultimate desired goal , when one performs kaimakryam to the Timeless Lord , who is never ever separated from His Devi , MahA Lakshmi. This Moksha Sukham is perenial and will be growing forever for the one , who has been blessed with Moksha Sukham. In contrast , the pleasures enjoyed in the SamsAric world are inherently perishable and do not last long. Our AchAryAs evaluated both these kinds of Sukhams and concluded that Moksha Sukham is by far the best and instructed us to seek it. Our AchAryAs with the sole goal of uplifting others from the SamsAric sufferings have blessed us with knowledge about the three tatthvams (Tatthva Jn~Anam) so that we can disengage our indhriyams from insignificant pleasures of this SamsAric worldand set our goals on seeking the lasting pleasures of Moksham. The key passages of the Paasuram are: "navinRavar nal aruLAl, pulankaLai venRu Perum payan veedinai vENDum " (Through SadAchArya UpadEsam , the chEthanam controls its sensory organs and seeks the Moksha Sukham). 
This chapter of SrImath Rahasya Thraya Saaram (SRTS) is about the generation of desire for Moksham in the Chethanams.Through the recitation of and reflection on the deep meanings of AshtAkshara MahA manthram , the ChEthanam banishes pravrutthi dharmams and engages in pursuit of nivrutthi dharmams and becomes Mumukshu or the one desirous of gaining Moksham, liberation from the cycles of repeated births and deaths in this SamsAric world. The Sanskrit slOkam passage echoing these thoughts in SRTS is: " niravadhi mahAnandha BrahmAnubhUthi kuthUhali DhaivAth jihAsitha Samsruthi: bhavithA ". ( With BhagavAn"s anugraham , the ChEthanam becomes desirous of enjoying the limitless , great bliss of MokshAnubhavam and becomes a Mumukshu). AdhikAri VibhAdhikAram: the Eighth Chapter of SRTS *************************************************** The 15th Paasuram of AdhikAra Sangraham deals with AdhikAri vibhAgam of Mumukshus ( the subdivisions of those who seek MOksham based on the upAya anushtAnam chosen by them ).This is the 54th Desika Prabhandham . 54) The Mumukshus choose either Bhakthi yOgam or Prapatthi yOgam for gaining Moksham based on their Sukrutha taaratamyams (variations in their previous karmAs). For those who select Bhakthi yOgam as the means for mOksham , they do not need to perform prapatthi for achieving mOksham and yet they must observe prapatthi as a rite for the removal of the obstacles in the form of of karmAs that stand in their way of performing the upAyam for Moksham. The key passage of the 54th paasuram is: " vithi vahayAl yERkum anbar onRil mooNDu moola vinai mARRutalil Mukundhan adi pooNDu anRi maRRu ohR puhal onRu ilai yena ninRanarE" (Due to Sukrutha tAratamyams, Mumukshus observe one of the two upAyams for removal of all the karmas that retain them in SamsAram and recognize that there is no upAyam for Moksham except performing Prapatthi at the sacred feet of the Lord). 
Here Swamy Desikan instructs us that the ultimate goal of Moksham attained by those practising Bhakthi or Prapatthi yOgam is identical. One who observes Bhakthi yOgam as the means is known as a SadhvAraka Prapannan. He attains Moksham through the fulfilment of Bhakthi yOgam, with the performance of prapatthi to remove all obstacles that stood in his way of seeking Moksham. One who takes to Prapatthi yOgam as the upAyam for Moksham is known as an adhvAraka Prapannan. Both have to observe Prapatthi. Both have Bhakthi. For the Bhakthi yOgam observer, prapatthi is an angam (part) of Bhakthi yOgam. The Prapatthi yOgam observer uses Prapatthi as the direct means (upAyam) for Moksham. Swamy Desikan says in the slOkam echoing these thoughts: "Sucharitha paribhAga bhidayaa dhvEdhA" (due to their differences in karmAs, they fall into two categories: SadhvAraka and adhvAraka Prapannan); "yEkasya prApthi: viLambhEna" (for the one who observes Bhakthi yOgam, MokshAnubhavam happens with delay); "Parasya aasu: prApthi:" (for the other, observing Prapatthi, the MokshAnubhavam is experienced quickly).

Ninth Chapter of SRTS: UpAya VibhAgAdhikAram
********************************************

The 16th Paasuram of AdhikAra Sangraham, which is the 55th Desika Prabhandha Paasuram, deals with UpAya VibhAgAdhikAram. Here Swamy Desikan contrasts the two upAyams for Moksham.

55) Those who feel unfit to practise the rigors of Bhakthi yOgam and yet are eager to gain Moksham without delay perform Prapatthi yOgam. Those who clearly understand Prapatthi as the upAyam that is quick to practise (kshaNa karthavyam: done in a trice) and fast to yield phalan are the ones who understand the true message of Vedam. The rigors of Bhakthi yOgam as upAyam for Moksham are formidable and very few have the capabilities to practise it in this yugam.
The key passage of this Paasuram is: "karumamum Jn~Anamum patthiyum onRumilA viraivArkku anRu aruLAl payan tarum aaRum aRindhavar anthaNar" (For those who cannot practise Karma, Jn~Ana and Bhakthi yOgams, Prapatthi yOgam is the only upAyam for Moksha siddhi. For such PrapannAs, SrIman NaarAyaNan stands in the place of Bhakthi yOgam (in the sTAnam of Bhakthi yOgam) and grants the fruit of Moksham, when desired).

Prapatthi yOgyAdhikAram: The Tenth Chapter of SRTS / Introductory Remarks
*************************************************************************

Bhakthi and Prapatthi have four requirements: (1) Desire to gain Moksham (2) Understanding of the true meanings of the Saasthrams (3) Putting into practice the instructions of the SaasthrAs and (4) Fitness, based on Jaathi and guNams prescribed by the SaasthrAs, to practise one of the two upAyams. All of the above four requirements are a must for anyone wishing to practise Bhakthi yOgam. Many will become unqualified to practise Bhakthi yOgam due to deficiency in one or another of the four requirements. All can however practise Prapatthi yOgam, which does not have all these rigors. For the one who wishes to practise Prapatthi yOgam for Moksham, there are three "must" requirements: (1) Inability to practise Bhakthi yOgam as the upAyam for Moksham (aakinchanyam); aakinchanyam arises from the lack of knowledge about Bhakthi yOgam and the impatience to wait for Moksham (2) and (3) not seeking any fruit other than ParipoorNa BrahmAnubhavam and not seeking any God except SrIman NaarAyaNan as the Supreme God. The combination of (2) and (3) is known as ananya gathithvam. Swamy Desikan covers in the 17th slOkam of AdhikAra Sangraham, which is the 56th Paasuram of Desika Prabhandham, the qualifications of one (adhikAri) fit to perform Prapatthi yOgam.

56) SarvEswaran is eternally present. He stands as the origin and cause of this universe.
His grace (aruL) towards us will not diminish even if we commit serious aparAdhams. He accepts as an excuse (vyAjam) any accidental, small auspicious act done by us and rushes to give His hand as help to us. At the sacred feet of this most merciful Lord, any and every body can perform Prapatthi. There are no restrictions based on Jaathi or gender as in Bhakthi yOgam. All who want to gain freedom from the scorching heat of SamsAram are adhikAris. The only qualifications for them to practise this redeeming Prapatthi are: (1) seeking no other upAyams (2) seeking no other fruits and (3) seeking no other Gods (aakinchanyam and ananya gathithvam). In the SRTS slOkam condensing these thoughts, Swamy Desikan states: "nija adhikriyA: Santha: Sreesam Svatantra-prapadhana vidhinA mukthayE nirvisankhA: samsrayanthE".

Parikara VibhAgAdhikAram: The 11th Chapter of SRTS
**************************************************

The Five AngAs of Prapatthi
***************************

57) The upAyams like Bhakthi yOgam make us doubt whether it would be possible at all for us to practise them because of their rigors. For us, who recognize that we are incapable of practising such an upAyam for gaining Moksham and still long for enjoying Moksha Sukham, the AchAryAs blessed us with the upAyam of Prapatthi with its five angams. In initiating us into Prapatthi yOgam, our AchAryAs had a serious objective: that we recognize the Soulabhyam of our Lord and not go away from Him out of fear of our unfitness to approach Him because of His matchless Vaibhavam. Our AchAryAs removed our timidity and fear in approaching Him by instructing us on the ways of performing Prapatthi at the sacred feet of our Lord, SrIman NaarAyaNan, to gain Moksham. Had they not done so, we would have ended up begging our equals like Brahma, Indran and Rudran to banish our SamsAric sufferings and being disappointed by approaching a powerless adhikAri.
Our AchAryAs saved us from such futile attempts through their instructions on the ways to perform Prapatthi. The key passage of this 57th Paasuram is: "ThuRavit-tuniyil thuNaiyAm Paranai varikkum vahai anbar aRivitthanar" (Our loving and merciful AchAryAs taught us, who were suffering from SamsAric sorrows, the way to perform Prapatthi at the holy feet of the Lord, who is always waiting for us as our protector).

Additional Comments
*******************

"Paranai varikkum vahai" revealed to us by our SadAchAryAs is to perform Prapatthi with its five angams (Aanukoolya Sankalpam, PrAthikoolya Varjanam, MahA VisvAsam, KaarpaNyam and gOpthruva VaraNam). Our AchAryAs instructed us that the upAyam of Prapatthi must not leave out any one of the five angams of Prapatthi; otherwise, the performed Prapatthi will not be successful. Of all the five angams, MahA VisvAsam in the Lord is the most important angam. It is not easy to gain MahA VisvAsam without AchArya KatAksham and upakAram. For Prapatthi done solely for Moksham, Saathvika ThyAgam is considered as another angam of Prapatthi. Our Lord has Parathvam (Supremacy) as well as Soulabhyam (ease of access by one and all). Those who get overwhelmed by His Parathvam and are afraid to approach Him belong to the adhama (lower) category among men; those who comprehend His Soulabhyam and approach Him are Parama AasthikAs. This is the view of Swamy AppuLLAr, the AchAryan of Swamy Desikan.

Saanga-PrapadhanAdhikAram: The 12th Chapter of SRTS
***************************************************

Explanation of the Svaroopam of Prapatthi
*****************************************

The 58th slOkam of Desika Prabhandham (the 19th slOkam of AdhikAra Sangraham) explains the ways to perform one of the four kinds of Prapatthi (Svanishtai, Ukthi Nishtai, AchArya Nishtai and BhAgavatha Nishtai).

58) Our Lord has vowed to protect us because of our eternal and indissoluble relationship with Him.
He longs from ancient times (anAdhi Kaalam) to gain us and blesses us with His rejuvenating shower of Mercy. Our AchAryAs -- who have seen the shores of VedAntham -- recognize us as the Jeevans needing that protection of the most merciful Lord and have placed us at His sacred feet for unfailing protection. The key passage of this 58th Paasuram is: "MaRai soodiya namm mannavar -- perum tahavu uRRa PirAn adik-keezh bharam aRavE yenRu adaikkalam vaitthanar" (Our AchAryAs, as Emperors of VedAntham, have placed us at the feet of our most merciful Lord as objects that need protection from Him. They did so because they wanted the chEthanam to banish any thought of protecting itself by itself).

Additional Comments
*******************

The act of reverential presentation (samarpaNam) of the AathmA, with its five angAs, is the angi: Aathma SamarpaNam or Prapatthi. This angi (Prapatthi), or Bhara SamarpaNam with its five angams, has the following three divisions:

1) Svaroopa SamarpaNam (surrender of the Aathma Svaroopam)
2) Aathma RakshA Bhara SamarpaNam (surrender of the burden of the chEthanam's protection to the Lord Himself)
3) Phala SamarpaNam (surrender of the fruits of such protection to the Lord Himself).

Krutha-KruthyAdhikAram: The 13th Chapter of SRTS
************************************************

The state of freedom from worries after Prapatthi
*************************************************

The 59th Desika Prabhandha Paasuram is the 20th Paasuram of AdhikAra Sangraham.

59) Our Lord VaradarAjan grants the boon of Moksham to the chEthanam that has performed Prapatthi. He does not stop there. He also grants Moksham even to those who are connected to the chEthanam. Even after that, our PeraruLALan keeps reflecting on what other boons He can grant to the Prapannan.
As a result of performing their Prapatthi, those blessed PrapannAs acquire the qualities of being (1) our kings (2) nithyasooris (3) solely dedicated to their enjoyment of Parama Padham (4) ParamahamsAs and (5) completers of all Yaj~nams. The key passage of this Paasuram is: "anbu udayArkku yenna varam tara yenra Namm Atthigiri ThirumAl, namm mukkiyar Mannavar, ViNNavar, Vaann karutthOr, Annavar vELvi anaitthum muditthanar" aaha aruLinAn.

Additional Comments
*******************

The reasons for celebrating the chEthanam who has performed Prapatthi as Krutha Kruthyan and KruthArTan are: (1) the burden of protection has been removed (2) the worries and fears about the sureness of gaining the fruit of Moksham have been banished and tranquility gained (3) ParamaikAnthithvam has resulted and the bonds to other dhaivams have been shattered. These all resulted as a direct consequence of the Lord, who is Sarva Sakthan (Omnipotent), Sarvaj~nan (Omniscient) and Sahaja Suhrudh (our well-wisher by SvabhAvam). He accepted the burden of protecting us (Sveekrutha BharE), and as a result, the Prapannan became Krutha Kruthyan and Kaimkarya dhaninan (the possessor of the wealth of Kaimkaryam to the Lord). The Prapannan got immersed in the ParipoorNa BrahmAnandham of the limitless ocean of Bliss (nirupadhi MahAnandha Jaladhi). The Prapannan became Krutha Kruthyan and has nothing more to do in this world for Moksha prayOjanam (YadharTam puna: karthavyam yath kinchidhapi, iha na: na). The Prapannan (one who has performed Prapatthi) is now revered as Budhan, the one who has clear knowledge about his svaroopam. He therefore abandons all the acts that are forbidden by the Lord's SaasthrAs and yield only insignificant and transient pleasures (parimitha sukha prApthyai kruthyam akruthyavath praheeNam).
During the rest of his days in this SamsAram (the post-Prapatthi period), the command of the Lord as revealed by His Saasthrams is used as the guide for the conduct of life (iha VibhO: aj~nAsthu: param Bhudhai: anupAlyathE).

SvanishtAbhijn~Ana adhikAram: 14th Chapter of SRTS
**************************************************

The Prapannan recognizing special features in himself after Prapatthi
*********************************************************************

(60) Those PrapannAs who possess the three kinds of nishtais (marks) perform the appropriate nithya-naimitthika karmAs and will not perform any acts forbidden by the Lord's SaasthrAs. Those PrapannAs who recognize these marks of Svanishtai in themselves are equivalent to nithyasooris residing on this earth. Those who performed Prapatthi based on a clear understanding of the inner meanings of the three rahasyams see in themselves special features -- the unmistakable marks of Svanishtai, such as clear knowledge about Svaroopam, upAyam and PurushArTam. They recognize these marks and become happy. The key passage in this Paasuram is: "Mukkiya Manthiram kaattiya MoonRil nilai udayAr -- mEdhini mEviya ViNNavarE" (Those blessed ones having the three nishtais -- Svaroopa, upAya and PurushArTa nishtais -- revealed by the Moola Manthram are the nithyasooris that have incarnated in this world).

Additional comments on the marks of Svanishtai
**********************************************

The five definite marks arising from the fruits of Svanishtai are: (1) knowledge that the insults of others do not affect his Aathma svaroopam, and thus not being affected by those insults (2) having mercy on those who acquire pApams as a result of insulting him (3) thanking them for pointing out the deficiencies in him (4) considering that others insult him prompted by the Lord, and not being cross at them as a result (5) feeling happy that his sins are reduced by others insulting him, through the transferral of those sins to them.
The Marks of UpAya Nishtai
**************************

The five marks are: (1) conviction that the Lord alone is the Protector (2) welcoming death, when it happens, as a dear guest (3) feeling consoled, in times of suffering, by the thought of having the Lord as the Protector (4) after gaining the fruits of BharanyAsam, staying unengaged in other upAyams and efforts (5) staying convinced that the destruction of inauspiciousness, as well as auspiciousness itself, is the Lord's responsibility.

The Marks of PurushArTa Nishtai
*******************************

The three marks of this type of nishtai for a Prapannan are: (1) lack of worry about the nourishment of this body; accepting that the nurturing of the body is taken care of by the Lord, enjoying the bhOgams that come his way without any effort, and considering those bhOgams as steps to reduce his karmAs (2) intense engagement in Bhagavath kaimkaryam independent of the pleasures or sorrows that come his way (3) great thvarai (haste) in enjoying Bhagavath anubhavam. Swamy Desikan states that the Krutha Kruthya PrapannAs spend the time on this earth between Prapatthi and the ascent to Parama Padham as in a dream state, and that they spend that time on this earth as a result of special bhAgyam. During this time, they thank the Lord for the marks of Svanishtai that they see in themselves, treat those blessed marks as a sEsha vasthram (Parivattam) on their heads and spend their remaining days on this earth (Swamy DatthAm svanishtAm sirasi sEshAm kruthvA sEsham aayu: nayanthi).
Utthara KruthyAdhikAram: The 15th Chapter of SRTS
*************************************************

Kaimkaryams done in the post-Prapatthi period
*********************************************

(61) After the Lord accepts the plea to take on the burden of protection of the chEthanam, the Prapannan is freed from the debts to DevAs, Rishis and Pithrus. With this glory, the Prapannan acquires, right here on this earth during his remaining days, the power of the uninterrupted kaimkaryams performed by the nithyasooris in SrI Vaikuntam. There are still duties to be performed by the Prapannan in his post-Prapatthi period, until he ascends to SrI Vaikuntam to join the MukthAs and the nithyasooris for nithya kaimkaryam to the Lord. He should engage in Bhagavath-BhAgavatha kaimkaryam with delight and without expecting anything in return (svayam-prayOjanam). He should learn with humility from ParamaikAnthis, move with them and banish even a touch of ego about his status. He should understand and meditate on the avathAra rahasyams of the Lord taught to him in SrImad Bhagavath GitA. He should consume Saathvika aahAram and continue to perform Aj~nA and anuj~nA kaimkaryams without discontinuity. The key passage of this Paasuram is: "VaNN Thuvarai KaNNan koLLa kadankaL kazhaRRiya BhagavargaL vilakku inRi mEvum adimai yellAm maNN ulahatthil mahizhnthu adaihinRanar" (After the Lord of DhwArakai accepts their Prapatthi, the PrapannAs gain, right here on this earth, all the kaimkaryams done by the nithyasooris in SrI Vaikuntam).

The six items to practise and to banish in the post-Prapatthi period
********************************************************************

To forget: the desire to enjoy material pleasures

To think about: the upakArams of the AchAryan, starting from his first glance.
To avoid: talking about one's own glories and superiority

To engage in: recitation of the Dhvaya manthram

To avoid by all means: apachArams to BhAgavathAs and Brahmavidhs by mind, speech or body

To engage in with all means: Bhagavath-BhAgavatha-AchArya kaimkaryams by mind, speech and body.

PurushArTa KaashtAdhikAram: The 16th Chapter of SRTS
****************************************************

BhAgavatha Kaimkaryam is the limit of all Kaimkaryams
*****************************************************

(62) The BhAgavathAs of the Lord have clear comprehension of the VedAnthams and will continue to marvel at the anantha kalyANa guNams of their Lord. We, as the servants of the Lord, should perform kaimkaryams to the Lord's BhAgavathAs with affection and should not swerve from the path shown by the SaasthrAs of the Lord. The key passage of this Paasuram is: "NaaTan vahuttha vahai peRum nAmm nalladiyArkku aadharam mikka isainthu nilai kulayA vahai ninRanam" (We, who have gained the grace of the Lord through our Prapatthi, stay on the righteous path shown by the Lord through our loving kaimkaryams to His BhAgavathAs).

Additional Comments
*******************

The kaimkaryam most desired by BhagavAn is the kaimkaryam that the PrapannAs perform for His BhAgavathAs. Therefore, BhAgavatha kaimkaryam is the ultimate limit of Bhagavath kaimkaryam. BhAgavathAs are free of five kinds of blemishes: (1) the desire to commit aparAdhams (transgressions prohibited by the Lord's SaasthrAs) (2) doubts about the meanings of the Saasthrams (3) links to dEvathAntharams (4) the thought that the bhOgams that come their way are a result of their own endeavours (5) the thought that the fruits of the bhOgams that come their way belong to them. PrapannAs should perform kaimkaryams to such ParamaikAnthi BhAgavathAs, who are free from the above five kinds of blemishes, and receive their anugrahams as svayam prayOjanam.
SaasthrIya NiyamanAdhikAram: The 17th Chapter of SRTS
*****************************************************

The Prapannan not transgressing the Lord's SaasthrAs
****************************************************

(63) We, the PrapannAs, followed unswervingly the directions of the SaasthrAs and performed Bhagavath kaimkaryam to attain a status equivalent to that of the nithyasooris. In this dark night of SamsAram, where our ajn~Anam grows rapidly, we have no agents to banish that darkness except the cool light of the Vedams. BhagavAn uses the pramANams of the Vedams to develop His Saasthrams, which define the auspicious and the inauspicious. We therefore follow those Saasthrams to become like the famed nithyasooris. The key passage of this Paasuram is: "Nilam aLanthAn ithu nanRu, ithu theeyathu yenRu nadatthiya nAnn maRayAl anbhudai vAnOr nilayil ninRanam" (The Lord who measured the Universe during His ThrivikramAvathAram showed us the right way and the wrong way through His SaasthrAs rooted in the VedAs. We followed them faithfully and have been blessed to reach the status of the nithyasooris in performing kaimkaryams).

Additional Comments
*******************

The upadEsams of BhAshyakArar will help us understand and follow the ways prescribed by the SaasthrAs. Saasthram is the hand lamp in this SamsAric world filled with the darkness of ajn~Anam. The Prapannan following the way prescribed by the Lord's SaasthrAs should be vigilant about three beliefs: (1) SrI VaishNavAs are anukoolars (friends) (2) Bhagavath virOdhis are prathikoolars (foes) (3) SamsAris who are indifferent to the Lord are in between. When a Prapannan meets SrI VaishNavAs, he should be delighted, as when he experiences bhOgya vasthus. When he comes across Bhagavath virOdhis, he should feel fear and disgust, as when he meets a snake or other visha janthus. When an indifferent SamsAri comes his way, the Prapannan should treat him like a worthless object (e.g., a blade of grass or a broken tile). These are the ways shown by the SaasthrAs.
There are two kinds of kaimkaryams classified by the SaasthrAs: (1) Aj~nA kaimkaryams, like ThrikAla SandhyAvandhanam and other nithya karmAs; if one does not perform them, one incurs the wrath of the Lord (2) anuj~nA kaimkaryams, like pushpa kaimkaryam to the Lord; if one does not perform them, no sins are accumulated, and if one performs them, one gains their fruits. Improper performance of even an anuj~nA kaimkaryam can result in pApams.

AparAdha ParihAra adhikAram: The 18th Chapter of SRTS
*****************************************************

Paapams not joining the Prapannan
*********************************

(64) A Prapannan, afraid of the bundles of his sins, performs Prapatthi; after that, he will not knowingly commit sins that interfere with Bhagavath kaimkaryams. If he has the ignorance (avivEkam) to commit such pApams, he will perform PrAyascchittha Prapatthi to destroy the roots of the generation of such pApams. After the performance of the PrAyascchittha Prapatthi, the avivEkam causing those sins will be thoroughly destroyed.

Additional Comments
*******************

The pApams collected before Prapatthi are destroyed by the act of Prapatthi itself. After Prapatthi, the sins accumulated knowingly during dangerous situations and those unknowingly accrued from kaala and dEsa viparIthams will be destroyed without coming near the Prapannan. Those knowingly committed can be destroyed by PrAyascchittha Prapatthi. The way in which the latter are destroyed is as follows: (1) a quarter of the pApams is destroyed by repentance over committing such a pApam (2) a quarter is destroyed by desisting from engaging in such pApams (3) the third quarter of the pApams is banished by the efforts to perform prAyascchittham and (4) the last quarter is destroyed by the act of PrAyascchittha Prapatthi. Thus all of these pApams of the PrapannAs are destroyed. Those PrapannAs who do not perform PrAyascchittha Prapatthi will receive some punishment from the Lord in this world and thus will be cleared of the pApams.
Thus, for a Prapannan, rebirth or naraka vAsam does not happen by any means. BhAgavatha apachAram is the most heinous of all pApams and should be avoided at all costs.

STAna VisEshAdhikAram: The 19th Chapter of SRTS
***********************************************

The fittest place to reside is where the BhAgavathAs live
*********************************************************

(65) The places of residence of ParamaikAnthis are equivalent in sacredness to the combined sacredness of the dhivya dEsams in the HimAlayan mountains, the banks of the Ganges river, the CauvEri banks, SvEtha dhveepam and AyOddhi. Such places are the fittest places of residence for a Prapannan. When the Prapannan is not fortunate enough to live in a Bhagavath kshEthram populated by ParamaikAnthis, then any place where ParamaikAnthi BhAgavathAs reside is fit for the residence of a Prapannan. The key passage of this Paasuram is: "Vaann nAdu uhanthavar vaiyatthu iruppidam nall nilam" (Those places populated by the BhAgavathAs of the Lord should be the preferred residences for the PrapannAs during the post-Prapatthi period).

NiryANAdhikAram: The 20th Chapter of SRTS
*****************************************

All times and places are fit for the Prapannan leaving his body
***************************************************************

(66) There is no special prerequisite for the Prapanna Jeevan to exit from the body via the Brahma nAdi. There is no requirement that this exit happen at a special kshEthram or at a particular time or season. Whatever the time, place or season, all of them are equally auspicious for the exit of the Jeevan by this special naadi. In the chEthanam's heart, 101 naadis (nerve pathways) originate. There is one special naadi in the middle of the 100 naadis. It is called the Brahma naadi, the Moordhanya naadi or Sushumnai. The Jeevans that travel by any one of the 100 naadis -- other than the Brahma naadi -- reach lOkAs other than SrI Vaikuntam.
The Prapanna Jeevan alone is assisted by the Lord to travel by the Brahma naadi and the archirAdhi mArgam (the path of light) and reach SrI Vaikuntam. The key passage of this Paasuram is: "adiyavarkku angu vilakku ilathu. athu nalnilamAm, athu naldEsamAm, athu nalpahalAm, athu nalnimittham yennalum aamm" (For PrapannAs, the time and place of departure are not restricted. Any place from which they leave their bodies would be an equally auspicious place. The time at which they leave their bodies would be the auspicious time. The nimittham associated with the exit of their Jeevans would be an auspicious nimittham). There is no requirement that the Prapanna Jeevan have thoughts of the Lord during the last moments (anthima smruthi). This requirement is only for those Jeevans which practised Bhakthi yOgam as the upAyam for Moksham. There is no requirement that the Prapanna Jeevan exit from the body during Sukla paksham or UttharAyaNam. Such is the power of Prapatthi!

Gathi VisEshAdhikAram: The 21st Chapter of SRTS
***********************************************

The Explanation of the ArchirAdhi Maargam (The Path of Light)
*************************************************************

(67) The Prapanna Jeevan leaves the body by the Brahma naadi and then travels with the sookshma sarIram (subtle body) via the ArchirAdhi Maargam to its destination of SrI Vaikuntam. On the way, it is greeted and honored by the Agni dEvathai, the dEvathai for the Day, the Sukla Paksha dEvathai, the UttharAyaNa dEvathai, the Samvathsara dEvathai, the Vaayu dEvathai, Sooryan, Chandran, the Lightning dEvathai, VaruNan, Indhran and PrajApathi. These dEvathais are known as AadhivAhikAs and are representatives of the Lord, with the assigned duties of welcoming the Prapanna Jeevan to their stations and guiding the Jeevan to the next station. The upachArams that the Prapanna Jeevan receives are not due to the fruit of its karmAs but due to the power of the upAyam (Prapatthi yOgam) that it practised.
After enjoying the upachArams from the AadhivAhikAs, the Jeevan arrives at SrI Vaikuntam, the final station in its journey. The key passage of this Paasuram is: "nadai peRa-pirasApathy yenRu ivarAl -- idai idai bhOgankaL yeythi yezhil padham yERuvar" (Thus the Jeevan traveling on its path to Moksham receives upachArams from the aadhivAhikAs like PrajApathi and others, enjoys the pleasures and finally arrives at the radiant SrI Vaikuntam to perform nithya kaimkaryam to the Lord and enjoy total bliss in the company of the MukthAs and nithyasooris).

ParipoorNa BrahmAnubhavAdhikAram: The 22nd Chapter of SRTS
**********************************************************

The enjoyment of unalloyed and total bliss in SrI Vaikuntam
***********************************************************

(68) SarvEswaran is pleased with the Jeevans who undertake the simple upAyam of Prapatthi and rushes forward to bless them with the fruit of that upAyam (Moksham). At SarvEswaran's sacred feet, our AchAryAs are performing nithya kaimkaryams. Our merciful AchAryAs wished that we also become beneficiaries by performing such kaimkaryams. Therefore, we should perform Prapatthi, reach SrI Vaikuntam, stay at the feet of our AchAryAs and enjoy the limitless bliss of Bhagavath-anubhavam without interruption. The key passage of this Paasuram is: "Yezhil padham yERi -- GurukkaL kuzhAngaL kurai kazhal keezh mARuthal inRi yezhum bhOhatthu mahizhnthu mannuvam" (After ascending the beautiful Parama Padham, we will be rooted blissfully at the sacred feet of the ghOshti of AchAryAs, moving about with the naadham of their ankle bells, and stay there enjoying Bhagavath guNAnubhavam with them, without ever returning to the SamsAric world). Swamy Desikan describes this ParipoorNa BrahmAnubhavam as "Kanath mahAnandha Brahma anubhava parIvAhA:". Such a Muktha Jeevan enjoys the Lord without satiety (Achyutham nithyam anubhavathi) as a result of this easy-to-perform upAyam of Prapatthi.
SiddhOpAya-sOdhanAdhikAram: The 23rd Chapter of SRTS
****************************************************

SrIman NaarAyaNan is the veritable SiddhOpAyam
**********************************************

(69) SrIman NaarAyaNan is the Sarva vidha Bandhu (related to us at many levels and in many ways). He is the Ocean of Mercy who destroys our ajn~Anam. As MahA Lakshmi pleads on behalf of us, His dayA grows further and further. That compassionate Lord stands in the place of the other, difficult-to-practise upAyams for us, who are incompetent to practise any upAyam other than Prapatthi; He becomes the SiddhOpAyan and grants the fruit of Prapatthi (Moksha phalan) to us. The SiddhOpAyam is SarvEswaran, the ancient and timeless upAyam that exists before any upAyam that we practise (SaadhyOpAyam), such as Bhakthi or Prapatthi yOgam. The key passage of this Paasuram is: "inn amuthatthu amudhAl irangum Thiru NaaraNanE maRRu ohr paRRanRi varippavarkku manniya vaNN SaraNN" (SrIman NaarAyaNan, persuaded by the sweet intercession of His nectarine consort, takes pity on us, who have no other recourse and choose Him for our protection. For those Prapanna janams, He becomes the durable SiddhOpAyan).

Additional Comments
*******************

(1) In SRTS, Swamy Desikan sums up the way in which the Lord becomes the SiddhOpAyan: "Niravadhi DayA-Dhivya-udhanvAnn Parama Purusha: Jaladhi-sudhayA saardham Jagath paripAlayan prathishta-bhara: SATHAAM SIDDHOPAAYA:" (The Lord with innate compassion, who is the Ocean of Divine DayA, rules the world with MahA Lakshmi, accepts the burden of protecting the PrapannAs and becomes the wish-yielding SiddhOpAyan for them).

(2) SiddhOpAya sOdhanAdhikAram is one of the four adhikArams of what is revered as the SthirIkaraNa BhAgam of SRTS. Here, Swamy Desikan resolves the doubts raised by others about the Lord, who is the SiddhOpAyan, the embodiment of an upAyam that is not performed by us and exists before any of our upAya anushtAnams. He is the most important upAyam for us.
(3) Swamy Desikan flattens the objections of doubters who question (a) EmperumAn's SvAtantryam and Sahaja KaaruNyam as guNams (the guNams of independence and intrinsic compassion) (b) the Sesha-Seshi relation between the Jeevan and Iswaran and (c) the role of His divine consort, as adumbrated in the "SrImath sabdham" stressing Her role as upAyam (Means) as well as upEyam (Goal).

(4) Swamy Desikan establishes the indispensability of the divine consort of the Lord in the RakshaNam of all, and Her staying together with Him as upAyam and upEyam, based on pramANams from Sruthi, Smruthi, AchArya Sookthis and SampradhAyam.

(5) As the divine consort, PirAtti is present in three forms: LayArcchai, BhOgArcchai and AdhikArArcchai. The form of PirAtti who stays without separation on the Lord's chest is LayArcchai. The Devis staying by the side of the Lord as Ubhaya NaacchimArs represent BhOgArcchai. The Devi who has Her own temple represents AdhikArArcchai. In the context of the upAyam and upEyam doctrines associated with PirAtti, Swamy Desikan points out that ParamaikAnthis, not seeking any fruits for their kaimkaryams, worship PirAtti in the Laya and BhOga archais. PirAtti in AdhikArArcchai is generally associated with the granting of desired phalans. Since She is the consort of the Lord, ParamaikAnthis offer their namaskArams to Her as well.

SaadhyOpAya sOdhanAdhikAram: The 24th Chapter of SRTS
*****************************************************

Bhakthi and Prapatthi yOgams are SaadhyOpAyams
**********************************************

(70) The Upanishads assert that the Lord accepts the chEthanam for protection and grants the phalan of Moksham after that chEthanam undertakes the upAyam of Prapatthi or Bhakthi yOgam.
These upAyams practised by us (SaadhyOpAyams) reduce the anger of the Lord over our trespasses and generate in Him the compassion to grant us Moksham. Realizing that He is the SiddhOpAyan, we perform, as per our capacity, one of the SaadhyOpAyams of Bhakthi or Prapatthi yOgam and are protected by Him to enjoy Moksha Sukham. The key passage of this Paasuram is: "Aadhalin nAmm uraikkinRa nalneRi Ohrum padikaLil Ohrnthu Ulaham dharikkinRa ThArakanAr TahavAl dharikkinRanam" (The Upanishad states: "YamEvaisha vruNutE tEna labhya:" -- whom the Lord selects, that Jeevan alone becomes qualified to attain the Lord. Therefore, we take to the upAyams of Prapatthi or Bhakthi yOgam recommended by the Sruthi and the Upanishads and are saved from SamsAram by the Lord, who has vowed to come to the rescue of those practising these two upAyams).

Additional Comments
*******************

In this chapter, Swamy Desikan responds to the doubts raised by those who question the validity and efficacy of SaadhyOpAyams like Bhakthi and Prapatthi yOgams. He clears the doubts about the eligibility of all to practise Prapatthi. He establishes that all are eligible for Prapatthi upAya anushtAnam for gaining the fruit of Moksham. Swamy Desikan answers those who question the svaroopam of Prapatthi. He establishes that the plea by the Jeevan to the Lord to accept the responsibility of protecting it, through Prapatthi with its five angams, is the true svaroopam of Prapatthi. Swamy Desikan goes on to answer many more questions raised by doubters, based on many pramANams, in this important chapter on SaadhyOpAyam.

PrabhAva VyavasthAdhikAram: The 25th Chapter of SRTS
****************************************************

(71) Propelled by His abundant grace, our Lord is waiting to bless the PrapannAs with kalyANa guNams identical to His own and is eager to transform these Prapanna Jeevans into Muktha Jeevans. This most merciful Lord took KrishNAvathAram to bless us with the Charama slOkam for our upliftment.
Sages like VyAsa comprehended fully the inner meanings of this Charama slOkam of the Lord. They will hence not agree to the diminution of the Mahimai of the Prapannan as established by the VedAnthams. Our AchAryAs instructed us on these truisms about the glories of Prapatthi.

The key words of this Paasuram are: "Mey aruL Vitthahan urayin ahavaai aaraNa neethi neRi kulaithal uhavAr" (Our Lord with unfailing krupA has blessed us with the Charama slOkam, where He assures us with His unfailing promise. Our AchAryAs, who clearly understand the inner meanings of the Charama slOkam of the Lord, will not be party to any diminution of the nyAya mArgam rooted in the VedAs).

(72) This Paasuram deals with the 26th chapter of SrImath Rahasya Thraya Saram: PrabhAva RakshAdhikAram. This chapter is in defense of the glories of Prapatthi without exaggerations or understatements.

(Meaning of the Paasuram): The VedAnthams, which are rooted in Sathyam (truth and righteousness), assert that the Lord's glories are limitless. Those who have performed Prapatthi at the Lord's feet may be born in lower jaathis/kulams. In spite of birth in a lower jAthi, the PrapannAs are blessed to have the Lord's undiminished love (abhimAnam), which qualifies them totally for gaining the primary phalan of Moksham. Therefore, the glories of Prapatthi can not be overstated. Our AchAryAs instruct us impartially on these TatthvArTams.

The key words of this Paasuram are: "UtthamanAr kazhal paNivAr taNmai kidakka taram aLavu" (The glories of those, who perform SaraNAgathi at the sacred feet of PurushOtthaman, are limitless, independent of the fact that they are born in lower kulams or higher kulams).

Swamy Desikan establishes that Prapatthi has the power of destroying prArabdha KarmAs. In contrast, Bhakthi yOgam does not have this power. Therefore, Prapatthi yOgam is superior to Bhakthi yOgam.
In addition to destroying sins committed before Prapatthi and accumulated in the post-Prapatthi period, Prapatthi has the power to grant Moksham for those who long for it ardently to engage in the nithya kaimkaryam to the Lord at His Supreme abode. Thus Prapatthi is superior to all other means for gaining mOksham.

In this posting, adiyEn will cover 3 more Paasurams (73-75) of Desika Prabhandham, which are the 34th to the 36th Paasurams of AdhikAra Sangraham. They deal with the most important rahasyams of AshtAksharam (Moola Manthram), Manthra Rathnam (Dhvayam) and BhagavAn's own upadEsam (Charama slOkam) for our upliftment.

(73) Moola ManthrAdhikAram
**************************

SrIman NaarAyaNan is the sole cause for the Universe and stands superior to all gods as SarvEswaran. He protects all of His creation. We, who are different from insentients, stand as His eternal servants. We pray for the boon of the nithya kaimkaryam performed by the Nithya Sooris in SrI Vaikuntam by banishing our ahankAra-mamakArams and by seeking the Lord alone as the upAyam (means) for gaining Moksha sukham. We hold on to our Lord's sacred feet firmly and place the burden of protection and the fruit of protection at His holy feet to enjoy this eternal and joyous kaimkaryam to Him at His Supreme abode of SrI Vaikuntam. We have been initiated into the Moola Manthram with its deep meanings by our AchAryAs and we recite it and reflect on its meanings for sathgathi.

The key passage in this Prabhandha Paasuram is: "uyirAy mAymai theernthu, maRRu ohr vazhi inRi, adaikkalamAy payanthavan NaarAyaNan paadhangaL sErnthu nall manu Ohdinam"

(meaning): After the removal of the ajn~Anam about the Svaroopam of the JeevAthman/chEthanam (being different from the insentients/achEthanam), we hold on to the sacred feet of our Lord, SrIman NaarAyaNan, alone as the unfailing means for our protection and recite His Moola manthram and meditate on its many rejuvenating meanings.
74) DhvayAdhikAram
******************

SrIman NaarAyaNan united two separate passages of Kata Valli of the Vedam, transformed them into the Dhvaya Manthram and blessed us with it as the loftiest gem among manthrAs. We are firm about the meaning of this manthram (as instructed by our AchAryAs) as being the performance of nithya kaimkaryam to the Divine Couple (dhivya Dampathis) at SrI Vaikuntham, without the ahankAra-mamakArams, after seeking our Lord's holy feet as the sole means for our protection. We reflect on these meanings of Dhvayam.

The key words of this Paasuram are: "Ohdhum iraNDai isaitthu udhavum ThirumAl paadham iraNDum saran yena paRRi, namm PankayatthAL NaaTanai naNNi, nalam thihazh nAttil adimai yellAm kOdhu il uNartthiyudan koLLumARu kuRitthanam"

"Ohdhum iraNDu, isaitthu udhavum ThirumAl" refers to the helpful Lord, who united the two Vaakyams from the Vedam and blessed us with the resulting Dhvaya manthram out of His infinite compassion for us.

"ThirumAl padham iraNDum saraN yena paRRi namm PankayatthAL NaaTanai naNNinOm": We approached the Lord of MahA Lakshmi and held on to His sacred feet as the sole upAyam for our protection from SamsAric afflictions and left the burden and fruits of that protection unto Him.

We did this with a purpose: "nalam thihazh nAttil adimai yellAm kOthu il uNartthiyudan kuRitthanam". We recited the Dhvaya manthram and reflected on its profound meanings to be blessed with nithya kaimkaryam, free of ahankAra-mamakAra blemishes, in the land of endless bliss, His Supreme abode.

75) The Meaning of our Lord's Charama slOkam
********************************************

This Paasuram, the 36th Paasuram of AdhikAra Sangraham, deals with the content of the 29th chapter of SrImath Rahasya Thraya Saaram, Charama slOka adhikAram.
(Meaning): With a determined mind and limitless compassion, Lord KrishNa blessed us with His Charama slOkam to banish our sufferings as SamsAris, who do not have the capabilities and fitness to practise the difficult-to-observe upAyams like Bhakthi yOgam to gain Moksham. He blessed us with the Charama slOkam with great care, using Arjuna as an excuse. His firm intention has been to banish all the bundles of sins from those, who surrender unto His holy feet and request His protection. He assured His protection and asked us not to worry anymore after performing Prapatthi to Him. We as PrapannAs recite the Charama slOkam, reflect on its deep meanings and have our ajn~Anam destroyed, sorrows banished and are free from any doubts about the phalan of protection assured by our Lord.

The key passage in this Paasuram is: "KuRippudan mEvum dharumangaL inRi akkOvalanAr aruL vAchahatthAl maruL aRRanam" (Being unfit and powerless to observe the upAyams like Bhakthi yOgam that need enormous discipline and single-mindedness, we chose the easy-to-practise upAyam revealed to us in His Charama slOkam, which was blessed to us by our Lord out of His firm grace and limitless dayA to destroy all of our past sins and those accumulated consciously or otherwise during the post-Prapatthi stage in our lives).

In this posting, adiyEn will cover 4 more Paasurams (76-79) of Desika Prabhandham, which are the 37th to the 40th Paasurams of AdhikAra Sangraham. They deal with the final three chapters (30 to 32) of Swamy Desikan's magnum opus, SrImath Rahasya Thraya Saaram: AchArya KruthyAdhikAram, Sishya KruthyAdhikAram and NigamanAdhikAram.

(76) AchArya KruthyAdhikAram: The UpakAram given by AchAryAs
************************************************************

(Meaning): Our SadAchAryAs, known for their freedom from even a trace of ajn~Anam, protected the meanings of the three rahasyams from falling into the hands of the undeserving.
When the time came for them to ascend to their Lord's Supreme Abode, they had the merciful thought to banish the ajn~Anam of the SamsAris and decided that the latter group should perform Prapatthi at the sacred feet of the Lord and get uplifted. They chose a select list of sishyAs known for their discriminating intellect (vivEkam) and taste (ruchi) for Bhagavath Kaimkaryam, lit in their minds the auspicious dheepam of Sath SampradhAyam and caused the spread of this sampradhAyam around the world.

The key passage in this Paasuram is: "maruL aRRa Desikar aruL uRRa chinthainAl teruL uRRa siRanthavar paal azhiyA viLaku yERRinar" (Our SadAchAryAs, untainted by ajn~Anam, lit the ever shining lamp of sath sampradhAyam in the minds of the special sishyAs known for their vivEkam, sraddhai and kaimkarya ruchi to the Lord and His BhAgavathAs).

77) Sishya KruthyAdhikAram: Duties of a Sishyan
***********************************************

A SadAchAryan instructs his sishyan on the three Tatthvams (ChEthanam, achEthanam and Iswaran), UpAyam (means) and PurushArTam (ultimate goal) and lights up the dheepam of Jn~Anam to banish the darkness of ajn~Anam. There is no way in which one can repay the AchAryan for his matchless upakAram. Even the omniscient (Sarvaj~nan) Lord can not describe the right way to repay the SadAchAryan for his mahOpakAram. If this were to be so for the omniscient Iswaran, then how would it be possible for lesser beings (humans) to suggest the methods to pay back the debt of gratitude of sishyans to their AchAryans? In spite of it, sishyAs delight in praising their AchAryans. They meditate on the AchAryan's glories. The sishyans spread the reputation of the AchAryans by praising them. All of these attempts of sishyAs to eulogize and celebrate their AchAryans are an expression of their affection and reverence and will never ever compensate for the abundant help that they have received from their most merciful AchAryAs.
The two key passages from this Paasuram are:

1. "Manatthu yezhil jn~Ana viLakkai yERRi iruL anaitthum mARRinavarkku oru kaimmARu Maayanum kaaNahillAn" (For the AchAryan, who lit the lamp of true knowledge and banished all the darkness that existed before, even the all-knowing Lord Himself can not come up with a way to pay back for the upakArams received).

2. "(aathalin nAmm) pORRi uhappathum punthiyil koLvathum pongu puhazh sARRi vaLarppathum, munnam peRRathaRkku saRRu allavO?" (Therefore, what we do in terms of praising the aathma guNams of the AchAryan, meditating on him and spreading his glory in the world by talking about it are insignificant acts compared to the greatest of help received from that AchAryan earlier).

78) NigamanAdhikAram: The Four Parts of SrImath Rahasya Thraya Saaram
*********************************************************************

We composed the grantham of SrImath Rahasya Thraya Saaram to bless Aasthika sishyAs with vivEkam about their AchAryans, who instruct them (the sishyans) about the glories of Prapatthi yOgam. First, we explained the svaroopams and svabhAvams of Tatthva Thrayam, UpAyam and PurushArTam as we learnt from our AchAryAs. Second, we answered the objections raised about them and established their unshakable validity. Third, we explained the ways to gather the meanings of the three rahasyams and pointed out that the essence of such meanings is that the Jeevan has no independence of its own and is the property of the Lord, the sarva sEshi, and he (the jeevan) is His unconditional, eternal servant (sesham). Fourth, we explained the greatness of the most merciful SadAchAryAs, who bless the qualified sishyAs with sath sampradhAya Jn~Anam. Therefore, it is futile to spend one's life on this earth chasing evanescent pleasures instead of enjoying the SrI Sookthi of SrImath Rahasya Thraya Saaram.

Swamy Desikan states that he covered the four topics outlined above in his SrI Sookthi (ivai seppinam).
What is the purpose of wasting one's life pursuing trivia instead of learning about the lofty and rejuvenating concepts housed in SrImath Rahasya Thraya Saaram? (chinn paRRi yenn payan?). There is the learned AchAryan (MaRayavar), who instructs us on the glories of SaraNAgathy at the feet of the Lord, who is Jagadheesvaran (mann paRRi ninRa vahai uraikkinRa maRayavar). Here we are as discerning sishyAs, who approach the AchAryan to gain the Jn~Anam about the true meanings of VedAntham. We composed this SrI Sookthi of SrImath Rahasya Thraya Saaram to focus on the vivEkam acquired by the sishyan from the AchAryan and the resulting clearance of doubts, as well as the attainment of the realization that he (the jeevan) has no independence whatsoever and as a jeevan he is totally dependent on the Lord, his Master, at all times, places and conditions.

79) NigamanAdhikAram: Part II: These Paasurams Beautify the Tamil Language
**************************************************************************

BhagavAn and His BhAgavathAs bless with delight the qualified persons, who possess aathma guNams and do not swerve from the Saasthrams. We composed the Paasurams of this Prabhandham with the benedictions of BhagavAn and His BhAgavathAs after familiarizing ourselves with the VedAnthic doctrines. These Paasurams serve as nectar to the ears of the listeners even if they do not understand their meanings. These 32 Paasurams about SrImath Rahasya Thraya Saaram set in the andhAthi format are alankArams (decorative objects) for the language of Tamil constituting Iyal, Isai and Naatakam (prose, sangItham and drama).

The key passage of this Paasuram is: "ivai muppatthiraNDu muttamizh sErntha mozhi ThiruvE". "Mutthamizh sErntha Mozhi" refers to three sets of triads characterizing these Paasurams:

A. (1) the delectable taste of the chosen words (2) the deep meanings of the thoughts housed in these Paasurams and (3) the appropriateness to set them to most enjoyable music.

B.
(1) the reputation of the composer (2) the glory of the topic and (3) the beauty of the passages of the Paasurams

C. (1) Aasu Kavi (the instantaneous and fluent) composition (2) the intricate geometric patterns found in the Paasurams and (3) the sweetness associated with them.

With the 79th Paasuram of Desika Prabhandham (the 40th Paasuram of AdhikAra Sangraham), the 32 Paasurams on SrImath Rahasya Thraya Saaram, starting from the 47th Paasuram of Desika Prabhandham and set in the andhAthi format, come to an end.

80) The Way in which Tatthvams Rest in the Lord's Body
******************************************************

Lord VaradarAjan of Kaanchi shines as the lamp on top of the Hasthigiri hill and has the chEthana and achEthana tatthvams as pieces of jewellery and weapons on His sacred body. There He stands to delight the hearts of every one.

The key words of this Paasuram are: "Garudan uruvAm MaRayin poruLAm KaNNan, Karigiri mEl ninRu anaitthum KaakkinRAn" (The Lord, who is the meaning of the VedAs, which are the body of Garudan, stands atop Hasthigiri and protects the Universe and its beings).

As the Lord stands on top of the Hasthi Giri, He displays on His sacred ThirumEni many types of jewels and weapons, which are associated with all the Tatthvams. The links are as follows:

1. Jeevan = KOusthubha Gem on His chest
2. Eternally existing mUla Prakruthi = the Mole, SrIvathsam
3. Mahath Tatthvam = the Mace, KOumOdhaki
4. Jn~Anam = the Sword, Nandhakam
5. Ajn~Anam = the sheath for the sword, Nandhakam
6. Sarngam & Paanchajanyam = Taamasa & SaathvIka ahankArams
7. Manas = Sudarsana chakram
8. Jn~Ana & Karma Indhriyams = the ten arrows
9.
Five TanmAthrams & Pancha BhUthams = VanamAlai, Vaijayanthi

81) The Greatness of SrIrangam: SthAna VisEsha AdhikAram: Part I
****************************************************************

The dhivya dEsam of SrIrangam has the eternal presence of Lord RanganAtha and is like the embodiment of the Lord's compassion that has taken the form of insatiable nectar (AarAdha aruLamudham pothintha Koil); SrIrangam was bequeathed to King IshvAku by Brahma DEvan (AmbhuyatthOn AyOddhi mannaRkku aLittha Koil); It has the vaibhavam as a temple worshipped by Lord Raamachandran Himself during His Vibhava avathAram (tOlAtha tani veeran thozhutha Koil); This is the Koil that is the support for VibhIshaNan, who gave his support to Lord Raamachandra in the fight against his brother (thuNaiaana VeDaNarkku thuNayAm Koil); It is the Koil, which can bless the worshippers with anugrahams that can not be gained anywhere else (sErAtha payan yellAm sErkkum Koil); It is the Koil, which has the VimAnam in the shape of PraNavam (sezhu maRayin mudhal yezhutthu sErntha Koil); It has the power to destroy all kinds of sins (theerAtha vinai yellAm therkkum Koil); It is the dhivya dEsam of SrIrangam, which is the topmost among all dhivya dEsams (Thiruvarangam yenat-thihazhum Koil thAne).

82) ThiruvEnkatam's Glories: SthAna VisEsha AdhikAram: Part II
**************************************************************

ThiruvEnkatam is the holy hill that has the power to reveal the sacred feet of the Lord (KaNNan adi iNai yemakku kAttum veRppu).
The hill of the Lord has the power to destroy every kind of sin committed by the chEthanams (kadu vinayar iru vinayum kadiyum veRppu); This hill has the glories to be recognized as SrI Vaikuntam (TiNNamithu veedu yenna thihazhum veRppu); This hill has the vaibhavam of having sacred waters like KonEri, PaapanAsam and others (theLintha perum theertthangaL seRintha veRppu); This is the hill, which is recognized as the embodiment of all PuNyams (PuNNiyatthin puhal ithu yenap-puhazhum veRppu); ThiruvEnkatam is the holy hill, where all the bhOgams enjoyed in Parama Padham are made within the reach of the residents of this earth (PonnulahiR-bhOgamellAm puNarkkum veRppu); This is the auspicious hill desired as a place of residence by the DEvAs and the residents of this BhUlOkam (ViNNavarum MaNNavarum virumbum veRppu); This is indeed the hill celebrated by the VedAs and is revered as the ThiruvEnkatam hills (Venkata veRppu yena viLangum Veda VeRppE).

83) The Vaibhavam of Kaanchi: SthAna VisEsha AdhikAram: Part III
****************************************************************

This is the dhivya dEsam of Hasthi Giri, where the Lord stands to destroy totally all the sins of those, who seek His protection (Bhatthar vinai thotthu aRa aRukkum AtthigiriyE). This is the dhivya dEsam of the Lord, who revealed His heroism by cutting down the ten heads of RaavaNan with an arrow of unmatched power in a great battle at LankA. This is the dhivya dEsam of the Lord, who showed His meekness and soulabhyam by going after the VeNNai and curds that His mother YasOdhA had set aside and got threatened by her for consuming them secretly (matthu uRu tayir vaithathu meyttha VeNNai uNNum Atthan idam AtthigiriyE).

In this posting, adiyEn will cover 3 more Paasurams (84-86) of Desika Prabhandham, which are the 45th to the 47th Paasurams of AdhikAra Sangraham.
84) Thirumanthiram Granting All Phalans
***************************************

Those who recite and reflect on the deep meanings of Moola Manthram (Thiru Manthiram/AshtAksharam) will be blessed with every phalan that they wish to have. This includes the acquisition of the eight GuNams of the Lord, the eight traits of one's Buddhi (intellect), the eight aathma guNams (eight flowers), the eight attainments (siddhis), the eight kinds of devotion (Bhakthi), the mastery over the eight limbs of Yogam, the eight kinds of Iswaryams (wealth) and the mastery over the 64 kinds of arts and sciences (Kalais).

The key passage of this Paasuram is "YeNN guNatthOn yettu yeNNum yeNN guNamathiyOrkku, yeNN patthi--yettu guNamum mElathuvum yettina" (For the vivEkis with eight kinds of aathma guNams reciting and reflecting on the AshtAksharam of the Lord with the eight auspicious guNams, the eight kinds of Bhakthis, eight kinds of wealth and much more are readily attained).

Additional notes by Swamy SrIraama DesikAcchAr:
***********************************************

The eight auspicious guNams of the Lord are: (1) Freedom from the influence of karmAs (2) Freedom from old age/nithya youvanam (3) Freedom from death (4) Freedom from sorrow (5) Freedom from thirst (6) Freedom from hunger (7) Possession of BhOga vasthus, which are imperishable and (8) Power to execute desired actions.

The eight guNams of the Intellect are: (1) ability to comprehend material presented (2) power to hold the received material firmly (3) ability to recall material learnt and held (4) power to describe them to others (5) ability to grasp things not explained by others (6) discriminatory skills to reject unwanted things told to them by others (7) power to comprehend doctrines with clarity and (8) ability to understand the true meanings of the tatthvams.
The eight aathma guNams of the human beings acquired through anusandhAnam of AshtAksharam are: (1) nonviolence/ahimsai towards others by speech, body or mind (2) control of senses/Indhriya nigraham (3) mercy towards all (4) patience towards all/poRumai (5) Jn~Anam/true wisdom about Tatthva Thrayams (6) Tapas/penance (7) DhyAnam and (8) Sathyam/truthfulness.

The eight kinds of Bhakthis are: (1) love towards BhAgavathAs (2) joy in worshipping the Lord (3) eagerness to hear about the Lord's charithram (4) thickening of the voice and horripilation on hearing about the Lord, thinking about Him or speaking about Him (5) attempts to perform aarAdhanam for the Lord (6) freedom from ego during the performance of kaimkaryams for the Lord (7) thinking about Him alone and (8) not asking the Lord for trivial and perishable boons.

85) Explanation of the Meaning of Charama SlOkam: Part I
********************************************************

Our Emperor of Emperors (SarvEswaran) incarnated as the son of VasudEvan so that the citizens of DhwArakai can be blessed with the ultimate PurushArTam of enjoying His company. He sat in front of Arjuna's chariot as his Saarathy and revealed the height of His soulabhyam. This Lord along with His Devi undertook the sankalpam to uplift the created jeevans from all sufferings and used Arjuna as the excuse to instruct the world on His Charama slOkam. He stood as the SiddhOpAyam in His Charama slOkam and took upon Himself the burden to protect those, who sought His protection. He removes the effects of Prakruthi, which stands in the way of developing Tatthva Jn~Anam, and assures us freedom from sorrows and Moksham at the end of the Prapannan's bodily existence on this earth.
The key passage of this Paasuram is "TaNN tuLava malar mArbhan, oNN thodiyAL ThirumahaLum thAnum aahi, oru ninaivAl eenRa uyir yellAm uyya, tAnE sonna tani dharumam yemakku tAnn aay, Tannai yenRum kaNDu kaLitthu adi sooda, vilakkAi ninRa viLayAttai kazhikkinRAn".

"TaNN tuLava malar mArbhan" is the Lord wearing the cool TuLasi garland interwoven with the fragrant flowers. "oNN todiyAL ThirumahaLum TAnum aahi oru ninaivAl eenRa uyir yellAm uyya" refers to the Lord with Periya PirAtti, propelled by their unified sankalpam to uplift the jeevans created by Them. How did the Lord do it? He became the matchless means (upAyam) for those who performed Prapatthi unto Him and stood as SiddhOpAyam for them, as revealed in His Charama slOkam. Thus, He lifted those fortunate ones up from the mire of samsAram so that they can enjoy Him in His Parama Padham, wear His sacred Thiruvadis on their heads and become filled with bliss. Through this MahOpakAram, the Lord removes all the interferences created by His own Moola Prakruthi.

86) Explanation of the Meaning of Charama SlOkam: Part II
*********************************************************

The Charama slOkam has a key passage, "Sarva DharmAn Partithyajya". This Paasuram focuses on the six meanings of the two words "Sarva dharmAn". The meaning of this Paasuram is expanded in the 316th Desika Prabhandham Paasuram housed in the SrI Sookthi of "Charama slOka churukku". The extended meanings of this Paasuram will be covered, when we arrive at the 316th Paasuram. Briefly, these six meanings are quoted as: (1) asakthAdhikArithvam (2) aakinchanya puraskriya (3) ananga bhAvam (4) dharmANAm asakthyArambhavAraNam (5) TathprathyAsA prasamanam and (6) BrahmAsthra nyAya Soochanam. adiyEn will share the detailed commentary of Swamy SrIrAma DesikAcchAr, when we arrive at the 316th Paasuram.
The six meanings of "Sarva DharmAn Partithyajya" are: (1) Do not continue with difficult-to-practise upAyams for Moksham any more, even if you have been active in pursuing them. (2) It is the best act to banish desire in practising such upAyams. (3) The act of SaraNAgathy does not need the help of any thing else except its five angams. (4) If other acts are undertaken, then SaraNAgathy will be fruitless. In such a case, it is like BrahmAsthram that can not co-exist with other asthrams. (5) and (6) You, who are engaged in the ancient upAyam of Bhakthi yOgam fit for great Jn~Anis, will feel powerless to practise this upAyam. Keep your helpless state as help and seek My Thiruvadi (sacred feet) as the sole upAyam and I will banish your sins and grant you Moksham.

The brief comments regarding the context of this Paasuram, however, are: "Sarva DharmAn Partithyajya" has the meaning to abandon all upAyams. Dharmam is a phalan-yielding practice (saadhanam) that can be understood only with the help of the Lord's Saasthrams. When the Lord uses the plural of dharmam (DharmAn) in the Charama slOkam, He refers to the many kinds of dharmams. The "Sarva" padham selected by the Lord refers to the nature of the dharmams having many angams (limbs). Dharma padham generally connotes UpAyam; here, in the context of the Lord's upadEsam, it refers to MokshOpAyam such as Bhakthi yOgam with its 8 angams.

In this posting, adiyEn will cover one Paasuram (87) of Desika Prabhandham, which is the 48th Paasuram of AdhikAra Sangraham. This Paasuram has so many subtle meanings relating to the essence of the Charama slOkam that adiyEn will focus on it alone in this posting.

The Greatness of Today
**********************

Today is the Iypaasi SravaNam day at Oppiliappan Koil. ThiruviNNagarappan adorns nila mAlais and invites us to reflect on the message imprinted on His right hand (MaamEkam SaraNam Vraja).
On top of Hasthi Giri at Kaanchi, Lord VaradarAjan invites us to reflect on the message imprinted on His right hand: "Maa Sucha:". Today is also the birth day of Poygai AzhwAr, followed by the avathAra dinams of BhUtham and PEy AzhwArs (the three "Mudhal AzhwArs"). In the 89th Paasuram of Desika Prabhandham (the 50th Paasuram of AdhikAra Sangraham), Swamy Desikan acknowledges with gratitude the upakAram of the Mudhal AzhwArs in revealing the UpAyams of Prapatthi and Bhakthi yOgams to us for gaining Moksham.

The "yEka" Sabdham
******************

On this auspicious occasion, it is very appropriate to reflect upon the "yEka" sabdham, which is the essential part of the Charama slOkam: "Maam yEkam SaraNam vraja". The six meanings of the "yEka" sabdham are the subject matter of the 87th Desika Prabhandham Paasuram (48th Paasuram of AdhikAra Sangraham) to be covered today. In the 93rd Paasuram of Desika Prabhandham (54th of AdhikAra Sangraham), Swamy Desikan describes our status of living in a state of freedom from fear due to the following of the upadEsam of the Lord housed in His Charama slOkam.

87) Charama SlOkam: Meanings of Charama SlOkam: Part III
********************************************************

Saadhanamum naRppayanum nAnE aavan
Saadhakanum yenn vayamAi yennai paRRum
Saadhanamum SaraNa neRi anru unakku
SaadhanangaL innilaikku ohr idayil nillA
Vedanai sEr vErangam ithanil vENDA
vEru yellAm niRkkum nilai NaanE niRppan
ThUthanumAm NaaTanumAm YennaippaRRi
sOham theer yena uraitthAn soozhhinrAnE

The Six Meanings of "yEka" Sabdham
**********************************

In this Paasuram, Swamy Desikan instructs us on the six meanings of the word "yEkam" ("Maam yEkam SaraNam Vraja" portion of the Charama slOkam) as revealed by the SrI Sookthis of his PoorvAchAryAs.
Additional References on "yEka" Sabdham
***************************************

Swamy Desikan elaborates on the above meanings of the "yEka" sabdham again in the 318th Paasuram of Desika Prabhandham (4th slOkam of Charama slOka Churukku) and the 344th Desika Prabhandha Paasuram (the 19th Paasuram of GeethArTa Sangraham). The 29th chapter of SrImath Rahasya Thraya Saaram, entitled "Charama SlOkAdhikAram", has the most elaborate commentary by Swamy Desikan on our GeethAchAryan's Charama SlOkam.

87th Prabhandha Paasuram (48th AdhikAra Sangraha Paasuram)
**********************************************************

1) The First Meaning of yEka Sabdham
************************************

The first of the six meanings is: "Saadhanamum nall-payanum nAnE aavan" (I will remain as the UpAyam/Means and the Phalan/auspicious fruit of that UpAyam of SaraNAgathy). Our Lord is both UpAyam and its Phalan. The yEka sabdham reveals that SarvEswaran alone is both UpAyam and Phalan.

2) The Second Meaning of the yEka Sabdham
*****************************************

The second meaning is: "Saathakanum Yenn vayamAi Yennaip-PaRRum" (The anushtAthA/practitioner of that UpAyam will remain under My Lordship and perform SaraNAgathy to Me to gain Moksha siddhi). Here, the yEka sabdham reveals that the Svatantram of the ChEthanam is banished in the context of the Lord's unfettered will and His SvAtantryam. Our Lord is the SiddhOpAyan and the jeevan should not get confused as the adhikAri, who practises the UpAyam. It is SarvEswaran's sankalpam that lets the chEthanan perform the UpAyam as one totally dependent on the DayA of the Lord. The chEthanam is never ever independent to perform the UpAyam on its own.
3) The Third Meaning of the yEka Sabdham
****************************************

The third meaning is "unakku SaraNa neRi Saadhanamum anRu" (The Lord says: For you, the chEthanan, SaraNAgathy is not a direct upAyam, but is only a vyAjam for gaining the fruit of that UpAyam). Here, the yEka sabdham cautions the chEthanam not to link Iswaran (SiddhOpAyam) with SaadhyOpAyams like Prapatthi or Bhakthi yOgams. Iswaran uses Prapatthi as a vyAjam to grant the boon of Moksham. He stands in the place of difficult-to-observe upAyams like Bhakthi yOgam to grant Moksham to those who perform Prapatthi to Him alone.

4) The Fourth Meaning of the yEka Sabdham
*****************************************

The fourth meaning is "SaathanangaL innilaikku ohr idayil nillA" (UpAyams like Bhakthi yOgam will not have any value/help to the SaraNAgathy/Prapatthi yOgam. In other words, Bhakthi yOgam and other upAyams do not assist/advance Prapatthi). Our Lord does not place any obstacle/burden between Him and the one, who hastened to perform Prapatthi. He stands in place of all other upAyams and bears all the burdens Himself. He does not expect any other upAyams from the Prapannan after the performance of Prapatthi.

5) The Fifth Meaning of the yEka Sabdham
****************************************

The fifth meaning is "vEthanai sEr vERu angam ithanil vENDA" (For one who observes SaraNAgathy as the UpAyam, there is no admixture with angams that are very difficult to observe). Except the five angams associated with Prapatthi (Aanukoolya Sankalpam, PrAthikoolya Varjanam, KaarpaNyam, Gopthruva VaraNam and MahA Visvaasam), our Lord does not desire any other angams from those, who perform Prapatthi to Him.
6) The Sixth Meaning of the yEka Sabdham
****************************************

The sixth meaning is: "vERu yellAm niRkkum nilai nAnE niRppan" (I will stand in place of all other upAyams like the difficult-to-observe Bhakthi yOgam and bless the chEthanan performing Prapatthi with the fruits of Moksham). Our Lord instructs the Prapanna jeevan that He is under the influence of that Jeevan as a result of the performance of Prapatthi to Him alone and therefore He will stay in place of all other upAyams and bless that jeevan with the fruits of Moksha sukham.

Our Lord's UpadEsam
*******************

The Lord says: "For all these six reasons, Oh chEthanam, observe Prapatthi yOgam to Me alone and leave the other upAyams (Sarva dharmAn Partithyajya Maam yEkam SaraNam vraja). After performing Prapatthi, live in a state of freedom from worries and sorrows; place the burden of your protection and the fruits of that protection at My feet. As your Lord and Parama Soulabhyan, who went as the messenger for the PaaNDavAs, I will protect you without fail so that you can stay in a state of nirbhayam and nirbharam (freedom from worries and fears about your gathi)."

Swamy Desikan's Instruction to Us
*********************************

Swamy Desikan summarizes the Charama slOkam message this way: "ThUthanumAm NaaTanumAm Yennaip-paRRi sOham theer yena uraitthAn soozhkinrAnE". That Lord, who identified Himself as the Lord of all the chEthanams and as the Soulabhyan, who went to DuryOdhanA's court as the messenger for the PaaNDavAs, instructed us to free ourselves of all fears and sorrows through the performance of Prapatthi at His sacred feet alone. He does not let the PrapannAs down, since He is Achyuthan, and surrounds them (SoozhinrAn) with His anugraham.

In this posting, adiyEn will cover two Paasurams (88-89) of Desika Prabhandham, which are the 49th and the 50th Paasurams of AdhikAra Sangraham.
88) Hesitation of ChEthanam to seek Moksham
*******************************************
The key passage in this Paasuram is about the Lord's laughter over our putting off our desire to seek His Supreme abode, wanting instead to hang on to this SamsAric world and enjoy its perishable "pleasures":

"Mutthi tara munnE tOnRi nall ninaivAl nAmm isayum kAlam inRO naaLayO yenRu nahai seyhinrAn"

Our Lord has incarnated in many forms to grant us Moksham and is waiting for us to seek His protection. While waiting for us to gain the auspicious Jn~Anam to seek the journey to His Supreme abode, He makes fun (parihAsam) of our indecision and asks with humor: "Is it today, or would it be tomorrow, that you will decide to perform the upAya anushtAnam?"

(Meaning of the entire Paasuram): No one can overcome the will (sankalpam) of our Lord. He creates all kinds of desires in those who do not try to reach Him and display enmity towards Him. He interferes with their ability to enjoy those bhOgams that they desire. Our Lord of this disposition has showered His grace on us already and destroyed our longing for SamsAra bhOgams. He has now accepted us as objects of protection to enjoy the shade of His sacred feet. He has banished His earlier anger over our previous trespasses. He has taken many avathArams to mix with us as Parama Soulabhyan and to grant us mOksha sukham. In spite of all these special efforts on His part, we, who do not recognize His extraordinary grace and compassion, keep postponing the performance of SaraNAgathy to realize Moksham from one day to the next. Our Lord laughs over our ignorance, procrastination and ineptitude.

For His enemies that do not cherish Him (Tannai NaNNAthAr), He creates all kinds of desires in them (ninaivu anaitthum Taann ViLayitthu) and then prevents them from enjoying those desires that they covet (Taann viLaitthum vilakkum NaaTan).
For us, whom He has decided to protect from the SamsAric horror, He has eliminated our taste for the non-lasting and pain-yielding SamsAric "pleasures" (ibbhavatthil yemm ninaivai mARRi) and has placed us as objects of protection under His sacred pair of feet (iNayadikkeezh adaikkalam yenRu yemmai vaitthu). He has now forgotten or gotten over His anger over our previous trespasses and is ready to grant us His protection (munn ninaivAl yAmm muyanRa vinayAl vantha munivu ayarnthu). He waits to see when that day would be for us to elect to seek His rakshaNam. He has taken many avathArams already to grant us Moksham (Mutthi tara munnE thOnRi), and He is amused at our delay and makes fun of us by asking whether He has to wait until today (the end of the day) or until tomorrow for us to make up our mind as a result of the dawning of the clear Jn~Anam that would propel us to perform the SaraNAgathy (Mutthi tara munnE thOnRi, nall ninaivAl nAmm isayum kAlam inRO nALayO yenRu nahai seyhinRAn).

The Lord laughs over the thought that He has rushed to grant Moksham to the chEthanam, and that the chEthanam does not wish to forsake the SamsAric pleasures to seek the Moksham. That chEthanam keeps on postponing the day for seeking Moksham from today to tomorrow and onwards. Our Lord laughs over the ineptitude and ignorance of the chEthanam, which cannot make up its mind.

89) The help of the Mudhal AzhwArs
**********************************
During DhvApara Yugam, our Lord indirectly provided additional help to the chEthanams to rush towards Him to seek Moksham from Him. During the course of a rainy night at His dhivya dEsam of ThirukkOvaloor, our Lord got Poygai, BhUtham and PEy AzhwArs together in the tight space of an idaikazhi (dEhaLi) and pressed them there to have their physical contact (dEha sambhandham) and to witness the birth of the three AndhAthis from the three AzhwArs.
The andhAthis of the AzhwArs lit the lamp of true knowledge (Sathya dheepam) to chase away the darkness of aj~nAnam that had enveloped the world. The lamp lit by the AzhwArs in front of the Lord and His divine consort glorified the UpAyams of Bhakthi and Prapatthi yOgams celebrated by the Vedams. This Paasuram is about AchArya KruthyAdhikAram, the topic of the 30th Chapter of SrImath Rahasya Thraya Saaram.

This 89th Paasuram should be remembered especially on this Iyppasi Satabhishak day, the day of avathAram of PEy AzhwAr:

Paattukku-uriya pazhayavar moovaraip-paNDu-oruk-kaal
Maattukku aruL tarum Maayan malinthu varutthathalAl
nAttukku iruL seha nAnmaRai anthi nadai viLanga
veettukku idaikkazhikkE veLikAttum, am-mey-viLakkE

(meaning): SarvEswaran caused the Mudhal AzhwArs to sing their ThiruvandhAthis to banish the nescience that shrouded the world, and through their andhAthis instructed us on the UpAyams like Bhakthi and Prapatthi for our upliftment from SamsAric sufferings.

"Paattukku uriya pazhayavar moovar" are the triad of AzhwArs, who are the most qualified for singing the Lord's vaibhavam. "mAttukku aruL tarum Maayan" is the Lord of wondrous deeds, who showers His grace on His property, the chEthanams. "nAttukku iruL seha" describes the purpose of the Lord empowering the Mudhal AzhwArs to sing their Prabhandhams: it was to destroy the darkness of ajn~Anam that prevailed in the world. What did the Mudhal AzhwArs do, and how did they banish the surrounding darkness? They lit a lamp of Sathyam to eliminate the darkness of nescience. What did that lamp do besides chasing away the darkness of ajn~Anam? It shed light all around and glorified the means (upAyams) for the performance of SaraNAgathy (Bhakthi and Prapatthi yOgams) at the sacred feet of the Lord (nAnmaRai anthi nadai viLanga veLikkAttum).

In this posting, adiyEn will cover the 90th Paasuram of Desika Prabhandham, which is the 51st Paasuram of AdhikAra Sangraham.
This Paasuram is the first of the six Paasurams dealing with the 32nd chapter of SrImath Rahasya Thraya Saaram (NigamanAdhikAram). This paasuram celebrates the glories of the sacred feet of Lord RanganAthan as the UpAyam and Phalan for the ChEthanams. It is set in pathinARu (sixteen) seer (symmetry) Aasiriya viruttham (Aasiriyam meter) and has a total of 16 lines. Each of the sixteen lines celebrates one or another aspect of the myriad anugrahams of Lord RanganAthan's ThiruvadigaL.

(General Meaning): The insatiable nectar of the tender feet of the Lord of SrIrangam is never abandoned by His BhakthAs. He did many miracles during KrishNAvathAram. Let us enjoy the beauty of this KarNa-ranjaka Paasuram line by line. The Tamil text of this 90th Paasuram is in parentheses after the general meaning for each line:

1. Those feet gave a swift kick to the asuran, who came in the form of a wheel to destroy the Lord and instead got totally destroyed (uRu sakatam udaya orukAl uRRu uNarnthana).

2. Those powerful feet crawled between the two Marutha trees and brought them down (udan marutham Odiya oru-pOthil tavazhnthana).

3. They stayed tied to the husking mortar, when the Lord was caught stealing the butter and curds from the pots held high by His Mother (uRi tadavum aLavil uralOdu uRRu ninRana).

4. Those strong feet went on an ambassadorial mission to DuryOdhanA's court with delight to plead the case for Dharma Puthran, the eldest PaaNDavan of unimpeachable conduct (uRu neRi ohr Dharuman vidu thUthukku uhanthana).

5. Those purposeful feet roamed in BrundhAvanam with anger to destroy the enemies of the righteous ones (MaRa-neRiyar muRiya PirutAnatthu vanthana).

6. Those tender feet could not take even the pressings of MahA Lakshmi's soft hands and reddened under such soft pressure (Malar MahaL kai varuda, malar pOthil sivanthana).

7.
Those dayA-filled feet (Thiruvadi) became the appropriate object for the devotion (bhakthi) of the Sages, who did not wish to be born again in this Karma BhUmi (MaRu piRavi aRum Munivar mAlukku isainthana).

8. Those sacred feet took residence inside the VimAnam (PraNavAkAra VimAnam), which reached the kings of the Manu Dynasty from Brahma DEvan (Manu MuRayil varuvathu ohr vimAnatthu uRainthana).

9. Those victorious feet shone on the chariot of the righteous ArjunA (aRam udaya Visayan amar tEril thihazhnthana).

10. They destroyed the hoods of the powerful serpent KaaLiyan through dancing on them (adal urakam paDam madiya aadik-kadanthana).

11. Those sacred feet adorn the celebrated sTAnam of SrI Vaikuntam, which is not comprehended by any one of the six mathams (aRu samayam aRivu ariya tAnatthu amarnthana). The six samayams/mathams are: Saankhyam, KaNAtham, Bhouddham, Patanjali matham, Jainam and Saivam.

12. Those holy feet became the most appropriate object of singing for the tongue of Swamy NammAzhwAr, the king of ThirukkuruhUr, which is the jewel of the BhU MaNDalam (aNi Kuruhai nahar munivar nAvukku amainthana).

13. Those sacred feet became the glorious object of adornment of the fragrant TuLasi garland (veRi udaya TuLava malar veeRukku aNinthana).

14. Those holy feet blessed queen Utthirai's embryo, which was like a piece of charcoal, and transformed it into a beautiful young child (vizhu kari Ohr kumaran yena mEvic-chiRanthana).

15. Those powerful feet chased and destroyed the heroic army of the asurAs (viRal asurar paDai adaya veeyat-thurantana).

16. "Vidalariya PERIYA PERUMAAL menn PaadhangaLE": Those soft and tender feet of Lord RanganAthan performed all these miracles effortlessly. Those sacred feet of Lord RanganAthan are never abandoned by His adiyArs. May those adiyArs of this victorious Lord be uplifted by seeking these holy feet as their means (upAyam) and fruit (phalan) for Moksham!
The chantham (the rhythmic beats) of this Paasuram can only be fully appreciated by the musicians and rasikAs amongst us. This Paasuram is usually sung as a Raaga Maalikai in the RaagAs of Kaapi, BehAg, Sindhu Bhairavi and HamsAnandhi.

In this posting, adiyEn will cover three Paasurams of Desika Prabhandham, which are the 52nd, 53rd and the 54th Paasurams of AdhikAra Sangraham. These three Paasurams also cover the content of the 32nd Chapter of SrImath Rahasya Thraya Saaram (NigamanAdhikAram).

91) Linking up and Involvement with our Sath-SampradhAyam
*********************************************************
Our Sath-SampradhAyam is ancient and free from any blemish. Those AasthikAs, who have been blessed to belong to this sacred sampradhAyam, will reach Sathgathi. The key words of this 91st Paasuram are: "poRai mihum Punithar kAttum yengaL ponRAtha nal-neRiyil puhuthuvAr" (Those AasthikAs, who with many aathma guNams will enter the auspicious path shown by our sublimely pure AchAryAs, marked by patience and forbearance greater than BhUmi Devi Herself). These AasthikAs have the clarity and purity of mind that result from following our Sath-SampradhAyam (Sath-sampradhAya parisuddha manas). This paasuram describes the eight unique lakshaNams of these AdhikAris, who through their links to our hoary and timeless SampradhAyam keep it dynamic, radiant and eternal:

(1) These AasthikAs (AastheekyavAn) will have deep faith in the meanings of the eternal Vedaas (maRai uraikkum poruL yellAm mey yenRu OhrvAr).

(2) They will have sharp intellects to appreciate the subtle meanings of the Tatthva Thrayams (manniya koor mathi udayAr).

(3) They will never be jealous of others; they will be free from asooyai (vaNN guNatthil kuRai uraikka ninaivu illAr).

(4) They will have the distinction of receiving upadEsam about our Sath-SampradhAyam from SadAchAryAs (GurukkaL pAll kOthu aRRa manam peRRAr).
(5) They will follow only the essential course that our Sath-SampradhAyam has laid out (nanmai koLvAr).

(6) They will be beyond the influence and ways of the common folk deeply immersed in SamsAric sufferings (siRai vaLarkkum mAnthar sankEtatthAl sithayAtha tiNN madhiyOr).

(7) They will not seek the insignificant phalans sought by the samsAris and will seek instead the everlasting fruits of Moksham (terinthathu OhrAr).

(8) They will follow the golden path laid out and travelled by our SadAchAryAs, known for their matchless forbearance, and be uplifted (poRai mihum Punithar kAttum yengaL ponRAtha nall neRiyil puhuthuvAr).

Thanks to them, our Sath-SampradhAyam will flourish without any interruption forever. The Sanskrit slOkam quoted in this context of the Paasuram is:

AastheekyavAn nisitha buddhi: anabhyasooyu:
Sath-sampradhAya parisuddhamaNA: SadharTee
SankEtha-bheethi rahitha: thruNEshvasaktha:
Sadh-varthamAneemanuvidhAsyathi Saasvatheem na:

92) The path described by this Prabhandham is the best
******************************************************
This Prabhandham instructs one about Prapatthi mArgam as the best path to follow for our upliftment from SamsAric sufferings and for gaining MOksha Siddhi. Prapatthi is the key upAyam (Mukhya upAyam) for Moksham. The key words associated with this Paasuram are: "ithu vazhi inn amudhu; MaRayOr aruLaal ithu vazhiyA isainthanam" (The way of the SadAchAryAs described in this Prabhandham is the nectarine path. Thanks to the krupai of the SadAchAryAs, this way has been opened to us, and we have accepted it as the auspicious path for us to follow).

The complete meaning of this Paasuram celebrating the Vaibhavam of the Prapatthi mArgam taught by our noble AchAryAs is: We pushed aside the pursuit of trivia practised by the samsAris as not befitting our Svaroopam, as instructed by our AchAryAs.
We became thoroughly convinced that the grace of the Lord, resulting from our observance of the most important UpAyam (Prapatthi), is the cause for Moksham, and we turned away from other inconsequential upAyams. Our AchAryAs instructed us on this Prapatthi mArgam and willed that this mArgam should thrive and prosper forever for our benefit. Our AchAryAs forgave our blemishes and performed upadEsam on this nectarine Prapatthi mArgam. We followed this Prapatthi mArgam as revealed to us by our AchAryAs.

93) Freedom from worries due to the help of the Charama SlOkam
**************************************************************
We are the ignorant ones, who do not even know that 8 + 2 is ten (yettum iraNDum aRiyAthavar). We are the untutored ones, who do not know about the Manthrams with 8 aksharams (Yettum) and Dhvayam (iraNDum), as well as the Lord's Charama slOkam. As a result, we are devoid of Jn~Anam about Bhagavath Bhakthi and the glory of the Prapatthi mArgam. Our Lord, through His AchAryAs, made sure that we the ignorant ones received upadEsam on the esoteric meanings of the three rahasyams, to get us ready to enter His Supreme abode. We reflect on His Charama slOka upadEsam to perform Prapatthi at His sacred feet, to free ourselves from fear about SamsAram and to enjoy worry-free lives here before entering the Lord's abode.

Swamy Desikan describes the sacred Charama SlOkam of Lord Krishna as "kattu yezhil Vaachakam" (the strong and beautiful words). Our Lord's upadEsam is summed up as: "vinait-thiraL mutta maaLa muyanRidum" (Engage in the observance of Prapatthi to banish all of your karma vargams). I will stand in the place of all dharmams and take up your rakshaNam. Please be freed of your worries and fears (anjal yenrAr).
Swamy Desikan describes SrIvaikuntam, the Supreme abode of the Lord, as "yetta oNNAtha idam" (the place that cannot otherwise be reached) and says that Prapatthi will grant (tarum) residence at this sTaanam of the Lord, which can be reached only through the observance of Prapatthi (or Bhakthi yOgam). Our AchAryAs have recommended for us, the incompetent adhikAris, Prapatthi yOgam as the easy-to-practise and sure-to-grant-phalan upAyam, compared to the difficult-to-observe Bhakthi yOgam, which takes a long time to yield the phalan of Moksha sukham.

In this posting, adiyEn will cover two more Paasurams of Desika Prabhandham, which are the 55th and the 56th Paasurams of AdhikAra Sangraham. These two final Paasurams of AdhikAra Sangraham also cover the content of the 32nd (final) Chapter of SrImath Rahasya Thraya Saaram (NigamanAdhikAram). With these two Paasurams, AdhikAra Sangraham is concluded.

94) The anugrahams arising from AdhikAra Sangraham
**************************************************
The key message of this paasuram is that the SrI Sookthi of AdhikAra Sangraham and the contents covered there would be most enjoyable to the Lord, known for His lotus feet brimming with honey (tEn uLa Paadha malar ThirumAlukku titthikkum). Swamy Desikan is ready to conclude his second Tamil Prabhandham of AdhikAra Sangraham. He makes the following observations: "The nishtais (status and accomplishments) described in this Prabhandham are not within the easy reach of even Indhran and the Nithyasooris. Through the power (anugraha sakthi) of the AchAryan, we are blessed to gain these nishtais and also to see on this karma bhUmi others possessing these rare-to-gain nishtais. Our bhAgyam is indeed worthy of celebration! There may be those with distorted minds (vakra buddhi), who may find fault with this grantham and with us, who composed this prabhandham, and may therefore resent us. In spite of them, this Prabhandham will be very dear to SrIman NaarAyaNan's mind (ThiruvuLLam)."
95) Lord HayagrIvan's role as AchAryan for the grantha nirmANam
***************************************************************
In this last slOkam of AdhikAra Sangraham, Swamy Desikan acknowledges that this grantham was not authored by him directly. He acknowledges his debt of gratitude to Lord HayagrIvan as his AchAryan, for His upadEsam that was captured by him and transformed into the written version. Swamy Desikan, in all modesty appropriate for the occasion, states clearly that this Prabhandham is blemishless, since it originated directly from Lord HayagrIvan serving as his AchAryan.

The key passage of this final Paasuram of AdhikAra Sangraham is: "VeLLaip-Parimuhar DEsikarAy virahAl adiyOm uLLatthu yezhuthiyathu yAmm olayil ittanam. ithaRkku yenn?" (Lord HayagrIvan incarnated as an AchAryan with the face of a white horse and a human body and wrote these pAsurams of AdhikAra Sangraham on the tablet of my mind through the route of upadEsam. adiyEn just took those texts imprinted in my mind and, for the benefit of the people of the world, transferred them to the palm leaves as a SrI kOsam. Since Lord HayagrIvan, the Lord of VidhyAs, is the author of this grantham, how can there be any blemish in the content or style of this grantham? It is impossible.)

The beauty of Swamy Desikan's expression of his debt of gratitude through this paasuram is marvellous. Normally, the people of the world write/store in their minds the different AchArya granthams recorded on palm leaves by scribes. In his case, Swamy Desikan says that he wrote on palm leaves what was already recorded in his mind by his AchAryan, Lord HayagrIvan, so that the people of the world can benefit from the Lord's direct upadEsam to him. Swamy Desikan observes further that the reviewing public may accept this grantham or reject it as defective (koLLa thuNiyinum, kOthu yenRu ihazhinum). Either of these reactions does not bother Swamy Desikan.
He says that he is not going to rejoice because one group accepted this grantham and welcomed it, nor is he going to be downcast and curse the other group, which criticizes this grantham as faulty and rejects it. His kaimkaryam is done, and he has an equanimous attitude that neither elates him nor depresses him on learning about the two kinds of reception to this grantham. He comments on his reaction to the praise given by those who welcome it as a srEshta grantham: "yemm yezhil mathi yeLL atthanai uhavAthu" (Our beautiful mind, filled with dispassion, would not at all be elated). In the case of those who dismiss this grantham as insubstantial and full of doctrinal mistakes, Swamy Desikan states that his mind will not at all be angry or displeased with those critics.

Why does Swamy Desikan take this attitude filled with VairAgyam? He dismisses both the haters and the lovers of this grantham with the statement: "ithaRkku yenn?" He says: "What does it matter?" Swamy Desikan's conviction behind this attitude is that the grantham of AdhikAra Sangraham took its birth because of Lord HayagrIvan, and it is He who is going to be happy when critics welcome it, or He who is going to be angry with those who reject it and punish them. Swamy Desikan places the anugraha and nigraha sankalpam in the hands of his ParamAchAryan, Lord HayagrIvan, and places the palm leaves at His Thiruvadi. Swamy Desikan's Saathvika ThyAgam and AchArya Bhakthi are abundantly evident in this final paasuram of salutation to Lord HayagrIvan.
Errata for Practical Programming (2nd edition)

The latest version of the book is P2.0.

- Reported in: B8.0 (21-Aug-13), PDF page 11: "For the mathematically inclined, the relationship between // and % comes from this equation, for any two numbers a and b: (b * (a // b) + a % b) is equal to a". Insert "non-zero" between "two" and "numbers", for when b = 0, a divide-by-zero error occurs. --George Sullivan-Davis

- Reported in: P1.0 (17-Oct-13), Paper page 12: The operator ** is not listed under section 2.3. Instead the operator * is listed twice. Wrong: "these operators can be applied to those values: +, -, *, /, //, %, and *." Should have been: "these operators can be applied to those values: +, -, *, /, //, %, and **." --Iver Røssum

- Reported in: P2.0 (29-May-15), PDF page 26 / Paper page 12: "For example, in type int, the values are …, -3, -2, -1, 0, 1, 2, 3, … and we have seen that these operators can be applied to those values: +, -, *, /, //, %, and *." Should read "+, -, *, /, //, %, and **." The last symbol was rendered as a * rather than a ** for the exponent operator, thus showing multiplication twice.

- Reported in: B8.0 (05-Sep-13), PDF page 28: Fourth bullet point: it may be helpful to clarify the second sentence by explaining that the variable will now point to the new value. I find it vague the way it is currently stated.

- Reported in: B8.0 (08-Sep-13), PDF page 28: "Variables must be assigned values before they can used in expressions." The word "be" is missing.

- Reported in: B8.0 (07-Sep-13), PDF page 32: "...refers to 10, Python Python evaluates this expression to -7." Duplication of the word Python. --Dave Pelter

- Reported in: B8.0 (20-Aug-13), PDF page 32: End of line 5, beginning of line 6: the word Python is repeated.

- Reported in: B8.0 (24-Aug-13), PDF page 51: There are 2 instances where the text in step "4. Description" doesn't match the text in step "5. Body". First, the latter ends with "which are" instead of with the word "both". Second, the remainder of the sentence should be on the second line instead of passing "year)." to a third line. Defining the function with the docstring as it appears in "4. Description" returns the text as shown on page 51, after calling help on function "days_difference". --George Sullivan-Davis

- Reported in: P1.0 (02-Nov-13), PDF page 57: "2. Type Contract. The arguments in our function call examples are all integers, and the return values are integers too," The return value is an integer.

- Reported in: P1.0 (15-Dec-13), Paper page 59: The first code example of section 3.7 reads:
  >>> 3 + 5 / abs(-2)
  5.5
  When I type this into IDLE 3, the return value is 4.0, not 5.5. --Ben

- Reported in: B8.0 (13-Aug-13), PDF page 62: Introduction of "!=" in the example "Precondition: n != 0". The symbol is not in the index as a symbol; it is indexed as "Not operator" and defined in the text at page 84. I suggest you add it to the Symbols section of the index and add a note at the example. --Alan Meghrigian

- Reported in: P1.0 (22-Sep-13), PDF page 65: Nowhere in the description of strings is it mentioned that a string is a list of characters and that each character can be accessed by its index. I found this baffling when later strings were subscripted and looped over. Thanks to the Coursera forum I stumbled on the explanation.

- Reported in: B8.0 (23-Aug-13), PDF page 70: Line 4 of the text reads "using the *operator"; there should be a space after the "*".

- Reported in: B8.0 (23-Aug-13), PDF page 71: Lines 4 and 5 explain that single quotes can be used for strings containing double quotes. Then lines 6 and 7 repeat that.

- Reported in: B8.0 (23-Aug-13), PDF page 72: Line 10 of the text contains "Python creates contains a \n sequence", which should read "Python creates contains a \n escape sequence".

- Reported in: B8.0 (23-Aug-13), PDF page 76: Line 1 of the text starts: "In an earlier chapter, we explored some built-in functions." For clarity it should read "In chapter 3, we explored some built-in functions."

- Reported in: P1.0 (08-Jul-14), PDF page 81: "Return True iff x is positive." Note the extra 'f' in 'if'. --Paul Golds

- Reported in: B8.0 (25-Aug-13), PDF page 85: In the example at the bottom of the page, the 5th-to-last line reads: "Return True iff x is positive." This should be "Return True if x is positive."

- Reported in: P1.0 (07-Jan-14), PDF page 85: The sentence "This is often referred to as or Python decides which string is greater than which by comparing corresponding characters from left to right." seems garbled.

- Reported in: P1.0 (07-Jan-14), PDF page 88: In the final example, shouldn't there be parentheses around the string to be printed?

- Reported in: B8.0 (03-Sep-13), PDF page 89: Comparing Strings. "The characters in strings are represented by integers: a capital A, for example, is represented by 65, while a space is 32, and a lowercase z is 172." 65 and 32 are decimal, but 172 is the octal code for 'z'; the decimal value should be 122. --steven ward

- Reported in: B8.0 (28-Aug-13), PDF page 90: The section describing the in operator uses an example meant to illustrate the case sensitivity of the operator. However the example is wrong:
  >>> 'A' in 'abc'
  True
  If in is case sensitive, then the result of this example should be False. Running it in IDLE definitely shows a result of False, as would be expected. --Dean Farrington

- Reported in: B8.0 (01-Sep-13), PDF page 90: About mid-page, the example reports that 'A' in 'abc' is True. This violates the case sensitivity of things, and anyway, I get False when I run it. Thanks. --James D Reid

- Reported in: B8.0 (02-Sep-13), PDF page 90: As printed:
  >>> 'A' in 'abc'
  True
  Should be:
  >>> 'A' in 'abc'
  False

- Reported in: B8.0 (06-Sep-13), PDF page 90: "This is case sensitive:"
  Is:
  >>> 'A' in 'abc'
  True
  Should be:
  >>> 'A' in 'abc'
  False

- Reported in: B8.0 (15-Sep-13), PDF page 90: "The in operator produces True exactly when the first string appears in the second string. This is case sensitive:"
  >>> 'a' in 'abc'
  True
  >>> 'A' in 'abc'
  True
  The second entry should read False. --Dave Pelter

- Reported in: B8.0 (26-Aug-13), PDF page 90: In the second code example on the page:
  >>> 'a' in 'abc'
  True
  >>> 'A' in 'abc'
  True
  The correct code should be:
  >>> 'A' in 'abc'
  False

- Reported in: P1.0 (14-Jul-14), Paper page 91: Throughout the text, chemical compounds that include oxygen (H2O, H2SO4, CO2, etc.) are written with a zero instead of a capital letter O. For example: H20 should be H2O. This may be a bit nitpicky, since this is not a chemistry textbook. However, your examples and exercises involve finding occurrences of certain characters in these strings. --Derek

- Reported in: B8.0 (25-Aug-13), PDF page 92: The last line of the last example on the page reads "print "You should be careful with that!"", which is not valid syntax. It should read: "print("You should be careful with that!")" --John Purnell

- Reported in: P1.0 (29-Oct-13), PDF page 94: In the docstring "Return True iff x is positive," there's an extra "f". --Michael Fitzhugh

- Reported in: B8.0 (01-Sep-13), PDF page 96: I see that someone has already discussed this, but wouldn't it be better to replace the phrase "This code is the same as this:" with "is equivalent to"? Maybe too picky? --James D Reid

- Reported in: B8.0 (02-Sep-13), PDF page 97: Isn't a colon missing in the first occurrence of "else" in the code at the bottom of the page? --James D Reid

- Reported in: P1.0 (22-Sep-13), PDF page 110: The whole of section 6.3 has an issue where the text refers to module temperature_program, which has 3 doctests, but the screenshots show the outcome of running doctest.testmod on module baking, which has 1 doctest.
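Several of the Python 3 behaviours reported in the entries above are easy to confirm directly in an interpreter. The following sanity-check script is illustrative only (it is not taken from the book):

```python
# Verify a few of the Python 3 facts discussed in the errata entries above.

# Page 11: the // and % identity holds for any non-zero divisor b.
for a in (17, -17, 0, 5):
    for b in (3, -3, 7):
        assert b * (a // b) + a % b == a

# Page 59: / binds tighter than +, so 5 / abs(-2) is 2.5 and the whole
# expression is 5.5 under standard operator precedence.
assert 3 + 5 / abs(-2) == 5.5

# Page 89: ord gives the decimal code point; lowercase z is 122.
assert ord('z') == 122 and ord('A') == 65 and ord(' ') == 32

# Page 90: the in operator on strings is case sensitive.
assert 'a' in 'abc'
assert 'A' not in 'abc'

print('all checks pass')
```

Running this under Python 3 prints "all checks pass", consistent with the corrections the reporters propose.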
- Reported in: B8.0 (31-Aug-13), PDF page 111: The second-to-last code snippet on the page reads:
  print("After import, __name__ is", __name__, \
        "and echo.__name__ is", echo.__name__)
  The escape character at the end of line 1 is not required because of the surrounding parentheses.

- Reported in: P1.0 (29-Nov-13), Paper page 111: Figure 5 shows the IDLE results of running baking.py, but the text discussing this on page 110 and the start of the text on page 111 talks about temperature_program.py. I think Figure 5 needs to be updated to refer to temperature_program.py. --Andrew Richards

- Reported in: B8.0 (01-Sep-13), PDF page 121: "Here are two more examples, this time using the other two string methods from the code on page 119." Should be page 120.

- Reported in: B8.0 (01-Sep-13), PDF page 122: The code snippet at the top of the page:
  >>> help(math.sqrt)
  Help on built-in function sqrt in module math:
  should be prefixed by:
  >>> import math
  to get it to work.

- Reported in: B8.0 (01-Sep-13), PDF page 125: In the line "these three functions to a string with leading and trailing whitespace:", "functions" should be replaced by "methods".

- Reported in: P1.0 (14-Jul-14), Paper page 126: This is not an error with the text, but an error with the exercise solutions at http: // pragprog (dot) com/wikis/wiki/PracProg2methods. Solutions for exercises 11 b and c read:
  b. 'C02 H20'.find('0')
  c. 'C02 H20'.find('0', 'C02 H20'.find('0') + 1)
  Should be:
  b. 'C02 H20'.find('2')
  c. 'C02 H20'.find('2', 'C02 H20'.find('2') + 1)
  The text asks for the first and second occurrence of '2' in 'CO2 H2O' (not 'O'). --Derek

- Reported in: P2.0 (16-Jun-15), PDF page 133: The bottom figure, the id2:str cell: "none" should be "neon". It should be the result *after* the assignment to nobles[1]. --Silvie Cinkova

- Reported in: P2.0 (16-Jun-15), PDF page 137: Paragraph 4: to remove Dpy and Sma, the list useful_markers has to be sliced differently:
  >>> useful_markers = clelegans_phenotypes[0:4]
  should be
  >>> useful_markers = clelegans_phenotypes[0:3]
  The figure below presents it correctly. --Silvie Cinkova

- Reported in: P2.0 (16-Jun-15), PDF page 137: Sorry, my mistake. Only now I read further. The slicing on p. 137 is correct. --Silvie Cinkova

- Reported in: P1.0 (23-Sep-13), PDF page 140: First paragraph: all references to "celegans_phenotypes" should be "celegan_markers".

- Reported in: B8.0 (05-Sep-13), PDF page 146: In the box "Where did my list go?": "As we will discuss in Section 6.3, Testing Your Code Semiautomatically, on page 114" should be "As we did discuss in Section 6.3, Testing Your Code Semiautomatically, on page 114".

- Reported in: P1.0 (26-Sep-13), PDF page 171: Throughout the examples in this chapter, newlines are removed using str.strip(). However, in the Coursera course's week 6 files exercise this is marked incorrect and str.rstrip('\n') is used. The examples should probably be updated to show this more specific syntax.

- Reported in: B8.0 (16-Sep-13), PDF page 177: In the last paragraph before 10.1: "You'll first learn how to open and read information from files. After that, you'll learn about the different techniques for reading files,..." Shouldn't that be "different techniques for writing files,..."?

- Reported in: B8.0 (16-Sep-13), PDF page 178: In the penultimate paragraph: "calendar programs read and process ical files ()". "ical" should be iCal, and what is the empty "()" for?
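The page-126 correction above relies on the two-argument form of str.find, where the second argument is the index at which the search resumes. A small illustration of the intended first/second-occurrence pattern (the variable names here are mine, not the book's):

```python
# Find the first and second occurrence of '2' in the formula string
# using str.find with a start index.
formula = 'CO2 H2O'

first = formula.find('2')              # first occurrence
second = formula.find('2', first + 1)  # search resumes past the first hit

print(first, second)  # 2 5

# Related page-171 point: strip() removes all surrounding whitespace,
# while rstrip('\n') removes only the trailing newline.
line = '  data  \n'
assert line.strip() == 'data'
assert line.rstrip('\n') == '  data  '
```

The distinction between strip() and rstrip('\n') matters when leading or trailing spaces in a line are significant data.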
- Reported in: B8.0 (16-Sep-13), PDF page 180: First line of the second paragraph: "The second statement, contents = example_file.read(), tells Python that you want" should read "The second statement, contents = file.read(), tells Python that you want".

- Reported in: B8.0 (16-Sep-13), PDF page 180: First line of the third paragraph reads "The last statement, example_file.close(), releases all resources associated with" and should be "The last statement, file.close(), releases all resources associated with".

- Reported in: P1.0 (18-Sep-13), PDF page 180: 2nd paragraph. The statements in the text are wrong; they are not the statements in the example. The 2nd statement s/b contents = file.read() and the last statement s/b file.close(), because these statements relate to the object "file" created by the open call. --Brad Walter

- Reported in: P1.0 (19-Sep-13), PDF page 183: A call to function sum_number_pairs results in a TypeError. And before that, module total needs to be imported.

- Reported in: P1.0 (19-Sep-13), PDF page 185: In the first paragraph, the line "programs using import tsdl, as shown in the next example. This allows us to" should import time_series.

- Reported in: P1.0 (26-Sep-13), PDF page 185: The type contract for function definition smallest_value is incorrect, as it returns an int.

- Reported in: P1.0 (26-Sep-13), PDF page 186: In function definition smallest_value_skip there is no check for a '-' in the first data line after the header.

- Reported in: P1.0 (26-Sep-13), PDF page 186: The type contract for function definition smallest_value_skip is incorrect, as it returns an int.

- Reported in: B8.0 (17-Sep-13), PDF page 187: Section 10.4: all references to "urllib.urlrequest" are incorrect. The correct module name is "urllib.request".

- Reported in: P1.0 (19-Sep-13), PDF page 189: Top of page: "We now face the same choice as with skip_header: we can put find_largest in a module (possibly tsdl)," The module name is "time_series", not "tsdl".
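A recurring theme in the file-handling entries above (and in the page-193 and page-233 reports below) is files that are opened but never closed. The with statement sidesteps that problem entirely, since the file is closed automatically when the block exits. A minimal sketch; the helper function and filename here are hypothetical, not from the book:

```python
def read_first_line(filename):
    """Return the first line of the file, without its trailing newline.

    The with statement closes the file automatically, even if an
    exception is raised while reading.
    """
    with open(filename, 'r') as reader:
        return reader.readline().rstrip('\n')

# Hypothetical usage: write a small file, then read it back.
with open('example_data.txt', 'w') as writer:
    writer.write('first line\nsecond line\n')

print(read_first_line('example_data.txt'))  # first line
```

With this pattern there is no close() call to forget, which is why it is generally preferred over the explicit open/read/close sequence the errata discuss.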
- Reported in: P1.0 (19-Sep-13) PDF page: 190 First paragraph: "here is the same code without using tsdl.skip_header and find_largest as helper methods:" should be "time_series.skip_header". This error occurs throughout this section. - Reported in: P1.0 (19-Sep-13) PDF page: 193 In example at the top of the page, file "multimol.pdb" is not closed. - Reported in: P1.0 (26-Sep-13) PDF page: 196 Not so much an error as a redundant check in function definition read_molecule: if fields[0] == 'ATOM': is redundant as with the format of the data file if not at the end of file and not at a new molecule the only other kind of line is a line that starts with 'ATOM'. - Reported in: P1.0 (05-Feb-14) Paper page: 213 "key/value pair listed is 'canada goose': 71" should be, "key/value pair listed is 'canada goose': 183"--Azef Aziz - Reported in: P1.0 (12-Mar-14) PDF page: 219 It's most likely my error but when I run the first code under 11.3, "Storing Data Using Dictionaries", the bird_counts[] list never populates. I ran it through the Python Visualizer and the code never runs past 'found = False'. Is there something missing ???--Jared Keown - (31-Oct-13) PDF page: 224 At the bottom of the page: def find_two_smallest(L): """ (list of float) -> tuple of (int, int) The type of the list in int not float. This error continues through all the examples in this section. - Reported in: P1.0 (30-Oct-13) PDF page: 225 In "Find, Remove, Find" it is not the re-insertion that is the problem, but the removal, as min2 is obtained before the re-insertion. 
- Reported in: P1.0 (31-Oct-13) PDF page: 230 Last example on page: # Examine each value in the list in order for i in range(2, len(values)): should be: # Examine each value in the list in order for i in range(2, len(L)): - Reported in: P1.0 (28-Jan-14) PDF page: 233 sea_level_press = [] sea_level_press_file = open('darwin.slp', 'r') for line in sea_level_press_file: sea_level_press.append(float(line)) I keep on getting the following error: sea_level_press.append(float(line)) ValueError: could not convert string to float: I have checked the data file and there are no extraneous spaces, etc. Yet I cannot get past this problem. Any suggestions?--Jam One - Reported in: P1.0 (31-Oct-13) PDF page: 233 In function definition: def time_find_two_smallest(find_func, lst): the return statement has a period at the end of the statement. - Reported in: P1.0 (31-Oct-13) PDF page: 233 In the example on the page the file is opened for reading but never closed. - Reported in: P1.0 (13-Nov-13) PDF page: 239 Bottom of page: list[0:i] doesn't contain value, and 0 <= i <= len(lst) should be: list[0:i] doesn't contain value, and 0 <= i < len(lst) as i should be 1 less than len(lst) - Reported in: P1.0 (29-Nov-13) Paper page: 241 In the explanation of the function linear_search(), it says, "At the end, we return... len(list) if value wasn't in list.", but actually -1 is returned--Andrew Richards - Reported in: P1.0 (13-Nov-13) PDF page: 243 function definition "time_it" has the type contract: (function, object, list) -> number which should be: (function, list, object) -> number - Reported in: P1.0 (08-Feb-14) Paper page: 246 " while if it is greater than j, we should move j down." should be (i think), "while if it is greater than v, we should move j down."--Asif Aziz - Reported in: P1.0 (09-Feb-14) Paper page: 246 'because L[i] isn't included in the range; instead...' 
should be (i think), 'because L[m] isn't included in the range....'--Asif Aziz - Reported in: P1.0 (09-Feb-14) Paper page: 248 code related to binary search, if __name__ == '__main__': import doctest doctest.testmod() will not display output unless dockets.testmod() is enclosed in print (), i.e. print(doctest.testmod())--Asif Aziz - Reported in: P1.0 (09-Jul-14) PDF page: 257 Figure 13—First few steps in selection sort => Figure 13—First few steps in insertion sort - Reported in: P1.0 (14-Nov-13) PDF page: 260 The doctest for function definition bin_sort is incorrect as the function returns a sorted copy of the list. - Reported in: P2.0 (20-Jan-15) PDF page: 282 In the example at the bottom of the page, the ISBN of book_1 and book_2 are the same, so '==' cannot distinguish between them. Assume this has been reported.--Eugene Rodriguez - Reported in: P2.0 (21-Jan-15) PDF page: 282 Previous submit not error, Sorry!--Eugene Rodriguez - Reported in: P1.0 (30-Nov-13) PDF page: 287 The example method at the bottom of the page: def __str__(self): """ (Member) -> str The type contract should be: """ (Faculty) -> str - Reported in: P1.0 (19-Jul-14) PDF page: 287 Variable paul = Faculty('Paul', 'Ajax', 'pgries@cs.toronto.edu', '1234') contains only string 'Paul', not 'Paul Gries' as suggested by the print(paul) command three lines below.--Adrian - Reported in: P1.0 (30-Nov-13) PDF page: 291 End of first paragraph reads: rewritten to return a Molecule object instead of a list of tuples: should be: rewritten to return a Molecule object instead of a list of lists: - Reported in: P2.0 (24-Aug-15) PDF page: 295 Paper page: 287 In both of the __str__ functions defined on this page, a string with new lines between each piece of data is returned. 
However, the docstring shows that the result should be a string with a backslash between each piece of data.--Abaas K - Reported in: P1.0 (05-Nov-13) PDF page: 302 The test case pattern at the top of the page: been expected = «the value we expect will be returned» What is the "been" on the first line doing there? - Reported in: P1.0 (05-Nov-13) PDF page: 304 Last paragraph on page: Following those steps, we created a variable, nums The variable in the docstring is "L" not nums". This error occurs throughout this paragraph. - Reported in: P1.0 (04-Dec-13) PDF page: 328 First sentence of "Changing Colors": Almost all foreground colors can be set using the bg and fg keyword arguments, respectively. Should be: Almost all foreground and background colors can be set using the bg and fg keyword arguments, respectively. - Reported in: P1.0 (04-Dec-13) PDF page: 335 Last sentence before the code: accessed using self.state, and its controllers are the methods upClick and quitClick. the 2 method names are: up_click and quit_click - Reported in: P1.0 (06-Dec-13) PDF page: 337 Question 5 states: In Section 3.4, Using Local Variables for Temporary Storage, on page 39, should be: In Section 3.3, Defining Our Own Functions, on page 35, - Reported in: P2.0 (04-Mar-15) PDF page: 338 Dear Sir I have bought the PDF file (practical programming_p2_0.pdf) last month. Unfortunately there is no page "338" in my PDF data! Is there any mistake with page numbers? I would like to receive this page(when it exists)! Thank you very much in advanced and I look forward to her from you! my email address: nazemeh.ashrafianfar@tu-clausthal.de With best regards Nazemeh --Nazemeh Ashrafianfar - Reported in: P1.0 (01-Dec-13) PDF page: 343 The first line on the page: The Python equivalent is a type we haven’t seen before called bytes... This is incorrect. 
We were introduced to bytes on page 181, section 10.4: There’s a hitch: because there are many kinds of files (images, music, videos, text, and more), the file-like object’s read and readline methods both return a type you haven’t yet encountered: bytes. - Reported in: P1.0 (19-Feb-14) Paper page: 350 for c in countries: cur.exec('INSERT INTO PopByCountry VALUES (?, ?, ?)', (c[0], c[1], c[2])) can be simply presented as, cur.exec('INSERT INTO PopByCountry VALUES (?, ?, ?)', c) rather than presenting each element within the tuple--Asif Aziz -
https://pragprog.com/titles/gwpy2/errata
I have to do an assignment for class along these lines:

Use a for loop to process the data for 5 employees. Use arrays to store the user input. In the for loop, if any of the entered fields are -1, break out of the loop. The program logic will first load all of the data, until the user enters the max number of records, or they input -1 for one of the fields. After the data is loaded, it will then be processed and output generated.

I'm confused about how to do the arrays; the output has to look similar to this:

Enter name: Glenn
Enter hourly rate: 2.00
Enter hours worked: 50
Pay to: Glenn
Hours worked: $ 50.00
Hourly rate: $ 2.00
Gross pay: $110.00
Base pay: $ 80.00
Overtime pay: $ 30.00
Taxes paid: $ 22.00
Net pay: $ 88.00

The teacher wants us to update our previous assignment. I'm not sure how to integrate it. Here is my code:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <conio.h> /* using this for getch */

int main(void)
{
    //Start of integers
    /* Ppl names */
    char name[20];
    /* Hourly rate */
    float hrr;
    /* Hours worked */
    float hrw;
    /* Amount paid */
    float ap;
    /* Taxes paid */
    float tp;
    /* Loop int */
    int looper;
    //End of integers

    for (looper = 0; looper < 5; looper++){
        /*START OF RUN*/
        //Start of user input
        printf("Employee name:");
        scanf("%19s", name); /* no & for a char array; the width guards against overflow */
        printf("Enter hourly rate:");
        scanf("%f", &hrr);
        printf("Enter hours worked:");
        scanf("%f", &hrw);
        printf("\n");
        //End of user input

        //Start of equations
        if (hrw>40){
            ap=40*hrr+(hrw-40)*(1.5*hrr);
        }
        else{
            ap=hrw*hrr;
        }
        tp=ap*0.2;
        //End of equations

        //Start of calculated data
        printf("Pay to: %s \n", name);
        printf("Hourly rate: %f \n", hrr);
        printf("Hours worked: %f \n", hrw);
        printf("Amount paid: %f \n", ap);
        printf("Taxes paid: %f \n", tp);
        //End of calculated data
        /*END OF RUN*/
        printf("----------------------------------\n");
    }
    getch();
    return 0;
}
http://cboard.cprogramming.com/c-programming/149427-array-loop-printable-thread.html
[SOLVED] Use class without explicitly instantiating it

I have a class that creates various widgets. This class' methods should be callable from everywhere (where the header is included, obviously), but without explicitly instantiating an object of the class. Just like when you include the iostream library and then simply type std::cin or std::cout, I would like to include my header and use my class with a syntax like myClass::doThis(). Can you tell me any ways that I could achieve this? I have read about the Singleton pattern but it doesn't really suit my needs. Also, using static data and members looked like a solution, but I'm not sure where to start and I couldn't find a guide that explains it right (actually, none that explains it, they only say "use static bla-bla-blah" without much or any code).
- JKSH Moderators last edited by
I have already read lots of articles about ways of using static members of a class, including the one you linked, JKSH. However it didn't really help, I actually couldn't understand how to use static members for my purpose. Also, searching on the web reveals that most developers prefer instead a "singleton pattern". Now, I'm not sure if what I've come up with is some singleton implementation, but it surely accomplishes 90% of my needs.
My idea is to use a global object. Since a widget cannot be created before a QApplication has been created, a proper init() method will serve the purpose of doing the initialization. The object is created in a global namespace (or in my own namespace) and used with extern in all sources that include my object's class. So here's the boilerplate code I've come up with. It runs just fine and serves my purpose. However, I am looking for feedback as I might be doing something horribly wrong.
FILE: test.h
#ifndef TEST_H
#define TEST_H
class QPlainTextEdit;
class QString;
class test{
public:
test();
~test();
void init();
void print(const QString &message);
private:
QPlainTextEdit * pEdit;
};
#endif // TEST_H
FILE: test.cpp
#include "test.h"
#include <QPlainTextEdit>
#include <QString>
test::test() : pEdit(nullptr){
// initialize the pointer so init() can safely test it
}
test::~test(){
delete pEdit;
pEdit = nullptr;
}
void test::print(const QString &message){
pEdit->appendPlainText(message);
}
void test::init(){
if (pEdit == nullptr) // if pEdit was not created, create it
pEdit = new QPlainTextEdit;
pEdit->show(); // and show the widget
}
/*** this is the global object that it's meant to be used anywhere ***/
test testWidget;
FILE: main.cpp
#include "test.h"
#include <QApplication>
// this is the global object created in test.cpp
extern test testWidget;
int main(int argc, char *argv[]){
QApplication app(argc, argv);
testWidget.init(); // initialize the object
testWidget.print("hello world!");
return app.exec();
}
As I was posting I noticed that the public ctor will allow other objects of this class to be instantiated, but I'm not quite sure how to instantiate that object if it has a private ctor (the compiler says it is private - and it's right). I need some advice on this as well.
- Chris Kawa Moderators last edited by Chris Kawa
First - if at all, you should put the extern declaration in the class header. This way you won't have to type it everywhere, just include the header.
Second - you should not have the extern or a publicly accessible object at all. The singleton pattern goes something like this:
//header
class MySingleton {
public:
static MySingleton* instance();
private:
std::unique_ptr<Stuff> data;
};
//cpp
MySingleton* MySingleton::instance()
{
static MySingleton singletonObject; //it's ok to call private constructor here
if(!singletonObject.data)
singletonObject.data = ...
//initialize it somehow return &singletonObject; } //usage int main(int argc, char *argv[]) { QApplication app(argc, argv); MySingleton::instance() -> .... //use it, just not before app is created return app.exec(); } - JKSH Moderators last edited by JKSH I have a class that creates various widgets. When I first read this, I thought you were after the factory pattern: // myfactory.h class MyFactory { public: static QWidget* createWidget(int type) { switch(type) { case 0: return new QMainWindow; case 1: return new QDialog; default: return new QWidget; } } } // main.cpp int main(int argc, char *argv[]){ QApplication app(argc, argv); QWidget* w1 = MyFactory::createWidget(0); QWidget* w0 = MyFactory::createWidget(1); w0->show(); w1->show(); return app.exec(); } searching on the web reveals that most developers prefer instead a "singleton pattern". ... As I was posting I noticed that the public ctor will allow other objects of this class to be instantiated, but I'm not quite sure how to instantiate that object if it has a private ctor The pattern you should choose depends on what you're trying to achieve. Notice that a singleton requires lots of boilerplate code. For your class, is it worth the effort? If you only want to print messages to a central widget, then I think it's overkill to create a singleton. 
You can achieve same notation by using functions in a namespace, instead of functions in a class: // printer.h #include <QString> namespace Printer { void print(const QString& message); } // printer.cpp #include "printer.h" #include <QPlainTextEdit> static QPlainTextEdit* pEdit = nullptr; // NOTE: This "static" means "private" void Printer::print(const QString& message) { if (!pEdit) { pEdit = new QPlainTextEdit; pEdit->show(); // NOTE: When the QApplication is destroyed, it automatically destroys all widgets too } pEdit->appendPlainText(message); } // main.cpp #include <QApplication> #include "printer.h" int main(int argc, char *argv[]) { QApplication app(argc, argv); Printer::print("Hello world!"); return app.exec(); } That's much fewer lines of code. The "static" might be confusing -- In my example, it means that only printer.cpp is allowed to access pEdit. It is completely different to "static" applied to class members. (That's one of the flaws of C++: too many different meanings for "static") It runs just fine and serves my purpose. However, I am looking for feedback as I might be doing something horribly wrong. I don't see anything wrong with it, aside from the unnecessary "extern" that @Chris-Kawa pointed out, and the public constructor that you already pointed out. Those aren't fatal flaws though. JKSH, you just made my day! I was coming up with a similar solution to yours, but I didn't know how to make a global object to be accessible through some functions only. So static when used within a file (or namespace) will actually make that <static> object a local private object. I could never guess it since I always thought of static applied to class/function members. No, I wasn't looking for the factory pattern, even though I might need that for some other purpose ;) I think I mislead you by using the word creates instead of... contains (?). Unfortunately, my English is not good enough at moments. 
The singleton pattern however was clearly overkill when it comes to code (for example, a singleton class is hardly extensible through subclassing). I also read it is anything but multithread safe (even though I couldn't understand why - I'll make sure I read some more about it). Thanks Chris for your reply as well!
As a final question, are both JKSH's solution and mine multithread safe? I mean, I don't care if in a multithread scenario the chronological order is not respected when printing (I read it is a common problem). I just care about having the messages printed without a print from one thread breaking the print of another and losing any messages.
- Chris Kawa Moderators last edited by Chris Kawa
There are two aspects of thread safety here.
First - creation of the singleton/global object. From the given solutions only the one I presented is thread safe, and only starting from C++11. This is because in C++11 function-local static variables are guaranteed to be constructed in a thread-safe manner and initialized only once (a.k.a. magic statics). Unfortunately compiler support is still spotty (e.g. Visual Studio supports it starting with 2015).
Second - accessing the singleton/global functions. None of the suggested approaches are thread safe. What's more, even guarding it with a mutex is not enough, as you can't access/create widgets from worker threads.
- JKSH Moderators last edited by
So static when used within a file (or namespace) will actually make that <static> object a local private object. I could never guess it since I always thought of static applied to class/function members.
Not namespaces. Files only. Without "static", other files can access the widget by writing extern QPlainTextEdit* pEdit. With "static", only printer.cpp can access it. Namespaces are not relevant.
This usage of "static" comes from the C language, which doesn't have classes.
I think I mislead you by using the word creates instead of... contains (?).
Unfortunately, my English is not good enough at moments.
Not a problem :) I can understand you quite well most of the time.
I also read it is anything but multithread safe (even though I couldn't understand why - I'll make sure I read some more about it).
A singleton class can be made thread-safe the same way you make any other class thread-safe -- using mutexes, for example.
As a final question, are both JKSH's solution and mine multithread safe?
Currently, they are not thread-safe, because you can only construct QPlainTextEdit and call its functions in the GUI thread. (If you violate these rules, your program will crash.)
To make them thread-safe, you need to do 2 things:
- Make sure you call an init() function from the GUI thread, to construct the QPlainTextEdit.
- Replace the function calls with queued invocations:
// Replace this...
pEdit->appendPlainText(message);
// ...with this:
QMetaObject::invokeMethod(pEdit, "appendPlainText", Qt::QueuedConnection, Q_ARG(QString, message));
This makes the function run in the thread that pEdit lives in. However, invokeMethod() only works if the function is a slot, or if it is marked with Q_INVOKABLE.
Multithreading is something I'll be looking deep into later. For now my problem is solved! Thank you both very much, guys!
https://forum.qt.io/topic/51859/solved-use-class-without-explicitly-instantiating-it
After a long time on shared hosting, I'm moving my stuff to a VPS, and it has become necessary to learn about Nginx + uWSGI to deploy my apps (Python). After spending a couple of weeks learning the basics, I'm in the process of setting up my local machine (Ubuntu 11.04) to run my apps on Nginx + uWSGI. I'm using the "Hello world" Ubuntu 10.10 Linode guide. The setup was simple, but when I run it I get a 502 Bad Gateway every time. Appreciate pointers on how to get the setup working.

My nginx.conf: [I backed up the default nginx conf (which works fine and shows "Welcome to Nginx" when I hit it) and replaced it with this custom nginx conf from the Linode guide that links nginx to the uWSGI server.]

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name localhost;
        access_log /srv/www/myHostname/logs/access.log;
        error_log /srv/www/myHostname/logs/error.log;
        location / {
            include uwsgi_params;
            uwsgi_pass 127.0.0.1:9001;
        }
        location /static {
            root /srv/www/myHostname/public_html/static/;
            index index.html index.htm;
        }
    }
}

My uWSGI conf is exactly the same as detailed in the Linode guide I linked above, with the one change that "duckington.org" in their example is replaced with "myHostname" in my setup. No errors in my nginx error log. Nginx is installed at /opt/nginx and uWSGI is at /opt/uwsgi, as laid out in the guide linked above. I haven't touched any files the guide doesn't talk about.

What I have tried to solve this, so far:
- sites-enabled
- nginx -t
- error_log foo.log warn;
So, the path for all internal resources is not well defined, and Python interpereter (may be you are using Django?) does not reach them. Ok, I will explain how a solution works in a Django project. For example, suppose that you have an app inside your project called 'app1'; in your views module you are calling for a forms defined in your forms module; you will be doing this like that: from app1.forms import * okay? Well, the thing is that the path is not well defined for uwsgi. You should now define this like that: from myproject.app.forms import * and you will see that everything is working now. No more 502 Bad Gateway errors will appear for you :) Yes, I know that is not very elegant to add 'myproject.' to every internal resource calling. So, you can simply add this to your 'settings.py' file: sys.path.append('/var/www/myproject/') Substitute '/var/www/' for the path where your project is located on your machine. With this tiny solution, every started working for me :) I hope my comment helps you. Cheers, Jose By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 4409 times active
http://serverfault.com/questions/277147/nginx-uwsgi-on-localhost-always-gives-502-bad-gateway-any-ideas-how-to-solv
I have created a simple products table in an SQLite database using Ruby. I can retrieve the records as per the normal select statement, passing the product_code as a where condition. Any time I retrieve a record, I want to be able to store the record in an array and add up the prices of the items selected to get the total value, just as in an online basket. I am doing this in Ruby without Rails, just the console.

Example of the select statement:

def select(item_code)
  begin
    db = SQLite3::Database.new "test.db"
    results = db.get_first_row "SELECT * FROM Products WHERE Product_code = ?", item_code
    puts results.join "\s"
  rescue SQLite3::Exception => e
    puts "Exception occurred"
    puts e
  ensure
    db.close if db
  end
end

Thank you.

Try something like this:

require 'sqlite3'  # needed for SQLite3::Database below

$prices, $index_of_price = [], 0  # index_of_price is the index of the price field in your result

def select(item_code)
  begin
    db = SQLite3::Database.new "test.db"
    results = db.get_first_row "SELECT * FROM Products WHERE Product_code = ?", item_code
    $prices << results[$index_of_price]
  rescue
    puts $!
  ensure
    db.close if db
  end
end

select 1
select 2
puts "Total is #{$prices.inject(0){|sum, item| sum + item}}"
http://www.dlxedu.com/askdetail/3/59131f12be83d215b43ceb2cb40aefc8.html
The other day I finally bought a bicycle hub motor. I had dreamed about one for a long time, and here it is in all its glory: a brushless DC motor, 48 V, 500 W. The Chinese manufacturer promises an efficiency of about 85%, which is not bad for this power class. It came complete with an 800 W controller, a throttle, and brake levers. The battery pack (four 12 V, 9 Ah batteries) and a charger for them were bought separately. I assembled all of this on my old mountain bike.

First impressions of the electric drivetrain are simply beyond words: maximum torque right from the start (it accelerates faster than a 50 cc scooter) and a top speed of 45 km/h. While breaking the whole thing in, I wanted to add a feature that would show the energy being consumed, so a digital power meter was called for.

In the previous articles we learned how to measure voltage and current. To get the power consumption, the current must be multiplied by the voltage: W = V * A. In essence, a power meter is just an ammeter and a voltmeter in a single device. Joining the ammeter and voltmeter circuits described in the previous articles gives us the power-meter circuit.

The power meter is built around the ATmega8 microcontroller, which has earned the status of a "people's" chip. Current, voltage, and power are shown on a 16x2 LCD. Resistors R8 and R9 form a voltage divider with a division factor of 11; the voltage reference is an adjustable TL431 regulator set to 5.12 V. The current is measured as the voltage drop across the shunt R2; the shunt voltage is then amplified by an LM358 operational amplifier and fed to ADC input adc0.

The program is written in CodeVisionAVR:

#include <mega8.h>
#include <delay.h>
#include <stdio.h>  // library which contains the function sprintf

// Alphanumeric LCD Module functions
#asm
   .equ __lcd_port = 0x12  ;PORTD
#endasm
#include <lcd.h>

#define ADC_VREF_TYPE 0x00

// Read the AD conversion result
unsigned int read_adc(unsigned char adc_input)
{
    ADMUX = adc_input | (ADC_VREF_TYPE & 0xff);
    // Delay needed for the stabilization of the ADC input voltage
    delay_us(10);
    // Start the AD conversion
    ADCSRA |= 0x40;
    // Wait for the AD conversion to complete
    while ((ADCSRA & 0x10) == 0);
    ADCSRA |= 0x10;
    return ADCW;
}

void main(void)
{
    char buffer[32];      // variable which will form the string for output to the LCD
    unsigned long int u;  // variable to store the voltage in millivolts
    unsigned long int a;  // variable to store the current
    unsigned long int w;  // variable to store the power consumption

    PORTB = 0x00;
    DDRB = 0x00;
    // Port C initialization
    PORTC = 0x00;
    DDRC = 0x00;
    // Port D initialization
    PORTD = 0x00;
    DDRD = 0x00;
    // Timer/Counter 0 initialization
    TCCR0 = 0x00;
    TCNT0 = 0x00;
    // Timer/Counter 1 initialization
    // Timer/Counter 2 initialization
    ASSR = 0x00;
    TCCR2 = 0x00;
    TCNT2 = 0x00;
    OCR2 = 0x00;
    // External Interrupt(s) initialization
    MCUCR = 0x00;
    // Timer(s)/Counter(s) Interrupt(s) initialization
    TIMSK = 0x00;
    // Analog Comparator initialization
    ACSR = 0x80;
    SFIOR = 0x00;
    // ADC initialization
    // ADC Clock frequency: 500.000 kHz
    // ADC Voltage Reference: AREF pin
    ADMUX = ADC_VREF_TYPE & 0xff;
    ADCSRA = 0x81;
    // LCD module initialization
    lcd_init(16);

    while (1)
    {
        a = read_adc(0);  // read ADC value from channel 0
        u = read_adc(1);  // read ADC value from channel 1
        /*
        1. Measuring the current.
        The current flowing through the shunt is found from Ohm's law: I = U / R,
        where R = 0.1 ohm and U (the voltage drop across the shunt) is what we measure.
        Since the ADC is 10-bit, the maximum number returned by read_adc() is 1024,
        and that number corresponds to the full input voltage on adc0.
        For example, if read_adc() returned 512, it means that the adc0 input
        was fed half of the reference voltage.
        To calculate the actual voltage we set up a proportion:
            reference voltage - 1024
            sought voltage    - a
        The reference voltage = 5.12, so:
            sought voltage = 5.12 * a / 1024, or
            sought voltage = 0.005 * a
        For simplicity, convert volts to millivolts by multiplying by 1000:
            sought voltage = 0.005 * a * 1000
        All good so far, but we have not yet taken into account the op-amp gain,
        calculated as Gain = 1 + R1/R2. Substituting, we get:
            Gain = (1 + 4) = 5
            actual voltage = 0.005 * a * 1000 / 5, which is simply a

        2. Measuring the voltage.
        Next we measure the voltage across the resistor divider.
        Setting up the proportion as described above, we get:
            sought voltage = 0.005 * u * 1000
        We must also take into account the ratio of the resistor divider,
        Kdiv = (R1 + R2) / R2. Substituting, we get:
            Kdiv = (10 + 1) / 1 = 11
            actual voltage = 0.005 * u * 1000 * 11
        */
        u = 55 * u;  // calculate the voltage in millivolts
        a = a * 10;  // calculate the current from Ohm's law: I = U / R = a/100 * 1000 = a * 10 milliamps
        w = a * u;   // calculate the power consumption
        sprintf(buffer, "I=%u,%u U=%u,%u W=%u,%lu",
                a / 1000,               // integer part of the current
                (a % 1000) / 10,        // fractional part of the current
                u / 1000,               // integer part of the voltage
                (u % 1000) / 10,        // fractional part of the voltage
                w / 1000000,            // integer part of the power
                (w % 1000000) / 10000); // fractional part of the power; form the output string
        lcd_clear();       // clear the screen before output
        lcd_puts(buffer);  // write the formatted string to the display
        delay_ms(100);     // delay between updates
    }
}

The Proteus project and the program source code are in the archive Watmeter.rar.
http://articles.greenchip.com.ua/3-0-31-2.html
As you may know, when a new board is announced, I try to find a good project for it. I have to tell you that this goal is keeping me pretty busy. This one I had to finish at the request of the Channel 9 team for PDC: a breathalyzer on the new Secret Labs Netduino Mini. (Note: At this posting, the Netduino Mini is not yet available in distribution but will be in a few days; it can be preordered.) The Mini is a 24-pin standard DIP configuration. This means that there are no shields that you can plug into it, but there are a number of places that you can plug it into; for example, any breadboard (as we will see) and any device that currently takes a Parallax Basic Stamp. I like this package for that flexibility, and I like it because it is just so tiny (34 mm x 17 mm). Don't let this bare-bones package deceive you, though. You have SPI, I2C, 4 ADCs, 4 PWMs, a serial UART, and 16 GPIOs. Compare that with anything else this size, and then add Visual Studio and C# productivity!

This is not a design for a breathalyzer that is going to let you know if you are just under the legal limits in your state and safe to drive; calibration is really hard for these devices. You need to be able to control the exact alcohol exposure to know how to calibrate, and then it varies with temperature and humidity. But for the purposes of a fun project, and to give to my daughter whose friends are just starting to drive, it is sufficient. The breathalyzer end result looks like the picture below and works as shown in the video.

With this form factor (i.e., no USB connector), how do you connect to Visual Studio and how do you power the device? First, you need to connect to the serial port instead of the USB port. To do this on the breadboard, you need to hack a serial cable. You can see the serial pin numbering below.
What you need are:
- Serial pin #2 to connect to Mini pin #1 - TX (Debug/Com2)
- Serial pin #3 to connect to Mini pin #2 - RX (Debug/Com2)
- Serial pin #5 to connect to Mini pin #4 - GND

You can see these in the picture below. Then I have a regulated 5V power source running on 2 AA batteries (Bodhilabs VPack 5.0V). I put in a simple toggle switch to turn the power on and off. This is connected to the side rails of the breadboard and jumpered into Mini pin #21 (regulated 5V input) and Mini pin #23 (GND). Finally, I put in an LED to let me know when the power is on (so that I don't waste batteries). That is all you need to set up the Mini to be run and ready to be programmed. In fact, it is a good time to ping the device using MFDeploy to make sure that it responds with 'TinyCLR' to double-check your connections. Now you can treat the serial port in VS just like you did the USB port in other devices. Remember to set the NETMF tab accordingly.

The sensor is an MQ-3 that I bought at Pololu.com because I liked their carrier board. It takes the six pins of the sensor and reduces that to three that fit into a right-angle header. This sensor costs less than $5.00. I used a 100K variable resistor for calibration. The sensor outputs 0-5V, which is too much for the Mini to handle, so I added a voltage divider to step it down by a third to 3.3V. The analog input port theoretically returns a value between 0 and 1023, but the actual baseline measurement is dependent on a number of variables (calibration setting, temperature, and humidity). So, I let the device stabilize before starting the measurements (see The Code below).

As you can see, there is one clear LED and three each of green, yellow, and red LEDs that I can light up to reflect the reading we get from the MQ-3. I put three values of resistors in each color group (870, 390, and 100 Ohms) so that the first LED of each color is dimmer than the last. These are not the optimal values to use.
I calculated them with the 5V power source in my head, but these are driven off of GPIOs from the Mini which are 3.3V, so the whole thing is a little dimmer than it might be. After I built this project, I ran across something that you might want to use instead. This is the Elabguy LED-Rainbow. In this LED array, the LEDs are not driven by the processor but by the 5V power supply, and there is a simple transistor switch that the processor interfaces with. The resistors are internal to the array as well – much easier for the next project.

It is telling that I spend so much more time talking about the hardware than the software. I found for my setup that adjusting the baseline (using the variable resistor) so that it is around 200 gives me a full range of values. I blink the clear LED for 15 seconds to let the sensor heat up and stabilize. Then I take a current baseline measurement and calculate the steps for each LED. The rest is really simple.

using System;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Hardware;
using SecretLabs.NETMF.Hardware;
using SecretLabs.NETMF.Hardware.NetduinoMini;

namespace BreathalyzerMini
{
    public class Program
    {
        public static void Main()
        {
            Cpu.Pin[] pinMap = new Cpu.Pin[10]
            {
                (Cpu.Pin)Pins.GPIO_PIN_13,
                (Cpu.Pin)Pins.GPIO_PIN_14,
                (Cpu.Pin)Pins.GPIO_PIN_15,
                (Cpu.Pin)Pins.GPIO_PIN_7,
                (Cpu.Pin)Pins.GPIO_PIN_8,
                (Cpu.Pin)Pins.GPIO_PIN_9,
                (Cpu.Pin)Pins.GPIO_PIN_10,
                (Cpu.Pin)Pins.GPIO_PIN_11,
                (Cpu.Pin)Pins.GPIO_PIN_12,
                (Cpu.Pin)Pins.GPIO_PIN_17
            };
            OutputPort[] leds = new OutputPort[10];
            AnalogInput br = new AnalogInput((Cpu.Pin)Pins.GPIO_PIN_5);
            int baseline = 0;
            int step;
            int value;

            // build
            for (int idx = 0; idx < leds.Length; ++idx)
            {
                leds[idx] = new OutputPort((Cpu.Pin)pinMap[idx], false);
            }

            // let the unit adjust
            for (int i = 0; i < 30; i++)
            {
                leds[0].Write(true);
                Thread.Sleep(500);
                leds[0].Write(false);
                Thread.Sleep(500);
                value = br.Read();
                Debug.Print(value.ToString());
            }

            // set baseline and steps
            baseline = br.Read() + 20;
            step = (1024 - baseline) / 8;

            while (true)
            {
                value = br.Read();
                Debug.Print(value.ToString());

                if (value > 0) leds[0].Write(true);
                else leds[0].Write(false);

                if (value > baseline) leds[1].Write(true);
                else leds[1].Write(false);

                if (value > (baseline + step)) leds[2].Write(true);
                else leds[2].Write(false);

                if (value > (baseline + (2*step))) leds[3].Write(true);
                else leds[3].Write(false);

                if (value > (baseline + (3*step))) leds[4].Write(true);
                else leds[4].Write(false);

                if (value > (baseline + (4*step))) leds[5].Write(true);
                else leds[5].Write(false);

                if (value > (baseline + (5*step))) leds[6].Write(true);
                else leds[6].Write(false);

                if (value > (baseline + (6*step))) leds[7].Write(true);
                else leds[7].Write(false);

                if (value > (baseline + (7*step))) leds[8].Write(true);
                else leds[8].Write(false);

                if (value > (baseline + (8*step))) leds[9].Write(true);
                else leds[9].Write(false);

                Thread.Sleep(1000);
            }
        }
    }
}

This project is not going to get you a breathalyzer that will stand up in court, but it will give you a way to verify that someone has been drinking, and whether a lot or a little. This was done as a demo for PDC. There was also a Tweeting Breathalyzer demo done on the Netduino Plus which you can see at: This form factor may not be for everyone – it does not come with all the current Arduino shield options that the original Netduino can draw on. Maybe it just takes me back to my old breadboarding and wire wrapping days, but I like this little guy. Maybe it's just that I like to see people's reaction when I hold up this tiny thing and say - 'This has .NET running on it!'.
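As a footnote for anyone playing with the numbers: the nine threshold comparisons in the main loop boil down to one baseline/step calculation. Here is that math sketched in Python (my own sketch, not part of the project — the function name is invented), handy for sanity-checking readings on a PC before flashing the board:

```python
# Python sketch of the LED-threshold math from the C# loop above.
# Not the author's code; the function name is invented for illustration.

def leds_to_light(value, baseline, graded_leds=9, full_scale=1024):
    """Return how many of the nine graded LEDs should be on for a reading."""
    # Mirrors step = (1024 - baseline) / 8 from the device code.
    step = (full_scale - baseline) // (graded_leds - 1)
    lit = 0
    for i in range(graded_leds):
        # LED i turns on when the reading exceeds baseline + i * step.
        if value > baseline + i * step:
            lit += 1
    return lit
```

With the example numbers from the article (a baseline around 200), a reading of 500 lights three of the graded LEDs — the same result the chain of if/else tests produces.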
http://blogs.msdn.com/b/netmfteam/archive/2010/11/05/breathalyzer-on-a-new-board.aspx
No nudity please, we’re Google (or why you shouldn’t mix naked domains and www on Google App Engine)

Naked domains, of course, are domain names without the www prefix. So, instead of, for example, having singularity08.com. One of my pet peeves is sites that don't display correctly without the www prefix. I've found that it's usually a good sign that the site is going to be pretty crap. In fact, I was hoping that some day we would have www disappear from use altogether and that we'd all be swimming in a sea of naked domains. Well, I almost got my wish -- we've at least got heaps of domains with nudity. The truth is, however, that the www subdomain is not going anywhere, especially on the cloud, and the thing to do is to have your naked domain forward to www (you listening, no-www?)

Why GAE naked domains don't play nice with others

When you host with Google App Engine, you can choose to use your own domain name for your app. You do this through Google Apps (Confused? You should be. The two sound very similar.) Google Apps, of course, is Google's online office suite. You get a Google Apps account and create A-records for your naked domain (four of them, pointing to 216.239.32.21, 216.239.34.21, 216.239.36.21, and 216.239.38.21) and you create a CNAME for your www subdomain that points to ghs.google.com. (I use DynDNS for all my DNS hosting -- they rock -- and make it really easy to set this stuff up.)

If all this DNS voodoo sounds confusing, it's because it is. I still wouldn't know a CNAME from an A-Record if I met one in a dimly-lit alley (ok, maybe I'd recognize the A-Record if it wasn't wearing make-up -- just maybe though). All you really need to know is that if you use those settings, things will work thanks to the magic of those hard-working DNS gnomes. No really, they exist. They're cute little things too, all bright-eyed and furry.
The problem you're left with, however, is that your domain is reachable from two URLs: one using the naked domain and one using the www. "So what's wrong with that?", I hear you ask. Ah, a number of things, my inquisitive fellow, a number of things... Firstly, it's not good to have two sets of URLs for the same resource (if you don't have time to digest the in-depth reasons why, at least know that it's bad for search engine rankings.) Secondly, that address that you set your CNAME to, ghs.google.com, does special things. Like load balance your requests among the hundreds thousands gazillions of servers that Google has in its cloud. Your puny "A" list is not going to compete with that. In other words: www 1, naked domain 0. Finally, and most importantly, your app will break. Woah, that's a big one... care to explain, Aral. Sure, Aral, I thought you'd never ask (it's not considered talking to yourself if you do it in a blog post, you know!) Take this scenario: You hit the Singularity web site at. Next, you go to buy a ticket and you get forwarded to PayPal. In the forwarding URL, PayPal gets told to return you to. Not a big deal, right? Oooh, but it is. When you return, you end up losing your session. Ouch! You can work around this by making sure that you always use request.META['HTTP_HOST'] in Django when creating callback URLs but I guarantee you that you'll forget at some point and mix your naked domain and www and end up scratching your head at the random errors that result. That's exactly what happened to me earlier this week while gearing up to launch the Singularity web site. So how do you work around it? The simplest way I found was to write a simple piece of middleware in Django to handle the forwarding. 
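Before the middleware itself, here is the redirect decision it makes, boiled down to a framework-free sketch (the function and domain names here are mine, not part of the actual middleware):

```python
# Library-free sketch of the naked-domain -> www decision.
# "example.com" and the function name are illustrative only.

def canonical_redirect(host, path, naked_domain="example.com"):
    """Return the permanent-redirect target URL, or None if the host is fine."""
    # Match the bare domain, with or without an explicit port.
    if host == naked_domain or host.startswith(naked_domain + ":"):
        return "http://www." + naked_domain + path
    return None
```

Any host that is exactly the naked domain gets bounced to its www twin, preserving the requested path; anything else passes through untouched.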
Here it is, released under the MIT license:

import os

from django.conf import settings
from django import http

class NakedDomainRedirectMiddleware(object):
    def process_request(self, request):
        """
        If the domain is being accessed from the naked domain,
        forward it to www.

        Copyright (c) 2008, Aral Balkan, Singularity Web Conference ()
        Released under the open source MIT License.
        """
        naked_domain = settings.NAKED_DOMAIN
        host_name = os.environ['HTTP_HOST']
        start_of_uri = host_name[0:len(naked_domain)]
        if start_of_uri == naked_domain:
            full_path = request.get_full_path()
            uri = 'http://www.' + naked_domain + full_path
            return http.HttpResponsePermanentRedirect(uri)

Save the above class and then add it to your settings file, at the top of your MIDDLEWARE_CLASSES tuple. For example, I have it in a module called middleware:

MIDDLEWARE_CLASSES = (
    'middleware.NakedDomainRedirectMiddleware',
    'django.middleware.common.CommonMiddleware',
    # etc.
)

Finally, set your naked domain in the settings file:

NAKED_DOMAIN = 'singularity08.com'

This should forward all requests to the naked domain to www. You'll end up not having two sets of URLs for each resource and you'll save yourself a lot of headache.

Google is aware of this issue and they were trying to implement a fix on their end to help me out but that's not in place yet. It's possible that they may implement the fix and make it the default behavior for all accounts (which is what I think should be the case) but it may take a little while as any such change will have to go through full QA testing. In the meanwhile, this is a stop gap measure that's working out fine for me currently on the Singularity web site. I hope it helps you out too.

James Bennett
I don’t know if Google’s App Engine has done something screwy that makes this not work, but the standard way to do this in Django is to enable the CommonMiddleware (which you’ve done in your sample) and also have the setting PREPEND_WWW be True.
Documentation is here:
July 21st, 2008 at 9:58 am

Fraser
Meh, I’m of the opinion that a canonical domain name is good - whether it’s got www on it, it’s naked or you put a completely different subdomain on it - you just need to make sure there’s one true location for your website/service/etc. to live at and that anything people might reference it as aliases to the same place and permanently redirects there. If you’re running Apache with mod_rewrite you can get this to do it for you:

RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} !^ [NC]
RewriteCond %{HTTP_HOST} !^$
RewriteRule ^(.*) [R=301]

Put in a .htaccess or a virtual host config, this will take any request that’s somehow managed to land on your virtual host (e.g. or) and redirect it to the same place on the canonical domain. Handy for when you have umpteen (mis)spelling aliases of your domain name all configured for the same virtual host.
July 21st, 2008 at 3:48 pm

Fraser
Bah, that should be /$1 not /1 in the RewriteRule line…
July 21st, 2008 at 3:51 pm

Aral
Hi James, I tried the Django CommonMiddleware option but it simply prefixes any URL with “www.” which is not what I want when testing from IP addresses and from localhost. The code in question is:
July 21st, 2008 at 7:38 pm

James Bennett
Generally, what I do for situations like that is place something in my settings file like so:

PREPEND_WWW = True
if DEBUG:
    PREPEND_WWW = False

This way the behavior is turned off when I’m developing (e.g., running on an IP address or localhost), but automatically switches on when I’m running in production on my live domain.
I also tend to tweak a few other settings this way — for example, I often change the cache settings to the ‘dummy’ backend (which does no caching) when in development, so that I don’t see misleading results due to caching; as soon as I flip DEBUG off, however, caching kicks in to give me the performance boost I need in production.July 22nd, 2008 at 1:19 am James Bennett Also, note that an even simpler way to handle this is to place your “development” settings in their own file — say, ‘dev_settings.py’ — in the same directory as your project settings, and then simply place this at the bottom of the project’s settings file: if DEBUG: from dev_settings import * This lets you maintain cleaner separation of your development and production settings, while still maintaining the convenience of just flipping DEBUG on and off to switch between modes.July 22nd, 2008 at 1:21 am Suplementy very interesting observationsJuly 29th, 2008 at 10:12 pm Jussi Seriously helpful. Thank you! I bet a lot of developers run into this. Google should consider linking to this post until they have some solution/documentation on this.August 4th, 2008 at 7:45 pm Andrew Do I have to add a new url to the GAE dashboard? I have a there but do I need to add the naked one? I try and I get “Required field must not be blank ” I have the CNAME set and the www works, I added the 4 A name ips but naked doesn’t work. Any ideas?August 31st, 2008 at 9:05 pm Andrew oh, i didn’t have google apps set to “current version” and I was entering the empty string I think vs the domain. now it worksAugust 31st, 2008 at 9:57 pm Sarah .aka. Mamalotsoftots Thanks so much for the definition of naked domain. Totally lost me on it.February 16th, 2009 at 7:49 pm Betty How did you make domain.com/folder redirected to in the first place? It keeps redirecting to. Please help me.March 4th, 2009 at 8:12 am Aral Hey Betty, I believe Google changed this after I wrote the blog post and, AFAIK, aren’t supporting naked domains any longer. 
I haven’t set up a new project on there since to check.March 5th, 2009 at 3:43 pm Betty Thanks Aral~March 6th, 2009 at 12:52 am Laurent Aral, I have the same issue as Betty.May 26th, 2009 at 7:35 pm I’m using GoDaddy to redirect keeness.net to, but then if my users type keeness.net/folder, my appengine app doesn’t see /folder. Is there a way to get that back? (referer, other?)
http://aralbalkan.com/1425
Production use

You probably want to define your own Style in a production scenario. See How to create a Style, and especially the section on how to integrate into an existing code base.

Just like you have your own custom base class for Django's Model to have a central place to put customization, you will want to do the same for the base classes of iommi. In iommi this is even more important since you will almost certainly want to add more shortcuts that are specific to your product.

Copy this boilerplate to some place in your code and import these classes instead of the corresponding ones from iommi:

import iommi


class Page(iommi.Page):
    pass


class Action(iommi.Action):
    pass


class Field(iommi.Field):
    pass


class Form(iommi.Form):
    class Meta:
        member_class = Field
        page_class = Page
        action_class = Action


class Filter(iommi.Filter):
    pass


class Query(iommi.Query):
    class Meta:
        member_class = Filter
        form_class = Form


class Column(iommi.Column):
    pass


class Table(iommi.Table):
    class Meta:
        member_class = Column
        form_class = Form
        query_class = Query
        page_class = Page
        action_class = Action


class Menu(iommi.Menu):
    pass


class MenuItem(iommi.MenuItem):
    pass
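To see why this indirection pays off, here is a stripped-down, library-free illustration (these stand-in classes only mirror the shape of the pattern — iommi itself is not imported, and all names below are invented): a shortcut added to your own subclass becomes available everywhere your code asks for that class.

```python
# Stand-in classes that mirror the pattern above; this is NOT iommi's real API.

class LibraryField:                      # plays the role of iommi.Field
    def __init__(self, kind, **extra):
        self.kind = kind
        self.extra = extra

    @classmethod
    def text(cls, **extra):              # a shortcut the "library" ships
        return cls(kind="text", **extra)


class Field(LibraryField):               # your project-wide subclass
    @classmethod
    def percentage(cls, **extra):        # a shortcut specific to your product
        return cls(kind="number", suffix="%", **extra)
```

Because the library's own shortcuts construct via `cls`, `Field.text()` returns your subclass too — so every shortcut, stock or custom, flows through the one class you control.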
https://docs.iommi.rocks/en/latest/production_use.html
Extending Classes In A Modular JavaScript Application Architecture Using RequireJS

Yesterday, I tried to apply some deep thinking to how dependencies should be managed in a modular JavaScript application architecture that is using RequireJS. The conclusion that I came to was that RequireJS should manage and load "definitions" while your application should manage and load "instances." This makes sense since instantiation is the domain of your business logic, not your organizational framework. When it comes to extending classes in a RequireJS context, the same principle holds true, but things seem a little bit more interesting. Base classes, which may have static functionality, are still just "definitions" that can be managed by the RequireJS framework.

NOTE: In JavaScript, most values are technically "instances" at the language level. When I refer to "instances" in this post, I am referring to objects that need to be instantiated using the "new" keyword.

To look at inheritance in a RequireJS context, I'm going to create a simple class, Friend. Friend has a single method, getName(). The Friend class inherits from the core Model class which provides the core method, getInstanceID().

Friend()
- getName()

Model()
- getInstanceID()

Before we look at how the Friend or the base Model class are coded, let's look at the demo that makes use of the two classes:
- for (var i = 0 ; i < friends.length ; i++){ - console.log( - friends[ i ].getName(), - ":", - friends[ i ].getInstanceID() - ); - } - } - ); The main code demo relies only on the Friend module directly. The core Model module is a dependency of the Friend module and will be loaded automatically by the RequireJS framework. As you can see from the code, we are creating a few instances of Friend and then invoking both the sub-class method, getName(), and the core method, getInstanceID(). And, when we run the above code, we get the following console output: Sarah : 1 Tricia : 2 Joanna : 3 So far, so simple - I think you get the idea. Now, let's look at the core Model module. This class is intended to be sub-classed by all other model classes within the application: model.js - Our Core Model Class - // Define the base / core Model class. - define( - function(){ - // I am the internal, static counter for the number of models - // that have been created in the system. This is used to - // power the unique identifier of each instance. - var instanceCount = 0; - // I get the next instance ID. - var getNewInstanceID = function(){ - // Precrement the instance count in order to generate the - // next value instance ID. - return( ++instanceCount ); - }; - // -------------------------------------------------- // - // -------------------------------------------------- // - // I return an initialized object. - function Model(){ - // Store the private instance id. - this._instanceID = getNewInstanceID(); - // Return this object reference. - return( this ); - } - // I return the current instance count. I am a static method - // on the Model class. - Model.getInstanceCount = function(){ - return( instanceCount ); - }; - // Define the class methods. - Model.prototype = { - // I return the instance ID for this instance. 
- getInstanceID: function(){ - return( this._instanceID ); - } - }; - // -------------------------------------------------- // - // -------------------------------------------------- // - // Return the base Model constructor. - return( Model ); - } - ); To me, this module is really interesting! Not only are we defining a constructor and then returning it (as the module definition), we are also providing public and private static functionality. The private, static functionality manages the internal counter for instance creation; the public, static functionality - Model.getInstanceCount() - provides a window into the private static data. This kind of module can start to blur the line between "definition" and "instance" (in the context of this conversation). On the one hand, it defines the core Model class; however, on the other hand, it clearly provides data and behavior - the characteristics of an object that you would typically consider an "instance." Using our previous conclusions about management responsibilities, we'll have to consider this module a "definition." Since nothing is being instantiated using the "new" keyword, we'll defer to RequireJS to load this class as a dependency is subsequent modules. Now, let's take a look at the Friend class. Since Friend extends Model, Model is clearly a dependency for the full definition of Friend. And, since Model is, itself, a "definition" not an instance, we'll use RequireJS to manage and load the Model module for the Friend module. friend.js - Our Friend Class - // Define the Friend model class. This extends the core Model. - define( - [ - "./model" - ], - function( Model ){ - // I return an initialized object. - function Friend( name ){ - // Call the super constructor. - Model.call( this ); - // Store the name. - this._name = name; - // Return this object reference. - return( this ); - } - // The Friend class extends the base Model class. 
- Friend.prototype = Object.create( Model.prototype ); - // Define the class methods. - Friend.prototype.getName = function(){ - return( this._name ); - }; - // -------------------------------------------------- // - // -------------------------------------------------- // - // Return the base Friend constructor. - return( Friend ); - } - ); As you can see, we are using RequireJS to load the Model class for use within the Friend module definition. With the various types of dependencies found within an application, it can sometimes get confusing as to who is responsible for loading which dependency. Mix in the object-based nature of JavaScript and the matter can become even more confusing. Modules that appear to provide both definition and functionality challenge our understanding of what we consider an "instance." In my dialogue, I refer to an instance as that which is explicitly instantiated by the application. As such, base classes - which provide static, instance-like functionally - are still "definitions" that should be managed and loaded by the RequireJS framework. Reader Comments Java altogether. @Fab, I don't do too much Java programming; but, coming from the ColdFusion world, I enjoy dipping into the Java layer (ColdFusion is built on top of Java) in order to use things like the java.util.regex classes or jSoup or whatever. When I look at Java code, one of the things that always strikes me is the "import" statements - the ability to "load" other class definitions for use within in a class. I really like the RequireJS allows for this kind of class-dependency loading. Glad you found this post helpful! Wondered why do you return (this) from the constructor? could you explain? many thanks for this post and for the series! fascinating. @Lior, The default return value from a constructor function is the new object. So, he didn't really need to return "this" but it doesn't hurt either. Thanks for this writeup with examples. 
I just started using require and it hadnt even occurred to me that you could return constructors from a module. I will begin using this pattern right away! thanks! Thank you for the article! Allowed me to get my models in-line using require and sparked an idea in the process. I found a much easier way to do this in coffeescript for anyone interested. A short example just to give an idea: Base: Extending: In our company we came to the same conclusion and since then we have used this way of organizing our code with RequireJS. It's nice to see others get to the same conclusion :)
https://www.bennadel.com/blog/2320-extending-classes-in-a-modular-javascript-application-architecture-using-requirejs.htm
CC-MAIN-2019-13
refinedweb
1,205
57.16
How to make a string value available from one scope to another in a C # Silverlight application Update: creating a userIP socket seemed to work. However, I found out that MainPage () is explicated before Application_Startup (), so the InitParam values are not immediately available. I might have to put my code somewhere else. I am writing a Silverlight application and taking a variable in InitParams, and I want that variable to be available in some way to other areas of my code. I would rather not immediately assign a value to an element in XAML, and instead use C # if possible. I have to take one more step before using the data to modify the XAML. Here's what I've done so far: In my App.xaml.cs file, I added the userIP line to the App class, hoping to access this value later. Then I try to assign the value of the InitParams variable to the userIP string I did above. This is what it looks like. namespace VideoDemo2 { public partial class App : Application { public string userIP; public App() { this.Startup += this.Application_Startup; this.Exit += this.Application_Exit; this.UnhandledException += this.Application_UnhandledException; InitializeComponent(); } private void Application_Startup(object sender, StartupEventArgs e) { this.RootVisual = new MainPage(); this.userIP = e.InitParams["txtUserIP"]; } ...} The only lines I added to the code were public string userIP; and this.userIP = e.InitParams["txtUserIP"]; . I am wondering if it is correct to do this so that this data is available later. In my MainPage.xaml.cs file, I am trying to reference the userIP value given earlier, but I cannot figure out how. For example, I want to create a new line and then set it to userIP: public MainPage() { InitializeComponent(); string myUserIP; myUserIP = VideoDemo2.App.userIP; } Then I get the error: Error 1 Object reference is required for non-static field, method or property "VideoDemo2.App.userIP". 
I need to do something with InitParams in App.xaml.cs because that's where the arguments are passed, but I want to make one of those parameters available to other parts of my application, without putting it in XAML if possible. What must happen so that I can "see" the value later in the application? I'm new to C #, so any help would be much appreciated. source to share FlySwat and James Cadd's answers were helpful, but I found it better for Silverlight to use an application resource dictionary. In an ASPX or HTML page, use the tag <param> in the Silverlight tag <object> : <param name="initParams" value="txtSomeVariable=SomeValue"/> Then, in the Application_Startup method from App.xaml.cs, use a foreach loop to add values to the resource dictionary: private void Application_Startup(object sender, StartupEventArgs e) { //Method 1 - Resource Dictionary if (e.InitParams != null) { foreach (var item in e.InitParams) { this.Resources.Add(item.Key, item.Value); } } this.RootVisual = new MainPage(); } To pull a value from a dictionary for the entire life of your application, simply use: App.Current.Resources["txtSomeVariable"].ToString(); I learned about InitParams and the Application Resource Dictionary from Tim Heuer's video in Silverlight.Net: Using Launch Parameters in Silverlight . Also, I wrote a blog post describing this situation in more detail: Pass the user's IP address to Silverlight as a parameter . I hope this information helps other users who stumbled upon this question! source to share The problem is that VideoDemo2.App is not an instance but a type. If you want to access userIP, you need to access the actual instance of your application. Silverlight provides a static property that exposes the current instance of the application: App runningApp = (VideoDemo2.App)Application.Current; string myUserIP = runningApp.userIP; Or you can make userIP a static string in your application. 
You remove "this" where it is installed, but you can type-access it from anywhere. namespace VideoDemo2 { public partial class App : Application { public static string userIP; public App() { this.Startup += this.Application_Startup; this.Exit += this.Application_Exit; this.UnhandledException += this.Application_UnhandledException; InitializeComponent(); } private void Application_Startup(object sender, StartupEventArgs e) { this.RootVisual = new MainPage(); App.userIP = e.InitParams["txtUserIP"]; } ...} source to share You can find your instance of the App class in the static property Application.Current. Try this from anywhere in your SL app: ((VideoDemo2.App)Application.Current).userIP; EDIT: By contrast, I'd rather keep such settings in a single class (its a simple template to implement in C #, a quick lookup should be enough). Also, for "good form" everything that the public should be property and public properties should be imposed with pascal. Change the public userIp string to: public string UserIP { get; set; } And you instantly win friends and influence people;) EDIT: This is good information about the lifecycle of the Application class in Silverlight source to share
https://daily-blog.netlify.app/questions/2503782/index.html
I tried out lovely.remotetask for cron jobs in a sample grok site. It was high time for me to get dirty with utilities in a grok site.

First things first: put lovely.remotetask in the install_requires section of your setup.py.

Secondly, you need a "task service": something that actually runs the tasks you give it to run. I registered it as a local utility in my grok site:

class CronService(lovely.remotetask.TaskService):
    pass


class Mysite(grok.Site):
    interface.implements(IMysite)
    grok.local_utility(
        CronService,
        provides=lovely.remotetask.interfaces.ITaskService,
        name='cronservice')

Thirdly, you need the actual task. This just has to be a callable. For now I restricted myself to a simple "print" instead of immediately doing a zodb pack. Wrap it in a lovely.remotetask.task.SimpleTask and register it as a global utility with some name:
But there’s no password in plain text anywhere (which you need with a cron job) on your filesystem and that is again a huge bonus. You even do without an admin user... I’ll have to see how it plays out in more elaborate situations. Comments? Better examples? An actual simple cron example with zc.async? Please mail them to reinout@vanrees.org :-) Comment by Radim Novotny: I’m using lovely.remotetask with Plone to query third party server for organization data. It is not cron-like service, but asynchronous query service. User enters ID of the organization, and as soon as blurs from the input field, remote task service is started. While user is entering additional details (email address) the service queries remote server for organization name and address. As soon as data from the remote server are ready, they are displayed on the page and entered to hidden fields of the input form. Remote query take about 1-2):
https://reinout.vanrees.org/weblog/2009/07/08/lovely-remotetask.html
CC-MAIN-2018-17
refinedweb
480
60.11
In my previous posting about Windows Phone 7 development I showed how to use the WebBrowser control in Windows Phone 7. In this posting I make some other improvements to my blog reader application, and I will show you how to use isolated storage to store information on the phone.

Isolated storage is the place where your application can save its data and settings. The image on the right (that I stole from the MSDN library) shows you how the application data store is organized. You have no other options to keep your files besides isolated storage, because Windows Phone 7 does not allow you to save data directly to other file system locations.

From MSDN: "Isolated storage enables managed applications to create and maintain local storage. The mobile architecture is similar to the Silverlight-based applications on Windows. All I/O operations are restricted to isolated storage and do not have direct access to the underlying operating system file system. Ultimately, this helps to provide security and prevents unauthorized access and data corruption."

I updated my RSS-reader so it reads RSS from the web only if there is no local file with RSS. The user can update the RSS-file by clicking a button. Also, the file is created when the application starts and there is no RSS-file. Why am I doing this? I want my application to be able to work also offline. As my code needs some more refactoring I provide it with some next postings about Windows Phone 7. If you want it sooner then please leave me a comment here.

Here is the code for my RSS-downloader that downloads the RSS-feed and saves it to an isolated storage file called rss.xml.
public class RssDownloader
{
    private string _url;
    private string _fileName;

    public delegate void DownloadCompleteDelegate();
    public event DownloadCompleteDelegate DownloadComplete;

    public RssDownloader(string url, string fileName)
    {
        _url = url;
        _fileName = fileName;
    }

    public void Download()
    {
        var request = (HttpWebRequest)WebRequest.Create(_url);
        var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);
    }

    private void ResponseCallback(IAsyncResult result)
    {
        var request = (HttpWebRequest)result.AsyncState;
        var response = request.EndGetResponse(result);

        using (var stream = response.GetResponseStream())
        using (var reader = new StreamReader(stream))
        using (var appStorage = IsolatedStorageFile.GetUserStoreForApplication())
        using (var file = appStorage.OpenFile("rss.xml", FileMode.OpenOrCreate))
        using (var writer = new StreamWriter(file))
        {
            writer.Write(reader.ReadToEnd());
        }

        if (DownloadComplete != null)
            DownloadComplete();
    }
}
Trackbacks: "Windows Phone 7 development: Using isolated storage" — Gunnar Peipman's ASP.NET blog [asp.net] on Topsy.com; pingback from Bill Morefield | Windows Phone 7.

Q: What happens if two apps use the same file name?

A: Can't happen. Each app has its own separate ("isolated") place in the file system, and cannot go outside of it. Besides, apps are tombstoned when not active.

In this video, I take you through the development and deployment of a Windows Phone 7 app, which leverages

Q: Are you able to physically locate the isolated storage file? For example, I am doing some debugging with touch gestures and I want a file that records all my gestures — so, for example, dragging followed by tapping, etc. How would I go about physically locating the isolated storage file? Thanks (newbie here).

Q: Where do you put the XML URL that's downloaded?
http://weblogs.asp.net/gunnarpeipman/archive/2010/05/06/windows-phone-7-development-using-isolated-storage.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29
#include <stdint.h>
#include "libavutil/rational.h"

Go to the source code of this file. Definition in file timecode.h.

Value:

    "timecode", "set timecode value following hh:mm:ss[:;.]ff format, " \
    "use ';' or '.' before frame number for drop frame", \
    offsetof(ctx, tc.str), \
    AV_OPT_TYPE_STRING, {.str=NULL}, CHAR_MIN, CHAR_MAX, flags

Definition at line 33 of file timecode.h.

Adjust frame number for NTSC drop frame time code. Definition at line 31 of file timecode.c. Referenced by dv_write_pack(), mpeg1_encode_sequence_header(), and mxf_write_system_item().

Convert frame id (timecode) to SMPTE 12M binary representation. Definition at line 40 of file timecode.c. Referenced by dv_write_pack() and mxf_write_system_item().

Parse SMPTE 12M time representation (hh:mm:ss[:;.]ff). The str and rate fields of the tc struct must be set. Definition at line 58 of file timecode.c. Referenced by dv_write_header(), encode_init(), and mxf_write_header().
http://ffmpeg.org/doxygen/0.9/timecode_8h.html
In this article, we will go over the Django User Model. We can extend it with AbstractUser and AbstractBaseUser, but what options do we have? How can we customize it, and how can we use it?

The Django User Model is part of the Django authentication package. It provides you with a standard model that is used as the backbone for your user accounts. You can find the standard fields here.

Extending the Django User Model

There are various ways to extend the User model with new fields. You might want to do this when you want to add options to your users, or when you want to split them into groups (for when you have two different types of accounts, for example).

Extend the Django user model with AbstractUser (preferred)

With AbstractUser, you can overwrite the standard User model. It will inherit all functions and current fields from the standard User class, and you can add anything you would like to this. Here is an example that would be in models.py:

from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    is_trailing = models.BooleanField(default=True)
    # add extra fields here

Once you did that, you will also need to let Django know that we want to use that table as the default user table. You can do that by adding the next line to your settings.py file (replace <yourappname> with whatever your app name is):

AUTH_USER_MODEL = 'yourappname.User'

Why this is the preferred method: you keep your database extremely clean, as you are not creating any extra unnecessary tables (always keep database normalisation in mind!). Remember that great programmers care much more about the design and architecture (including the layout of your database) than the actual code. It's way easier to refactor code than it is to migrate your database. It's also very easy to create extra model functions for this, as you have your user table in your models.
Alternatively, you could also use AbstractBaseUser. The difference with AbstractUser is that AbstractBaseUser doesn't have the standard fields and functions. In other words, you will have to add all the basic items yourself. For most people, AbstractUser will be the right fit.

Extend the Django user model with an extra model

This works too, though you will not follow the normalisation rules that apply to database tables. You will have to create an extra (unnecessary) table with this option. Extending the User model with an extra table is easy to implement, though. Simply create the new table and then create a one-to-one field to the User model:

class Profile(models.Model):
    user = models.OneToOneField('auth.User', on_delete=models.CASCADE)
    # add extra fields here

Using this approach means that you will have to call fields through the User class. Here is an example for in your template:

{{ request.user.profile.bio }}

As you can see, we first get the request, then the user, then the profile model that is attached to it, and then the field we need. With the first method we can skip the profile altogether, as the fields are in the user model (note that profile is lowercase in this case!).

One thing you might want to consider with this method is to immediately create your Profile when you create the user. For this, we will have to trigger a function when a User is created. We can do that with signals. Let's create one for the standard User class:

from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=User)
def user_save(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)

Remove username from Django authentication

There are many cases where it's actually preferred to have users log in with their email address instead of a username. A username is generally unnecessary for private applications. To remove the username, we will have to create a new User model and then assign a different UserManager to it.
The UserManager is used to create new users. We will have to replace it with a custom one, since we will not be creating new users the way it is normally done. Here is an example:

from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.db import models

class CustomUserManager(BaseUserManager):
    def _create_user(self, email, password, is_staff, is_superuser, **extra_fields):
        if not email:
            raise ValueError('The given email must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, is_staff=is_staff, is_active=True,
                          is_superuser=is_superuser, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

class User(AbstractBaseUser):
    first_name = models.CharField(max_length=5000)
    last_name = models.CharField(max_length=5000)
    email = models.EmailField(max_length=5000, unique=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)
    date_joined = models.DateTimeField(auto_now_add=True)
    last_login = models.DateTimeField(null=True)
    # any fields you would like to add

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = []

    objects = CustomUserManager()

As you can see, we are using the very bare-bones AbstractBaseUser, as we don't need the username field and can't remove it from the AbstractUser class. If we had used the AbstractUser class, then we would still have to enter a username everywhere.
https://djangowaves.com/resources/django-user-model/
OpenXML is a new standard used by all of Microsoft's new Office suites (Office 2007)! Because it is an open standard and based on XML, we can easily parse documents created with these products. The topic of this article will be how to parse a Word document (using XLINQ) and then render the document in a WPF control called the FlowDocument!

The first problem is to extract or unpack the *.docx file into its constituent *.xml files:

Package package = Package.Open(filename);
Uri documentUri = new Uri("/word/document.xml", UriKind.Relative);
PackagePart documentPart = package.GetPart(documentUri);
XElement wordDoc = XElement.Load(new StreamReader(documentPart.GetStream()));

The snippet above opens a Word document and extracts the document.xml into an XElement. All Word documents require the WordML namespace:

XNamespace w = "http://schemas.openxmlformats.org/wordprocessingml/2006/main";

Now we can make a simple XLINQ query to get all the paragraphs (please note: to keep this simple, I am only looking at the paragraphs; I ignore images, drawings, tables, etc.):

var paragraphs = from p in wordDoc.Descendants(w + "p")
                 select p;

Next, we iterate over the collection of paragraphs and display them in a FlowDocument! Sacha has a nice article on the basics of FlowDocuments available here!

foreach (var p in paragraphs)
{
    var style = from s in p.Descendants(w + "pPr")
                select s;
    var font = (from f in style.Descendants(w + "rFonts")
                select f.FirstAttribute).FirstOrDefault();
    var size = (from s in style.Descendants(w + "sz")
                select s.FirstAttribute).FirstOrDefault();

    Paragraph par = new Paragraph();
    Run r = new Run(p.Value);

    if (font != null)
    {
        FontFamilyConverter converter = new FontFamilyConverter();
        r.FontFamily = (FontFamily)converter.ConvertFrom(font.Value);
    }
    if (size != null)
    {
        r.FontSize = double.Parse(size.Value);
    }

    par.Inlines.Add(r);
    flowDoc.Blocks.Add(par);
}

For each paragraph, I check if it has a font family or size explicitly set.
I then create a new Paragraph and add a Run to its Inlines collection (I also change the font family and size if available). I then add the Paragraph to my FlowDocument! And that is it! All that is left now is to find a cool way to extend the normal FlowDocument to support Word documents. You could sub-class the FlowDocument... I decided to rather provide it as an extension method!

To load a Word document into a flow document, you first create a FlowDocumentPageViewer (I created it in XAML):

<FlowDocumentPageViewer x:

And load the Word document...

FlowDocument flowDoc = new FlowDocument();
flowDoc.loadWordML("DisciplesBios.docx");
flowDocViewer.Document = flowDoc;

I created a simple Word document with all the WPF Disciples bios. Here is how it looks in a FlowDocument:

Simple and easy... If you like the article, please vote for it and also visit my blog!
http://www.codeproject.com/KB/WPF/OpenFlowDoc.aspx
In this article, I will tell you a simple way to split a big JSON file into small JSON files, based on my own personal experience.

For the final year project of my undergraduate studies, as a team of 4 members, we did a project on Twitter. We needed to collect a considerable amount of tweets for our research project, so we used snscrape to collect them. This is a very good article providing instructions on how to use snscrape, and it's also the one I followed.

After collecting the tweets, we started the data preprocessing steps. In that step, we identified that we had a problem loading a 4-5 GB file at once, because my computer kept freezing due to the huge file size. This is not a problem with a high-performance computer, but if you're not exactly having state-of-the-art hardware, it can become one. So, encountering this problem, I needed to find a solution. I tried a few different things, but the problem wasn't solved. Then I came across this simple Python code as a solution to my problem.

This can be a very simple piece of code, but it can come in handy for someone who is having the same problem I did. In this case, I used PyCharm as my IDE; you can use any IDE according to your preferences. When using this code for your problem, you just need to add a few configurations as per your need: you need to change the file path of your big JSON file, and you need to provide the split size you want or expect. In my case, I needed to split the original file into 2,000 tweets per split, so I used 2,000 tweets.
import os
import json

# you need to add your path here
with open(os.path.join('C:/Users/Acer/Desktop/New folder', 'twitter_data.json'),
          'r', encoding='utf-8') as f1:
    ll = [json.loads(line.strip()) for line in f1.readlines()]

# this is the total length of the json file
print(len(ll))

# here 2000 means we are getting splits of 2000 tweets each
# you can define your own split size according to your need
size_of_the_split = 2000
total = len(ll) // size_of_the_split

# here you will get the number of splits
print(total + 1)

for i in range(total + 1):
    json.dump(ll[i * size_of_the_split:(i + 1) * size_of_the_split],
              open("C:/Users/Acer/Desktop/New folder/twitter_data_split" + str(i + 1) + ".json",
                   'w', encoding='utf8'),
              ensure_ascii=False, indent=True)

Then you just need to run the code, and you can see that the big JSON file is split into small JSON files. You can see the length of the big JSON file and the number of splits as output in your command prompt in Windows (I'm using Windows 10 Education Version).

After these steps, I needed to convert them into CSV files, because we needed to create a machine learning model from this. So, I created a simple JSON-to-CSV converter that you can use to convert all the JSON files in one go, without converting them one by one. This is the JSON-to-CSV converter code. You need to provide the number of splits according to your requirement. In my work, I split the big JSON file into 8 splits. So, I provided 8 as the value. Then you simply run the code and you will get the CSV files from the JSON files.
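The core of the script above is just list slicing; isolated from the file I/O, the chunking logic can be sketched like this (function and variable names here are my own, not from the article):

```python
def split_records(records, chunk_size):
    """Split a list of records into consecutive chunks of at most chunk_size."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    # Slicing past the end of a list is safe, so the last chunk simply
    # holds whatever records remain.
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

chunks = split_records(list(range(10)), 4)
print([len(c) for c in chunks])  # → [4, 4, 2]
```

Each chunk can then be passed to `json.dump` exactly as in the script above.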
import pandas as pd

# here you need to provide the number of splits
number_of_splits = 7

for i in range(0, number_of_splits + 1):
    word = i + 1
    print(word)
    json_file = f"twitter_data_split{word}.json"
    csv_file = f"twitter_data_split{word}.csv"

    df = pd.read_json(fr'C:\Users\Acer\Desktop\New folder\{json_file}')
    df.to_csv(fr'C:\Users\Acer\Desktop\New folder\{csv_file}', index=None)

Files after splitting and conversion into CSV files

Conclusion

This is not a big technical article, but like I mentioned earlier, I think it will come in handy for a person who is having the same problem that I did. I did an internet search and read answers on Stack Overflow to implement this code, and I would like to thank them for their help. This article has explored an easy way to split JSON files, and I hope that it will assist you in completing your work more accurately. I'd like to thank you for reading my article. I hope to write more articles on new trending topics in the future, so keep an eye on my account if you liked what you read today!

References:
Split JSON file in equal/smaller parts with Python — "I am currently working on a project where I use Sentiment Analysis for Twitter Posts. I am classifying the Tweets with…" (stackoverflow.com)
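If pandas isn't available, the same flat JSON-to-CSV conversion can be done with the standard library alone. A minimal sketch (paths, field names, and the function name are illustrative, not from the article):

```python
import csv
import io

def json_records_to_csv(records, output):
    """Write a list of flat JSON objects (dicts) as CSV to a file-like object."""
    if not records:
        return
    # Take the column order from the first record.
    fieldnames = list(records[0].keys())
    writer = csv.DictWriter(output, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)

records = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]
buf = io.StringIO()
json_records_to_csv(records, buf)
print(buf.getvalue().splitlines()[0])  # → id,text
```

In practice you would open each `twitter_data_split*.json` file, `json.load` it, and pass the resulting list to a function like this with a real output file. Note this only works for flat records; nested tweet objects would need flattening first, which is where pandas earns its keep.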
https://plainenglish.io/blog/split-big-json-file-into-small-splits
What I'm trying to do is take 7 scores from judges, discard the lowest and the highest score, and then average the five scores that remain. I got to where I can get all of the scores, but I have no idea how to take the lowest and highest score and discard them. Please help me. Thank you!

import acm.program.*;

public class Judges1 extends ConsoleProgram {
    public void run() {
        int numJudges = 7;
        int highest = 0;
        int lowest = 0;
        double score[] = new double[numJudges];
        double totScore = 0.0;
        double avgScore = 0.0;

        for (int i = 0; i < numJudges; i++) {
            score[i] = readDouble("Enter Score: ");
            totScore += score[i];
        }

        avgScore = totScore / numJudges;
        println("Average Score: " + avgScore);
    }
}
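The computation being asked about — drop the single highest and lowest of the seven scores, then average the remaining five — can be stated compactly. Here is the arithmetic sketched in Python for reference (an illustration of the math, not a fix for the Java code above; the function name is my own):

```python
def trimmed_average(scores):
    """Average the scores after discarding one lowest and one highest value."""
    if len(scores) < 3:
        raise ValueError("need at least three scores")
    trimmed = sorted(scores)[1:-1]  # drop the single min and the single max
    return sum(trimmed) / len(trimmed)

print(trimmed_average([5.0, 9.0, 7.0, 8.0, 6.0, 7.5, 1.0]))  # → 6.7
```

The same idea in Java would be to sort the array (or track the min and max while reading), subtract those two values from the total, and divide by `numJudges - 2`.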
https://www.daniweb.com/programming/software-development/threads/324031/problem-with-array
Hi, I'm quite new to Java, and I've come to the point where there are just some things that I don't get no matter where I look or what I read. I've been confused about using a created class as a variable type for quite some time now. Take this class for example:

public class Dragon {
    private String name;
    private boolean breathesFire;

    public Dragon(String name, boolean breathesFire) {
        this.name = name;
        this.breathesFire = breathesFire;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public boolean isBreathesFire() {
        return breathesFire;
    }

    public void setBreathesFire(boolean breathesFire) {
        this.breathesFire = breathesFire;
    }
}

This was provided by my lecturer. In another class, he has done this:

package com1003.objectville.composition;

public class Knight {
    private String name;
    private Dragon dragon; // <-- my area of concern

    public Knight(String name) {
        this.name = name;
    }

    public Knight(String name, Dragon dragon) {
        this.name = name;
        this.dragon = dragon;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Dragon getDragon() {
        return dragon;
    }

    public void setDragon(Dragon dragon) {
        this.dragon = dragon;
    }

    public String toString() {
        String s = "Knight: " + name;
        if (dragon != null) {
            s += ", his dragon is: " + dragon.getName();
        }
        return s;
    }
}

I've marked my area of concern with a comment above. You see how he makes the private variable "dragon" of type "Dragon"? So he wrote "private Dragon dragon" instead of "private int dragon" or something like what I usually see. And this type "Dragon" is the class name from the first piece of code provided. I don't see how you can do this, or why you would do this. Could someone explain what's going on, and why it's done this way in the code above? Thanks a lot, I really appreciate it.
http://www.javaprogrammingforums.com/whats-wrong-my-code/15253-help-making-classes-variable-type.html
Rx.NET in the real world
Lessons that can help you when marbles can't

📝 Naming is hard

Because of the decisions the C# team made while designing LINQ, Rx.NET is a one-of-a-kind flavor of Reactive Extensions. The names of all basic operators are made so that they match their LINQ counterparts instead of the Rx conventions. This can lead to some confusion, especially if your team learned Rx using some other languages. For completeness' sake, here are the C# method names and their "normal" Rx names. If you wanna use the "normal" names, I've created a gist that adds those methods as extensions:

Then there are methods that sound like they do the same thing but in reality behave slightly differently, leading to even further runtime confusion. The cherry on top is the SubscribeOn and ObserveOn methods, which are so confusing I sometimes catch myself looking at their documentation.

😫 Threading is harder

Reactive Extensions are, by nature, thread agnostic. The thread in which the work happens is not guaranteed by default. This is what the IScheduler interface is for. Out of the box, we get a couple of useful and very well named schedulers, like CurrentThreadScheduler, NewThreadScheduler, SynchronizationContextScheduler and ThreadPoolScheduler, to name a few. As useful as these are, they don't help us with a problem common to interface developers: all work that updates the UI is meant to run on a special thread, the Main Thread.

My team tried leveraging both the CurrentThreadScheduler and the SynchronizationContextScheduler to behave as though they were a MainThreadScheduler. We did so by ensuring that Observables were only ever created from the constructors of ViewModels, which in turn were always created in the main thread. This approach had two flaws. The first is that sometimes we needed to create our observables in different threads. In such scenarios, we found ourselves doing all sorts of jerry-rigs to find a way around the limitation of single-thread observable creation.
The other problem was testing. We were constrained to the current-thread schedulers, but we needed a TestScheduler for testing. We finally found a solution to this problem by drawing inspiration from RxSwift's Driver. A driver is like an observable, but it can't error, always runs on the main thread, and always emits the last event to subscribers on subscription. This makes Drivers perfect for all your UI needs!

The final solution was adding an .AsDriver(SchedulerProvider) method. This method simply takes an instance of the ISchedulerProvider interface, retrieves its MainThreadScheduler, and then uses operators to ensure the observable never fails and is shared. We used the main thread schedulers from the ReactiveUI project and everything worked like a charm 👌

🔮 Observables are just monoids in the category of endofunctors, so what?

Monads are historically known for being something hard to explain, but the point here is not making you understand them. While I totally suggest you read more on this subject, my focus here is on something else, so please read on even if you don't fully get what monads are. C# has some common monads we use daily. Out of those I'd like to highlight two: IEnumerable<T> and Task<T>. These have powerful language features attached to them: the LINQ query syntax and async/await, respectively. What's lesser known in C# land is that these language features can be applied to any type that implements some specifically named methods, and the Rx.NET package implements these methods for the IObservable<T> interface! The await operator is used to extract a value of type T from the Task<T> monad. You can literally do the same with Observable!
public Task<int> getTask() => Task.FromResult(1);
public IObservable<int> getObservable() => Observable.Return(1);

var valueFromTask = await getTask();
var valueFromObservable = await getObservable();

valueFromTask.Should().Be(valueFromObservable);

In order for observables to become awaitable, all you need to do is add the using System.Reactive.Linq; statement. Simple as that! Bear in mind that when you await an Observable you are awaiting its last value (that is, the last value emitted before the observable completes), so awaiting an observable that hasn't completed yet will make the call hang until the observable completes. See the following example:

var a = Observable.Return(1);
var b = new BehaviorSubject<int>(1);

var value = await a;      // This executes normally
var otherValue = await b; // This hangs forever

value.Should().Be(otherValue); // This never gets executed

To prevent such hanging from happening, call obs.FirstAsync(), which will transform the observable into an observable that returns its latest emitted value and then completes. The other thing we can use to consume Observable<T> in C# is the LINQ query syntax.
Any type that implements Where and a couple of other specially named methods can benefit from this syntax, and IObservable is one of those types. For example, we can flatten a nested observable:
https://medium.com/@heytherewill/rx-net-in-the-real-world-be61e0287a93
CC-MAIN-2019-43
refinedweb
1,050
55.64
- Check pypi - Toscawidgets provides documentation for some of it’s released packages. Some of the docs contain Widget Browsers that allow you to play with the widget live. - The Toscawidgets repository holds some widget libraries that have not been released. Use them at your own risk. The overall process for using a widget is: - Create a single instance of the widget (or compound widget), to be used throughout the program - Pass this instance from the controller to a template - In the template, call the widget to display it. Parameters can be passed at display time, and this is commonly used for the value of the widget. For this tutorial we are going to create a star rating widget which utilizes ajax to store the user response and return a request back to the browser to update the user’s view. Before we start using our widget we need to install it. For the time being, this widget has not been released to pypi so we need to install from the trunk. easy_install tw.rating import the widget into your project from tw.rating import Ratings Create the widget inside your controllers definitions. my_rating = Rating(id='my_rating', action='rating', label_text='') Create a new controller method to share our widget @expose('genshi:myproject.templates.widget') def testing(self, **kw): tmpl_context.widget = rating return dict() In the template, call the widget to display it. ${tmpl_context.widget(value)} Here is what the resulting widget looks like: Now, star widget doesn’t do any good without some kind of server interaction. For this tutorial we are going to just simply keep track of the average as the user’s click the stars in memory. This could be later modified to support some sort of crafty database interaction. First, lets initialize our “database” of star-click averages: sum_ratings = 0 num_ratings = 0 Then we make a newly exposed method which shares the same name as the “action” which is sent into the Widget. 
@expose('json') def rating(self, rating): global sum_ratings global num_ratings rating = int(rating) sum_ratings += rating num_ratings += 1 rating = float(sum_ratings)/float(num_ratings) return dict(num_ratings=num_ratings, avg_rating=rating) This method returns a json stream to the widget which is then read as a response by the javascript on the client side. Now, this is not a terribly interesting example until you start to handle the response that comes back. To do that, you just add an “on_click” parameter to the widget definition. <div id="avg_stars"/> First we modify the template to give a place to hold the data that comes back from the server. rating = Rating(id='my_rating', action='rating', label_text='', on_click="""$('#avg_stars')[0].textContent='The average is now: '+response.avg_rating""") The ‘response’ javascript variable will hold an object which is your extracted json stream. In this case, we are displaying the average rating. It is important to note that the star widget uses the jQuery library, and the ‘$’ operator may not work the same in other libraries.
http://www.turbogears.org/2.1/docs/main/ToscaWidgets/Using.html
CC-MAIN-2015-06
refinedweb
494
62.68
0 Hello DaniWeb, I am pretty new to this community and overall to the C++ language. Not so long ago i've decided to learn C++. Anyways the problem is: I am trying to make a simple calculator, its very easy my problem is i am trying to advance my knowlege about the structure so i was trying to make a calculator using only structure based variables, my question is: When i take a normal integer variable for example: int x,y,z; z=x+y; would compile correctly but if i would make a structure called database example: struct database{ int x,y,z}; database get; get.z=get.x+get.y; would not be correct how can i make it correct if possible? Here is the source code i've made so far: #include<iostream> #include<conio.h> using namespace std; struct database{/*Structure start.*/ int loop; string choice,var1,var2,combined; };/*Structure ends.*/ database calc(database get); int main(){ database get,*p; p=&get; /*Point's the pointer into the database. just testing*/ get.loop=0; while(get.loop<1){ /*First question*/ cout<<"Welcome to the <M>aster <B>laster Calculator!\nPlease type one of the words below afterwards press enter.\nPlus\nMinus\nMultiple\nDivide\n\nChoice:"; /*Input of choices*/ cin>>get.choice; calc(get); } } database calc(database get){ if((get.choice=="Minus")||(get.choice=="minus")){ cout<<"Please choose the first number:"; cin>>get.var1; cout<<"Please choose the second number:"; cin>>get.var2; get.var1=get.var1-get.var2; //<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<Error occurs here!! cout<<"\nThe answer is:"<<get.var1<<"-"<<get.var2<<"="<<"\n"; } } I've marked where the compiler Dev-C++ bloodshed see the errors by a comment(in line 43). Thank you for all the contributors and have a good day.
https://www.daniweb.com/programming/software-development/threads/314535/need-a-bit-of-assistance-struct-related
CC-MAIN-2017-47
refinedweb
294
59.5
> When I try to implement this in a function I get something like:
>
> def mklist(num):
>     return map(lambda x:[] * num, range(num))
>
> and mklist(4) yields [[], [], [], [], [], []]
>
> This works in 'Python interactive'. In a normal Python window I get a NameError.

This is bound to be in an FAQ somewhere: the lambda has its own namespace, i.e. within the lambda body you can see (1) names within that body and (2) global names. So, you can't see the names in the intermediate function. The workaround is to abuse the parameter mechanism:

def mklist(num):
    return map(lambda x, num=num: []*num, range(num))

--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
                        | see
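Nested scopes were added to the language shortly after this post, so a lambda can now see names in the enclosing function. The default-argument trick shown above is still the standard fix in modern Python whenever a lambda should capture the current value of a loop variable rather than the variable itself. A quick illustration:

```python
# Late binding: every lambda closes over the same variable,
# so all of them see its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])        # [2, 2, 2]

# Early binding via a default argument, as in the workaround above:
# each default is evaluated when the lambda is created.
early = [lambda i=i: i for i in range(3)]
print([f() for f in early])       # [0, 1, 2]
```

The same idiom works for ordinary def functions defined inside a loop.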
https://mail.python.org/pipermail/pythonmac-sig/2000-February/001778.html
CC-MAIN-2014-15
refinedweb
128
71.85
[edit]Must learn to read for comprehension...[/edit]

Last edited by Prelude; 08-29-2004 at 01:31 PM.
My best code is written with the delete key.

I don't know, but this just isn't working. And that is all there was on the error thing.

Edit: This is just plain stupid now! The more I add, the more errors come that make no sense. The most errors I had so far were 54.

Last edited by Rune Hunter; 08-29-2004 at 12:55 PM.

Your main() is nested inside of startprogram(). That's not allowed.

Code:
void startprogram()
{
    // lots of stuff

    int main()
    {
        // other stuff
    } // this is the closing bracket for main()

} // this is the closing bracket for startprogram()

Dave

OK, so now I need to know how to be able to use a variable inside of a void function().

Edit: It keeps saying it is undeclared.

First thing I would do is make a struct with all of that info you're using for your traits, which I personally would put in a header file.

Code:
struct info{
    int health;
    int attack;
    int def;
    int magic;
    int alreadyset;
    int totaldamage;
    int dead;
};

Rune, think of it like this:

Code:
//fight.h
#ifndef fight_h
#define fight_h

void fight();

#endif //fight_h

Code:
//fight.cpp
#include "fight.h"

void fight()
{
    //blah blah
    //kill something
    //if you die then exit
}

Your code is able to find the fight function because you prototyped it in the header file, so as long as you include that you're fine.

Code:
//main.cpp
#include <iostream>
#include "fight.h"

int main()
{
    //whatever
    if(playerwants to fight)
    {
        fight();
    }
    return 0;
}

Woop?
Even if I put

Code:
#include "info.h"

it doesn't work. I have even tried

Code:
#include "info.cpp"

Any help now, lol. Please.

Last edited by Rune Hunter; 08-29-2004 at 02:00 PM.

Variables need to be like this:

Code:
//+---Variables.h---+
#ifndef _VARIABLES_H
#define _VARIABLES_H

//+--This is not the actual declaration of the variable. It's almost like a prototype--+
extern int Var1;

#endif

Now in a cpp file:

Code:
#include "Variables.h"

//+---Here you declare the variables--+
int Var1;

//+---Other code that uses the variables---+

Now in another cpp file you can get access to the variable by including the header file.

It works like this: the variable is declared in a source file, but the variable's scope is only the file it is declared in. This is where extern comes into play. It lets that variable be external of its original scope so you can use it globally throughout your project. Keep in mind that most of the time globals are not good to use, but that's something to worry about later on.

Thanks a lot, now I can continue working on my game!

Alright, now I got this error... it's always error after error... *sigh*. Here is the error log:

80 C:\Documents and Settings\Alex\Desktop\RPG Game\mainmenu.cpp ISO C++ forbids declaration of `P_attack' with no type

I have that error a couple of times with P_def too. Here is the code where it happens:

Code:
P_health=P_health+10;
P_attack=P_attack+2;
P_def=P_def+2;
P_mp=P_mp+3;

Those variables worked in other places, but here and in a couple of other places it doesn't like them for some reason.

Hmm, I would need to see the code of the actual declarations of those variables. Did you make them ints? Maybe you could put your code into a zip file or something and post it so we can see what the problem is.

Well, OK, if you really want to look through it... I gotta eat, so I'll be back later to see if anyone actually looked at it.
In callmainmenu() you're missing a closing bracket on your while loop... I'm surprised you didn't get an error before. Other than that, I didn't see anything particularly wrong when I glanced over your code.

I am seeing a few brackets you forgot to close. That's probably the problem. Go back and check your opening and closing brackets.

EDIT: ARGH, fricken hunter

Yep, close that bracket and it will work!

Last edited by Vicious; 08-29-2004 at 04:36 PM.

lol, pretty soon I will have people swearing at me, LOL. OK, I'll go and close some brackets, lol.

Edit: Well, that one bracket caused a lot of trouble. But you still didn't swear at me yet...

Last edited by Rune Hunter; 08-29-2004 at 05:07 PM.
https://cboard.cprogramming.com/cplusplus-programming/56208-just-need-more-help-2.html
CC-MAIN-2017-22
refinedweb
872
84.27
The cri module is an alternative version of the ri module that uses the Python ctypes module to interface a renderer directly. The ctypes module is a foreign function library that has been part of the standard Python libraries since Python 2.5. With previous versions of Python, the module has to be installed separately.

The module can be used in combination with any renderer that implements the RenderMan API in a shared library. Using this module instead of the ri module has a number of advantages; most notably, calls go straight to the renderer, so no RIB stream has to be generated and parsed in between. The disadvantage of using this module is that it depends on an external dynamic library that must implement the actual functionality. This means that if you have not installed a renderer that ships with such a library, you cannot use the module.

Before any RenderMan function can be used, a library has to be loaded using the loadRI() function. The returned module-like object can then be used just like the ri module.

loadRI(libName)

    Load a RenderMan library and return a module-like handle to it. libName is the name of a shared library that implements the RenderMan interface. The name can either be an absolute file name or just the name of the library (without suffix or "lib" prefix), in which case the function tries to find the library file itself. The return value is an object that can be used like a module, i.e. it contains all RenderMan functions, constants, etc. If libName is None or the empty string, the return value is just a reference to the ri module.

Example:

    import cgkit.cri

    ri = cgkit.cri.loadRI("3delight")

    ri.RiBegin(ri.RI_NULL)
    ri.RiWorldBegin()
    ri.RiSurface("plastic")
    ri.RiSphere(1,-1,1,360)
    ri.RiWorldEnd()
    ri.RiEnd()

The RenderMan function names can either be used with or without the "Ri" prefix.
So the following is equivalent to the above:

    ri.Begin(ri.RI_NULL)
    ri.WorldBegin()
    ri.Surface("plastic")
    ri.Sphere(1,-1,1,360)
    ri.WorldEnd()
    ri.End()

The following table lists the library names for some renderers that are known to work with this module:

importRINames(ri, ns)

    Import the RenderMan names into the given namespace. ri is the module-like object that was returned by loadRI() and ns is a dictionary containing a module namespace (such as globals()) that will receive the "imported" symbols. Only the names with the "Ri" prefix will be available.

Example:

    ri = cgkit.cri.loadRI("3delight")
    cgkit.cri.importRINames(ri, globals())

    RiBegin(RI_NULL)
    RiWorldBegin()
    RiSurface("plastic")
    RiSphere(1,-1,1,360)
    RiWorldEnd()
    RiEnd()

The following example renders three Koch curves that form sort of a "snowflake" shape. Each curve is created by a procedural which just uses a regular Python function as subdivide function. No RIB is generated by this example. It is possible to create the same image using the generic ri module, but the procedural would have to be written as a separate Python script that is invoked using the "RunProgram" procedural. The code would actually be longer, because you would have to encode/decode the parameters of the procedural as strings, whereas this example directly passes vector objects around.
    # Render a Koch snowflake as procedurals

    import cgkit.cri
    from cgkit.cgtypes import *

    def bound(A, E):
        """Compute the bounding box of one segment."""
        eps = 0.03
        dv = E-A
        n = vec3(-dv.y, dv.x, 0)
        C = 0.5*(A+E) + 0.2887*n
        xx = [A.x, C.x, E.x]
        yy = [A.y, C.y, E.y]
        bound = [min(xx)-eps, max(xx)+eps, min(yy)-eps, max(yy)+eps, -0.001, 0.001]
        return bound

    def subdiv(data, detail):
        """Subdivide function."""
        A,E = data
        dv = E-A
        if dv.length()<0.005:
            RiCurves(RI_LINEAR, [2], RI_NONPERIODIC, P=[A,E], constantwidth=0.003)
        else:
            t = 1.0/3
            B = (1.0-t)*A + t*E
            D = (1.0-t)*E + t*A
            n = vec3(-dv.y, dv.x, 0)
            C = 0.5*(A+E) + 0.2887*n
            RiProcedural((A,B), bound(A,B), subdiv)
            RiProcedural((B,C), bound(B,C), subdiv)
            RiProcedural((C,D), bound(C,D), subdiv)
            RiProcedural((D,E), bound(D,E), subdiv)

    # Load the RenderMan API.
    # Replace the library name with whatever renderer you want to use.
    ri = cgkit.cri.loadRI("3delight")
    cgkit.cri.importRINames(ri, globals())

    RiBegin(RI_NULL)
    RiFormat(1024,768,1)
    RiDisplay("koch.tif", RI_FRAMEBUFFER, RI_RGB)
    RiPixelSamples(3,3)
    RiProjection(RI_ORTHOGRAPHIC)
    RiScale(vec3(0.8))
    RiTranslate(0,0.55,5)
    RiWorldBegin()
    RiSurface("constant")
    RiColor((1,1,1))
    RiPatch(RI_BILINEAR, P=[-2,2,1, 2,2,1, -2,-2,1, 2,-2,1])
    RiColor((0,0,0))
    RiProcedural((vec3(-1,0,0),vec3(1,0,0)), [-2,2, -2,2, -0.01,0.01], subdiv)
    RiProcedural((vec3(0,-1.732,0),vec3(-1,0,0)), [-2,2, -2,2, -0.01,0.01], subdiv)
    RiProcedural((vec3(1,0,0), vec3(0,-1.732,0)), [-2,2, -2,2, -0.01,0.01], subdiv)
    RiWorldEnd()
    RiEnd()

Running the above example produces an image of the Koch snowflake.
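Under the hood, loadRI() is ordinary ctypes dynamic loading, and the same mechanism can be tried against any shared C library. A minimal sketch using the C math library (an assumption: this relies on find_library resolving "m", which works on typical POSIX systems but may return None on Windows):

```python
import ctypes
import ctypes.util

# Locate the C math library; on most POSIX systems this resolves to
# something like "libm.so.6" or "libm.dylib".
path = ctypes.util.find_library("m")
libm = ctypes.CDLL(path)

# Describe sqrt's C signature so ctypes converts values correctly,
# much like cri has to describe every Ri* entry point.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))   # 3.0
```

Without the argtypes/restype annotations, ctypes would pass the argument as an int and interpret the result as an int, producing garbage; cri's value is precisely that it carries all of these signature declarations for the RenderMan API.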
http://cgkit.sourceforge.net/doc2/cri.html
CC-MAIN-2017-30
refinedweb
822
59.4
This example is part of the tools supplied for the Arduino GSM Shield and helps you change or remove the PIN of a SIM card.

First, import the GSM library:

    #include <GSM.h>

Initialize an instance of the GSMPIN class:

    GSMPIN PINManager;

Create your variables, starting with a String to hold input from the serial monitor. Also make a flag for checking if the SIM has been authenticated with a valid PIN, and messages for the serial monitor.

In setup, open a serial connection to the computer. After opening the connection, send a message to the Serial Monitor indicating the sketch has started. Call PINManager.begin() to reset the modem.

Check to see if the SIM is locked with a PIN. If locked, ask for the PIN via the serial monitor. You'll use a custom function named readSerial() to parse the information. If the PIN is valid, set the auth flag to true. Send a status message to the serial monitor indicating the result. If you enter the wrong PIN, you can try again; after 3 missed attempts, the PIN will be locked, and you'll need the PUK number to unlock it.

If the SIM is in PUK lock mode, ask for the PUK code and a new PIN. If there is an error and the PIN number and PUK are both locked, send an appropriate status message. If there's no PIN number, set the auth flag to true.

Check the registration on the GSM network, and indicate if you're connected or not, and if you're roaming.

You're going to create a custom function to handle serial input from the serial monitor. Make a named function of type String. While there is serial information available, read it into a new String. If a newline character is encountered, return to the main program.

loop() acts as a PIN management tool, allowing you to turn the PIN on or off, and change it.

Once your code is uploaded, open the serial monitor to work with the PIN. The complete sketch is below.

Last revision 2015/08/17 by SM
https://www.arduino.cc/en/Tutorial/GSMToolsPinManagement
CC-MAIN-2017-17
refinedweb
352
73.27
If we wanted to introduce something similar to C's block scopes, regardless of syntax, we'd have to answer the question of where local variables will be declared. Consider:

    def f(x):
        block a = 1:
            x = a   # Is this local to the block or to the function?
        return x

    print(f(0))   # 0 or 1?

IMO, if 'x = a' were to create a new variable in the block, I think that would be confusing. It would also beg the question of why we bother to adorn the block with variable declarations. There are other good reasons why any variable *not* explicitly declared as being part of a block should have function scope.

It then would make sense to add some other syntax extensions like

    for let i in ...:
        <statements>

(makes i local to the loop, giving it the semantics needed by closures as well) and

    with contextmanager() as let x:
        <statements>
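The status quo the post builds on is easy to demonstrate: in Python today, an assignment anywhere in a function body, whether inside an if, a for, or a with, creates or rebinds a function-scoped name.

```python
def f(x):
    if x >= 0:
        a = 1        # no block scope: 'a' belongs to the whole function
        x = a
    return x, a      # both names remain visible after the if block

for i in range(3):
    pass

print(f(0))   # (1, 1)
print(i)      # 2 -- the loop variable also survives the loop
```

This is exactly the behavior a hypothetical `for let i in ...` would change: the loop variable would disappear (and be freshly bound) at each iteration instead of leaking into the enclosing function.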
https://mail.python.org/archives/list/python-ideas@python.org/message/WK3C2LEBX2PAJ7HB4PHWZAYIUYWJ3SKJ/
CC-MAIN-2021-17
refinedweb
151
69.82