These are chat archives for ramda/ramda

Is there a way to use the head and tail of a list as arguments for a function call, while only passing the list in "as is"? Example:

    const findStuff = object => find(eqProps('x', object))
    const list = [{x: 1}, {x: 2}, {x: 3}]
    // Call it like this, but in "one line", maybe with Ramda head/tail functions:
    const [head, ...tail] = list
    findStuff(head)(tail)
    // Idea I had:
    R.converge(findStuff, [R.head, R.tail])(list)

With an uncurried findStuff, that works:

    const findStuff = (object, list) => find(eqProps('x', object), list)
    const list = [
      {x: 1, y: 'foo'},
      {x: 2, y: 'bar'},
      {x: 1, y: 'baz'},
      {x: 3, y: 'qux'}]
    R.converge(findStuff, [R.head, R.tail])(list) //=> {x: 1, y: 'baz'}

converge is not going to work with that sort of curried function, as (the binary version of) its API looks like

    converge(f, [g, h])(x) //=> f(g(x), h(x))

and not

    f(g(x))(h(x))

If you can't use converge in this simple manner, it's often better to use the more standardized lift:

    const findStuff = (object) => find(eqProps('x', object))
    const list = [
      {x: 1, y: 'foo'},
      {x: 2, y: 'bar'},
      {x: 1, y: 'baz'},
      {x: 3, y: 'qux'}]
    R.lift(findStuff)(R.head, R.tail)(list) //=> {x: 1, y: 'baz'}

If f is what is called a Natural Transformation, I will NEVER be able to provide that proof. In Haskell that proof is automatic, because it will not compile if it is not. The same goes for claims like "Pair is a Limit". And without PP, in JS we can never provide an abstraction of an End/Coend, because we would have to provide a case and product/co-product type that accounts for EVERY type that can be conceived in JS. But really the only thing that matters, or that we will encounter more often, is the Natural Transformation bit, and we will just have to play the trust-me game. Something like union-type or daggy.
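The converge-versus-lift distinction discussed above (converge calls f with all arguments at once; lift applies a curried f one argument at a time) can be sketched outside JS as well. The following is a hypothetical Python analogue, not Ramda's actual implementation; `find_stuff`, `head`, and `tail` are stand-ins for the chat's example:

```python
# Hypothetical sketches of Ramda's binary converge and lift, to show why a
# curried function works with lift but not with converge.

def converge(f, transformers):
    # converge(f, [g, h])(x) == f(g(x), h(x)) -- f is called uncurried
    return lambda x: f(*(t(x) for t in transformers))

def lift(f):
    # lift(f)(g, h)(x) == f(g(x))(h(x)) -- f is applied one argument at a time
    def lifted(*transformers):
        def run(x):
            result = f
            for t in transformers:
                result = result(t(x))
            return result
        return run
    return lifted

head = lambda xs: xs[0]
tail = lambda xs: xs[1:]

# Curried findStuff, as in the chat: takes the head, returns a function over the tail.
find_stuff = lambda obj: lambda rest: next(
    (o for o in rest if o["x"] == obj["x"]), None)

data = [{"x": 1, "y": "foo"}, {"x": 2, "y": "bar"},
        {"x": 1, "y": "baz"}, {"x": 3, "y": "qux"}]

print(lift(find_stuff)(head, tail)(data))  # {'x': 1, 'y': 'baz'}
```

The same curried `find_stuff` passed to `converge` would fail, because `converge` tries to hand it both arguments in a single call.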
And even then, uses of abstractions like Limit/Colimit and End/Coend are few and far between.

From that semi-wall of a definition (heh) I get to the conclusion that you like the ad-hoc polymorphism power of JS and Python, and don't like the strongly typed implementations of things like Haskell and PureScript/TypeScript, because you lose a lot of the flexibility of being able to provide safe defaults and alternative logic that you get with property/field checks on the direct object or proto chain (Object.hasOwnProperty('key') versus obj[key], or instanceof).

I quite like this article:

Instead of arguing about untyped vs typed, a non-existent distinction, we should accept that all programs have invariants that must be obeyed, i.e., all programs are typed. The argument we must have is about the pragmatics of types and type checking.

    const match = (p, ...c) => {
      const cases = new Map(c)
      return x => cases.get(p(x))(x)
    }
    const validate = match(
      x => x.foo.bar > 5,
      [false, x => "number must be more than five!"],
      [true, x => "ok"])

    type Match = <P extends string, X extends {[p in P]}, R>(
      x: X,
      p: P,
      f: {[_ in X[P]]: (data: X & {[p in P]: _}) => R }) => R
    declare const match: Match
    declare const x: { foo: "a", bar: 42 } | { foo: "b", baz: 42 }
    match(x, "foo", { a: x => x.bar, b: x => x.baz })

@ben-eb "Types are invaluable to developing my programs, but your 'typed' language prevents me from writing down my types!"

So much this.

    const match = (p : Function, ...c : Array<[any, Function]>) : Function => {
      const cases : Map<any, Function> = new Map(c);
      return x => cases.get(p(x))(x);
    }

With any everywhere, the type system is pointless. Moreover, if I'm using noImplicitAny, I'm going to keep having to introduce and eliminate any in various places; any is in all the FP TS code I have seen, so what is the point?

    const match = (p : Function,
                   ...c : Array<[Boolean | string, (x : any) => string]>) : Function => {
      const cases : Map<Boolean | string, (x) => string> = new Map(c);
      const ret : Function = (x) : string => cases.get(p(x))(x);
      return ret;
    }
    const max = match(
      a => !a.length ? "0" : a.length === 1 ? "1" : "n",
      ["0", () => fail("No max of empty list")],
      ["1", ([x]) => x],
      ["n", ([x, ...y]) => { const m = max(y); return x > m ? x : m; }])

That use of match is very eloquent for a rough example.

    const match = (p : Function,
                   ...c : Array<[Boolean | string, (x : any) => string]>) : Function => {
      const cases = new Map(c);
      return (x) => cases.get(p(x))(x);
    }

Why Boolean in that type signature, given that this function isn't specifically for matching on booleans (otherwise I'd just use a ternary operator)?

    const match = (p, ...c) => {
      const cases = new Map(c)
      return x => cases.get(p(x))(x)
    }

"Write id_string :: String -> String and id_int :: Int -> Int and so on" is a valid workaround, but above a certain barrier of abstraction, the number of workarounds starts overwhelming the actual logic you're trying to write.

@skatcat31 I'm not sure whether this is caused by how I'm trying to explain it or if it's personal bias. I'm probably misinterpreting what you're saying, since my TypeScript reading skills have atrophied somewhat. Could you put what you're trying to say a different way?

You can eliminate any by a conscious and defined developer choice.

match accepts a function from some value a to a union of a row of types. It additionally accepts a heterogeneous list of pairs, the first elements of which correspond to the aforementioned row of types. The second element of each pair is a function that carries a to some result type. Finally, it returns a function from a to a union of the result types of the second elements of the heterogeneous list.

To tame match (in other words, express a type signature for it in such a way that the compiler corrects the programmer, and not the other way around), we would need a type system with several features that TypeScript doesn't have.

    const match = <T, V>(p : Function, ...c : Array<[T, Function]>) : Function => {
      const cases = new Map(c);
      return (x : V) : T => cases.get(p(x))(x);
    }

Still with any-s. We would need p to produce a union of the types occurring in the first elements of the pair list.
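The dispatch idea behind the chat's `match` helper (a table keyed on the result of a classifying predicate) translates naturally to other dynamic languages. Here is a rough Python analogue, including a port of the recursive `max` example; names like `list_shape` and `max_of` are my own, not from the chat:

```python
# Dispatch on the result of a predicate via a dict of cases, like the
# chat's `match` built on a Map.

def match(p, *cases):
    table = dict(cases)
    return lambda x: table[p(x)](x)

def fail(msg):
    raise ValueError(msg)

# Classify a list by its shape, as in the chat's `max` example.
def list_shape(a):
    return "0" if not a else ("1" if len(a) == 1 else "n")

list_max = match(
    list_shape,
    ("0", lambda a: fail("No max of empty list")),
    ("1", lambda a: a[0]),
    ("n", lambda a: max_of(a)),
)

def max_of(a):
    # head/tail recursion: compare head against the max of the tail
    x, y = a[0], a[1:]
    m = list_max(y)
    return x if x > m else m

print(list_max([3, 1, 4, 1, 5]))  # 5
```

As in the TypeScript discussion, nothing here stops you from passing a case table whose keys don't cover the predicate's range; that is exactly the invariant a stronger type system would let you state.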
https://gitter.im/ramda/ramda/archives/2018/12/14?at=5c143263b4c74555cccbfbc9
Linear regression

Linear regression is used to predict an outcome given some input value(s). While machine learning classifiers use features to predict a discrete label for a given instance or example, machine learning regressors have the ability to use features to predict a continuous outcome for a given instance or example. For example, a classifier might draw a decision boundary that can tell you whether or not a house is likely to sell at a given price (when provided with features of the house), but a regressor can use those same features to predict the market value of the house. Nonetheless, regression is still a supervised learning technique, and you'll still need to train your model on a set of examples with known outcomes.

The basic premise behind linear regression is to provide a model which can observe linear trends in the data; you've probably encountered linear regression at some point in the past as finding the "line of best fit".

The model

Our machine learning model is represented by the equation of a line.

\[ h_\theta(x) = \theta_0 + \theta_1 x \]

In this case, we define our model as $h_\theta(x)$, where $h$ represents our predicted outcome as a function of our features, $x$. Typically, the model of a line is represented by $y = mx + b$. We use $h$ instead of $y$ to denote that this model is a hypothesis of the true trend. In a sense, we're attempting to provide our "best guess" for finding a relationship in the data provided. $\theta_0$ and $\theta_1$ are coefficients of our linear equation (analogous to $b$ and $m$ in the standard line equation) and will be considered parameters of the model.

So far, our model has only been capable of capturing one independent variable and mapping its relationship to one dependent variable. We'll now extend our linear model to allow for $n$ different independent variables (in our case, features) to predict one dependent variable (an outcome).
Thus, for each input $x$ we'll have a vector of values, each with a corresponding coefficient, $\theta$. Note: $x_0 = 1$ by convention. This allows us to do matrix operations with $x$ and $\theta$ without deviating from our defined linear model. Extending this model to account for multiple inputs, we can write it as

\[ h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + ... + \theta_n x_n \]

or more succinctly as

\[ h_\theta(x) = \theta^T x \]

when dealing with one example, or

\[ h_\theta(x) = X\theta \]

when dealing with $i$ examples. Note: this matrix, $X$, is also referred to as the design matrix and is usually capitalized by convention to denote that it is a matrix as opposed to a vector.

If that doesn't quite make sense, let's take a step back and walk through the representation. For a univariate model, we could write the model in matrix form as:

\[ h_\theta(x) = \begin{bmatrix} 1 & x \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \theta_0 + \theta_1 x \]

Go ahead and do the matrix multiplication to convince yourself. Now that you're convinced, let's extend this to a multivariate model with $j$ features and $i$ examples. This is pretty useful because we can go ahead and calculate the prediction, $h_\theta(x_i)$, for every example in our data set using one matrix operation.

The cost function

In order to find the optimal line to represent our data, we need to develop a cost function which favors model parameters that lead to a better fit. This cost function will approximate the error found in our model, which we can then use to optimize our parameters, $\theta_0$ and $\theta_1$, by minimizing the cost function. For linear regression, we'll use the mean squared error (the average squared difference between our guess, $h_\theta(x_i)$, and the true value, $y_i$) to measure performance.
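The vectorized form $h_\theta(x) = X\theta$ above can be sketched in NumPy with a few illustrative values (the data here is made up, not from the post):

```python
import numpy as np

# Three examples with one feature each.
X_raw = np.array([[2.0], [4.0], [6.0]])

# Prepend the conventional x_0 = 1 bias column to form the design matrix.
X = np.hstack([np.ones((X_raw.shape[0], 1)), X_raw])

theta = np.array([1.0, 0.5])  # theta_0 = 1, theta_1 = 0.5

# One matrix operation predicts for every example at once.
predictions = X @ theta
print(predictions)  # [2. 3. 4.]
```

Each row of `X @ theta` is exactly $\theta_0 \cdot 1 + \theta_1 x_1$, matching the scalar form of the model.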
\[ \min \frac{1}{2m} \sum_{i=1}^m \left( h_\theta(x_i) - y_i \right)^2 \]

Substituting in our definition of the model, $h_\theta(x_i)$, we'll update our cost function as:

\[ J(\theta) = \frac{1}{2m} \sum_{i=1}^m \left( \left( \sum_{j=0}^n \theta_j x_{i,j} \right) - y_i \right)^2 \]

Finding the best parameters

There's a number of ways to find the optimal model parameters. A few common approaches are detailed below.

Gradient descent

We use gradient descent to perform the parameter optimization in accordance with our cost function. Remember, the cost function essentially scores how well our model is fitting the present data. A smaller cost function means less error, which means we have a better model. We first start with an initial guess for what our parameter values should be. Gradient descent will then guide us through an iterative process of updating our parameter weights in search of an optimal solution.

Let's first look at the simplest example, where we fix $\theta_0 = 0$ and find the optimal slope value. Using gradient descent, the weights will continue to update until we've converged on the optimal value. For linear regression, the cost function is always convex with only one optimum, so we can be confident that our solution is the best possible one. This is not always the case, and we'll need to be wary of converging on local optima when using other cost functions. As the picture depicts, gradient descent is an iterative process which guides the parameter values towards the global minimum of our cost function.

\[ \theta_j := \theta_j - \eta \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) \]

Gradient descent will update your parameter value by subtracting from the current value the slope of the cost function at that point, multiplied by a learning rate, which basically dictates how far to step.
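The cost function $J(\theta)$ defined above can be sketched directly in NumPy. The data below is illustrative, not from the post:

```python
import numpy as np

def cost(theta, X, y):
    """Mean-squared-error cost J(theta) = (1/2m) * sum((X @ theta - y)^2)."""
    m = len(y)
    residuals = X @ theta - y
    return (residuals @ residuals) / (2 * m)

# Design matrix with the x_0 = 1 bias column, and targets from y = 2x.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

print(cost(np.array([0.0, 2.0]), X, y))  # perfect fit: 0.0
print(cost(np.array([0.0, 0.0]), X, y))  # all-zero guess: larger cost
```

A better-fitting $\theta$ yields a smaller $J(\theta)$, which is exactly the signal gradient descent uses.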
The learning rate, $\eta$, is always positive. Notice in the picture above how the first attempted value, $\theta_1 = 0$, has a negative slope on the cost function. Thus, we would update the current value by subtracting a negative number (in other words, addition) multiplied by the learning rate. In fact, every point left of the global minimum has a negative slope and would thus result in increasing our parameter weight. On the flip side, every value to the right of the global minimum has a positive slope and would thus result in decreasing our parameter weight until convergence.

One neat thing about this process is that gradient descent will automatically slow down as we near the optimum. See how the initial iterations take relatively large steps in comparison to the last steps? This is because our slope begins to level off as we approach the optimum, resulting in smaller update steps. Because of this, there is no need to adjust $\eta$ during the optimization.

We can extend this idea to update both $\theta_0$ and $\theta_1$ simultaneously. In this case, our 2D curve becomes a 3D contour plot. While the gradient descent algorithm remains largely the same,

\[ \theta_j := \theta_j - \eta \frac{\partial}{\partial \theta_j} J(\theta) \]

we now define the differential, using the chain rule, as

\[ \frac{\partial}{\partial \theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^m \left( \left( \sum_{j=0}^n \theta_j x_{i,j} \right) - y_i \right) x_{i,j} \]

A few notes and practical tips on this process:

- Scaling your features (as mentioned in Preparing data for a machine learning model) to be on a similar scale can greatly improve the time it takes for your gradient descent algorithm to converge.
- It's important that your learning rate is reasonable. If it is too large, it may overstep the global optimum and never converge; if it's too small, it will take a very long time to converge.
- Plot $J\left( \theta \right)$ as a function of the number of iterations of gradient descent you have run. If gradient descent is working properly, it will continue to decrease steadily until leveling off, hopefully close to zero. You can also use this plot to visually determine when your cost function has converged and gradient descent may be stopped.
- If $J\left( \theta \right)$ begins to increase as the number of iterations increases, something has gone wrong in your optimization. Typically, this is a sign that your learning rate is too large.

Gradient descent is often the preferred method of parameter optimization for models which have a large number (think over 10,000) of features.

Normal equation

Remember how we used matrices to represent our model? Using the normal equation for linear regression, you can solve for the optimal parameters analytically in one step.

\[ \theta = \left( X^T X \right)^{-1} X^T y \]

You can find a good derivation for this equation here. The benefit of using the normal equation to solve for our model parameters is that feature scaling is not necessary, and it can be more effective than gradient descent for small feature sets.

A few notes and practical tips on this process:

- This method will occasionally give you problems when dealing with noninvertible or degenerate matrices. If this is the case, it is likely that your feature set has redundant (linearly dependent) features, or simply too many features with respect to the number of training examples given.

Implementation

Linear regression is very simple to implement in sklearn. It can accommodate both univariate and multivariate models and will automatically handle one-hot encoded categorical data to avoid the dummy variable trap.
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(features_train, labels_train)
predictions = model.predict(features_test)

import matplotlib.pyplot as plt

plt.scatter(features, labels, color='red')
plt.plot(features_test, predictions, color='blue')
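To tie the two optimization approaches together, here is a sketch (on made-up toy data) that recovers the same parameters with both gradient descent and the normal equation:

```python
import numpy as np

# Toy design matrix (bias column x_0 = 1 prepended) and targets from y = 1 + 2x.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([3.0, 5.0, 7.0])

def gradient_descent(X, y, eta=0.1, iterations=2000):
    """Batch gradient descent using the update rule derived above."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        gradient = (X.T @ (X @ theta - y)) / m  # dJ/dtheta
        theta -= eta * gradient
    return theta

# Normal equation: theta = (X^T X)^{-1} X^T y.
# np.linalg.solve avoids forming an explicit inverse.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)
theta_gd = gradient_descent(X, y)

print(theta_ne)  # [1. 2.]
print(theta_gd)  # approximately [1. 2.]
```

If $X^T X$ is degenerate (as the notes above warn), `np.linalg.pinv(X) @ y` is a common fallback that still returns a least-squares solution.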
https://www.jeremyjordan.me/linear-regression/
Core module of the django orm extensions package: a collection of third-party plugins built into one unified package.

Project description

NOTE: this package is deprecated, because Django after version 1.6 supports BinaryField natively.

Binary field and other useful tools for the PostgreSQL bytea field type. Simple example:

    from django.db import models
    from djorm_pgbytea.fields import ByteaField, LargeObjectField

    class ByteaModel(models.Model):
        data = ByteaField()

    class LargeObjectModel(models.Model):
        lobj = LargeObjectField(default=None, null=True)

How to install?

Install the stable version using pip by running:

    pip install djorm-ext-pgbytea
https://pypi.org/project/djorm-ext-pgbytea/
json-variables

Preprocessor for JSON to allow keys referencing keys.

Install

    npm i json-variables

Consume via a CommonJS require() or as an ES Module.

Idea - updated for v.7 - full rewrite

This library allows JSON keys to reference other keys. It is aimed at JSON files which are used as a means to store the data part, separate from the template code.

- <=v6.x was resolving variables from the root to the branch tip. This was apparently a bad idea, and v7.x fixes that. Now resolving is done from the tips down to the root.
- <=v6.x had data stores, but it referenced only the same level as the variable AND the root level when checking for values while resolving. This was not good. v7.x now looks at every single level upward, from the current one to the root, plus data stores on each level. This, combined with tips-to-root resolving, means that now, for the first time, you have true freedom to cross-reference the variables any way you like (as long as there are no loops in the resolved variable chain). Previously, on <=v6.x, the scope of second-level variable references was lost, and since resolving started from the root, it instantly received a variable with a lost scope (for data store lookup, for example). Not any more.
- Using <=v6.x in production, I also found out how unhelpful the error messages were (not to mention that in 90% of the error cases, errors were not real, only resolving algorithm shortcomings). I went the extra mile in this rewrite to provide not only the path of the variables being resolved, but also the piece of the whole source object.
- In v7.x, I also switched strictly to object-path notation.
- Additionally, v7.x uses the latest tools I created since coding the original core of json-variables. For example, variable extraction is now done using a separate library, string-find-heads-tails.
I know, these architectural mistakes look like no-brainers now, but trust me, they were not so apparent when the original json-variables idea was conceived. Also, I didn't anticipate this amount of variable cross-referencing happening in real production, which was beyond anything that unit tests could imitate.

API

jsonVariables(inputObj, [options])

Returns a plain object with all variables resolved.

inputObj

Type: object - a plain object. Usually, a parsed JSON file.

options

Type: object - an optional options object. (PS. Nice accidental rhyming.)

Defaults:

- heads: '%%_'
- tails: '_%%'
- headsNoWrap: '%%-'
- tailsNoWrap: '-%%'
- lookForDataContainers: true
- dataContainerIdentifierTails: '_data'
- wrapHeadsWith: ''
- wrapTailsWith: ''
- dontWrapVars: []
- preventDoubleWrapping: true
- wrapGlobalFlipSwitch: true
- noSingleMarkers: false
- resolveToBoolIfAnyValuesContainBool: true
- resolveToFalseIfAnyValuesContainBool: true
- throwWhenNonStringInsertedInString: false

Use examples

If you don't care how to mark the variables, use my notation: %%_ to mark the beginning of a variable (further called heads) and _%% to mark the ending (further called tails). Check this:

    const jv = require("json-variables");
    var res = jv({
      a: 'some text %%_var1_%% more text %%_var2_%%',
      b: 'something',
      var1: 'value1',
      var2: 'value2'
    });
    console.log(res);
    // ==> {
    //   a: 'some text value1 more text value2',
    //   b: 'something',
    //   var1: 'value1',
    //   var2: 'value2'
    // }

You can declare your own way to mark variables: your own heads and tails. For example, { and }:

    // => {
    //   a: 'some text value1 more text value2',
    //   b: 'something',
    //   var1: 'value1',
    //   var2: 'value2'
    // }

You can also wrap all resolved variables with strings, a new pair of heads and tails, using opts.wrapHeadsWith and opts.wrapTailsWith. For example, bake some Java: wrap your variables with ${ and }:

    // => {
    //   a: 'some text ${value1} more text ${value2}',
    //   b: 'something',
    //   var1: 'value1',
    //   var2: 'value2'
    // }

If variables reference keys whose values reference other keys, that's fine. Just ensure there's no closed loop. Otherwise, the renderer will throw an error.

    // THROWS because "e" loops back to "b", forming an infinite loop.

This one's OK:

    // => {
    //   a: 'zzz',
    //   b: 'zzz',
    //   c: 'zzz',
    //   d: 'zzz',
    //   e: 'zzz'
    // }

Variables can also reference deeper levels within objects and arrays; just put a dot, like variable.key.subkey:

    // => {
    //   a: 'some text %%=value1=%% more text %%=value2=%%',
    //   b: 'something',
    //   var1: {key1: 'value1'},
    //   var2: {key2: 'value2'}
    // }

Data containers

Data-wise, if you looked at it from a higher level, it might appear clunky to put values in as separate keys, like in the examples above. Saving you time scrolling up, check this out:

    a: 'some text %%_var1_%% more text %%_var2_%%'
    b: 'something'
    var1: 'value1'
    var2: 'value2'

Does this look like a clean data arrangement? Hell no. It's convoluted and nasty. The keys var1 and var2 are not of the same status as a and b, therefore they can't be mashed together at the same level, can they? What if we placed all of key a's variables within a separate key, a_data? It starts with the same letter, so it will end up near a after sorting. Observe:

    a: 'some text %%_var1_%% more text %%_var2_%%'
    a_data:
      var1: 'value1'
      var2: 'value2'
    b: 'something'

That's better, isn't it? I think so too. To set this up, you can rely on my default way of naming data keys (appending _data), or you can customise how data keys are named using opts.dataContainerIdentifierTails. On the other hand, you can also turn off this function completely via opts.lookForDataContainers and force all values to be keys at the same level as the current variable's key.

    // => {
    //   a: 'some text value1 more text 333333.',
    //   b: 'something',
    //   a_data: {
    //     var1: 'value1',
    //     var3: '333333'
    //   }
    // }

Data container keys can also contain objects or arrays.
Just query the whole path:

    // => {
    //   a: 'some text value1 more text 333333.',
    //   b: 'something',
    //   a_data: {
    //     var1: {key1: {key2: {key3: 'value1'}}},
    //     var3: '333333'
    //   }
    // }

Ignores with wildcards

You can ignore the wrapping on any keys by supplying their name patterns in the options object's dontWrapVars value. It can be an array or a string, and it can also contain wildcards:

    // => {
    //   a: 'val', <<< didn't get wrapped
    //   b: 'val', <<< also didn't get wrapped
    //   c: 'val'
    // }

Wrapping

Challenge: how do you wrap one instance of a variable, but not another, when both are in the same string?

Solution: alternative heads and tails, which are always non-wrapping: opts.headsNoWrap and opts.tailsNoWrap. The default values are %%- and -%%. You can customise them to anything you want. For example, when processed with options { wrapHeadsWith: '{{ ', wrapTailsWith: ' }}' }, it will be:

In practice:

Wrapping of the variables is an essential feature when working with data structures that need to be adapted for both back-end and front-end. For the development (preview) build you might want "John" as a first name, but for the back-end build, you might want {{ user.firstName }}. The following example shows how to "bake" HTML sprinkled with Nunjucks notation (or any member of the Jinja-like family of templating languages that use double curly braces):

HTML template:

    {{ hero_title_wrapper }}

JSON for the DEV build (a preview build to check how everything looks):

In the above, hero_title_wrapper basically redirects to hero_title, which pulls "John" as a first name. The alternative title's text is used when first_name is missing.

The JSON for the PROD version is minimal, only overwriting what's different/new (to keep it DRY):

We'll process the merged object of the DEV and PROD JSON contents using { wrapHeadsWith: '{{ ', wrapTailsWith: ' }}' }, which instructs the library to wrap any resolved variables with {{ and }}.
In the end, our baked HTML template, ready to be put on the back-end, will look like:

    {{ user.firstName }}, check out our seasonal offers!

So far so good, but what happens if we want to add a check: does first_name exist? Again, in the Nunjucks templating language, it would be something like:

content JSON for the PROD build:

with the intention to bake the following HTML:

HTML template:

    {% if user.firstName %}Hi {{ user.firstName }}, check out our seasonal offers!{% else %}Hi, check out our seasonal offers!{% endif %}

Now notice that in the example above, the first first_name does not need to be wrapped with {{ and }}, because it's already in a Nunjucks statement, but the second one does need to be wrapped. You solve this by using non-wrapping heads and tails. Keeping the default values of opts.wrapHeadsWith and opts.wrapTailsWith, it would look like:

content JSON for the PROD build:

Notice %%-first_name-%% above. The non-wrapping heads and tails instruct the program to skip wrapping, no matter what.

Mixing Booleans and strings

Very often in email templating, the inactive modules are marked with Boolean false. When modules have content, they are marked with strings. There are cases when you want to resolve the whole variable to a Boolean if, upon resolving, you end up with a mix of strings and Booleans.

When opts.resolveToBoolIfAnyValuesContainBool is set to true (the default), it will always resolve to the value of the first encountered Boolean value. When set to false, it will resolve Booleans to empty strings.

When opts.resolveToFalseIfAnyValuesContainBool and opts.resolveToBoolIfAnyValuesContainBool are both set to true (both defaults), every mix of string(s) and Boolean(s) will resolve to Boolean false. If opts.resolveToBoolIfAnyValuesContainBool is set to false, but opts.resolveToFalseIfAnyValuesContainBool is true, the mixes of strings and Booleans will resolve to the value of the first encountered Boolean variable.
Observe:

    // => {
    //   a: false, // <<< because opts.resolveToFalseIfAnyValuesContainBool defaults to true
    //   b: true
    // }

    // => {
    //   a: true, // <<< because we have a mix of string and Boolean, and the first encountered Boolean value is `true`
    //   b: true
    // }

Contributing

If you want a new feature in this package, or would like us to change some of its functionality, raise an issue on this repo. If you tried to use this library but it misbehaves, or you need advice setting it up and its readme doesn't make sense, just document it and raise an issue on this repo. If you would like to add or change some features, just fork it, hack away, and file a pull request. We'll do our best to merge it quickly. Prettier is enabled, so you don't need to worry about code style.

Licence

MIT License (MIT)
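To make the core heads/tails idea above concrete, here is a minimal, hypothetical resolver sketched in Python. It is not the real library; it only handles flat objects, same-level lookups, and `_data` containers, with no nesting, loop detection, or wrapping:

```python
import re

# Minimal illustrative resolver: replaces %%_name_%% markers with values
# from the same level, or from a sibling "<key>_data" container.
HEADS, TAILS = "%%_", "_%%"
MARKER = re.compile(re.escape(HEADS) + r"(.+?)" + re.escape(TAILS))

def resolve(obj):
    def lookup(key, name):
        # Prefer the key's own data container, then fall back to siblings.
        data = obj.get(key + "_data", {})
        if name in data:
            return data[name]
        return obj[name]

    out = {}
    for key, value in obj.items():
        if isinstance(value, str):
            value = MARKER.sub(lambda m: str(lookup(key, m.group(1))), value)
        out[key] = value
    return out

print(resolve({
    "a": "some text %%_var1_%% more text %%_var2_%%",
    "b": "something",
    "var1": "value1",
    "var2": "value2",
}))
```

Run against the README's first example input, this reproduces the documented output, with `a` resolved to 'some text value1 more text value2' and the other keys untouched.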
https://www.npmjs.com/package/json-variables
import "azul3d.org/audio/wav.v1"

Package wav decodes and encodes wav audio files. The decoder is able to decode all wav audio formats (except extensible WAV formats), with any number of channels. These formats are:

- 8-bit unsigned PCM
- 16-bit signed PCM
- 32-bit signed PCM
- 32-bit floating-point PCM
- 64-bit floating-point PCM
- μ-law
- a-law

The encoder is capable of encoding any audio data, but it currently converts all data to 16-bit signed PCM on the fly before writing to a file. Ultimately this means that regardless of what type of audio data you encode, it ends up as a 16-bit WAV file. Future versions of this package will allow the encoder to output the same types as the decoder. Please refer to the WAV specification for in-depth details about its file format.

Files: decoder.go, doc.go, encoder.go, header.go, structs.go

ErrUnsupported defines an error for decoding wav data that is valid (by the wave specification) but not supported by the decoder in this package. This error only happens for audio files containing extensible wav data.

NewEncoder creates a new WAV encoder, which stores the audio configuration in a WAV header and encodes any audio samples written to it. The contents of the WAV header and the encoded audio samples are written to w. Note: the Close method of the encoder must be called when finished using it.

Package wav imports 8 packages (graph) and is imported by 3 packages. Updated 2016-07-19.
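The encode-then-decode round trip described above (everything stored as 16-bit signed PCM) can be sketched outside Go as well; for illustration, here is the same idea using Python's standard-library wave module on an in-memory buffer:

```python
import io
import struct
import wave

# Round trip: write 16-bit signed PCM samples to an in-memory WAV file,
# then decode them back and check they survive intact.
samples = [0, 1000, -1000, 32767, -32768]

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes per sample = 16-bit PCM
    w.setframerate(44100)
    w.writeframes(struct.pack("<%dh" % len(samples), *samples))

buf.seek(0)
with wave.open(buf, "rb") as r:
    assert r.getsampwidth() == 2   # stored as 16-bit, as with the Go encoder
    decoded = struct.unpack("<%dh" % r.getnframes(),
                            r.readframes(r.getnframes()))

print(list(decoded))  # [0, 1000, -1000, 32767, -32768]
```

Like the Go encoder, the writer must be closed (handled here by the `with` block) so the header's sizes are finalized.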
https://godoc.org/azul3d.org/audio/wav.v1
Found not working imports: QtMultimedia is not installed

    import QtQuick 2.6
    import QtQuick.Controls 1.5
    import QtQuick.Dialogs 1.2
    import QtMultimedia 5.6
    import QtQuick.Layouts 1
    }
    }

I just uninstalled Qt 5.5.1 and installed Qt 5.6 (the above is from a new project created in Qt 5.6). I added QT += multimedia in my .pro file. I use MSVC 2013 (as always when using Qt) on Windows 7. The program builds, and when I click Run it works, although the Application Output shows "failed to access the graph builder" three times. I could ignore that, but I am not able to access Design mode ("Cannot open this QML document because of an error in the QML file"). Could you help me solve it?
https://forum.qt.io/topic/66057/found-not-working-imports-qtmultimedia-is-not-installed
Name | Synopsis | Description | Return Values | Errors | Examples | Attributes | See Also | Notes

    cc -mt [ flag... ] file... [ library... ]
    #include <thread.h>
    #include <signal.h>

    int thr_sigsetmask(int how, const sigset_t *set, sigset_t *oset);

The thr_sigsetmask() function changes or examines a calling thread's signal mask. Each thread has its own signal mask. A new thread inherits the calling thread's signal mask and priority; however, pending signals are not inherited. If the value of set is NULL, the value of how is not significant and the thread's signal mask is unchanged; thus, thr_sigsetmask() can be used to inquire about the currently blocked signals. The value of the argument how specifies the method in which the set is changed and takes one of the following values:

SIG_BLOCK
    set corresponds to a set of signals to block. They are added to the current signal mask.

SIG_UNBLOCK
    set corresponds to a set of signals to unblock. These signals are deleted from the current signal mask.

SIG_SETMASK
    set corresponds to the new signal mask. The current signal mask is replaced by set.

If the value of oset is not NULL, it points to the location where the previous signal mask is stored.

Upon successful completion, the thr_sigsetmask() function returns 0. Otherwise, it returns a non-zero value. The thr_sigsetmask() function will fail if:

EINVAL
    The value of how is not defined and oset is NULL.

The following example shows how to create a default thread that can serve as a signal catcher/handler with its own signal mask. The signal set new will have a different value from the creator's signal mask. As POSIX threads and Solaris threads are fully compatible even within the same process, this example uses pthread_create(3C) if you execute a.out 0, or thr_create(3C) if you execute a.out 1.

In this example:

The sigemptyset(3C) function initializes a null signal set, new. The sigaddset(3C) function packs the signal SIGINT into that new set.

Either pthread_sigmask() or thr_sigsetmask() is used to mask the signal SIGINT (CTRL-C) from the calling thread, which is main().
The signal is masked to guarantee that only the new thread will receive this signal. pthread_create() or thr_create() creates the signal-handling thread. Using pthread_join(3C) or thr_join(3C), main() then waits for the termination of that signal-handling thread, whose ID number is user_threadID. Then main() will sleep(3C) for 2 seconds, after which the program terminates.

The signal-handling thread, handler:

Assigns the handler interrupt() to handle the signal SIGINT by the call to sigaction(2).

Resets its own signal set to not block the signal SIGINT.

Sleeps for 8 seconds to allow time for the user to deliver the signal SIGINT by pressing CTRL-C.

    /* cc thisfile.c -lthread -lpthread */
    #define _REENTRANT    /* basic first 3-lines for threads */
    #include <pthread.h>
    #include <thread.h>

    thread_t user_threadID;
    sigset_t new;
    void *handler(), interrupt();

    int main(int argc, char *argv[]) {
        test_argv(argv[1]);
        sigemptyset(&new);
        sigaddset(&new, SIGINT);
        switch (*argv[1]) {
        case '0':       /* POSIX */
            pthread_sigmask(SIG_BLOCK, &new, NULL);
            pthread_create(&user_threadID, NULL, handler, argv[1]);
            pthread_join(user_threadID, NULL);
            break;
        case '1':       /* Solaris */
            thr_sigsetmask(SIG_BLOCK, &new, NULL);
            thr_create(NULL, 0, handler, argv[1], 0, &user_threadID);
            thr_join(user_threadID, NULL, NULL);
            break;
        }  /* switch */
        printf("thread handler, # %d, has exited\n", user_threadID);
        sleep(2);
        printf("main thread, # %d is done\n", thr_self());
        return (0);
    }  /* end main */

    struct sigaction act;

    void *handler(char *argv1) {
        act.sa_handler = interrupt;
        sigaction(SIGINT, &act, NULL);
        switch (*argv1) {
        case '0':       /* POSIX */
            pthread_sigmask(SIG_UNBLOCK, &new, NULL);
            break;
        case '1':       /* Solaris */
            thr_sigsetmask(SIG_UNBLOCK, &new, NULL);
            break;
        }
        printf("\n Press CTRL-C to deliver SIGINT signal to the process\n");
        sleep(8);       /* give user time to hit CTRL-C */
        return (NULL);
    }

    void interrupt(int sig) {
        printf("thread %d caught signal %d\n", thr_self(), sig);
    }

    void test_argv(char argv1[]) {
        if (argv1 == NULL) {
            printf("use 0 as arg1 to use thr_create( );\n \
    or use 1 as arg1 to use pthread_create( )\n");
            exit(NULL);
        }
    }

In the last example, the handler thread served as a signal handler while also taking care of activity of its own (in this case sleeping, although it could have been some other activity). A thread could be completely dedicated to signal handling simply by waiting for the delivery of a selected signal by blocking with sigwait(2). The two subroutines in the previous example, handler() and interrupt(), could have been replaced with the following routine:

    void *handler(void *ignore) {
        int signal;
        printf("thread %d waiting for you to press the CTRL-C keys\n",
            thr_self());
        sigwait(&new, &signal);
        printf("thread %d has received the signal %d \n", thr_self(), signal);
    }

    /* pthread_create() and thr_create() would use NULL instead of argv[1]
       for the arg passed to handler() */

In this routine, one thread is dedicated to catching and handling the signal specified by the set new, which allows main() and all of its other sub-threads, created after pthread_sigmask() or thr_sigsetmask() masked that signal, to continue uninterrupted. Any use of sigwait(2) should be such that all threads block the signals passed to sigwait(2) at all times. Only the thread that calls sigwait() will get the signals. The call to sigwait(2) takes two arguments. For this type of background dedicated signal-handling routine, a Solaris daemon thread can be used by passing the argument THR_DAEMON to thr_create().

See attributes(5) for descriptions of the following attributes:

sigaction(2), sigprocmask(2), sigwait(2), cond_wait(3C), pthread_cancel(3C), pthread_create(3C), pthread_join(3C), pthread_self(3C), sigaddset(3C), sigemptyset(3C), sigsetops(3C), sleep(3C), attributes(5), cancellation(5), standards(5)

It is not possible to block signals that cannot be caught or ignored (see sigaction(2)).
It is also not possible to block or unblock SIGCANCEL, as SIGCANCEL is reserved for the implementation of POSIX thread cancellation (see pthread_cancel(3C) and cancellation(5)). This restriction is quietly enforced by the standard C library. Using sigwait(2) in a dedicated thread allows asynchronously generated signals to be managed synchronously; however, sigwait(2) should never be used to manage synchronously generated signals. Synchronously generated signals are exceptions that are generated by a thread and are directed at the thread causing the exception. Since sigwait() blocks waiting for signals, the blocking thread cannot receive a synchronously generated signal. Calling the sigprocmask(2) function will be the same as if thr_sigsetmask() or pthread_sigmask() had been called. POSIX leaves the semantics of the call to sigprocmask(2) unspecified in a multithreaded process, so programs that care about POSIX portability should not depend on this behavior. If a signal is delivered while a thread is waiting on a condition variable, the cond_wait(3C) function will be interrupted and the handler will be executed. The state of the lock protecting the condition variable is undefined while the thread is executing the signal handler. Signals that are generated synchronously should not be masked. If such a signal is blocked and delivered, the receiving process is killed.
http://docs.oracle.com/cd/E19082-01/819-2243/thr-sigsetmask-3c/index.html
xcb_key_press_event_t - Man Page a key was pressed/released Synopsis #include <xcb/xproto.h> Event datastructure typedef struct xcb_key_press_event_t { uint8_t response_type; xcb_keycode_t detail; uint16_t sequence; xcb_timestamp_t time; xcb_window_t root; xcb_window_t event; xcb_window_t child; int16_t root_x; int16_t root_y; int16_t event_x; int16_t event_y; uint16_t state; uint8_t same_screen; uint8_t pad0; } xcb_key_press_event_t; Event Fields - response_type The type of this event, in this case XCB_KEY_RELEASE. This field is also present in the xcb_generic_event_t and can be used to tell events apart from each other. - sequence The sequence number of the last request processed by the X11 server. - detail The keycode (a number representing a physical key on the keyboard) of the key which was pressed. - time Time when the event was generated (in milliseconds). - root The root window of child. - event NOT YET DOCUMENTED. - child NOT YET DOCUMENTED. - root_x The X coordinate of the pointer relative to the root window at the time of the event. - root_y The Y coordinate of the pointer relative to the root window at the time of the event. - event_x If same_screen is true, this is the X coordinate relative to the event window's origin. Otherwise, event_x will be set to zero. - event_y If same_screen is true, this is the Y coordinate relative to the event window's origin. Otherwise, event_y will be set to zero. - state The logical state of the pointer buttons and modifier keys just prior to the event. - same_screen Whether the event window is on the same screen as the root window. Description See Also xcb_generic_event_t(3), xcb_grab_key(3), xcb_grab_keyboard(3) Author Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements.
https://www.mankier.com/3/xcb_key_press_event_t
Welcome to the world of Ruby internationalization (Ruby i18n) and localization (Ruby l10n). In my previous article, I’ve explained how to translate Ruby applications with the R18n gem that has a somewhat different approach than the widely-used I18n solution. However, there is yet another technology that you may stick with when doing Ruby localization for your application. Meet GNU GetText and PO files. GetText is a mature and battle-tested solution initially released by Sun Microsystems more than 25 years ago. GetText provides a set of utilities that allow localizing various programs and even operating systems. In this article, you will see how to translate Ruby applications with the help of the fast_gettext gem written by Michael Grosser. The gem boasts its speed and supports multiple backends for storing translations (various types of files and even databases). Today we will discuss the following topics: - What types of files GetText supports and what their specifics are - Creating a sample application - Storing translations in PO files - Performing simple translations - Adding pluralization rules and gender information - Parsing and manipulating PO files - Using YAML files Let’s get started, shall we? Introduction to GetText and fast_gettext So, GetText is a quite complex solution to localize various kinds of programs. We are not going to discuss all ins and outs of GetText in this article, but you may find full documentation online at gnu.org. In this section, we will briefly discuss the idea of this system and the supported file formats. GetText not only provides you the right tools to perform localization but also instructs how the files and directories should be organized and named. Under the hoods, GetText uses two types of files to store translations: .po and .mo. PO means Portable Object and those are the files edited by a human. Translations for given strings are provided inside as well as some metadata and pluralization rules. 
Each PO file is dedicated to a single language and should be stored in a directory named after this language, for example, en, ru, de etc. We will mostly work with the PO files in this article, though later you will learn that the fast_gettext gem also supports YAML files (which is probably a relief for a Ruby developer). MO means Machine Object and those are binary files read by the computer. They are harder to maintain and we are not going to stick with these files. Another thing worth mentioning is that fast_gettext has a concept of a text domain. In a simple case, there is only one domain, usually named after your program. But for more complex projects there may be multiple domains. Your PO files should be named after the text domain, therefore the approximate file structure would be:

- en
  - domain1.po
  - domain2.po
- ru
  - domain1.po
  - domain2.po

We'll see this in action later, but for now, let's create a sample Ruby application that we are going to translate with GetText. This application is going to be very similar to the one created in the previous article.

Sample Application

So, our small project will be called Bank. It will allow instantiating new accounts with a specified balance and information about the owner. Create the following file structure:

- bank
  - bank.rb
  - lib
    - locale
    - account.rb
    - errors.rb
    - locale_settings.rb
- runner.rb

The bank folder is going to contain all the files for the project, whereas runner.rb will be used to actually boot the program. Here are the contents of the bank.rb file:

require_relative 'lib/locale_settings'
require_relative 'lib/errors'
require_relative 'lib/account'

module Bank
end

Nothing fancy, we are just including some files and defining an empty module. This module will be used to namespace our classes. Next, errors.rb:

module Bank
  class WithdrawError < StandardError
  end
end

This error will be raised when the money can't be withdrawn from an account (for example, when there is just not enough money).
Last but not least is account.rb:

module Bank
  class Account
    attr_reader :owner, :balance, :gender

    VALID_GENDER = %w(male female).freeze

    def initialize(owner:, balance: 0, gender: 'male')
      @owner = owner
      @balance = balance
      @gender = check_gender_validity_for gender
    end

    def transfer_to(another_account, amount)
      begin
        withdraw(amount)
        another_account.credit amount
      rescue WithdrawError => e
        puts e
      else
        puts "Money sent!"
      end
    end

    def credit(amount)
      @balance += amount
    end

    def withdraw(amount)
      raise(WithdrawError, 'Not enough money for withdrawal') if balance < amount
      @balance -= amount
    end

    private

    def check_gender_validity_for(gender)
      VALID_GENDER.include?(gender) ? gender : 'male'
    end
  end
end

So, we have three attributes: owner (name or full name), the account's balance (defaulting to 0) and the owner's gender (to properly display some informational messages; this is important for some languages). Note that the initialize method uses keyword arguments, which are only supported in newer versions of Ruby. You may indeed stick to the traditional format. When setting the gender, we check that it has a proper value. This is done inside the check_gender_validity_for private method that employs the VALID_GENDER constant. Also, there are credit and withdraw interface methods to perform money transactions. Note that we do not allow the balance attribute to be modified directly, so that we can check whether, for example, there is enough money on the balance. Lastly, there is a transfer_to method that enables us to transfer money between accounts. This method has a begin/rescue block that checks whether the transaction succeeded. Now you may flesh out the runner.rb file to see the program in action:

require_relative 'bank/bank'

john_account = Bank::Account.new owner: 'John', balance: 20, gender: 'male'
kate_account = Bank::Account.new owner: 'Kate', balance: 15, gender: 'female'

john_account.transfer_to(kate_account, 10)

This is pretty much it, our preparations are done.
Now it is time to move to the next part and add support for multiple languages. I will stick with Russian and English but you may indeed make a different choice. Integrating fast_gettext Start off by installing a new gem on your PC: gem install fast_gettext It has no special requirements so the installation should succeed without any problems. Next, require the corresponding module inside bank.rb: require 'fast_gettext' # ... Now we should load our translations (that will be added later) from the locales directory and set a text domain. Our program is very simple, so having one domain is enough, though fast_gettext does support multiple domains as well. Let’s add some code to the locale_settings.rb file: module Bank class LocaleSettings def initialize FastGettext.add_text_domain('bank', path: File.join(File.dirname(__FILE__), 'locale'), type: :po) end end end The name for the text domain will be bank. We are also specifying a path to our translations and set the file type to PO. Next provide the list of supported locales and set the text domain: # ... FastGettext.text_domain = 'bank' FastGettext.available_locales = %w(ru en) Alternatively, you may set FastGettext.default_text_domain setting to bank. Now let’s list all the available locales and ask the user to choose one: puts "Select locale code:" FastGettext.available_locales.each do |locale| puts locale end change_locale_to gets.strip.downcase change_locale_to is a private method that checks whether the chosen locale is supported or not: # ... private def change_locale_to(locale) locale = 'en' unless FastGettext.available_locales.include?(locale) FastGettext.locale = locale end If the locale is not supported, we revert to English. 
Here is the full code for the locale_settings.rb file:

module Bank
  class LocaleSettings
    def initialize
      FastGettext.add_text_domain('bank',
                                  path: File.join(File.dirname(__FILE__), 'locale'),
                                  type: :po)
      FastGettext.text_domain = 'bank'
      FastGettext.available_locales = %w(ru en)

      puts "Select locale code:"
      FastGettext.available_locales.each do |locale|
        puts locale
      end
      change_locale_to gets.strip.downcase
    end

    private

    def change_locale_to(locale)
      locale = 'en' unless FastGettext.available_locales.include?(locale)
      FastGettext.locale = locale
    end
  end
end

Now just load the settings inside the bank.rb file:

# ...
module Bank
  LocaleSettings.new
end

After translations are loaded, they are cached, so fast_gettext has very good performance (at least ten times faster than I18n::Simple, according to the docs). All right, now that the user is able to select a locale, we need to prepare some translations, so proceed to the next section!

Creating PO Files

So, as you already know, PO means Portable Object. Those files are separated into directories for different locales and named after the text domain. Our text domain is bank and the supported locales are Russian and English, so here is the file structure for the locale directory:

- locale
  - en
    - bank.po
  - ru
    - bank.po

PO files can look somewhat strange, especially if you got used to the YAML format. You may find specifications for these files at the gnu.org website. Every PO file starts with a header entry that contains information about the file, the author, the last revision date and pluralization rules. Let's add the header for the en/bank.po file (the comment and metadata lines here follow the standard GetText header template):

# Bank application
# Copyright (C) 2017
# FIRST AUTHOR <EMAIL@ADDRESS>, 2017.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: \n"

As you see, here we are specifying the version of the file, the author's name, the content type and the encoding. Plural-Forms will be filled with a proper value later. Add the same header to the ru/bank.po file:

# Bank application
# Copyright (C) 2017
# FIRST AUTHOR <EMAIL@ADDRESS>, 2017.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: \n"

Alright, the files are created and we may flesh them out by adding some translations.
Performing Simple Translations So, for starters let’s display a simple message to the user after a new account is instantiated. Add the following files to the en/bank.po file (after the header entry): msgid "New account instantiated!" msgstr "" msgid can be treated as a key for the message, whereas msgstr contains the translation. In this example I’ve left the translation empty—this means that the key will be displayed instead. This is not the case for the Russian language, of course. Tweak the ru/bank.po file: msgid "New account instantiated!" msgstr "Новый счёт создан!" Here, as you see, I am providing translation for the given string. Of course, if you get used to i18n gem and YAML format, you may write your keys in a different way, for example: msgid "account_instantiated" msgstr "New account instantiated!" Now, in order to perform translations, let’s include a new module inside the Account class: module Bank class Account include FastGettext::Translation # ... end end To look up a translation by its key, use a method with a very minimalistic name _: def initialize(owner:, balance: 0, gender: 'male') @owner = owner @balance = balance @gender = check_gender_validity_for gender puts _('New account instantiated!') end Quite simple, eh!? If for some reason you’d like to use a different locale when performing a specific translation, you may wrap it in a with_locale block: FastGettext.with_locale 'ru' do puts _('New account instantiated!') end Great, we’ve just translated our first message! Let’s perform yet another translation. Add the following line to the en/bank.po file: msgid "not enough money for withdrawal" msgstr "[ERROR] This account does not have enough money to withdraw!" As you see, this is our error that is raised when the account does not have enough money. Also, add Russian translation: msgid "not enough money for withdrawal" msgstr "[ОШИБКА] На счету недостаточно средств для снятия!" Now utilize the _ method again: # ... 
def withdraw(amount) raise(WithdrawError, _('not enough money for withdrawal')) if balance < amount @balance -= amount end Using Interpolation We have seen how to perform simple translations with fast_gettext, but the question is how do we add an extra layer of complexity and utilize interpolation in our translations? All in all, it is quite a simple task to achieve. Suppose, we’d like to display information about the user’s account listing its owner and balance. Interpolation in PO files is performed by using a construct like text %{interpolation} more text. So, the interpolated values should be wrapped with the %{}. Tweak the en/bank.po file: msgid "account owner info" msgstr "Account owner: %{owner} (%{gender}). Balance: $%{balance}." Do the same for the ru/bank.po: msgid "account owner info" msgstr "Владелец счёта: %{owner} (%{gender}). Текущий баланс: $%{balance}." Interpolated values are provided in a pretty odd-looking way: # account.rb # ... def info _('account owner info') % { owner: owner, gender: gender, balance: balance } end This % method uses the provided hash and interpolates the given values. Note that the keys inside the hash must be named after the placeholder inside the PO file. Now you may see this method in action by adding the following line to the runner.rb: # ... puts john_account.info Using Gender Information So far so good: our project is nearly translated. Now suppose we would like to display a more detailed information inside the transfer_to method when the transaction succeeds. For instance, I’d like to say who transferred money to whom and what was the amount. We could stick with only interpolation as it was done in the previous section, but unfortunately, that’s not enough for the Russian language (and for a handful of other languages). The thing is in Russian some words are written differently for different genders, like “перевёл” (“transferred”) for a male, but “перевела” (again, “transferred” in English) for a female. 
Luckily, there is a way to overcome this problem in PO files by using a scope. Add the following lines to the ru/bank.po file: msgid "male|transferred" msgstr "%{sender} перевёл %{recipient} $%{amount}" msgid "female|transferred" msgstr "%{sender} перевела %{recipient} $%{amount}" Note now I prefix the keys with a male and female scope and provide different translations. Of course, the scope can be used in many other cases, not only to provide gender information. For English the messages will be absolutely identical in both cases: msgid "male|transferred" msgstr "%{sender} sent %{recipient} $%{amount}" msgid "female|transferred" msgstr "%{sender} sent %{recipient} $%{amount}" Now in order to work with the scope, use the s_ method (yeah, all those methods have some seriously short names): def transfer_to(another_account, amount) begin withdraw(amount) another_account.credit amount rescue WithdrawError => e puts e else puts s_("#{gender}|transferred") % { sender: owner, recipient: another_account.owner, amount: amount } end end This is it! Pluralization Rules Another painful I18n topic is pluralization. Some languages (like English) have simple pluralization rules, whereas others (like Russian or Polish) have much complex rules and therefore need more translations for various cases. Suppose we’d like to just say how many dollars is on the balance of a given account. For English, that’ll be either “1 dollar” or “5 dollars”. For Russian, however, we have three possible cases: “1 доллар”, “2 доллара”, “10 долларов”. To take care of these scenarios, you need to properly set Plural-Forms in the header of each PO file (luckily, the following page lists pluralization rules for all the languages). For English everything is quite simple: "Plural-Forms: nplurals=2; plural=(n != 1);\n" For Russian the formula is somewhat complex: "Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 
1 : 2);\n" Unfortunately, when things come to adding translations for pluralized string, it becomes a bit messy. You need to provide not only msgid, but also msgid_plural and, of course, translations for each possible case. Firstly, modify the en/bank.po: msgid "dollar" msgid_plural "dollars" msgstr[0] "%{amount} dollar" msgstr[1] "%{amount} dollars" Now ru/bank.po: msgid "dollar" msgid_plural "dollars" msgstr[0] "%{amount} доллар" msgstr[1] "%{amount} доллара" msgstr[2] "%{amount} долларов" Now use yet another short-named method n_ while providing an interpolated value: # account.rb # ... def balance_info n_('dollar', 'dollars', balance) % { amount: balance} end The first argument passed to the n_ method is the singular form, then plural form and then the count. To see this method in action, add yet another line to the runner.rb: puts john_account.balance_info Parsing PO Files If you have large PO files that need to be parsed (for example, to understand how many messages are left untranslated), you may stick with a simple gem called POParser. Install it on your PC: gem install PoParser Next, require it, open a file and parse it: require 'poparser' content = File.read('example.po') po = PoParser.parse(content) The po variable will now contain an instance of the PoParser::Po class: <PoParser::Po, Translated: 68.1% Untranslated: 20.4% Fuzzy: 11.5%> You are able to grab all the entries from the PO file all get only the untranslated ones: po.entries po.untranslated It is even possible to directly add new entries to a PO file by creating a proper hash and using the add method: new_entry = { translator_comment: 'comment', reference: 'reference comment', msgid: 'untranslated', msgstr: 'translated string' } po.add(new_entry) After you are done editing the file, save it: po.save_file All in all, POParser is a pretty convenient tool and you may learn more about it by referring to the official documentation. 
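Before moving on, it can help to see which msgstr index the Russian Plural-Forms formula from the Pluralization Rules section actually selects for a given count. Here it is transcribed into plain Ruby (an illustrative sketch of my own; fast_gettext itself evaluates the formula straight from the PO header):

```ruby
# Returns the msgstr index (0, 1 or 2) chosen by the Russian plural rule:
#   plural = (n%10==1 && n%100!=11 ? 0 :
#             n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)
def russian_plural_index(n)
  if n % 10 == 1 && n % 100 != 11
    0 # 1, 21, 31, ...   -> "доллар"
  elsif (2..4).cover?(n % 10) && !(10...20).cover?(n % 100)
    1 # 2-4, 22-24, ...  -> "доллара"
  else
    2 # 0, 5-20, 25-30, ... -> "долларов"
  end
end

[1, 2, 5, 11, 21, 104].each do |n|
  puts "#{n} -> msgstr[#{russian_plural_index(n)}]"
end
```

Note how 11 falls into the third form even though it ends in 1: the n % 100 != 11 guard exists precisely for the 11-14 "teen" exceptions.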
Working With YAML Files As I already mentioned before, fast_gettext also supports YAML files that can be more convenient for some developers. In order to start using YAML files instead of PO, simply change the :type setting passed to the add_text_domain method: # locale_settings.rb module Bank class LocaleSettings def initialize FastGettext.add_text_domain('bank', path: File.join(File.dirname(__FILE__), 'locale'), type: :yaml) end end end Note that the YAML files do not need to be separated into folders. They also have a somewhat different format. For example, locale/en.yml: en: pluralisation_rule: '(n != 1)' dollar: one: '%{amount} dollar' other: '%{amount} dollars' locale/ru.yml: ru: pluralisation_rule: '(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2)' dollar: one: '%{amount} доллар' other: '%{amount} доллара' plural2: '%{amount} долларов' Of course, when sticking to YAML files you will probably want to name your keys in a snake case, so these lines: msgid "not enough money for withdrawal" msgstr "[ERROR] This account does not have enough money to withdraw!" will turn to something like: not_enough_money_error: "[ERROR] This account does not have enough money to withdraw!" Therefore, do not forget to tweak your calls to _, n_ and s_ methods accordingly. Stick with Phrase! Writing code to localize your application is one task, but working with translations is a totally different story. Having many translations for multiple languages may quickly overwhelm you which will lead to the user’s confusion. But Phrase can make your life as a developer easier! Grab your 14-day trial now. Phrase supports many different languages, including Ruby, and formats, including YAML and PO. In this article, we have seen how to translate Ruby applications with the fast_gettext gem. You have learned what GetText is, what PO files are and what their format is. 
We’ve also discussed how to perform translation with fast_gettext, how to add pluralization rules and gender information. Lastly, we have talked about POParser that may simplify working with PO files. Note that fast_gettext has even more features. For instance, you may use database to store your translations and there is also a plugin for Ruby on Rails framework. You may also find more usage examples of the gem by browsing test cases on GitHub. If you have any other questions left, don’t hesitate to post a comment or drop me a line. As always, I thank you for staying with me and until the next time!
https://phrase.com/blog/posts/i18n-translate-ruby-applications-with-gettext-gem/
In the last chapter, you learned how to catalog peers and resources with a discovery web service. In this chapter, we'll develop a sophisticated file-sharing application that uses the discovery service. The file-sharing client is a lengthy example, and it will take the entire chapter to dissect the code. This complexity is a result of the multiple roles that a peer-to-peer application must play. A peer-to-peer client needs to periodically submit registration information to a central web service, serve files to hordes of eager peers, and retrieve more files from a different set of peers on the network—potentially all at once. The only way to handle these issues is with careful, disciplined threading code. The FileSwapper application is built around a single form (see Figure 9-1). This form uses multiple tables and allows users to initiate searches, configure settings, and monitor uploads and downloads. FileSwapper divides its functionality into a small army of classes, including the following: SwapperClient, which is the main form class. It delegates as much work as possible to other classes and uses a timer to periodically update its login information with the discovery service. Global, which includes the data that's required application-wide (for example, registry settings). App, which includes shared methods for some of the core application tasks such as Login(), Logout(), and PublishFiles(), and also provides access to the various application threads. KeywordUtil and MP3Util, which provide a few shared helper methods for analyzing MP3 files and parsing the keywords that describe them. RegistrySettings, which provides access to the application's configuration settings, along with methods for saving and loading them. ListViewItemWrapper, which performs thread-safe updating of a ListViewItem. Search, which contacts the discovery service with a search request on a separate thread (allowing long-running searches to be easily interrupted). 
FileServer and FileUpload, which manage the incoming connections and transfer shared files to interested peers. FileDownloadQueue and FileDownloadClient, which manage in-progress downloads from other peers. Messages, which defines constants used for peer-to-peer communication. The file-transfer process is fairly easy. Once a peer locates another peer that has an interesting file, it opens a direct TCP/IP connection and sends a download request. Conceptually, this code is quite similar to some of the examples shown in Chapter 7. However, the application is still fairly complex because it needs to handle several tasks that require multithreading at once. Because every peer acts as both a client and a server, every application needs to simultaneously monitor for new incoming connections that are requesting files. In addition, the application must potentially initiate new outgoing connections to download other files. Not only does the client need to perform uploading and downloading at the same time, but it also needs to be able to perform multiple uploads or downloads at once (within reason). In order to accommodate this design, a separate thread needs to work continuously to schedule new uploads or downloads as required. Figure 9-2 shows a simplified view of threads in the FileSwapper application. Note that for the most part, independent threads run code in separate objects to prevent confusion. However, this isn't a requirement, and a single object could be executed on multiple threads or a single thread could run the code from multiple objects. The full FileSwapper application can be downloaded with the code for this chapter. In this chapter, we'll walk through all the threading and networking code, but omit more trivial details such as namespace imports and the automatically generated Windows designer code. 
We'll begin by examining some of the building blocks such as the classes used to register the peer, to read configuration information, and to process MP3 files. Next, you'll look at the code for searching available peers. Finally, you'll see the multithreaded code for handling simultaneous uploads and downloads over the network. The FileSwapper requires a web reference to the discovery service in order to work. To add this, right-click the project name in the Solution Explorer, and choose Add Web Reference. Type the full path to the virtual directory and web service .asmx file in the Address field of the Add Web Reference window. When you press Enter, the list of web-service methods from the Internet Explorer test page will appear, as shown in Figure 9-3. Click Add Reference to generate the proxy class and add it to your project. The proxy class should not be manually modified once it's created, and so it isn't shown in the Solution Explorer. However, you can examine it by choosing Project Show All Files from the Visual Studio .NET window. The proxy class is always named Reference.vb, as shown in Figure 9-4. We won't consider the proxy class code in this book, although it makes interesting study if you'd like to understand a little more about how web services convert .NET objects into SOAP messages and back. If you need to change a web service, you must recompile it and then update the client's web reference. To do so, right-click the web-service reference in the Solution Explorer, and choose Update Web Reference. In the remainder of this chapter, we'll walk through the FileSwapper code class-by-class, and discuss the key design decisions.
http://www.yaldex.com/vb-net-tutorial/LiB0054.html
BlackBerryProfileCreateDataFlag #include <bb/platform/identity/BlackBerryProfileCreateDataFlag> To link against this class, add the following line to your .pro file: LIBS += -lbbplatform The flags for profile data creation. Multiple flags can be combined using bitwise 'OR' unless stated otherwise. Used by the createData() flags parameter. Public Types - Default 0x00000000 Default creation flag. No options specified, and the creation will follow the default behavior, where no caching and no extra encryption are performed for the new entry. - EncryptDeviceToDevice 0x00000001 Use device-to-device encryption with dynamic keys, where user interaction is not required. Additional encryption is performed on the data before that data is stored remotely. Data is encrypted with dynamically generated keys shared between devices using the same BlackBerry ID user. Only devices with the same user will have the keys to decrypt this data. The keys are shared between devices and not included in backups or transferred as part of a device swap, so if a user has only one device and it is lost, the keys are not recoverable, and any remote data stored with this encryption will be non-recoverable. Performing a "security wipe" will retain the keys, and the stored data is recoverable if the same user logs back in to the device. If the user has multiple data-enabled devices, the devices with the same BlackBerry ID user will exchange the keys securely so that all of them can store and retrieve the data stored with this encryption. Operations will return NotReady while the encryption keys are exchanged; the app can repeat the request after a short wait to avoid failures during this one-time key exchange window.
https://developer.blackberry.com/native/reference/cascades/bb__platform__identity__blackberryprofilecreatedataflag.html
Math With Java Integer Scalar Types

Check out this romp through math using Java integer scalar types. We'll explore underflow and overflow in our look at Java's integer arithmetic. I'm pretty sure that most developers do not think about how to do math correctly with scalar types in Java. Experienced developers may say, "We do not use scalar types for business math; for that we use java.lang.BigDecimal." And it is true: in the Java world, all business math uses the java.lang.BigDecimal or java.lang.BigInteger classes. But this article is not about business math. Almost all Java developers know that the integer operators do not indicate overflow or underflow in any way (JLS 4.2.2). For instance, suppose we have to calculate half of the sum of variables a and b. Let's say that the variables have type int and that a % 2 == 0 and b % 2 == 0. Below we have the method doJob. public int doJob(int a, int b) { ... } For this job (calculating half of the sum of a and b), almost all developers will write code like this: public int doJob(int a, int b) { return (a + b)/2; } They know about JLS 4.2.2, but someone may say, "In my situation there is no possible way for a and b that the result of the doJob method will overflow or underflow." But when it somehow happens, there will be overflow or underflow. How can I rewrite the doJob method to avoid this? The answer is that we must take care with the expression itself. For any expression, think about how to write it in a way that minimizes or even avoids overflow and underflow. We can rewrite the method: public int doJob(int a, int b) { return a/2 + b/2; } This implementation avoids the overflow described in JLS 4.2.2. (Note that the rewrite is exact only because a and b are both even; for odd values, a/2 + b/2 discards the two half-remainders.) But what if we have expressions like these? - a + b - a * b - etc. 
In Java 8, the class java.lang.Math gained additional methods for checking overflow and underflow: public static int addExact(int x, int y) public static long addExact(long x, long y) public static int subtractExact(int x, int y) public static long subtractExact(long x, long y) public static int multiplyExact(int x, int y) public static long multiplyExact(long x, long y) public static int incrementExact(int a) public static long incrementExact(long a) public static int decrementExact(int a) public static long decrementExact(long a) // etc. All these methods may throw ArithmeticException. Now we do not need to check for overflow or underflow ourselves; we can use a very helpful method like in this example: public int doJobSum(int a, int b) { return Math.addExact(a, b); } That's all: if the result overflows the int type, addExact will throw ArithmeticException. If you check the java.lang.Math class, you will find many more helpful methods added in Java 8.
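To see both behaviors side by side, here is a small self-contained class (the class name HalfSum and the sample values are illustrative only):

```java
public class HalfSum {
    // Naive version: (a + b) can wrap around before the division.
    static int naiveHalf(int a, int b) { return (a + b) / 2; }

    // Rewritten version: each operand is halved first (exact when both are even).
    static int safeHalf(int a, int b) { return a / 2 + b / 2; }

    public static void main(String[] args) {
        int a = 2_000_000_000, b = 2_000_000_000; // both even; sum exceeds Integer.MAX_VALUE
        System.out.println(naiveHalf(a, b));      // -147483648: the sum silently wrapped
        System.out.println(safeHalf(a, b));       // 2000000000: correct
        try {
            Math.addExact(a, b);                  // Java 8: detects the overflow
        } catch (ArithmeticException e) {
            System.out.println("integer overflow detected");
        }
    }
}
```

The naive version fails silently, the rewritten version is correct for even inputs, and addExact turns the silent wrap into an exception.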
https://dzone.com/articles/math-with-java-scalar-types
Chapter 12 Contents: Registering the Driver The Header File blk.h Handling Requests: A Simple Introduction Handling Requests: The Detailed View How Mounting and Unmounting Works The ioctl Method Removable Devices Partitionable Devices Interrupt-Driven Block Drivers Backward Compatibility Quick Reference

Our discussion thus far has been limited to char drivers. As we have already mentioned, however, char drivers are not the only type of driver used in Linux systems. Here we turn our attention to block drivers. Block drivers provide access to block-oriented devices -- those that transfer data in randomly accessible, fixed-size blocks. The classic block device is a disk drive, though others exist as well.

The char driver interface is relatively clean and easy to use; the block interface, unfortunately, is a little messier. Kernel developers like to complain about it. There are two reasons for this state of affairs. The first is simple history -- the block interface has been at the core of every version of Linux since the first, and it has proved hard to change. The other reason is performance. A slow char driver is an undesirable thing, but a slow block driver is a drag on the entire system. As a result, the design of the block interface has often been influenced by the need for speed.

The block driver interface has evolved significantly over time. As with the rest of the book, we cover the 2.4 interface in this chapter, with a discussion of the changes at the end. The example driver used in this chapter, sbull, implements a set of simple RAM disks; we also present a variant called spull as a way of showing how to deal with partition tables. As always, these example drivers gloss over many of the issues found in real block drivers; their purpose is to demonstrate the interface that such drivers must work with. Real drivers will have to deal with hardware, so the material covered in Chapter 8, "Hardware Management" and Chapter 9, "Interrupt Handling" will be useful as well. 
One quick note on terminology: the word block as used in this book refers to a block of data as determined by the kernel. The size of blocks can be different in different disks, though they are always a power of two. A sector is a fixed-size unit of data as determined by the underlying hardware. Sectors are almost always 512 bytes long.

Registering the Driver

Like char drivers, block drivers in the kernel are identified by major numbers. Block major numbers are entirely distinct from char major numbers, however. A block device with major number 32 can coexist with a char device using the same major number since the two ranges are separate. The functions for registering and unregistering block devices look similar to those for char devices:

#include <linux/fs.h>
int register_blkdev(unsigned int major, const char *name, struct block_device_operations *bdops);
int unregister_blkdev(unsigned int major, const char *name);

The arguments have the same general meaning as for char devices, and major numbers can be assigned dynamically in the same way. So the sbull device registers itself in almost exactly the same way as scull did:

result = register_blkdev(sbull_major, "sbull", &sbull_bdops);
if (result < 0) {
    printk(KERN_WARNING "sbull: can't get major %d\n", sbull_major);
    return result;
}
if (sbull_major == 0)
    sbull_major = result; /* dynamic */
major = sbull_major;     /* Use `major' later on to save typing */

The similarity stops here, however. One difference is already evident: register_chrdev took a pointer to a file_operations structure, but register_blkdev uses a structure of type block_device_operations instead -- as it has since kernel version 2.3.38. The structure is still sometimes referred to by the name fops in block drivers; we'll call it bdops to be more faithful to what the structure is and to follow the suggested naming. 
The definition of this structure is as follows:

struct block_device_operations {
    int (*open) (struct inode *inode, struct file *filp);
    int (*release) (struct inode *inode, struct file *filp);
    int (*ioctl) (struct inode *inode, struct file *filp, unsigned command, unsigned long argument);
    int (*check_media_change) (kdev_t dev);
    int (*revalidate) (kdev_t dev);
};

The open, release, and ioctl methods listed here are exactly the same as their char device counterparts. The other two methods are specific to block devices and are discussed later in this chapter. Note that there is no owner field in this structure; block drivers must still maintain their usage count manually, even in the 2.4 kernel. The bdops structure used in sbull is as follows:

struct block_device_operations sbull_bdops = {
    open:               sbull_open,
    release:            sbull_release,
    ioctl:              sbull_ioctl,
    check_media_change: sbull_check_change,
    revalidate:         sbull_revalidate,
};

Note that there are no read or write operations provided in the block_device_operations structure. All I/O to block devices is normally buffered by the system (the only exception is with "raw'' devices, which we cover in the next chapter); user processes do not perform direct I/O to these devices. User-mode access to block devices usually is implicit in the filesystem operations they perform, and those operations clearly benefit from I/O buffering. However, even "direct'' I/O to a block device, such as when a filesystem is created, goes through the Linux buffer cache.[47] As a result, the kernel provides a single set of read and write functions for block devices, and drivers do not need to worry about them.

[47] Actually, the 2.3 development series added the raw I/O capability, allowing user processes to write to block devices without involving the buffer cache. Block drivers, however, are entirely unaware of raw I/O, so we defer the discussion of that facility to the next chapter. 
Clearly, a block driver must eventually provide some mechanism for actually doing block I/O to a device. In Linux, the method used for these I/O operations is called request; it is the equivalent of the "strategy'' function found on many Unix systems. The request method handles both read and write operations and can be somewhat complex. We will get into the details of request shortly.

For the purposes of block device registration, however, we must tell the kernel where our request method is. This method is not kept in the block_device_operations structure, for both historical and performance reasons; instead, it is associated with the queue of pending I/O operations for the device. By default, there is one such queue for each major number. A block driver must initialize that queue with blk_init_queue. Queue initialization and cleanup is defined as follows:

#include <linux/blkdev.h>
blk_init_queue(request_queue_t *queue, request_fn_proc *request);
blk_cleanup_queue(request_queue_t *queue);

The init function sets up the queue, and associates the driver's request function (passed as the second parameter) with the queue. It is necessary to call blk_cleanup_queue at module cleanup time. The sbull driver initializes its queue with this line of code:

blk_init_queue(BLK_DEFAULT_QUEUE(major), sbull_request);

Each device has a request queue that it uses by default; the macro BLK_DEFAULT_QUEUE(major) is used to indicate that queue when needed. This macro looks into a global array of blk_dev_struct structures called blk_dev, which is maintained by the kernel and indexed by major number. The structure looks like this:

struct blk_dev_struct {
    request_queue_t request_queue;
    queue_proc *queue;
    void *data;
};

The request_queue member contains the I/O request queue that we have just initialized. We will look at the queue member shortly. The data field may be used by the driver for its own data -- but few drivers do so. 
Figure 12-1 visualizes the main steps a driver module performs to register with the kernel proper and deregister. If you compare this figure with Figure 2-1, similarities and differences should be clear. Figure 12-1. Registering a Block Device Driver In addition to blk_dev, several other global arrays hold information about block drivers. These arrays are indexed by the major number, and sometimes also the minor number. They are declared and described in drivers/block/ll_rw_block.c. - int blk_size[][]; - This array is indexed by the major and minor numbers. It describes the size of each device, in kilobytes. If blk_size[major] is NULL, no checking is performed on the size of the device (i.e., the kernel might request data transfers past end-of-device). - int blksize_size[][]; - The size of the block used by each device, in bytes. Like the previous one, this bidimensional array is indexed by both major and minor numbers. If blksize_size[major] is a null pointer, a block size of BLOCK_SIZE (currently 1 KB) is assumed. The block size for the device must be a power of two, because the kernel uses bit-shift operators to convert offsets to block numbers. - int hardsect_size[][]; - Like the others, this data structure is indexed by the major and minor numbers. The default value for the hardware sector size is 512 bytes. With the 2.2 and 2.4 kernels, different sector sizes are supported, but they must always be a power of two greater than or equal to 512 bytes. - int read_ahead[]; - int max_readahead[][]; - These arrays define the number of sectors to be read in advance by the kernel when a file is being read sequentially. read_ahead applies to all devices of a given type and is indexed by major number; max_readahead applies to individual devices and is indexed by both the major and minor numbers. Reading data before a process asks for it helps system performance and overall throughput. 
A slower device should specify a bigger read-ahead value, while fast devices will be happy even with a smaller value. The bigger the read-ahead value, the more memory the buffer cache uses. The primary difference between the two arrays is this: read_ahead is applied at the block I/O level and controls how many blocks may be read sequentially from the disk ahead of the current request. max_readahead works at the filesystem level and refers to blocks in the file, which may not be sequential on disk. Kernel development is moving toward doing read ahead at the filesystem level, rather than at the block I/O level. In the 2.4 kernel, however, read ahead is still done at both levels, so both of these arrays are used. There is one read_ahead[] value for each major number, and it applies to all its minor numbers. max_readahead, instead, has a value for every device. The values can be changed via the driver's ioctl method; hard-disk drivers usually set read_ahead to 8 sectors, which corresponds to 4 KB. The max_readahead value, on the other hand, is rarely set by the drivers; it defaults to MAX_READAHEAD, currently 31 pages. - int max_sectors[][]; - This array limits the maximum size of a single request. It should normally be set to the largest transfer that your hardware can handle. - int max_segments[]; - This array controlled the number of individual segments that could appear in a clustered request; it was removed just before the release of the 2.4 kernel, however. (See "Section 12.4.2, "Clustered Requests"" later in this chapter for information on clustered requests). The sbull device allows you to set these values at load time, and they apply to all the minor numbers of the sample driver. The variable names and their default values in sbull are as follows: - size=2048 (kilobytes) - Each RAM disk created by sbull takes two megabytes of RAM. - blksize=1024 (bytes) - The software "block'' used by the module is one kilobyte, like the system default. 
- hardsect=512 (bytes) - The sbull sector size is the usual half-kilobyte value.
- rahead=2 (sectors) - Because the RAM disk is a fast device, the default read-ahead value is small.

The sbull device also allows you to choose the number of devices to install. devs, the number of devices, defaults to 2, resulting in a default memory usage of four megabytes -- two disks at two megabytes each. The initialization of these arrays in sbull is done as follows:

read_ahead[major] = sbull_rahead;
result = -ENOMEM; /* for the possible errors */

sbull_sizes = kmalloc(sbull_devs * sizeof(int), GFP_KERNEL);
if (!sbull_sizes)
    goto fail_malloc;
for (i=0; i < sbull_devs; i++) /* all the same size */
    sbull_sizes[i] = sbull_size;
blk_size[major] = sbull_sizes;

sbull_blksizes = kmalloc(sbull_devs * sizeof(int), GFP_KERNEL);
if (!sbull_blksizes)
    goto fail_malloc;
for (i=0; i < sbull_devs; i++) /* all the same blocksize */
    sbull_blksizes[i] = sbull_blksize;
blksize_size[major] = sbull_blksizes;

sbull_hardsects = kmalloc(sbull_devs * sizeof(int), GFP_KERNEL);
if (!sbull_hardsects)
    goto fail_malloc;
for (i=0; i < sbull_devs; i++) /* all the same hardsect */
    sbull_hardsects[i] = sbull_hardsect;
hardsect_size[major] = sbull_hardsects;

For brevity, the error handling code (the target of the fail_malloc goto) has been omitted; it simply frees anything that was successfully allocated, unregisters the device, and returns a failure status. One last thing that must be done is to register every "disk'' device provided by the driver. sbull calls the necessary function (register_disk) as follows:

for (i = 0; i < sbull_devs; i++)
    register_disk(NULL, MKDEV(major, i), 1, &sbull_bdops, sbull_size << 1);

In the 2.4.0 kernel, register_disk does nothing when invoked in this manner. The real purpose of register_disk is to set up the partition table, which is not supported by sbull. 
All block drivers, however, make this call whether or not they support partitions, indicating that it may become necessary for all block devices in the future. A block driver without partitions will work without this call in 2.4.0, but it is safer to include it. We revisit register_disk in detail later in this chapter, when we cover partitions. The cleanup function used by sbull looks like this:

for (i=0; i<sbull_devs; i++)
    fsync_dev(MKDEV(sbull_major, i)); /* flush the devices */
unregister_blkdev(major, "sbull");

/*
 * Fix up the request queue(s)
 */
blk_cleanup_queue(BLK_DEFAULT_QUEUE(major));

/* Clean up the global arrays */
read_ahead[major] = 0;
kfree(blk_size[major]);
blk_size[major] = NULL;
kfree(blksize_size[major]);
blksize_size[major] = NULL;
kfree(hardsect_size[major]);
hardsect_size[major] = NULL;

Here, the call to fsync_dev is needed to free all references to the device that the kernel keeps in various caches. fsync_dev is the implementation of block_fsync, which is the fsync "method'' for block devices.

The Header File blk.h

All block drivers should include the header file <linux/blk.h>. This file defines much of the common code that is used in block drivers, and it provides functions for dealing with the I/O request queue. Actually, the blk.h header is quite unusual, because it defines several symbols based on the symbol MAJOR_NR, which must be declared by the driver before it includes the header. This convention was developed in the early days of Linux, when all block devices had preassigned major numbers and modular block drivers were not supported. If you look at blk.h, you'll see that several device-dependent symbols are declared according to the value of MAJOR_NR, which is expected to be known in advance. However, if the major number is dynamically assigned, the driver has no way to know its assigned number at compile time and cannot correctly define MAJOR_NR. 
If MAJOR_NR is undefined, blk.h can't set up some of the macros used with the request queue. Fortunately, MAJOR_NR can be defined as an integer variable and all will work fine for add-on block drivers. blk.h makes use of some other predefined, driver-specific symbols as well. The following list describes the symbols in <linux/blk.h> that must be defined in advance; at the end of the list, the code used in sbull is shown.
- MAJOR_NR - This symbol is used to access a few arrays, in particular blk_dev and blksize_size. A custom driver like sbull, which is unable to assign a constant value to the symbol, should #define it to the variable holding the major number. For sbull, this is sbull_major.
- DEVICE_NAME - The name of the device being created. This string is used in printing error messages.
- DEVICE_NR(kdev_t device) - This symbol is used to extract the ordinal number of the physical device from the kdev_t device number. This symbol is used in turn to declare CURRENT_DEV, which can be used within the request function to determine which hardware device owns the minor number involved in a transfer request. The value of this macro can be MINOR(device) or another expression, according to the convention used to assign minor numbers to devices and partitions. The macro should return the same device number for all partitions on the same physical device -- that is, DEVICE_NR represents the disk number, not the partition number. Partitionable devices are introduced later in this chapter.
- DEVICE_INTR - This symbol is used to declare a pointer variable that refers to the current bottom-half handler. The macros SET_INTR(intr) and CLEAR_INTR are used to assign the variable. Using multiple handlers is convenient when the device can issue interrupts with different meanings. 
- DEVICE_ON(kdev_t device)
- DEVICE_OFF(kdev_t device) - These macros are intended to help devices that need to perform processing before or after a set of transfers is performed; for example, they could be used by a floppy driver to start the drive motor before I/O and to stop it afterward. Modern drivers no longer use these macros, and DEVICE_ON does not even get called anymore. Portable drivers, though, should define them (as empty symbols), or compilation errors will result on 2.0 and 2.2 kernels.
- DEVICE_NO_RANDOM - By default, the function end_request contributes to system entropy (the amount of collected "randomness''), which is used by /dev/random. If the device isn't able to contribute significant entropy to the random device, DEVICE_NO_RANDOM should be defined. /dev/random was introduced in "Section 9.3, "Installing an Interrupt Handler"" in Chapter 9, "Interrupt Handling", where SA_SAMPLE_RANDOM was explained.
- DEVICE_REQUEST - Used to specify the name of the request function used by the driver. The only effect of defining DEVICE_REQUEST is to cause a forward declaration of the request function to be done; it is a holdover from older times, and most (or all) drivers can leave it out.

The sbull driver declares the symbols in the following way:

#define MAJOR_NR sbull_major /* force definitions on in blk.h */
static int sbull_major;      /* must be declared before including blk.h */

#define DEVICE_NR(device) MINOR(device) /* has no partition bits */
#define DEVICE_NAME "sbull"             /* name for messaging */
#define DEVICE_INTR sbull_intrptr       /* pointer to bottom half */
#define DEVICE_NO_RANDOM                /* no entropy to contribute */
#define DEVICE_REQUEST sbull_request
#define DEVICE_OFF(d)                   /* do-nothing */

#include <linux/blk.h>
#include "sbull.h" /* local definitions */

The blk.h header uses the macros just listed to define some additional macros usable by the driver. We'll describe those macros in the following sections. 
Handling Requests: A Simple Introduction

The most important function in a block driver is the request function, which performs the low-level operations related to reading and writing data. This section discusses the basic design of the request procedure.

The Request Queue

When the kernel schedules a data transfer, it queues the request in a list, ordered in such a way that it maximizes system performance. The queue of requests is then passed to the driver's request function, which has the following prototype:

void request_fn(request_queue_t *queue);

The request function should perform the following tasks for each request in the queue:
- Check the validity of the request. This test is performed by the macro INIT_REQUEST, defined in blk.h; the test consists of looking for problems that could indicate a bug in the system's request queue handling.
- Perform the actual data transfer. The CURRENT variable (a macro, actually) can be used to retrieve the details of the current request. CURRENT is a pointer to struct request, whose fields are described in the next section.
- Clean up the request just processed. This operation is performed by end_request, a static function whose code resides in blk.h. end_request handles the management of the request queue and wakes up processes waiting on the I/O operation. It also manages the CURRENT variable, ensuring that it points to the next unsatisfied request. The driver passes the function a single argument, which is 1 in case of success and 0 in case of failure. When end_request is called with an argument of 0, an "I/O error'' message is delivered to the system logs (via printk).
- Loop back to the beginning, to consume the next request.

Based on the previous description, a minimal request function, which does not actually transfer any data, would look like this:

void sbull_request(request_queue_t *q)
{
    while(1) {
        INIT_REQUEST;
        printk("<1>request %p: cmd %i sec %li (nr. %li)\n", CURRENT,
               CURRENT->cmd, CURRENT->sector, CURRENT->current_nr_sectors);
        end_request(1); /* success */
    }
}

Although this code does nothing but print messages, running this function provides good insight into the basic design of data transfer. It also demonstrates a couple of features of the macros defined in <linux/blk.h>. The first is that, although the while loop looks like it will never terminate, the fact is that the INIT_REQUEST macro performs a return when the request queue is empty. The loop thus iterates over the queue of outstanding requests and then returns from the request function. Second, the CURRENT macro always describes the request to be processed. We get into the details of CURRENT in the next section.

A block driver using the request function just shown will actually work -- for a short while. It is possible to make a filesystem on the device and access it for as long as the data remains in the system's buffer cache. This empty (but verbose) function can still be run in sbull by defining the symbol SBULL_EMPTY_REQUEST at compile time. If you want to understand how the kernel handles different block sizes, you can experiment with blksize= on the insmod command line. The empty request function shows the internal workings of the kernel by printing the details of each request.

The request function has one very important constraint: it must be atomic. request is not usually called in direct response to user requests, and it is not running in the context of any particular process. It can be called at interrupt time, from tasklets, or from any number of other places. Thus, it must not sleep while carrying out its tasks.

Performing the Actual Data Transfer

To understand how to build a working request function for sbull, let's look at how the kernel describes a request within a struct request. The structure is defined in <linux/blkdev.h>. 
By accessing the fields in the request structure, usually by way of CURRENT, the driver can retrieve all the information needed to transfer data between the buffer cache and the physical block device.[48] CURRENT is just a pointer into blk_dev[MAJOR_NR].request_queue. The following fields of a request hold information that is useful to the request function: [48]Actually, not all blocks passed to a block driver need be in the buffer cache, but that's a topic beyond the scope of this chapter. - kdev_t rq_dev; - The device accessed by the request. By default, the same request function is used for every device managed by the driver. A single request function deals with all the minor numbers; rq_dev can be used to extract the minor device being acted upon. The CURRENT_DEV macro is simply defined as DEVICE_NR(CURRENT->rq_dev). - int cmd; - This field describes the operation to be performed; it is either READ (from the device) or WRITE (to the device). - unsigned long sector; - The number of the first sector to be transferred in this request. - unsigned long current_nr_sectors; - unsigned long nr_sectors; - The number of sectors to transfer for the current request. The driver should refer to current_nr_sectors and ignore nr_sectors (which is listed here just for completeness). See "Section 12.4.2, "Clustered Requests"" later in this chapter for more detail on nr_sectors. - char *buffer; - The area in the buffer cache to which data should be written (cmd==READ) or from which data should be read (cmd==WRITE). - struct buffer_head *bh; - The structure describing the first buffer in the list for this request. Buffer heads are used in the management of the buffer cache; we'll look at them in detail shortly in "Section 12.4.1.1, "The request structure and the buffer cache"." There are other fields in the structure, but they are primarily meant for internal use in the kernel; the driver is not expected to use them. 
The implementation for the working request function in the sbull device is shown here. In the following code, the Sbull_Dev serves the same function as Scull_Dev, introduced in "Section 3.6, "scull's Memory Usage"" in Chapter 3, "Char Drivers".

void sbull_request(request_queue_t *q)
{
    Sbull_Dev *device;
    int status;

    while(1) {
        INIT_REQUEST; /* returns when queue is empty */

        /* Which "device" are we using? */
        device = sbull_locate_device (CURRENT);
        if (device == NULL) {
            end_request(0);
            continue;
        }

        /* Perform the transfer and clean up. */
        spin_lock(&device->lock);
        status = sbull_transfer(device, CURRENT);
        spin_unlock(&device->lock);
        end_request(status);
    }
}

This code looks little different from the empty version shown earlier; it concerns itself with request queue management and pushes off the real work to other functions. The first, sbull_locate_device, looks at the device number in the request and finds the right Sbull_Dev structure:

static Sbull_Dev *sbull_locate_device(const struct request *req)
{
    int devno;
    Sbull_Dev *device;

    /* Check if the minor number is in range */
    devno = DEVICE_NR(req->rq_dev);
    if (devno >= sbull_devs) {
        static int count = 0;
        if (count++ < 5) /* print the message at most five times */
            printk(KERN_WARNING "sbull: request for unknown device\n");
        return NULL;
    }
    device = sbull_devices + devno; /* Pick it out of device array */
    return device;
}

The only "strange'' feature of the function is the conditional statement that limits it to reporting five errors. This is intended to avoid clobbering the system logs with too many messages, since end_request(0) already prints an "I/O error'' message when the request fails. The static counter is a standard way to limit message reporting and is used several times in the kernel. 
The actual I/O of the request is handled by sbull_transfer:

static int sbull_transfer(Sbull_Dev *device, const struct request *req)
{
    int size;
    u8 *ptr;

    ptr = device->data + req->sector * sbull_hardsect;
    size = req->current_nr_sectors * sbull_hardsect;

    /* Make sure that the transfer fits within the device. */
    if (ptr + size > device->data + sbull_blksize*sbull_size) {
        static int count = 0;
        if (count++ < 5)
            printk(KERN_WARNING "sbull: request past end of device\n");
        return 0;
    }

    /* Looks good, do the transfer. */
    switch(req->cmd) {
        case READ:
            memcpy(req->buffer, ptr, size); /* from sbull to buffer */
            return 1;
        case WRITE:
            memcpy(ptr, req->buffer, size); /* from buffer to sbull */
            return 1;
        default:
            /* can't happen */
            return 0;
    }
}

Since sbull is just a RAM disk, its "data transfer'' reduces to a memcpy call.

Handling Requests: The Detailed View

The sbull driver as described earlier works very well. In simple situations (as with sbull), the macros from <linux/blk.h> can be used to easily set up a request function and get a working driver. As has already been mentioned, however, block drivers are often a performance-critical part of the kernel. Drivers based on the simple code shown earlier will likely not perform very well in many situations, and can also be a drag on the system as a whole. In this section we get into the details of how the I/O request queue works with an eye toward writing a faster, more efficient driver.

The I/O Request Queue

Each block driver works with at least one I/O request queue. This queue contains, at any given time, all of the I/O operations that the kernel would like to see done on the driver's devices. The management of this queue is complicated; the performance of the system depends on how it is done.

The queue is designed with physical disk drives in mind. With disks, the amount of time required to transfer a block of data is typically quite small. 
The amount of time required to position the head (seek) to do that transfer, however, can be very large. Thus the Linux kernel works to minimize the number and extent of the seeks performed by the device. Two things are done to achieve those goals. One is the clustering of requests to adjacent sectors on the disk. Most modern filesystems will attempt to lay out files in consecutive sectors; as a result, requests to adjoining parts of the disk are common. The kernel also applies an "elevator'' algorithm to the requests. An elevator in a skyscraper is either going up or down; it will continue to move in those directions until all of its "requests'' (people wanting on or off) have been satisfied. In the same way, the kernel tries to keep the disk head moving in the same direction for as long as possible; this approach tends to minimize seek times while ensuring that all requests get satisfied eventually.

A Linux I/O request queue is represented by a structure of type request_queue, declared in <linux/blkdev.h>. The request_queue structure looks somewhat like file_operations and other such objects, in that it contains pointers to a number of functions that operate on the queue -- for example, the driver's request function is stored there. There is also a queue head (using the functions from <linux/list.h> described in "Section 10.5, "Linked Lists"" in Chapter 10, "Judicious Use of Data Types"), which points to the list of outstanding requests to the device. These requests are, of course, of type struct request; we have already looked at some of the fields in this structure. The reality of the request structure is a little more complicated, however; understanding it requires a brief digression into the structure of the Linux buffer cache.

The request structure and the buffer cache

The design of the request structure is driven by the Linux memory management scheme. 
Like most Unix-like systems, Linux maintains a buffer cache, a region of memory that is used to hold copies of blocks stored on disk. A great many "disk" operations performed at higher levels of the kernel -- such as in the filesystem code -- act only on the buffer cache and do not generate any actual I/O operations. Through aggressive caching the kernel can avoid many read operations altogether, and multiple writes can often be merged into a single physical write to disk. One unavoidable aspect of the buffer cache, however, is that blocks that are adjacent on disk are almost certainly not adjacent in memory. The buffer cache is a dynamic thing, and blocks end up being scattered widely. In order to keep track of everything, the kernel manages the buffer cache through buffer_head structures. One buffer_head is associated with each data buffer. This structure contains a great many fields, most of which do not concern a driver writer. There are a few that are important, however, including the following: - char *b_data; - The actual data block associated with this buffer head. - unsigned long b_size; - The size of the block pointed to by b_data. - kdev_t b_rdev; - The device holding the block represented by this buffer head. - unsigned long b_rsector; - The sector number where this block lives on disk. - struct buffer_head *b_reqnext; - A pointer to a linked list of buffer head structures in the request queue. - void (*b_end_io)(struct buffer_head *bh, int uptodate); - A pointer to a function to be called when I/O on this buffer completes. bh is the buffer head itself, and uptodate is nonzero if the I/O was successful. Every block passed to a driver's request function either lives in the buffer cache, or, on rare occasion, lives elsewhere but has been made to look as if it lived in the buffer cache.[49] As a result, every request passed to the driver deals with one or more buffer_head structures. 
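Since every request carries a chain of buffers linked through b_reqnext, a driver frequently needs to walk that chain. The following user-space sketch does so with a toy structure that mirrors only the fields listed above (the struct name and function are invented for illustration, not kernel API):

```c
#include <stddef.h>

/* Minimal stand-in for the buffer_head fields named in the text */
struct toy_buffer_head {
    char *b_data;                     /* the data block itself */
    unsigned long b_size;             /* size of that block */
    unsigned long b_rsector;          /* sector on disk */
    struct toy_buffer_head *b_reqnext; /* next buffer in the request */
};

/* Total number of bytes covered by a request's buffer chain */
unsigned long request_bytes(struct toy_buffer_head *bh)
{
    unsigned long total = 0;
    for (; bh != NULL; bh = bh->b_reqnext)
        total += bh->b_size;
    return total;
}
```

The same traversal pattern (follow b_reqnext until NULL) is what a clustering-aware request function uses on the real buffer_head list.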
The request structure contains a member (called simply bh) that points to a linked list of these structures; satisfying the request requires performing the indicated I/O operation on each buffer in the list. Figure 12-2 shows how the request queue and buffer_head structures fit together.

[49]The RAM-disk driver, for example, makes its memory look as if it were in the buffer cache. Since the "disk" buffer is already in system RAM, there's no need to keep a copy in the buffer cache. Our sample code is thus much less efficient than a properly implemented RAM disk, not being concerned with RAM-disk-specific performance issues.

Figure 12-2. Buffers in the I/O Request Queue

Requests are not made of random lists of buffers; instead, all of the buffer heads attached to a single request will belong to a series of adjacent blocks on the disk. Thus a request is, in a sense, a single operation referring to a (perhaps long) group of blocks on the disk. This grouping of blocks is called clustering, and we will look at it in detail after completing our discussion of how the request list works.

Request queue manipulation

The header <linux/blkdev.h> defines a small number of functions that manipulate the request queue, most of which are implemented as preprocessor macros. Not all drivers will need to work with the queue at this level, but a familiarity with how it all works can be helpful. Most request queue functions will be introduced as we need them, but a few are worth mentioning here.

- struct request *blkdev_entry_next_request(struct list_head *head);
- Returns the next entry in the request list. Usually the head argument is the queue_head member of the request_queue structure; in this case the function returns the first entry in the queue. The function uses the list_entry macro to look in the list.
- struct request *blkdev_next_request(struct request *req);
- struct request *blkdev_prev_request(struct request *req);
- Given a request structure, return the next or previous structure in the request queue.
- blkdev_dequeue_request(struct request *req);
- Removes a request from its request queue.
- blkdev_release_request(struct request *req);
- Releases a request structure back to the kernel when it has been completely executed. Each request queue maintains its own free list of request structures (two, actually: one for reads and one for writes); this function places a structure back on the proper free list. blkdev_release_request will also wake up any processes that are waiting on a free request structure.

All of these functions require that the io_request_lock be held, which we will discuss next.

The I/O request lock

The I/O request queue is a complex data structure that is accessed in many places in the kernel. It is entirely possible that the kernel needs to add more requests to the queue at the same time that your driver is taking requests off. The queue is thus subject to the usual sort of race conditions, and must be protected accordingly. In Linux 2.2 and 2.4, all request queues are protected with a single global spinlock called io_request_lock. Any code that manipulates a request queue must hold that lock and disable interrupts, with one small exception: the very first entry in the request queue is (by default) considered to be owned by the driver. Failure to acquire the io_request_lock prior to working with the request queue can cause the queue to be corrupted, with a system crash following shortly thereafter. The simple request function shown earlier did not need to worry about this lock because the kernel always calls the request function with the io_request_lock held. A driver is thus protected against corrupting the request queue; it is also protected against reentrant calls to the request function.
This scheme was designed to enable drivers that are not SMP aware to function on multiprocessor systems. Note, however, that the io_request_lock is an expensive resource to hold. As long as your driver holds this lock, no other requests may be queued to any block driver in the system, and no other request functions may be called. A driver that holds this lock for a long time may well slow down the system as a whole. Thus, well-written block drivers often drop this lock as soon as possible. We will see an example of how this can be done shortly. Block drivers that drop the io_request_lock must be written with a couple of important things in mind, however. First is that the request function must always reacquire this lock before returning, since the calling code expects it to still be held. The other concern is that, as soon as the io_request_lock is dropped, the possibility of reentrant calls to the request function is very real; the function must be written to handle that eventuality. A variant of this latter case can also occur if your request function returns while an I/O request is still active. Many drivers for real hardware will start an I/O operation, then return; the work is completed in the driver's interrupt handler. We will look at interrupt-driven block I/O in detail later in this chapter; for now it is worth mentioning, however, that the request function can be called while these operations are still in progress. Some drivers handle request function reentrancy by maintaining an internal request queue. The request function simply removes any new requests from the I/O request queue and adds them to the internal queue, which is then processed through a combination of tasklets and interrupt handlers.

How the blk.h macros and functions work

In our simple request function earlier, we were not concerned with buffer_head structures or linked lists.
The macros and functions in <linux/blk.h> hide the structure of the I/O request queue in order to make the task of writing a block driver simpler. In many cases, however, getting reasonable performance requires a deeper understanding of how the queue works. In this section we look at the actual steps involved in manipulating the request queue; subsequent sections show some more advanced techniques for writing block request functions. The fields of the request structure that we looked at earlier -- sector, current_nr_sectors, and buffer -- are really just copies of the analogous information stored in the first buffer_head structure on the list. Thus, a request function that uses this information from the CURRENT pointer is just processing the first of what might be many buffers within the request. The task of splitting up a multibuffer request into (seemingly) independent, single-buffer requests is handled by two important definitions in <linux/blk.h>: the INIT_REQUEST macro and the end_request function. Of the two, INIT_REQUEST is the simpler; all it really does is make a couple of consistency checks on the request queue and cause a return from the request function if the queue is empty. It is simply making sure that there is still work to do. The bulk of the queue management work is done by end_request. This function, remember, is called when the driver has processed a single "request" (actually one buffer); it has several tasks to perform:

- Complete the I/O processing on the current buffer; this involves calling the b_end_io function with the status of the operation, thus waking any process that may be sleeping on the buffer.
- Remove the buffer from the request's linked list. If there are further buffers to be processed, the sector, current_nr_sectors, and buffer fields in the request structure are updated to reflect the contents of the next buffer_head structure in the list.
In this case (there are still buffers to be transferred), end_request is finished for this iteration and steps 3 to 5 are not executed.

- Call add_blkdev_randomness to update the entropy pool, unless DEVICE_NO_RANDOM has been defined (as is done in the sbull driver).
- Remove the finished request from the request queue by calling blkdev_dequeue_request. This step modifies the request queue, and thus must be performed with the io_request_lock held.
- Release the finished request back to the system; io_request_lock is required here too.

The kernel defines a couple of helper functions that are used by end_request to do most of this work. The first one is called end_that_request_first, which handles the first two steps just described. Its prototype is

int end_that_request_first(struct request *req, int status, char *name);

status is the status of the request as passed to end_request; the name parameter is the device name, to be used when printing error messages. The return value is nonzero if there are more buffers to be processed in the current request; in that case the work is done. Otherwise, the request is dequeued and released with end_that_request_last:

void end_that_request_last(struct request *req);

In end_request this step is handled with this code:

struct request *req = CURRENT;
blkdev_dequeue_request(req);
end_that_request_last(req);

That is all there is to it.

Clustered Requests

The time has come to look at how to apply all of that background material to the task of writing better block drivers. We'll start with a look at the handling of clustered requests. Clustering, as mentioned earlier, is simply the practice of joining together requests that operate on adjacent blocks on the disk. There are two advantages to doing things this way. First, clustering speeds up the transfer; clustering can also save some memory in the kernel by avoiding allocation of redundant request structures.
As we have seen, block drivers need not be aware of clustering at all; <linux/blk.h> transparently splits each clustered request into its component pieces. In many cases, however, a driver can do better by explicitly acting on clustering. It is often possible to set up the I/O for several consecutive blocks at the same time, with an improvement in throughput. For example, the Linux floppy driver attempts to write an entire track to the diskette in a single operation. Most high-performance disk controllers can do "scatter/gather" I/O as well, leading to large performance gains. To take advantage of clustering, a block driver must look directly at the list of buffer_head structures attached to the request. This list is pointed to by CURRENT->bh; subsequent buffers can be found by following the b_reqnext pointers in each buffer_head structure. A driver performing clustered I/O should follow roughly this sequence of operations with each buffer in the cluster: - Arrange to transfer the data block at address bh->b_data, of size bh->b_size bytes. The direction of the data transfer is CURRENT->cmd (i.e., either READ or WRITE). - Retrieve the next buffer head in the list: bh->b_reqnext. Then detach the buffer just transferred from the list, by zeroing its b_reqnext -- the pointer to the new buffer you just retrieved. - Update the request structure to reflect the I/O done with the buffer that has just been removed. Both CURRENT->hard_nr_sectors and CURRENT->nr_sectors should be decremented by the number of sectors (not blocks) transferred from the buffer. The sector numbers CURRENT->hard_sector and CURRENT->sector should be incremented by the same amount. Performing these operations keeps the request structure consistent. - Loop back to the beginning to transfer the next adjacent block. 
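The bookkeeping in step 3 above can be sketched in user space with a toy structure that carries only the four counters being updated. This is an illustration of the arithmetic, not kernel code; the names are invented, and a 512-byte sector size is assumed:

```c
/* Toy model of the request fields updated while walking a cluster */
struct toy_request {
    unsigned long sector, hard_sector;
    unsigned long nr_sectors, hard_nr_sectors;
};

/*
 * Account for one transferred buffer of 'bytes' bytes on a device
 * with 512-byte sectors, as described in step 3 of the list above:
 * decrement the remaining-sector counts and advance the sector
 * numbers by the same amount.
 */
void account_buffer(struct toy_request *req, unsigned long bytes)
{
    unsigned long sectors = bytes / 512;

    req->nr_sectors      -= sectors;
    req->hard_nr_sectors -= sectors;
    req->sector          += sectors;
    req->hard_sector     += sectors;
}
```

After transferring a 1024-byte buffer from a request that started at sector 100 with 8 sectors remaining, the request stands at sector 102 with 6 sectors left -- exactly the consistent state the kernel expects to see.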
When the I/O on each buffer completes, your driver should notify the kernel by calling the buffer's I/O completion routine:

bh->b_end_io(bh, status);

status is nonzero if the operation was successful. You also, of course, need to remove the request structure for the completed operations from the queue. The processing steps just described can be done without holding the io_request_lock, but that lock must be reacquired before changing the queue itself. Your driver can still use end_request (as opposed to manipulating the queue directly) at the completion of the I/O operation, as long as it takes care to set the CURRENT->bh pointer properly. This pointer should either be NULL or it should point to the last buffer_head structure that was transferred. In the latter case, the b_end_io function should not have been called on that last buffer, since end_request will make that call. A full-featured implementation of clustering appears in drivers/block/floppy.c, while a summary of the operations required appears in end_request, in blk.h. Neither floppy.c nor blk.h are easy to understand, but the latter is a better place to start.

The active queue head

One other detail regarding the behavior of the I/O request queue is relevant for block drivers that are dealing with clustering. It has to do with the queue head -- the first request on the queue. For historical compatibility reasons, the kernel (almost) always assumes that a block driver is processing the first entry in the request queue. To avoid corruption resulting from conflicting activity, the kernel will never modify a request once it gets to the head of the queue. No further clustering will happen on that request, and the elevator code will not put other requests in front of it. Many block drivers remove requests from the queue entirely before beginning to process them. If your driver works this way, the request at the head of the queue should be fair game for the kernel.
In this case, your driver should inform the kernel that the head of the queue is not active by calling blk_queue_headactive:

blk_queue_headactive(request_queue_t *queue, int active);

If active is 0, the kernel will be able to make changes to the head of the request queue.

Multiqueue Block Drivers

As we have seen, the kernel, by default, maintains a single I/O request queue for each major number. The single queue works well for devices like sbull, but it is not always optimal for real-world situations. Consider a driver that is handling real disk devices. Each disk is capable of operating independently; the performance of the system is sure to be better if the drives could be kept busy in parallel. A simple driver based on a single queue will not achieve that -- it will perform operations on a single device at a time. It would not be all that hard for a driver to walk through the request queue and pick out requests for independent drives. But the 2.4 kernel makes life easier by allowing the driver to set up independent queues for each device. Most high-performance drivers take advantage of this multiqueue capability. Doing so is not difficult, but it does require moving beyond the simple <linux/blk.h> definitions. The sbull driver, when compiled with the SBULL_MULTIQUEUE symbol defined, operates in a multiqueue mode. It works without the <linux/blk.h> macros, and demonstrates a number of the features that have been described in this section. To operate in a multiqueue mode, a block driver must define its own request queues. sbull does this by adding a queue member to the Sbull_Dev structure:

request_queue_t queue;
int busy;

The busy flag is used to protect against request function reentrancy, as we will see. Request queues must be initialized, of course.
sbull initializes its device-specific queues in this manner:

for (i = 0; i < sbull_devs; i++) {
    blk_init_queue(&sbull_devices[i].queue, sbull_request);
    blk_queue_headactive(&sbull_devices[i].queue, 0);
}
blk_dev[major].queue = sbull_find_queue;

The call to blk_init_queue is as we have seen before, only now we pass in the device-specific queues instead of the default queue for our major device number. This code also marks the queues as not having active heads. You might be wondering how the kernel manages to find the request queues, which are buried in a device-specific, private structure. The key is the last line just shown, which sets the queue member in the global blk_dev structure. This member points to a function that has the job of finding the proper request queue for a given device number. Devices using the default queue have no such function, but multiqueue devices must implement it. sbull's queue function looks like this:

request_queue_t *sbull_find_queue(kdev_t device)
{
    int devno = DEVICE_NR(device);

    if (devno >= sbull_devs) {
        static int count = 0;
        if (count++ < 5) /* print the message at most five times */
            printk(KERN_WARNING "sbull: request for unknown device\n");
        return NULL;
    }
    return &sbull_devices[devno].queue;
}

Like the request function, sbull_find_queue must be atomic (no sleeping allowed). Each queue has its own request function, though usually a driver will use the same function for all of its queues. The kernel passes the actual request queue into the request function as a parameter, so the function can always figure out which device is being operated on. The multiqueue request function used in sbull looks a little different from the ones we have seen so far because it manipulates the request queue directly. It also drops the io_request_lock while performing transfers to allow the kernel to execute other block operations.
Finally, the code must take care to avoid two separate perils: multiple calls of the request function and conflicting access to the device itself.

void sbull_request(request_queue_t *q)
{
    Sbull_Dev *device;
    struct request *req;
    int status;

    /* Find our device */
    device = sbull_locate_device (blkdev_entry_next_request(&q->queue_head));
    if (device->busy) /* no race here - io_request_lock held */
        return;
    device->busy = 1;

    /* Process requests in the queue */
    while(! list_empty(&q->queue_head)) {

        /* Pull the next request off the list. */
        req = blkdev_entry_next_request(&q->queue_head);
        blkdev_dequeue_request(req);

        /* Process this request; drop the global lock first. */
        spin_unlock_irq(&io_request_lock);
        spin_lock(&device->lock);

        /* Process all of the buffers in this (possibly clustered) request. */
        do {
            status = sbull_transfer(device, req);
        } while (end_that_request_first(req, status, DEVICE_NAME));
        spin_unlock(&device->lock);
        spin_lock_irq(&io_request_lock);
        end_that_request_last(req);
    }
    device->busy = 0;
}

Instead of using INIT_REQUEST, this function tests its specific request queue with the list function list_empty. As long as requests exist, it removes each one in turn from the queue with blkdev_dequeue_request. Only then, once the removal is complete, is it able to drop io_request_lock and obtain the device-specific lock. The actual transfer is done using sbull_transfer, which we have already seen. Each call to sbull_transfer handles exactly one buffer_head structure attached to the request. The function then calls end_that_request_first to dispose of that buffer, and, if the request is complete, goes on to end_that_request_last to clean up the request as a whole. The management of concurrency here is worth a quick look. The busy flag is used to prevent multiple invocations of sbull_request. Since sbull_request is always called with the io_request_lock held, it is safe to test and set the busy flag with no additional protection. (Otherwise, an atomic_t could have been used.) The io_request_lock is dropped before the device-specific lock is acquired. It is possible to acquire multiple locks without risking deadlock, but it is harder; when the constraints allow, it is better to release one lock before obtaining another. end_that_request_first is called without the io_request_lock held.
Since this function operates only on the given request structure, calling it this way is safe -- as long as the request is not on the queue. The call to end_that_request_last, however, requires that the lock be held, since it returns the request to the request queue's free list. The function also always exits from the outer loop (and the function as a whole) with the io_request_lock held and the device lock released. Multiqueue drivers must, of course, clean up all of their queues at module removal time:

for (i = 0; i < sbull_devs; i++)
    blk_cleanup_queue(&sbull_devices[i].queue);
blk_dev[major].queue = NULL;

It is worth noting, briefly, that this code could be made more efficient. It allocates a whole set of request queues at initialization time, even though some of them may never be used. A request queue is a large structure, since many (perhaps thousands) of request structures are allocated when the queue is initialized. A more clever implementation would allocate a request queue when needed in either the open method or the queue function. We chose a simpler implementation for sbull in order to avoid complicating the code. That covers the mechanics of multiqueue drivers. Drivers handling real hardware may have other issues to deal with, of course, such as serializing access to a controller. But the basic structure of multiqueue drivers is as we have seen here.

Doing Without the Request Queue

Much of the discussion to this point has centered around the manipulation of the I/O request queue. The purpose of the request queue is to improve performance by allowing the driver to act asynchronously and, crucially, by allowing the merging of contiguous (on the disk) operations. For normal disk devices, operations on contiguous blocks are common, and this optimization is necessary. Not all block devices benefit from the request queue, however. sbull, for example, processes requests synchronously and has no problems with seek times.
For sbull, the request queue actually ends up slowing things down. Other types of block devices also can be better off without a request queue. For example, RAID devices, which are made up of multiple disks, often spread "contiguous" blocks across multiple physical devices. Block devices implemented by the logical volume manager (LVM) capability (which first appeared in 2.4) also have an implementation that is more complex than the block interface that is presented to the rest of the kernel. In the 2.4 kernel, block I/O requests are placed on the queue by the function __make_request, which is also responsible for invoking the driver's request function. Block drivers that need more control over request queueing, however, can replace that function with their own "make request" function. The RAID and LVM drivers do so, providing their own variant that, eventually, requeues each I/O request (with different block numbers) to the appropriate low-level device (or devices) that make up the higher-level device. A RAM-disk driver, instead, can execute the I/O operation directly. sbull, when loaded with the noqueue=1 option on 2.4 systems, will provide its own "make request" function and operate without a request queue. The first step in this scenario is to replace __make_request. The "make request" function pointer is stored in the request queue, and can be changed with blk_queue_make_request:

void blk_queue_make_request(request_queue_t *queue, make_request_fn *func);

The make_request_fn type, in turn, is defined as follows:

typedef int (make_request_fn) (request_queue_t *q, int rw, struct buffer_head *bh);

The "make request" function must arrange to transfer the given block, and see to it that the b_end_io function is called when the transfer is done. The kernel does not hold the io_request_lock lock when calling the make_request_fn function, so the function must acquire the lock itself if it will be manipulating the request queue.
If the transfer has been set up (not necessarily completed), the function should return 0. The phrase "arrange to transfer" was chosen carefully; often a driver-specific make request function will not actually transfer the data. Consider a RAID device. What the function really needs to do is to map the I/O operation onto one of its constituent devices, then invoke that device's driver to actually do the work. This mapping is done by setting the b_rdev member of the buffer_head structure to the number of the "real" device that will do the transfer, then signaling that the block still needs to be written by returning a nonzero value. When the kernel sees a nonzero return value from the make request function, it concludes that the job is not done and will try again. But first it will look up the make request function for the device indicated in the b_rdev field. Thus, in the RAID case, the RAID driver's "make request" function will not be called again; instead, the kernel will pass the block to the appropriate function for the underlying device. sbull, at initialization time, sets up its make request function as follows:

if (noqueue)
    blk_queue_make_request(BLK_DEFAULT_QUEUE(major), sbull_make_request);

It does not call blk_init_queue when operating in this mode, because the request queue will not be used.
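The remapping convention just described (set b_rdev to the underlying device, adjust the sector, return nonzero so the kernel retries against that device) can be modeled in user space. The structure and function below are invented for illustration and are not the kernel API; the sketch assumes a simple striped layout with a fixed chunk size:

```c
/* Minimal stand-in for the buffer head fields used in remapping */
struct toy_bh {
    int b_rdev;               /* device that should do the transfer */
    unsigned long b_rsector;  /* sector on that device */
};

/*
 * Toy "make request" for a striped device: map a logical sector onto
 * one of 'ndisks' members (stripe unit = 'chunk' sectors), rewrite
 * b_rdev and b_rsector, and return nonzero so the caller retries
 * against the underlying device -- mirroring the convention above.
 */
int striped_make_request(struct toy_bh *bh, const int *disk_devs,
                         int ndisks, unsigned long chunk)
{
    unsigned long chunk_no = bh->b_rsector / chunk;
    unsigned long offset   = bh->b_rsector % chunk;

    bh->b_rdev    = disk_devs[chunk_no % ndisks];
    bh->b_rsector = (chunk_no / ndisks) * chunk + offset;
    return 1;   /* not done yet: requeue to the real device */
}
```

With two member disks and a chunk of 8 sectors, logical sector 12 lands in chunk 1, so it is redirected to the second disk at sector 4 of that disk. A real RAID driver does considerably more (parity, degraded modes), but the b_rdev handoff works the same way.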
When the kernel generates a request for an sbull device, it will call sbull_make_request, which is as follows:

int sbull_make_request(request_queue_t *queue, int rw, struct buffer_head *bh)
{
    u8 *ptr;

    /* Figure out what we are doing */
    Sbull_Dev *device = sbull_devices + MINOR(bh->b_rdev);
    ptr = device->data + bh->b_rsector * sbull_hardsect;

    /* Paranoid check; this apparently can really happen */
    if (ptr + bh->b_size > device->data + sbull_blksize*sbull_size) {
        static int count = 0;
        if (count++ < 5)
            printk(KERN_WARNING "sbull: request past end of device\n");
        bh->b_end_io(bh, 0);
        return 0;
    }

    /* This could be a high-memory buffer; shift it down */
#if CONFIG_HIGHMEM
    bh = create_bounce(rw, bh);
#endif

    /* Do the transfer */
    switch(rw) {
    case READ:
    case READA: /* Read ahead */
        memcpy(bh->b_data, ptr, bh->b_size); /* from sbull to buffer */
        bh->b_end_io(bh, 1);
        break;
    case WRITE:
        refile_buffer(bh);
        memcpy(ptr, bh->b_data, bh->b_size); /* from buffer to sbull */
        mark_buffer_uptodate(bh, 1);
        bh->b_end_io(bh, 1);
        break;
    default:
        /* can't happen */
        bh->b_end_io(bh, 0);
        break;
    }

    /* A return of zero means we have handled this buffer */
    return 0;
}

For the most part, this code should look familiar. It contains the usual calculations to determine where the block lives within the sbull device and uses memcpy to perform the operation. Because the operation completes immediately, it is able to call bh->b_end_io to indicate the completion of the operation, and it returns 0 to the kernel. There is, however, one detail that the "make request" function must take care of. The buffer to be transferred could be resident in high memory, which is not directly accessible by the kernel. High memory is covered in detail in Chapter 13, "mmap and DMA". We won't repeat the discussion here; suffice it to say that one way to deal with the problem is to replace a high-memory buffer with one that is in accessible memory. The function create_bounce will do so, in a way that is transparent to the driver.
The kernel normally uses create_bounce before placing buffers in the driver's request queue; if the driver implements its own make_request_fn, however, it must take care of this task itself.

How Mounting and Unmounting Works

Block devices differ from char devices and normal files in that they can be mounted on the computer's filesystem. Mounting provides a level of indirection not seen with char devices, which are accessed through a struct file pointer that is held by a specific process. When a filesystem is mounted, there is no process holding that file structure. When the kernel mounts a device in the filesystem, it invokes the normal open method to access the driver. However, in this case both the filp and inode arguments to open are dummy variables. In the file structure, only the f_mode and f_flags fields hold anything meaningful; in the inode structure only i_rdev may be used. The remaining fields hold random values and should not be used. The value of f_mode tells the driver whether the device is to be mounted read-only (f_mode == FMODE_READ) or read/write (f_mode == (FMODE_READ|FMODE_WRITE)). This interface may seem a little strange; it is done this way for two reasons. First is that the open method can still be called normally by a process that accesses the device directly -- the mkfs utility, for example. The other reason is a historical artifact: block devices once used the same file_operations structure as char devices, and thus had to conform to the same interface. Other than the limitations on the arguments to the open method, the driver does not really see anything unusual when a filesystem is mounted. The device is opened, and then the request method is invoked to transfer blocks back and forth. The driver cannot really tell the difference between operations that happen in response to an individual process (such as fsck) and those that originate in the filesystem layers of the kernel.
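The f_mode convention described above boils down to a single bit test. The sketch below illustrates it in user space; the TOY_ flag values are invented for illustration (the real FMODE_READ and FMODE_WRITE constants live in <linux/fs.h> and need not have these values):

```c
/* Illustrative flag values; the real ones are in <linux/fs.h> */
#define TOY_FMODE_READ  1
#define TOY_FMODE_WRITE 2

/*
 * Return 1 if a mount with this f_mode is read-only, 0 otherwise.
 * Per the convention above: read-only mounts pass FMODE_READ alone,
 * read/write mounts pass FMODE_READ | FMODE_WRITE.
 */
int mounted_read_only(int f_mode)
{
    return (f_mode & TOY_FMODE_WRITE) == 0;
}
```

A driver's open method can use the same test to refuse writes to media that are physically write-protected, for example.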
As far as umount is concerned, it just flushes the buffer cache and calls the release driver method. Since there is no meaningful filp to pass to the release method, the kernel uses NULL. Since the release implementation of a block driver can't use filp->private_data to access device information, it uses inode->i_rdev to differentiate between devices instead. This is how sbull implements release:

int sbull_release (struct inode *inode, struct file *filp)
{
    Sbull_Dev *dev = sbull_devices + MINOR(inode->i_rdev);

    spin_lock(&dev->lock);
    dev->usage--;
    MOD_DEC_USE_COUNT;
    spin_unlock(&dev->lock);
    return 0;
}

Other driver functions are not affected by the "missing filp" problem because they aren't involved with mounted filesystems. For example, ioctl is issued only by processes that explicitly open the device.

The ioctl Method

Like char devices, block devices can be acted on by using the ioctl system call. The only relevant difference between block and char ioctl implementations is that block drivers share a number of common ioctl commands that most drivers are expected to support. The commands that block drivers usually handle are the following, declared in <linux/fs.h>.

- BLKGETSIZE
- Retrieve the size of the current device, expressed as the number of sectors. The value of arg passed in by the system call is a pointer to a long value and should be used to copy the size to a user-space variable. This ioctl command is used, for instance, by mkfs to know the size of the filesystem being created.
- BLKFLSBUF
- Literally, "flush buffers." The implementation of this command is the same for every device and is shown later with the sample code for the whole ioctl method.
- BLKRRPART
- Reread the partition table. This command is meaningful only for partitionable devices, introduced later in this chapter.
- BLKRAGET
- BLKRASET
- Used to get and change the current block-level read-ahead value (the one stored in the read_ahead array) for the device.
For GET, the current value should be written to user space as a long item using the pointer passed to ioctl in arg; for SET, the new value is passed as an argument. - BLKFRAGET - BLKFRASET - Get and set the filesystem-level read-ahead value (the one stored in max_readahead) for this device. - BLKROSET - BLKROGET - These commands are used to change and check the read-only flag for the device. - BLKSECTGET - BLKSECTSET - These commands retrieve and set the maximum number of sectors per request (as stored in max_sectors). - BLKSSZGET - Returns the sector size of this block device in the integer variable pointed to by the caller; this size comes directly from the hardsect_size array. - BLKPG - The BLKPG command allows user-mode programs to add and delete partitions. It is implemented by blk_ioctl (described shortly), and no drivers in the mainline kernel provide their own implementation. - BLKELVGET - BLKELVSET - These commands allow some control over how the elevator request sorting algorithm works. As with BLKPG, no driver implements them directly. - HDIO_GETGEO - Defined in <linux/hdreg.h> and used to retrieve the disk geometry. The geometry should be written to user space in a struct hd_geometry, which is declared in hdreg.h as well. sbull shows the general implementation for this command. The HDIO_GETGEO command is the most commonly used of a series of HDIO_ commands, all defined in <linux/hdreg.h>. The interested reader can look in ide.c and hd.c for more information about these commands. Almost all of these ioctl commands are implemented in the same way for all block devices. The 2.4 kernel has provided a function, blk_ioctl, that may be called to implement the common commands; it is declared in <linux/blkpg.h>. Often the only ones that must be implemented in the driver itself are BLKGETSIZE and HDIO_GETGEO. The driver can then safely pass any other commands to blk_ioctl for handling. 
The sbull device supports only the general commands just listed, because implementing device-specific commands is no different from the implementation of commands for char drivers. The ioctl implementation for sbull is as follows:

int sbull_ioctl (struct inode *inode, struct file *filp,
                 unsigned int cmd, unsigned long arg)
{
    int err;
    long size;
    struct hd_geometry geo;

    PDEBUG("ioctl 0x%x 0x%lx\n", cmd, arg);
    switch(cmd) {

    case BLKGETSIZE:
        /* Return the device size, expressed in sectors */
        if (!arg) return -EINVAL; /* NULL pointer: not valid */
        err = ! access_ok (VERIFY_WRITE, arg, sizeof(long));
        if (err) return -EFAULT;
        size = blksize*sbull_sizes[MINOR(inode->i_rdev)] /
                sbull_hardsects[MINOR(inode->i_rdev)];
        if (copy_to_user((long *) arg, &size, sizeof (long)))
            return -EFAULT;
        return 0;

    case BLKRRPART: /* reread partition table: can't do it */
        return -ENOTTY;

    case HDIO_GETGEO:
        /*
         * Get geometry: since we are a virtual device, we have to make
         * up something plausible. So we claim 16 sectors, four heads,
         * and calculate the corresponding number of cylinders. We set
         * the start of data at sector four.
         */
        err = ! access_ok(VERIFY_WRITE, arg, sizeof(geo));
        if (err) return -EFAULT;
        size = sbull_size * blksize / sbull_hardsect;
        geo.cylinders = (size & ~0x3f) >> 6;
        geo.heads = 4;
        geo.sectors = 16;
        geo.start = 4;
        if (copy_to_user((void *) arg, &geo, sizeof(geo)))
            return -EFAULT;
        return 0;

    default:
        /*
         * For ioctls we don't understand, let the block layer
         * handle them.
         */
        return blk_ioctl(inode->i_rdev, cmd, arg);
    }

    return -ENOTTY; /* unknown command */
}

The PDEBUG statement at the beginning of the function has been left in so that when you compile the module, you can turn on debugging to see which ioctl commands are invoked on the device.

Removable Devices

Thus far, we have ignored the final two file operations in the block_device_operations structure, which deal with devices that support removable media.
It's now time to look at them; sbull isn't actually removable but it pretends to be, and therefore it implements these methods. The operations in question are check_media_change and revalidate. The former is used to find out if the device has changed since the last access, and the latter re-initializes the driver's status after a disk change.

As far as sbull is concerned, the data area associated with a device is released half a minute after its usage count drops to zero. Leaving the device unmounted (or closed) long enough simulates a disk change, and the next access to the device allocates a new memory area. This kind of "timely expiration'' is implemented using a kernel timer.

check_media_change

The checking function receives kdev_t as a single argument that identifies the device. The return value is 1 if the medium has been changed and 0 otherwise. A block driver that doesn't support removable devices can avoid declaring the function by setting bdops->check_media_change to NULL.

It's interesting to note that when the device is removable but there is no way to know if it changed, returning 1 is a safe choice. This is the behavior of the IDE driver when dealing with removable disks.

The implementation in sbull returns 1 if the device has already been removed from memory due to the timer expiration, and 0 if the data is still valid. If debugging is enabled, it also prints a message to the system logger; the user can thus verify when the method is called by the kernel.

int sbull_check_change(kdev_t i_rdev)
{
    int minor = MINOR(i_rdev);
    Sbull_Dev *dev = sbull_devices + minor;

    PDEBUG("check_change for dev %i\n",minor);
    if (dev->data)
        return 0; /* still valid */
    return 1; /* expired */
}

Revalidation

The validation function is called when a disk change is detected. It is also called by the various stat system calls implemented in version 2.1 of the kernel.
The return value is currently unused; to be safe, return 0 to indicate success and a negative error code in case of error. The action performed by revalidate is device specific, but revalidate usually updates the internal status information to reflect the new device. In sbull, the revalidate method tries to allocate a new data area if there is not already a valid area.

int sbull_revalidate(kdev_t i_rdev)
{
    Sbull_Dev *dev = sbull_devices + MINOR(i_rdev);

    PDEBUG("revalidate for dev %i\n",MINOR(i_rdev));
    if (dev->data)
        return 0;
    dev->data = vmalloc(dev->size);
    if (!dev->data)
        return -ENOMEM;
    return 0;
}

Extra Care

Drivers for removable devices should also check for a disk change when the device is opened. The kernel provides a function to cause this check to happen:

int check_disk_change(kdev_t dev);

The return value is nonzero if a disk change was detected. The kernel automatically calls check_disk_change at mount time, but not at open time. Some programs, however, directly access disk data without mounting the device: fsck, mcopy, and fdisk are examples of such programs. If the driver keeps status information about removable devices in memory, it should call the kernel check_disk_change function when the device is first opened. This function uses the driver methods (check_media_change and revalidate), so nothing special has to be implemented in open itself.
Here is the sbull implementation of open, which takes care of the case in which there's been a disk change:

int sbull_open (struct inode *inode, struct file *filp)
{
    Sbull_Dev *dev; /* device information */
    int num = MINOR(inode->i_rdev);

    if (num >= sbull_devs) return -ENODEV;
    dev = sbull_devices + num;

    spin_lock(&dev->lock);
    /* revalidate on first open and fail if no data is there */
    if (!dev->usage) {
        check_disk_change(inode->i_rdev);
        if (!dev->data) {
            spin_unlock (&dev->lock);
            return -ENOMEM;
        }
    }
    dev->usage++;
    spin_unlock(&dev->lock);
    MOD_INC_USE_COUNT;
    return 0; /* success */
}

Nothing else needs to be done in the driver for a disk change. Data is corrupted anyway if a disk is changed while its open count is greater than zero. The only way the driver can prevent this problem from happening is for the usage count to control the door lock in those cases where the physical device supports it. Then open and close can disable and enable the lock appropriately.

Partitionable Devices

Most block devices are not used in one large chunk. Instead, the system administrator expects to be able to partition the device -- to split it into several independent pseudodevices. If you try to create partitions on an sbull device with fdisk, you'll run into problems. The fdisk program calls the partitions /dev/sbull01, /dev/sbull02, and so on, but those names don't exist on the filesystem. More to the point, there is no mechanism in place for binding those names to partitions in the sbull device. Something more must be done before a block device can be partitioned.

To demonstrate how partitions are supported, we introduce a new device called spull, a "Simple Partitionable Utility.'' It is far simpler than sbull, lacking the request queue management and some flexibility (like the ability to change the hard-sector size). The device resides in the spull directory and is completely detached from sbull, even though they share some code.
To be able to support partitions on a device, we must assign several minor numbers to each physical device. One number is used to access the whole device (for example, /dev/hda), and the others are used to access the various partitions (such as /dev/hda1). Since fdisk creates partition names by adding a numerical suffix to the whole-disk device name, we'll follow the same naming convention in the spull driver.

The device nodes implemented by spull are called pd, for "partitionable disk.'' The four whole devices (also called units) are thus named /dev/pda through /dev/pdd; each device supports at most 15 partitions. Minor numbers have the following meaning: the least significant four bits represent the partition number (where 0 is the whole device), and the most significant four bits represent the unit number. This convention is expressed in the source file by the following macros:

#define MAJOR_NR spull_major /* force definitions on in blk.h */
int spull_major; /* must be declared before including blk.h */

#define SPULL_SHIFT 4 /* max 16 partitions */
#define SPULL_MAXNRDEV 4 /* max 4 device units */
#define DEVICE_NR(device) (MINOR(device)>>SPULL_SHIFT)
#define DEVICE_NAME "pd" /* name for messaging */

The spull driver also hardwires the value of the hard-sector size in order to simplify the code:

#define SPULL_HARDSECT 512 /* 512-byte hardware sectors */

The Generic Hard Disk

Every partitionable device needs to know how it is partitioned. The information is available in the partition table, and part of the initialization process consists of decoding the partition table and updating the internal data structures to reflect the partition information. This decoding isn't easy, but fortunately the kernel offers "generic hard disk'' support usable by all block drivers. Such support considerably reduces the amount of code needed in the driver for handling partitions.
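The minor-number split described above is plain bit arithmetic, and it can be checked in user-space C. MINOR() is omitted here, since the demo works on minor numbers directly:

```c
#include <assert.h>

#define DEMO_SHIFT 4   /* same split as SPULL_SHIFT */

/* Unit (physical device) number: the high four bits of the minor. */
static int demo_device_nr(int minor)
{
    return minor >> DEMO_SHIFT;
}

/* Partition number within the unit: the low four bits (0 = whole disk). */
static int demo_partition_nr(int minor)
{
    return minor & ((1 << DEMO_SHIFT) - 1);
}
```

So minor 0x00 is /dev/pda as a whole, minor 0x13 is partition 3 of unit 1 (/dev/pdb3), and minor 0x3f is partition 15 of unit 3.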
Another advantage of the generic support is that the driver writer doesn't need to understand how the partitioning is done, and new partitioning schemes can be supported in the kernel without requiring changes to driver code. A block driver that supports partitions must include <linux/genhd.h> and should declare a struct gendisk structure. This structure describes the layout of the disk(s) provided by the driver; the kernel maintains a global list of such structures, which may be queried to see what disks and partitions are available on the system. Before we go further, let's look at some of the fields in struct gendisk. You'll need to understand them in order to exploit generic device support.

- int major - The major number for the device that the structure refers to.
- const char *major_name - The base name for devices belonging to this major number. Each device name is derived from this name by adding a letter for each unit and a number for each partition. For example, "hd'' is the base name that is used to build /dev/hda1 and /dev/hdb3. In modern kernels, the full length of the disk name can be up to 32 characters; the 2.0 kernel, however, was more restricted. Drivers wishing to be backward portable to 2.0 should limit the major_name field to five characters. The name for spull is pd ("partitionable disk'').
- int minor_shift - The number of bit shifts needed to extract the drive number from the device minor number. In spull the number is 4. The value in this field should be consistent with the definition of the macro DEVICE_NR(device) (see "Section 12.2, "The Header File blk.h""). The macro in spull expands to device>>4.
- int max_p - The maximum number of partitions. In our example, max_p is 16, or more generally, 1 << minor_shift.
- struct hd_struct *part - The decoded partition table for the device. The driver uses this item to determine what range of the disk's sectors is accessible through each minor number.
The driver is responsible for allocation and deallocation of this array, which most drivers implement as a static array of max_nr << minor_shift structures. The driver should initialize the array to zeros before the kernel decodes the partition table.

- int *sizes - An array of integers with the same information as the global blk_size array. In fact, they are usually the same array. The driver is responsible for allocating and deallocating the sizes array. Note that the partition check for the device copies this pointer to blk_size, so a driver handling partitionable devices doesn't need to allocate the latter array.
- struct block_device_operations *fops - A pointer to the block operations structure for this device.

Many of the fields in the gendisk structure are set up at initialization time, so the compile-time setup is relatively simple:

struct gendisk spull_gendisk = {
    major: 0, /* Major number assigned later */
    major_name: "pd", /* Name of the major device */
    minor_shift: SPULL_SHIFT, /* Shift to get device number */
    max_p: 1 << SPULL_SHIFT, /* Number of partitions */
    fops: &spull_bdops, /* Block dev operations */
    /* everything else is dynamic */
};

Partition Detection

When a module initializes itself, it must set things up properly for partition detection. Thus, spull starts by setting up the spull_sizes array for the gendisk structure (which also gets stored in blk_size[MAJOR_NR] and in the sizes field of the gendisk structure) and the spull_partitions array, which holds the actual partition information (and gets stored in the part member of the gendisk structure). Both of these arrays are initialized to zeros at this time.
The code looks like this:

spull_sizes = kmalloc( (spull_devs << SPULL_SHIFT) * sizeof(int),
                       GFP_KERNEL);
if (!spull_sizes)
    goto fail_malloc;

/* Start with zero-sized partitions, and correctly sized units */
memset(spull_sizes, 0, (spull_devs << SPULL_SHIFT) * sizeof(int));
for (i=0; i< spull_devs; i++)
    spull_sizes[i<<SPULL_SHIFT] = spull_size;
blk_size[MAJOR_NR] = spull_gendisk.sizes = spull_sizes;

/* Allocate the partitions array. */
spull_partitions = kmalloc( (spull_devs << SPULL_SHIFT) *
                            sizeof(struct hd_struct), GFP_KERNEL);
if (!spull_partitions)
    goto fail_malloc;
memset(spull_partitions, 0, (spull_devs << SPULL_SHIFT) *
       sizeof(struct hd_struct));
/* fill in whole-disk entries */
for (i=0; i < spull_devs; i++)
    spull_partitions[i << SPULL_SHIFT].nr_sects =
        spull_size*(blksize/SPULL_HARDSECT);
spull_gendisk.part = spull_partitions;
spull_gendisk.nr_real = spull_devs;

The driver should also include its gendisk structure on the global list. There is no kernel-supplied function for adding gendisk structures; it must be done by hand:

spull_gendisk.next = gendisk_head;
gendisk_head = &spull_gendisk;

In practice, the only thing the system does with this list is to implement /proc/partitions.

The register_disk function, which we have already seen briefly, handles the job of reading the disk's partition table.

register_disk(struct gendisk *gd, int drive, unsigned minors,
              struct block_device_operations *ops, long size);

Here, gd is the gendisk structure that we built earlier, drive is the device number, minors is the number of partitions supported, ops is the block_device_operations structure for the driver, and size is the size of the device in sectors.

Fixed disks might read the partition table only at module initialization time and when BLKRRPART is invoked. Drivers for removable drives will also need to make this call in the revalidate method.
Either way, it is important to remember that register_disk will call your driver's request function to read the partition table, so the driver must be sufficiently initialized at that point to handle requests. You should also not have any locks held that will conflict with locks acquired in the request function. register_disk must be called for each disk actually present on the system.

spull sets up partitions in the revalidate method:

int spull_revalidate(kdev_t i_rdev)
{
    /* first partition, # of partitions */
    int part1 = (DEVICE_NR(i_rdev) << SPULL_SHIFT) + 1;
    int npart = (1 << SPULL_SHIFT) -1;

    /* first clear old partition information */
    memset(spull_gendisk.sizes+part1, 0, npart*sizeof(int));
    memset(spull_gendisk.part +part1, 0,
           npart*sizeof(struct hd_struct));
    spull_gendisk.part[DEVICE_NR(i_rdev) << SPULL_SHIFT].nr_sects =
            spull_size << 1;

    /* then fill new info */
    printk(KERN_INFO "Spull partition check: (%d) ", DEVICE_NR(i_rdev));
    register_disk(&spull_gendisk, i_rdev, SPULL_MAXNRDEV, &spull_bdops,
                    spull_size << 1);
    return 0;
}

It's interesting to note that register_disk prints partition information by repeatedly calling

printk(" %s", disk_name(hd, minor, buf));

That's why spull prints a leading string. It's meant to add some context to the information that gets stuffed into the system log.

When a partitionable module is unloaded, the driver should arrange for all the partitions to be flushed, by calling fsync_dev for every supported major/minor pair. All of the relevant memory should be freed as well, of course.
The cleanup function for spull is as follows:

for (i = 0; i < (spull_devs << SPULL_SHIFT); i++)
    fsync_dev(MKDEV(spull_major, i)); /* flush the devices */
blk_cleanup_queue(BLK_DEFAULT_QUEUE(major));
read_ahead[major] = 0;
kfree(blk_size[major]); /* which is gendisk->sizes as well */
blk_size[major] = NULL;
kfree(spull_gendisk.part);
kfree(blksize_size[major]);
blksize_size[major] = NULL;

It is also necessary to remove the gendisk structure from the global list. There is no function provided to do this work, so it's done by hand:

for (gdp = &gendisk_head; *gdp; gdp = &((*gdp)->next))
    if (*gdp == &spull_gendisk) {
        *gdp = (*gdp)->next;
        break;
    }

Note that there is no unregister_disk to complement the register_disk function. Everything done by register_disk is stored in the driver's own arrays, so there is no additional cleanup required at unload time.

Partition Detection Using initrd

If you want to mount your root filesystem from a device whose driver is available only in modularized form, you must use the initrd facility offered by modern Linux kernels. We won't introduce initrd here; this subsection is aimed at readers who know about initrd and wonder how it affects block drivers. More information on initrd can be found in Documentation/initrd.txt in the kernel source.

When you boot a kernel with initrd, it establishes a temporary running environment before it mounts the real root filesystem. Modules are usually loaded from within the RAM disk being used as the temporary root file system. Because the initrd process is run after all boot-time initialization is complete (but before the real root filesystem has been mounted), there's no difference between loading a normal module and loading one living in the initrd RAM disk. If a driver can be correctly loaded and used as a module, all Linux distributions that have initrd available can include the driver on their installation disks without requiring you to hack in the kernel source.
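The hand-rolled gendisk removal loop shown earlier uses the classic pointer-to-pointer idiom for deleting from a singly linked list without special-casing the head. Here is a self-contained user-space version of the same idiom, with a minimal node type invented just for the demo:

```c
#include <assert.h>
#include <stddef.h>

struct node {
    int id;
    struct node *next;
};

/* Unlink the node with the given id; works for head, middle, and tail,
 * because pp always points at the pointer that references *pp. */
static void remove_node(struct node **headp, int id)
{
    struct node **pp;

    for (pp = headp; *pp; pp = &((*pp)->next))
        if ((*pp)->id == id) {
            *pp = (*pp)->next;  /* splice it out */
            break;
        }
}
```

Iterating over the address of each next pointer (rather than over the nodes themselves) is what lets the gendisk code remove its entry without checking whether it happens to be gendisk_head.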
The Device Methods for spull

We have seen how to initialize partitionable devices, but not yet how to access data within the partitions. To do that, we need to make use of the partition information stored in the gendisk->part array by register_disk. This array is made up of hd_struct structures, and is indexed by the minor number. The hd_struct has two fields of interest: start_sect tells where a given partition starts on the disk, and nr_sects gives the size of that partition. Here we will show how spull makes use of that information. The following code includes only those parts of spull that differ from sbull, because most of the code is exactly the same.

First of all, open and close must keep track of the usage count for each device. Because the usage count refers to the physical device (unit), the following declaration and assignment is used for the dev variable:

Spull_Dev *dev = spull_devices + DEVICE_NR(inode->i_rdev);

The DEVICE_NR macro used here is the one that must be declared before <linux/blk.h> is included; it yields the physical device number without taking into account which partition is being used.

Although almost every device method works with the physical device as a whole, ioctl should access specific information for each partition. For example, when mkfs calls ioctl to retrieve the size of the device on which it will build a filesystem, it should be told the size of the partition of interest, not the size of the whole device. Here is how the BLKGETSIZE ioctl command is affected by the change from one minor number per device to multiple minor numbers per device. As you might expect, spull_gendisk->part is used as the source of the partition size.

case BLKGETSIZE:
    /* Return the device size, expressed in sectors */
    err = ! access_ok (VERIFY_WRITE, arg, sizeof(long));
    if (err) return -EFAULT;
    size = spull_gendisk.part[MINOR(inode->i_rdev)].nr_sects;
    if (copy_to_user((long *) arg, &size, sizeof (long)))
        return -EFAULT;
    return 0;

The other ioctl command that is different for partitionable devices is BLKRRPART. Rereading the partition table makes sense for partitionable devices and is equivalent to revalidating a disk after a disk change:

case BLKRRPART: /* re-read partition table */
    return spull_revalidate(inode->i_rdev);

But the major difference between sbull and spull is in the request function. In spull, the request function needs to use the partition information in order to correctly transfer data for the different minor numbers. Locating the transfer is done by simply adding the starting sector to that provided in the request; the partition size information is then used to be sure the request fits within the partition. Once that is done, the implementation is the same as for sbull. Here are the relevant lines in spull_request:

ptr = device->data +
    (spull_partitions[minor].start_sect + req->sector)*SPULL_HARDSECT;
size = req->current_nr_sectors*SPULL_HARDSECT;

/*
 * Make sure that the transfer fits within the device.
 */
if (req->sector + req->current_nr_sectors >
                spull_partitions[minor].nr_sects) {
    static int count = 0;
    if (count++ < 5)
        printk(KERN_WARNING "spull: request past end of partition\n");
    return 0;
}

The number of sectors is multiplied by the hardware sector size (which, remember, is hardwired in spull) to get the size of the transfer in bytes.

Interrupt-Driven Block Drivers

When a driver controls a real hardware device, operation is usually interrupt driven. Using interrupts helps system performance by releasing the processor during I/O operations. In order for interrupt-driven I/O to work, the device being controlled must be able to transfer data asynchronously and to generate interrupts.
When the driver is interrupt driven, the request function spawns a data transfer and returns immediately without calling end_request. However, the kernel doesn't consider a request fulfilled unless end_request (or its component parts) has been called. Therefore, the top-half or the bottom-half interrupt handler calls end_request when the device signals that the data transfer is complete.

Neither sbull nor spull can transfer data without using the system microprocessor; however, spull is equipped with the capability of simulating interrupt-driven operation if the user specifies the irq=1 option at load time. When irq is not 0, the driver uses a kernel timer to delay fulfillment of the current request. The length of the delay is the value of irq: the greater the value, the longer the delay.

As always, block transfers begin when the kernel calls the driver's request function. The request function for an interrupt-driven device instructs the hardware to perform the transfer and then returns; it does not wait for the transfer to complete. The spull request function performs the usual error checks and then calls spull_transfer to transfer the data (this is the task that a driver for real hardware performs asynchronously). It then delays acknowledgment until interrupt time:

void spull_irqdriven_request(request_queue_t *q)
{
    Spull_Dev *device;
    int status;
    long flags;

    /* If we are already processing requests, don't do any more now. */
    if (spull_busy)
        return;

    while(1) {
        INIT_REQUEST; /* returns when queue is empty */

        /* Which "device" are we using? */
        device = spull_locate_device (CURRENT);
        if (device == NULL) {
            end_request(0);
            continue;
        }

        spin_lock_irqsave(&device->lock, flags);
        /* Perform the transfer and clean up. */
        status = spull_transfer(device, CURRENT);
        spin_unlock_irqrestore(&device->lock, flags);

        /* ... and wait for the timer to expire -- no end_request(1) */
        spull_timer.expires = jiffies + spull_irq;
        add_timer(&spull_timer);
        spull_busy = 1;
        return;
    }
}

New requests can accumulate while the device is dealing with the current one. Because reentrant calls are almost guaranteed in this scenario, the request function sets a spull_busy flag so that only one transfer happens at any given time. Since the entire function runs with the io_request_lock held (the kernel, remember, obtains this lock before calling the request function), there is no need for particular care in testing and setting the busy flag. Otherwise, an atomic_t item should have been used instead of an int variable in order to avoid race conditions.

The interrupt handler has a couple of tasks to perform. First, of course, it must check the status of the outstanding transfer and clean up the request. Then, if there are further requests to be processed, the interrupt handler is responsible for getting the next one started. To avoid code duplication, the handler usually just calls the request function to start the next transfer. Remember that the request function expects the caller to hold the io_request_lock, so the interrupt handler will have to obtain it. The end_request function also requires this lock, of course.

In our sample module, the role of the interrupt handler is performed by the function invoked when the timer expires. That function calls end_request and schedules the next data transfer by calling the request function.
In the interest of code simplicity, the spull interrupt handler performs all this work at "interrupt'' time; a real driver would almost certainly defer much of this work and run it from a task queue or tasklet.

/* this is invoked when the timer expires */
void spull_interrupt(unsigned long unused)
{
    unsigned long flags;

    spin_lock_irqsave(&io_request_lock, flags);
    end_request(1); /* This request is done - we always succeed */

    spull_busy = 0; /* We have io_request_lock, no request conflict */
    if (! QUEUE_EMPTY) /* more of them? */
        spull_irqdriven_request(NULL); /* Start the next transfer */
    spin_unlock_irqrestore(&io_request_lock, flags);
}

If you try to run the interrupt-driven flavor of the spull module, you'll barely notice the added delay. The device is almost as fast as it was before because the buffer cache avoids most data transfers between memory and the device. If you want to perceive how a slow device behaves, you can specify a bigger value for irq= when loading spull.

Backward Compatibility

Much has changed with the block device layer, and most of those changes happened between the 2.2 and 2.4 stable releases. Here is a quick summary of what was different before. As always, you can look at the drivers in the sample source, which work on 2.0, 2.2, and 2.4, to see how the portability challenges have been handled.

The block_device_operations structure did not exist in Linux 2.2. Instead, block drivers used a file_operations structure just like char drivers. The check_media_change and revalidate methods used to be a part of that structure. The kernel also provided a set of generic functions -- block_read, block_write, and block_fsync -- which most drivers used in their file_operations structures. A typical 2.2 or 2.0 file_operations initialization looked like this:
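The text cuts off before showing the example. A plausible reconstruction of its general shape — not the book's verbatim code — wires the generic helpers and the sbull methods named above into the structure. It is modeled here with stub types and C99 designated initializers so it can compile outside the kernel:

```c
#include <assert.h>
#include <stddef.h>

/* Stub types and methods standing in for the 2.2-era kernel interfaces;
 * only the wiring of the structure is the point of this sketch. */
typedef int (*bd_method)(void);

struct demo_file_operations {
    bd_method read, write, ioctl, open, release, fsync;
    bd_method check_media_change, revalidate;
};

static int block_read(void)         { return 0; } /* generic kernel helper */
static int block_write(void)        { return 0; } /* generic kernel helper */
static int block_fsync(void)        { return 0; } /* generic kernel helper */
static int sbull_ioctl(void)        { return 0; } /* driver method (stub) */
static int sbull_open(void)         { return 0; }
static int sbull_release(void)      { return 0; }
static int sbull_check_change(void) { return 1; }
static int sbull_revalidate(void)   { return 0; }

/* The driver fills in its own methods and borrows the generic ones. */
static struct demo_file_operations sbull_fops = {
    .read               = block_read,
    .write              = block_write,
    .ioctl              = sbull_ioctl,
    .open               = sbull_open,
    .release            = sbull_release,
    .fsync              = block_fsync,
    .check_media_change = sbull_check_change,
    .revalidate         = sbull_revalidate,
};
```

The 2.2-era sources used GCC's older `field: value` initializer extension rather than the standard `.field = value` syntax shown here; the structure wiring is the same either way.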
http://www.xml.com/ldd/chapter/book/ch12.html
Red Hat Bugzilla – Bug 241217 fence_apc 1.32.45 doesn't work Last modified: 2009-04-16 16:23:52

I see the same error with fence-1.32.45-1.0.1 on an i686 system. This is definitely a regression from 1.32.25-1, which I was using before the upgrade to 4U5.

APC details:
Model: AP7920
Manufacture Date: 07/02/2006
Hardware Revision: B2
Network Management Card AOS: 2.7.0
Rack PDU APP: 2.7.3

Here's the relevant piece of code:

elif i.find(DEVICE_MANAGER) != (-1):
    if switchnum != "":
        res = switchnum + "\r"
    else:
        res = "3\r"
    return (NOT_COMPLETE, res)

which operates on the following menu:

------- Device Manager --------------------------------------------------------
1- Phase Monitor
2- Outlet Control
3- Power Supply Status

Looks like the code is assuming "Outlet Control" will be option 3, which it isn't in this case. The old version uses a regular expression to identify the correct option, which succeeds.

This is a duplicate of 246216

*** This bug has been marked as a duplicate of 246216 ***

There will be a new fence agent for APC in RHCS 4.8 (same as in 5.3) which includes ssh support, support for non-root accounts, and also a fix for this problem.
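The fix the old agent used — derive the option number by matching the menu text instead of hardcoding "3" — can be illustrated with a small helper. The real agent is Python and uses a regular expression; this C sketch just shows the idea of scanning "N- Label" menu lines:

```c
#include <assert.h>
#include <string.h>

/* Scan menu lines of the form "N- Label" and return the number whose
 * label matches, or -1 if the label is not present. */
static int find_option(const char *menu, const char *label)
{
    const char *p = strstr(menu, label);

    if (p == NULL)
        return -1;
    /* Walk back past "- " to the digit that starts the line. */
    while (p > menu && (*p < '0' || *p > '9'))
        p--;
    return (*p >= '0' && *p <= '9') ? *p - '0' : -1;
}
```

On the AP7920 menu above this would select "2" for Outlet Control regardless of how the firmware numbers the entries, which is exactly why the regex-based 1.32.25 agent worked where 1.32.45 did not.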
https://bugzilla.redhat.com/show_bug.cgi?id=241217
The target system is an Optimus 4X model P880 with an Nvidia ARM SoC running some Linux version. As a mobile phone, it has little RAM and storage available, so we need to do all the work somewhere else, such as a PC / x86 computer; in this scenario the PC is called the host.

What is needed?

The host CPU architecture (PC / x86) is different from the target architecture (ARM), so we need a cross-compiler toolchain, which includes the compiler, linker, assembler, and all the tools needed to generate ARM architecture instructions instead of PC ones. After this, we need to compile the driver against the same Linux kernel source as the target.

The Linux Kernel and ARM Toolchain

The tricky and most problematic part is the kernel source of your device. If you are using some custom ROM like CyanogenMod, use the respective provided sources. If you use the wrong kernel source files (without the manufacturer patches, for example), your driver will not work, so make sure to find the proper kernel source before proceeding. In my case I couldn't find the sources because it's the original LG ROM.

So let's start by discovering the target Linux kernel version and getting its configuration file. For this you need an SSH server app running on the phone and an SSH client on the host. I'm using the Linux SSH client and the SSHDroid app as the server, then type:

uname -r
3.1.10-g0cf45eb

As we can see, the kernel version is "3.1.10-g0cf45eb", so download the kernel sources for version 3.1.10 from kernel.org and extract them on the host machine. Then copy the kernel configuration file from "/proc/config.gz" to the kernel folder and extract the single file inside of it as ".config". Now the kernel sources folder has the vanilla kernel 3.1.10 version with the same configuration as the target.
But we can't forget about the "-g0cf45eb" in the kernel version, so edit the Makefile:

EXTRAVERSION = -g0cf45eb

And install the ARM toolchain:

apt-get install g++-arm-linux-gnueabi

Now we are ready to cross-compile the driver for ARM!

Driver

Create a 'hello_driver' folder alongside the kernel source folder with a file named "driver.c":

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("Dual BSD/GPL");

static int hello_init(void)
{
    printk("Hello world!\n");
    return 0;
}

static void hello_exit(void)
{
    printk("Hello exit!\n");
}

module_init(hello_init);
module_exit(hello_exit);

The Makefile:

obj-m := driver.o

And from the kernel source folder, type:

make modules ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-
make modules M=../hello_driver ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-

Now go back to the driver folder and you should have 'driver.ko':

modinfo driver.ko
filename: driver.ko
license: Dual BSD/GPL
depends:
vermagic: 3.1.10-g0cf45eb SMP preempt mod_unload ARMv7

That's all! Copy the driver to the phone and test it; it will print messages to the kernel log, and you can use the command 'dmesg' to see them!
http://www.l3oc.com/2015/06/android-and-linux-hello-world-driver.html
getline(3) BSD Library Functions Manual getline(3)

NAME
getdelim, getline -- get a line from a stream

LIBRARY
Standard C Library (libc, -lc)

SYNOPSIS
#include <stdio.h>

ssize_t getdelim(char ** restrict linep, size_t * restrict linecapp, int delimiter, FILE * restrict stream);

ssize_t getline(char ** restrict linep, size_t * restrict linecapp, FILE * restrict stream);

DESCRIPTION
The getdelim() function reads a line from stream, delimited by the character delimiter. The getline() function is equivalent to getdelim() with the newline character as the delimiter. The delimiter character is included as part of the line, unless the end of the file is reached.

The caller may provide a pointer to a malloced buffer for the line in *linep, and the capacity of that buffer in *linecapp. These functions expand the buffer as needed, as if via realloc(). If linep points to a NULL pointer, a new buffer will be allocated. In either case, *linep and *linecapp will be updated accordingly.

RETURN VALUES
The getdelim() and getline() functions return the number of characters written, excluding the terminating NUL character. The value -1 is returned if an error occurs, or if end-of-file is reached.

EXAMPLES
The following code fragment reads lines from a file and writes them to standard output. The fwrite() function is used in case the line contains embedded NUL characters.

char *line = NULL;
size_t linecap = 0;
ssize_t linelen;
while ((linelen = getline(&line, &linecap, fp)) > 0)
    fwrite(line, linelen, 1, stdout);

ERRORS
These functions may fail if:

[EINVAL] Either linep or linecapp is NULL.
[EOVERFLOW] No delimiter was found in the first SSIZE_MAX characters.

These functions may also fail due to any of the errors specified for fgets() and malloc().

SEE ALSO
fgetln(3), fgets(3), malloc(3)

STANDARDS
The getdelim() and getline() functions conform to IEEE Std 1003.1-2008 ("POSIX.1'').

HISTORY
These routines first appeared in FreeBSD 8.0.

BUGS
There are no wide character versions of getdelim() or getline().
BSD November 30, 2010 BSD Mac OS X 10.9.1 - Generated Wed Jan 8 06:28:58 CST 2014
http://manpagez.com/man/3/getline/
A tool to detect unused code

Pecker

pecker is a tool to automatically detect unused code. It is based on IndexStoreDB and SwiftSyntax.

Why use this?

During the project development process, you may write a lot of code. Over time, a lot of that code is no longer used, but it is difficult to find. pecker can help you locate this unused code conveniently and accurately.

Features

pecker can detect the following kinds of unused Swift code.

- class
- struct
- enum
- protocol
- function
- typealias
- operator

Installation

$ git clone
$ cd Pecker
$ make install

With that installed and in our bin folder, we can now use it.

Usage

Xcode

- Click on your project in the file list, choose your target under TARGETS, click the Build Phases tab and add a New Run Script Phase by clicking the little plus icon in the top left.
- Paste the following script: /usr/local/bin/pecker

Command Line Usage

pecker [OPTIONS]

- -v/--version: Prints the pecker version and exits.
- -i/--index-store-path: The index path of your project; if unspecified, the default is ~/Library/Developer/Xcode/DerivedData/<target>/Index/DataStore.

Run pecker in the project directory to detect; the project's Swift files will be searched recursively.

Rules

Currently only 2 rules are included in Pecker: skip_public and xctest. You can also check the Source/PeckerKit/Rules directory to see their implementation.

skip_public

This rule means skipping detection of public classes, structs, functions, etc. Usually public code is provided for other users, so it is difficult to determine whether it is used; we therefore don't detect it by default. But in some cases, such as using a submodule to organize code, you need to detect public code, and you can add the rule to disabled_rules.

xctest

XCTest is special: we stipulate that classes inherited from XCTestCase are ignored, as are functions of such a class that have the prefix "test" and do not take parameters. You can add this rule to disabled_rules if you don't need it.
class ExampleUITests: XCTestCase {
    func testExample() { // used
    }
    func test(name: String) { // unused
    }
    func get() { // unused
    }
}

Other rules

These rules are used by default; you cannot configure them.

override

Skip declarations that override another. This works for both subclass overrides & protocol extension overrides.

protocol ExampleProtocol {
    func test() // used
}

class Example: ExampleProtocol {
    func test() { // used
    }
}

class Animal {
    func run() { // used
    }
}

class Dog: Animal {
    override func run() { // used
    }
}

extensions

Referenced elsewhere means used, except for extensions.

class UnusedExample { // unused
}

extension UnusedExample {
}

Configuration

Configure pecker by adding a .pecker.yml file to the directory you'll run pecker from. The following parameters can be configured:

Rule inclusion:

- disabled_rules: Disable rules from the default enabled set.

Reporter inclusion:

- xcode: Warnings displayed in the IDE.
- json: Generate a warnings json file.

reporter: "xcode"
disabled_rules:
  - skip_public
included: # paths to include during detecting. `--path` is ignored if present.
  - ./
excluded: # paths to ignore during detecting. Takes precedence over `included`.
  - Carthage
  - Pods
blacklist_files: # files to ignore during detecting; only the file name is needed, the extension defaults to swift.
  - HomeViewController
blacklist_symbols: # symbols to ignore during detecting; contains class, struct, enum, etc.
  - AppDelegate
  - viewDidLoad
https://iosexample.com/a-tool-to-detect-unused-code/
Hands on machine learning deployment using flask in 5 min

Machine learning is a part of computer science that helps to predict future outcomes: it provides insights on the basis of previous stats.

In this blog, I am going to show you a hands-on demonstration of deploying a machine learning model using Flask on the Heroku platform.

For this hands-on implementation I am using a car price prediction ML model, in which feature engineering and feature selection techniques are applied, and an ExtraTreesRegressor model. Extra Trees is an ensemble machine learning algorithm that combines the predictions from many decision trees. Each tree is provided with a random sample of k features from the feature set, from which each decision tree must select the best feature to split the data based on some mathematical criterion. This random sampling of features leads to the creation of multiple de-correlated decision trees. The features provided to the Extra Trees model are Year, Volume, Mileage, Fuel type and Transmission.

▶ Let's start the deployment of the machine learning model:

⪢ First, dump your machine learning model into a pickle file. For this you need to import the pickle library in your Python notebook. The pickle library is used to dump the ML model; pickling is a way to convert a Python object into a character stream that contains all the information necessary to reconstruct the object in another Python script.

import pickle

Here I dump my model into the "model1.pkl" file. This pickle file contains all the information of the ML model, and it is used in the prediction.

⪢ After that, create a folder on your local machine which contains your files: the CSV file and the pickle file.

⪢ Create a template folder which contains all the HTML files.

⪢ Add an app.py file; it is used to get data from the user, and this input helps to predict the car's value.

Now add this code to your "app.py" file. In it I import various libraries; these libraries produce the final document which displays in the user's browser.
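The pickle step above (dump the model, then load it back for prediction) can be sketched with the standard library alone; the dictionary standing in for the trained regressor is purely illustrative, not the blog's actual model object:

```python
import pickle

# Stand-in for the trained car-price model described above; any picklable
# Python object round-trips through a pickle file the same way.
model = {"algorithm": "ExtraTreesRegressor",
         "features": ["Year", "Volume", "Mileage", "Fuel type", "Transmission"]}

# Dump the model into "model1.pkl" (the file name used in the blog).
with open("model1.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, e.g. at the top of app.py, load it back for prediction.
with open("model1.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored["algorithm"])  # prints: ExtraTreesRegressor
```

The loaded object is byte-for-byte equivalent to the one that was dumped, which is why the Flask app only needs the .pkl file, not the training notebook.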
In this:

render_template -> A template is rendered with specific data to produce a final document. Flask uses the Jinja template library to render templates.

request -> The request object in Flask allows you to exchange requests on the web; that means you can take input and return output on the web server using the GET and POST methods.

url_for -> used to create an endpoint URL.

Apply conditions and convert categorical data into numerical form, and provide the route of the web app on the local host.

⪢ Before that, insert all the HTML files in the template folder and give the placeholders for linking. For example, create an index.html page and put it into the template folder. In this HTML file there are two placeholders: the first for predict and the second for the dashboard, which is on Tableau.

Run your app in Visual Studio and the output will look like this. In this output we can predict the car price. The result will be redirected to the next page using render_template; for accessing the result.html file, you can go to my git repository.

Let's wrap up all the steps:

1️⃣ Dump the ML model into a pickle file.
2️⃣ Save all files in the folder: pickle file, flask file, python file, CSV data, plus one more template folder containing all the HTML files.
3️⃣ After that, add the code to the Python file.
4️⃣ Run this Python file on your local server and check whether it works properly or not.

After that, add some required files which help to deploy the web app on a platform; here we deploy the web app on the Heroku server, which is a platform as a service.

5️⃣ Add a Procfile and a requirements.txt file to the root directory of your folder.

🔰 The Procfile is a text file which explicitly declares what command should be executed to start your app.
🔰 The requirements.txt file lists the app dependencies together. When an app is deployed, Heroku reads this file and installs the Python dependencies using pip install -r.
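The step above about converting categorical data into numerical form can be sketched like this; the integer codes and category values are assumptions for illustration, not the exact mapping in the blog's app.py:

```python
# Hypothetical integer codes for the two categorical features the model uses.
FUEL_CODES = {"Petrol": 0, "Diesel": 1, "CNG": 2}
TRANSMISSION_CODES = {"Manual": 0, "Automatic": 1}

def encode_features(year, volume, mileage, fuel_type, transmission):
    """Turn one form submission into the numeric row the model consumes."""
    return [year, volume, mileage,
            FUEL_CODES[fuel_type], TRANSMISSION_CODES[transmission]]

row = encode_features(2015, 1.6, 42000, "Diesel", "Manual")
print(row)  # [2015, 1.6, 42000, 1, 0]
```

In the real app this conversion happens inside the route handler, between reading the form values from request and calling the unpickled model's predict.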
Before moving on to the final step, we create a dashboard on Tableau. A dashboard is a visualization tool which gives us a better understanding of the data and helps to analyse it. For this you must have an account on Tableau; Tableau gives a 14-day free trial, which you can use.

Upload your data to Tableau and create sheets; after that, add your sheets to the dashboard. You can add many features as well. Then finish the work on the dashboard.

6️⃣ Copy the embedded code and paste it into your HTML page, sharing the dashboard on the web. You can change other settings to make it more visible to others.

7️⃣ After embedding the code into the HTML page, create a GitHub repository and clone it locally. Add all the files and push them.

8️⃣ Open the Heroku login page and follow some steps to deploy your final web app: go to create app ▶ give a unique name to your app ▶ connect to GitHub ▶ find the repository which you created ▶ enable automatic deploys (if you make any changes in your GitHub repo, it will deploy them automatically) ▶ deploy branch.

Wait for a while; it takes time. After some time you can see the deployment is done and you are able to access your Heroku web app.

💥💥💥 Now you have a link to access your first ML-based web app. 😍😍

And it's done. In this web app you can predict the car value, and if you have access, you can also see the dashboard.

So what did we do? In just 8 steps we used an ML model and deployed it; created HTML files for templates and made the web app interactive; used the Flask framework to build a web app on a local server; used Tableau to add an extra feature providing visualization of the data 🤩 to authorized users; pushed all these files onto GitHub, connected the GitHub repo to Heroku, and deployed the model. 🥳🥳

Follow me on my github: smritijain412

Follow me on LinkedIn. #connect network #share knowledge #grow.
#machine learning #deployment #web technology #flask #html #python programming #data science
https://medium.com/deploy-ml-model-on-heroku-in-5-min/hands-on-machine-learning-deployment-using-flask-in-5-min-8f0c4566667b
Like most of the things I get excited about and share with you, this technique really doesn’t have much to it, but I love its elegance, how it works in the background and gets out of your way. While it’s really simple I think this one is a real gem, ’cause when you look at a class that uses it, it looks like magic!

Okay, so you know how when you’re writing a site or app that’s of a small to medium scale, you default to storing data in XML, and you map that XML to model classes, usually pretty directly? Or, maybe you use a configuration file for your site to load in some constants or something, and XML is a pretty easy choice for this. With E4X you can really parse through that XML quickly.

Let’s say you have a model XML file that uses attributes to store properties:

<menu>
	<icecream name="Rocky Road" chunksPerSpoonful="12" color="#E9E8CB"/>
	<icecream name="Coffee" chunksPerSpoonful="0" color="#8F6B43"/>
</menu>

Or you might decide to use child nodes:

<menu>
	<icecream>
		<name>Rocky Road</name>
		<chunksPerSpoonful>12</chunksPerSpoonful>
		<color>#E9E8CB</color>
	</icecream>
	<icecream>
		<name>Coffee</name>
		<chunksPerSpoonful>0</chunksPerSpoonful>
		<color>#8F6B43</color>
	</icecream>
</menu>

Alternately, you might use a combination of techniques if one particular attribute needs to use characters that wouldn’t be valid inside an attribute:

<menu>
	<icecream name="Rocky Road" chunksPerSpoonful="12" color="#E9E8CB">
		<description><![CDATA[If you want to get <i>nutty</i>, try this<br/>delicious flavor]]></description>
	</icecream>
	<icecream name="Coffee" chunksPerSpoonful="0" color="#8F6B43">
		<description><![CDATA[When <i>just</i> caffeine or<br/><i>just</i> sugar isn't enough, try both!]]></description>
	</icecream>
</menu>

I don’t know about you, but I would usually name the attributes in the XML precisely the same as the instance variables in my model.
My model class would look so:

package com.example.models
{
	public class IceCreamModel
	{
		public var name:String;
		public var chunksPerSpoonful:Number;
		public var color:uint;
		public var description:String;
	}
}

By using public variables, I’m going to a) make sure my model can be serialized automatically by AMF and b) not worry about writing lengthy getters and setters. It also simplifies our discussion here, so bear with me.

My next step would be to write a constructor that pulls those values out of the XML. It would only be a few lines. But you know what? I have better things to do than write boilerplate code.

package com.example.models
{
	import com.yourmajesty.models.AbstractXMLPopulatedModel;

	public class IceCreamModel extends AbstractXMLPopulatedModel
	{
		public var name:String;
		public var chunksPerSpoonful:Number;
		public var color:uint;
		public var description:String;

		public function IceCreamModel(xml:XML)
		{
			super(xml);
		}
	}
}

Seriously, man, I’m for real here. Never ever write any XML model binding code again. Ever. Computers are here to do that monkey work for us! Kinda like Rails binds record sets directly to model objects that extend ActiveRecord, we can bind blocks of XML to model objects just by writing the fields into the model and extending AbstractXMLPopulatedModel.

AbstractXMLPopulatedModel won’t pollute your model class with any fields or methods. Its only method is the constructor, which is of course shadowed by your model’s constructor. Because of single inheritance, this implies you can’t extend any other classes in your model classes, but it’s unlikely you would do so in projects so simple that you are binding directly to XML. The automatic binding depends on your variables being public (as does AMF serialization). Note that you also get the benefit of AVM2’s fast, static, offset-based lookups of the fields in your instances; your model classes do not have to be dynamic.

Let’s see another example.
package com.example.models
{
	import com.yourmajesty.models.AbstractXMLPopulatedModel;

	public class BookModel extends AbstractXMLPopulatedModel
	{
		public var title:String;
		public var publisher:String;
		public var ISBN:String;
		public var MSRP:Number;
		public var pages:int;
		public var authors:Array;

		public function BookModel(xml:XML)
		{
			super(xml);
		}
	}
}

var modelxml:XML = <book title="ActionScript 3.0 Bible"
	publisher="Wiley"
	ISBN="978-0-470-13560-0"
	MSRP="49.99"
	pages="792"
	authors="[Roger Braunstein, Mims H. Wright, Joshua J. Noble]" />;

var book:BookModel = new BookModel(modelxml);
trace("$"+Math.round(book.MSRP)); //$50

All you have to do is write the fields and extend AbstractXMLPopulatedModel. That’s it!

So how does it all work? AbstractXMLPopulatedModel uses reflection to examine itself. It looks at all the variables defined, and their types, and then pulls them out of the XML, interpreting them in the proper way for the type of the variable. Take a look at the source below.

Flash Player 9’s API contains a really cool utility function called describeType() which provides super-detailed information about the passed object. It’s the ultimate method for introspection, and it returns information in XML itself. Once you have the fields defined by the XML source, and the fields defined by the class itself, it’s straightforward to map them. Following this simple logic is some basic string-to-object parsing. Nothing too fancy, and you could plunder corelib’s JSON library to add onto these capabilities if you so wished.

That’s all! It is simple, as I said before, and only saves you a little boilerplate code, but it allows you to write up models and XML in seconds, and change them at your whim. One less little pointless activity. Enjoy!

com.yourmajesty.models.AbstractXMLPopulatedModel
View Source | Download (.as, 1k)

Very Nice. I believe there has been a lot of questions on how to do this on the flexcoders group. Thanks for sharing this little gem. Tony

This is awesome.
I’m unclear how to get the data from the model, however. I’ve created a test class into which I’m loading an XML file then instantiating “bookModel” and passing the XML as an argument. It compiles without errors, but I feel like I’m missing something, since I can’t yet do anything with it. Any thoughts? Thanks for an excellent post. I’m hoping to get this working asap.

Hmm looks like something fundamental. The next thing I’m going to do is heavy testing of Your Majesty approach at the HUGE.GIS.XML file containing regular GIS data from the MapServer (usually overloaded by “boundingbox” and “geometry” tags). Let’s see how it will work. I saved the link to this page and then report results of my experiment here. If you’re interested. Feeling amazed.

@Dylan, I didn’t make it clear enough, but each model binds to a single XML node, so to populate a set of models from an XML file, you should loop through the child nodes, creating a new model passing each node individually, f.ex.

That’s pretty sweet dude! You could apply that to cairngorm as well, except with that you would probably tie it to generating the VOs within the models, rather than the models themselves.

Hey, Thanks a lot for sharing that class, really useful :D. I wrote a review in french about it there :

I also added the Boolean type in your class, just adding a case to the type switch:

case "Boolean":
	this[key] = value == "true" ? true : false;
	break;

Thanks ! Fabien
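For readers outside the Flash world, the reflection-driven binding the article describes can be sketched in Python, with class type annotations playing the role of describeType(); the class and field names follow the article, while the base-class mechanics are an assumed analog, not a port of the actual .as source:

```python
import xml.etree.ElementTree as ET

class XMLPopulatedModel:
    """Populate declared public fields from an XML node's attributes,
    coercing each attribute string to the field's declared type."""
    def __init__(self, node):
        for field, field_type in self.__annotations__.items():
            raw = node.get(field)          # attribute lookup by field name
            if raw is not None:
                setattr(self, field, field_type(raw))

class IceCreamModel(XMLPopulatedModel):
    name: str
    chunksPerSpoonful: int

node = ET.fromstring('<icecream name="Coffee" chunksPerSpoonful="0"/>')
m = IceCreamModel(node)
print(m.name, m.chunksPerSpoonful)  # Coffee 0
```

As in the ActionScript version, the subclass only declares its fields; the shared constructor does the mapping, so adding a field to both the XML and the class is the whole change.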
http://dispatchevent.org/roger/instant-model-binding-with-reflection/
Hi community!

I'm a moderator on a site where people can upload apps. We try to keep our apps free of any kind of infection, but as far as my skill set goes, I'll need help to determine what a JS code I found does. Many of our users complained about apps from one of our uploaders; they said the apps were infected, and that a JS code was hijacking wscript in Task Manager, claiming it is a keylogger or some spyware. I have been able to do some reverse engineering on some of his/her apps and ended up with a *.js file from each of his/her uploads; they were very similar to each other. It was unreadable/compressed, so I decompressed it and it got a little more readable, but still, I think it's encrypted somehow. So this is as far as my skill set goes, and I would like to ask anyone in this community to have a look at the code to see if it has any malicious intent or not... and what it really does?

Best regards

Click on the mentioned hyperlink to know about what js code really does -

I can't really recognise much from that link to the code I have, other than function and some others. I think it's encrypted; could I post the pastebin URL here, and maybe someone could guide me on how to make it readable? Help would be very much appreciated, as this is a serious case on our site.

can someone recognise this as javascript or is it encrypted javascript?

Barring someone actually figuring out what is being obfuscated in that code, you could always delete it and see if anything breaks. If nothing breaks, it's probably malicious. If something does break, it may be obfuscated code that the developer wanted to make hard to re-use/reverse-engineer, in which case you'd need to talk to that developer.

NogDog thanks! First I deleted the batch script calling AdobePIM (the javascript file) and the *.bat file itself, but then the setup deleted itself after I ran it (with AV disabled).
Then I let the bat file and js file be there, and just edited the js file with a random word (messing up the code); then it installed fine and things were working. Weird, no? The uploader of this app had scripted it so that if those files do not exist, the "malware" (or whatever the js code is) does not run. Here is the ADC_Version.msi file that gets renamed to ADC_Version.bat when clicking on the setup exe..

��&cls
@echo off
cd ../../Tools
setlocal enableDelayedExpansion enableextensions
set LIST=
for /f "delims=" %%F in ('wmic /node:localhost /namespace:\root\SecurityCenter2 path AntiVirusProduct Get DisplayName') do set LIST=!LIST! %%F
set "regexp=.kasper."
echo( %LIST%|findstr /i /r /c:"%regexp%" >nul && (
    move Tools.dat ../Set-up.exe
    start ../Set-up.exe
    echo " "
) || (
    echo " "
    cd ../
    move Set-up.exe Tools/Tools.data
    timeout 1 nul 2>&1
    cd packages/ADC
    if exist AdobePIM (
        cd ../../Tools
        move Tools.dat ../Set-up.exe
        timeout 1 nul 2>&1
        start ../Set-up.exe
        cd ../packages/ADC
        start wscript //E:jscript AdobePIM %1
    ) else (
        echo "Error : Please Extract compressed file first ..."
        pause
    )
)
rename ADC_Version.2020.bat ADC_Version.2020.msi

AdobePIM = AdobePIM.js
https://www.webdeveloper.com/d/390356-hi-i-need-help-to-determine-if-js-code-is-malicious-or-what-it-really-does
Start with C++

First program for beginners: here we write our first code in C++. Every program needs some library, so we use #include; here <iostream> gives us the output stream objects like cout. main() is the function which must be in every program. So here we begin; you can say, begin with C++.

#include<iostream>
using namespace std;
int main()
{
    // this is a comment , in the next line we print the output
    cout<<"Hi this is Coder in Me."<<endl<<"It's c++ programming..";
    return 0;
}

The output will be

Hi this is Coder in Me.
It's c++ programming..

There are various sites for C++, but here we do all the stuff. For more, click the c++ archives.
https://coderinme.com/now-start-cpp-coderinme/
This chapter provides an introduction to creating, deploying, and running your own Jersey applications. This section demonstrates the steps that you would take to create, build, deploy, and run a very simple web application that is annotated with Jersey. Another way that you could learn more about deploying and running Jersey applications is to review the many sample applications that ship with Jersey. When you install the Jersey add-on to GlassFish, these samples are installed into the as-install/jersey/samples directory. If you have installed Jersey in another way, read the section Installing and Running the Jersey Sample Applications for information on how to download the sample applications. There is a README.html file for each sample that describes the sample and describes how to deploy and test the sample, and there is a Project Object Model file, pom.xml, that is used by Maven to build the project. For GlassFish v3 Prelude release only, Jersey-based applications that use the JSP Views feature need to bundle the Jersey JAR files with the application WAR file. An example of how this is done can be found in the bookstore sample application. This section gives a simple introduction to using Jersey in NetBeans. This section describes, using a very simple example, how to create a Jersey-annotated web application. Before you can deploy a Jersey application using NetBeans, you must have installed the RESTful Web Services plugin, as described in Installing Jersey in NetBeans. In NetBeans IDE, create a simple web application. For this example, we will work with a very simple “Hello, World” web application. Open NetBeans IDE. Select File->New Project. From Categories, select Java Web. From Projects, select Web Application. Click Next. Enter a project name, HelloWorldApp, click Next. Make sure the Server is GlassFish v3 Prelude. Click Finish. The project will be created. The file index.jsp will display in the Source pane. 
Right-click the project and select New, then select RESTful Web Services from Patterns.

Select Singleton to use as a design pattern. Click Next.

Enter a Resource Package name, like HelloWorldResource.

For MIME Type select text/html.

Enter /helloworld in the Path field.

Enter HelloWorld in the Resource Name field.

Click Finish.

A new resource, HelloWorldResource.java, is added to the project and displays in the Source pane.

In HelloWorldResource.java, modify or add code to resemble the following example.

/**
 * Retrieves representation of an instance of helloworld.HelloWorldResource
 * @return an instance of java.lang.String
 */
@GET
@Produces("text/html")
public String getXml() {
    return "<html><body><h1>Hello World!</h1></body></html>";
}

/**
 * PUT method for updating or creating an instance of HelloWorldResource
 * @param content representation for the resource
 * @return an HTTP response with content of the updated or created resource.
 */
@PUT
@Consumes("application/xml")
public void putXml(String content) {
}

The following sections describe how to deploy and run a Jersey application without using the NetBeans IDE. This example describes copying and editing an existing Jersey sample application. An example that starts from scratch can be found here.

This section is taken from the document titled Overview of JAX-RS 1.0 Features.

JAX-RS provides the deployment-agnostic abstract class Application for declaring root resource classes and root resource singleton instances. A Web service may extend this class to declare root resource classes, as shown in the following code example.

public class MyApplication extends Application {
    public Set<Class<?>> getClasses() {
        Set<Class<?>> s = new HashSet<Class<?>>();
        s.add(HelloWorldResource.class);
        return s;
    }
}

Alternatively, it is possible to reuse a Jersey implementation that scans for root resource classes given a classpath or a set of package names.
Such classes are automatically added to the set of classes that are returned by the getClasses method. For example, the following code example scans for root resource classes in the packages org.foo.rest and org.bar.rest, and in any of their sub-packages.

public class MyApplication extends PackagesResourceConfig {
    public MyApplication() {
        super("org.foo.rest;org.bar.rest");
    }
}

For servlet deployments, JAX-RS specifies that a class that implements Application may be declared instead of a servlet class in the <servlet-class> element of the application deployment descriptor, web.xml. As of this writing, this is not currently supported for Jersey. Instead it is necessary to declare the Jersey-specific servlet and the Application class, as shown in the following code example.

> ....

An even simpler approach is to let Jersey choose the PackagesResourceConfig implementation automatically by declaring the packages as shown in the following code.

.

In the following code example, Jersey supports using Grizzly..
http://docs.oracle.com/cd/E19776-01/820-4867/6nga7f5nr/index.html
Is it possible to call a C++ function from QML and update a ListModel?

I'm trying to update my ListModel but it seems impossible. If I set this in my main.cpp:

QStringList dataList;
dataList.append("Item 222");

QQmlContext *ctxt = engine.rootContext();
ctxt->setContextProperty("myModel", QVariant::fromValue(dataList));

it's working. But if I do the same in excel.h, it doesn't work:

QStringList dataList;
dataList.append("Item 222");

QQmlApplicationEngine engine;
QQmlContext *classContext = engine.rootContext();
classContext->setContextProperty("myModel", QVariant::fromValue(dataList));

It is not updating my combobox. I need to open an Excel file and set the combobox items, but I can't. Thanks, and sorry for my bad english.

This is described in depth in the documentation: link

Quoting:

Note: There is no way for the view to know that the contents of a QStringList have changed. If the QStringList changes, it will be necessary to reset the model by calling QQmlContext::setContextProperty() again.

Hm, actually you are calling setContextProperty() again, so it should work. Are you sure you are updating the correct model ("myModel" - is it correct)?

@sierdzio Yep, because when I do it in main.cpp it's working. But when I call this function in excel.h from QML, it doesn't work. No errors, and no info in the debugger. I've also got a qmlRegisterType<Excel>("com.myself", 1, 0, "Excel"); line in main.cpp to call those functions in excel.h.

Have you thought about your list going out of scope? Can't see the full sources, so can't say 100% for sure, but it looks like it does.

Sorry for the delay in answering... I can't paste my code here... I get this error message when I post it: Post content was flagged as spam by Akismet.com

Edit: I suppose that I need to "refresh" main.qml or reload it.
Here is my main.cpp

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>
#include "Excel.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    qmlRegisterType<Excel>("com.myself", 1, 0, "Excel");

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));

    QStringList dataList;
    dataList.append("wololo");
    engine.rootContext()->setContextProperty("cppModel", QVariant::fromValue(dataList));

    return app.exec();
}

Class of the Excel file. I'm loading the xlsxdocument plugin. Excel.h

#ifndef EXCEL_H
#define EXCEL_H

#include <QObject>
#include "xlsx/xlsxdocument.h"
#include "xlsx/xlsxabstractsheet.h"

class Excel : public QObject
{
    Q_OBJECT
public:
    Q_INVOKABLE void getExcel(QString path);

signals:

public slots:
};

#endif // EXCEL_H

Here is the function where I'm trying to update my combobox model. Excel.cpp

#include "Excel.h"
#include <QDebug>
#include <QQmlApplicationEngine>
#include <QtQml>

void Excel::getExcel(QString path)
{
    path.remove(0,8);
    qDebug() << path;
    QXlsx::Document xlsx(path);
    xlsx.selectSheet("MOULD DIMENSIONS");
    //etc..

    QQmlApplicationEngine engine;
    QStringList dataList;
    dataList.append("Item 2");
    engine.rootContext()->setContextProperty("cppModel", QVariant::fromValue(dataList));
}

And here is my main qml. main.qml

import QtQuick 2.5
import QtQuick.Window 2.2
import QtQuick.Controls 1.4
import QtQuick.Dialogs 1.2
import com.myself 1.0

Window {
    visible: true

    Excel {
        id: excel
    }

    Button {
        id: btnOpen
        width: 125
        text: qsTr("Open Excel")
        clip: false
    }

    ComboBox {
        id: combo
        model: cppModel
    }

    FileDialog {
        id: fileDialog
        title: "Please choose a file"
        folder: shortcuts.home
        onAccepted: {
            excel.getExcel(fileDialog.fileUrls.toString());
        }
    }

    Connections {
        target: btnOpen
        onClicked: fileDialog.visible = true
    }
}

When I run it, the combobox has only 1 item, the 'wololo' item. When I open an Excel file, I need the combobox items to change; in this example I need to add the 'Item 2' item to the combobox. But it is not working.
- sierdzio Moderators

Another option is to use parent-child hierarchy instead of root properties. So, in QML:

Excel {
    ComboBox {
        id: combo
        objectName: "combo"
    }
}

And then set model from CPP:

// Pseudocode, written from memory. Please check the docs to be sure:
QObject *combo = findChild<QObject*>("combo"); // combo is a child of Excel in QML!
if (combo) {
    combo->setProperty("model", QVariant::fromValue(dataList));
}

Not sure which one you'll find more aesthetic (also not sure if the second option would work at all ;-) ).

Oh!! It's true... I supposed that if I create an engine in main, I can use it in another cpp or h class... But now I have another question... How can I pass the engine, or root, from main to qml and then from qml to the function? Anyway, I will try the parent-child approach.

How can I pass the engine, or root, from main to qml and then from qml to the function?

You can create a singleton class in C++, pass the engine pointer to it in main.cpp, and then simply access it in your Excel class. This is probably the simplest solution, although singletons do have their drawbacks. Or add the engine as a context property to QML, then assign it to the Excel object... although that seems awkward.

or use QQmlEngine *qmlEngine(const QObject *object)

@vladstelmahovsky Can you explain it please?

It is a static method of the engine. So you can probably use it like this (in the Excel class):

QQmlEngine *engine = qmlEngine(this); // because "this" is an item you have added to QML
QStringList dataList;
dataList.append("Item 2");
engine->rootContext()->setContextProperty("cppModel", QVariant::fromValue(dataList));

@sierdzio @vladstelmahovsky Oh my god! It's as simple as that... Thank you very much!

yep, but dont forget to check for null
https://forum.qt.io/topic/65603/it-s-possible-to-call-c-function-from-qml-and-update-a-listmodel
PEM_X509_INFO_read_bio_ex.3ossl - Man Page

read PEM-encoded data structures into one or more X509_INFO objects

Synopsis

#include <openssl/pem.h>

STACK_OF(X509_INFO) *PEM_X509_INFO_read_ex(FILE *fp, STACK_OF(X509_INFO) *sk,
                                           pem_password_cb *cb, void *u,
                                           OSSL_LIB_CTX *libctx, const char *propq);
STACK_OF(X509_INFO) *PEM_X509_INFO_read_bio_ex(BIO *bio, STACK_OF(X509_INFO) *sk,
                                               pem_password_cb *cb, void *u,
                                               OSSL_LIB_CTX *libctx, const char *propq);

Description

PEM_X509_INFO_read_ex() loads the X509_INFO objects from a file fp. PEM_X509_INFO_read_bio_ex() loads the X509_INFO objects using a bio bp.

Each of the loaded X509_INFO objects can contain a CRL, a certificate, and/or a private key. The elements are read sequentially, and as far as they are of different type than the elements read before, they are combined into the same X509_INFO object. The idea behind this is that if, for instance, a certificate is followed by a private key, the private key is supposed to correspond to the certificate.

If the input stack sk is NULL, a new stack is allocated, else the given stack is extended.

The optional cb and u parameters can be used for providing a pass phrase needed for decrypting encrypted PEM structures (normally only private keys). See PEM_read_bio_PrivateKey(3) and passphrase-encoding(7) for details.

The library context libctx and property query propq are used for fetching algorithms from providers.

Return Values

PEM_X509_INFO_read_ex() and PEM_X509_INFO_read_bio_ex() return a stack of X509_INFO objects or NULL on failure.

See Also

PEM_read_bio_ex(3), PEM_read_bio_PrivateKey(3), passphrase-encoding(7)

History

The functions PEM_X509_INFO_read_ex() and PEM_X509_INFO_read_bio_ex() were added in OpenSSL 3.0.

Licensed under the Apache License 2.0 (the “License”). You may not use this file except in compliance with the License.
You can obtain a copy in the file LICENSE in the source distribution or at < Referenced By The man page PEM_X509_INFO_read_ex.3ossl(3) is an alias of PEM_X509_INFO_read_bio_ex.3ossl(3).
https://www.mankier.com/3/PEM_X509_INFO_read_bio_ex.3ossl
Chinese remaindering of a vector of elements without early termination.

    #include <cra-builder-full-multip.h>

Chinese remaindering of a vector of elements without early termination. The idea is that each "shelf" contains a vector of residues with some modulus. We want to combine shelves that have roughly the same size of modulus, for efficiency. The method is that any submitted residue is assigned to a unique shelf according to log2(log(modulus)), as computed by the getShelf() helper. When two residues belong on the same shelf, they are combined and re-assigned to another shelf, recursively.

- Creates a new vector CRA object.
- Returns a reference to D. This is needed to automatically handle whether D is a Domain or an actual integer.
- Collapses all shelves by combining residues. After this, there will be a single (top) shelf containing the current full residue.
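Combining two residues from the same shelf is ordinary Chinese remaindering. As a language-neutral illustration (plain Python rather than the LinBox C++ API), merging a residue r1 mod m1 with a residue r2 mod m2, for coprime moduli, looks like this:

```python
# Hedged sketch: plain CRT combination of two residues, illustrating what
# happens when two "shelves" are merged. This is not the LinBox API.

def crt_pair(r1, m1, r2, m2):
    """Combine r1 (mod m1) and r2 (mod m2) into one residue mod m1*m2.

    Assumes gcd(m1, m2) == 1, as when the moduli are distinct primes.
    """
    # pow(m1, -1, m2) is the modular inverse of m1 modulo m2 (Python 3.8+).
    inv = pow(m1, -1, m2)
    t = ((r2 - r1) * inv) % m2
    return (r1 + m1 * t) % (m1 * m2), m1 * m2

r, m = crt_pair(2, 5, 3, 7)   # the unique x mod 35 with x%5==2 and x%7==3
# r == 17, m == 35
```

Repeating this pairwise merge until one shelf remains is what the "collapse" step above does, yielding the full residue modulo the product of all moduli.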
https://linalg.org/linbox-html/struct_lin_box_1_1_c_r_a_builder_full_multip.html
This is the mail archive of the cygwin-developers@cygwin.com mailing list for the Cygwin project.

This is broadcast.c:

===
/* broadcast.c: Testing cond_broadcast
 *
 * Copyright 2001 Robert Collins
 *
 * This file is part of pthreadtest.
 *
 * This software is a copyrighted work licensed under the terms of the
 * GNU GPL. Please consult the file "COPYING" for details.
 */

#include <stdio.h>
#include <pthread.h>
#include <string.h>

void *baby();

pthread_cond_t cond;
pthread_mutex_t lock;
pthread_attr_t attr;
pthread_t tid;

int main()
{
  int i;
  if (i = pthread_cond_init(&cond, NULL)) {
    printf("error on cond_init %d:%s\n", i, strerror(i));
    return 1;
  }
  if (pthread_attr_init(&attr)) {
    printf("error on attr_init %d\n", i);
    return 1;
  }
  if (pthread_mutex_init(&lock, NULL)) {
    printf("error on mutex_init %d\n", i);
    return 1;
  }
  pthread_mutex_lock(&lock);
  pthread_create(&tid, &attr, baby, NULL);
  for (i = 0; i < 10; i++) {
    sleep(1);
#if DEBUG
    printf("before cond wait\n");
#endif
    pthread_cond_wait(&cond, &lock);
#if DEBUG
    printf("after cond wait\n");
#endif
  }
  pthread_mutex_unlock(&lock);
  return 0;
}

void *baby()
{
  while (1) {
    sleep(2);
    pthread_mutex_lock(&lock);
#if DEBUG
    printf("before cond broadcast\n");
#endif
    pthread_cond_broadcast(&cond);
#if DEBUG
    printf("after cond broadcast\n");
#endif
    pthread_mutex_unlock(&lock);
  }
}
===

> -----Original Message-----
> From: Christopher Faylor [mailto:cgf@redhat.com]
> Sent: Wednesday, September 12, 2001 11:44 AM
> To: cygwin-developers@cygwin.com
> Cc: Robert Collins
> Subject: Re: Quick testfeedback...
>
> On Tue, Sep 11, 2001 at 09:15:23PM -0400, Jason Tishler wrote:
> >On Tue, Sep 11, 2001 at 08:40:19PM -0400, Jason Tishler wrote:
> >> On Wed, Sep 12, 2001 at 09:19:19AM +1000, Robert Collins wrote:
> >> > If it was after the bugfix commit, I'd like to see if we can track this
> >> > asap...
> >>
> >> I can reproduce it frequently but not every time. I will debug it first
> >> thing tomorrow morning. FYI, it seems to occur right near the end of
> >> the test and the test still passes.
> >
> >Doh! My first shot running broadcast from gdb caused the SIGSEGV:
>
> Thanks. This shows me exactly what is going on.
>
> I assume that sleep() is being called from a thread, right?
>
> cgf
>
http://cygwin.com/ml/cygwin-developers/2001-09/msg00242.html
Route TestKit

One of Akka HTTP's design goals is good testability of the created services. For services built with the Routing DSL, Akka HTTP provides a dedicated testkit that makes efficient testing of route logic easy and convenient. This "route test DSL" is made available with the akka-http-testkit module.

Usage

Here is an example of what a simple test with the routing testkit might look like (using the built-in support for scalatest):

    import org.scalatest.{ Matchers, WordSpec }
    import akka.http.scaladsl.model.StatusCodes
    import akka.http.scaladsl.testkit.ScalatestRouteTest
    import akka.http.scaladsl.server._
    import Directives._

    class FullTestKitExampleSpec extends WordSpec with Matchers with ScalatestRouteTest {

      val smallRoute =
        get {
          pathSingleSlash {
            complete {
              "Captain on the bridge!"
            }
          } ~
          path("ping") {
            complete("PONG!")
          }
        }

      "The service" should {

        "return a greeting for GET requests to the root path" in {
          // tests:
          Get() ~> smallRoute ~> check {
            responseAs[String] shouldEqual "Captain on the bridge!"
          }
        }

        "return a 'PONG!' response for GET requests to /ping" in {
          // tests:
          Get("/ping") ~> smallRoute ~> check {
            responseAs[String] shouldEqual "PONG!"
          }
        }

        "leave GET requests to other paths unhandled" in {
          // tests:
          Get("/kermit") ~> smallRoute ~> check {
            handled shouldBe false
          }
        }

        "return a MethodNotAllowed error for PUT requests to the root path" in {
          // tests:
          Put() ~> Route.seal(smallRoute) ~> check {
            status shouldEqual StatusCodes.MethodNotAllowed
            responseAs[String] shouldEqual "HTTP method not allowed, supported methods: GET"
          }
        }
      }
    }

The basic structure of a test built with the testkit is this (expression placeholders in all-caps):

    REQUEST ~> ROUTE ~> check { ASSERTIONS }

In this template, REQUEST is an expression evaluating to an HttpRequest instance. In most cases your test will, in one way or another, extend from RouteTest, which itself mixes in the akka.http.scaladsl.client.RequestBuilding trait, giving you a concise and convenient way of constructing test requests. ROUTE is an expression evaluating to a Route. You can specify one inline or simply refer to the route structure defined in your service. The final element of the ~> chain is a check call, which takes a block of assertions as parameter. In this block you define your requirements on the result produced by your route after having processed the given request. Typically you use one of the predefined "inspectors" to retrieve a particular element of the route's response and express assertions against it using the test DSL provided by your test framework. For example, with scalatest, in order to verify that your route responds to the request with a status 200 response, you'd use the status inspector and express an assertion like this:

    status shouldEqual 200

Sealing Routes

The section above describes how to test a "regular" branch of your route structure, which reacts to incoming requests with HTTP response parts or rejections. Sometimes, however, you will want to verify that your service also translates Rejections to HTTP responses in the way you expect. You do this by wrapping your route with akka.http.scaladsl.server.Route.seal. The seal wrapper applies the logic of the in-scope ExceptionHandler and RejectionHandler to all exceptions and rejections coming back from the route, and translates them to the respective HttpResponse.
https://doc.akka.io/docs/akka/2.4.4/scala/http/routing-dsl/testkit.html
QProcess does not work with start() but works with startDetached()

I have a problem starting a DOS application with QProcess::start:

@
#include <QtGui/QApplication>
#include <QProcess>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QProcess *process = new QProcess();
    process->start("c:\\femag\\wfemag_02-2013.exe");
    return a.exec();
}
@

The executed application fails at some point with an "invalid handle" error. When I use

@
process->startDetached("c:\\femag\\wfemag_02-2013.exe");
@

the application starts OK. But I need to use

@
process->start()
@

I tried this on Windows 7, both 32- and 64-bit, with the same issue. On Windows 8 it works OK. Can somebody help me with this problem?

- You should be using Unix-style paths in Qt, not this Windows heresy. But this is not the cause of your problem, of course. There are some restrictions in QProcess (especially on Windows), so it might be a bit tricky. First, I would recommend specifying the working directory yourself. Maybe the app you are trying to run expects a certain file to be present in the working dir? Does the app you are trying to run do some magical threading stuff? Or maybe it requires some changes to the environment?

- The app is a third-party tool for electromagnetic calculations. It should start on its own, without any file in the working directory. What does process->startDetached() do differently from process->start()? I need to check for the end of the calculation, so I wait for the process to finish.

- I don't know the internal details, sorry.

- The difference between start() and startDetached() is that when calling start() you open the process as part of your current process (so to speak), so if you close your application's process the child process will be closed. With startDetached(), however, you start a standalone process that is independent from your application's process.

- That is clear. But the question is different - what causes the attached process to fail? Maybe it has built-in anti-debugging protection...
https://forum.qt.io/topic/25976/qprocess-not-work-with-start-but-work-with-startdetached
Python Webscraping with BeautifulSoup

Quick note: the original post dates from 12-04-2014 but got updated on 20-04-2020 with the latest versions; it was essentially written from scratch again.

Introduction

In this post, I will perform a little scraping exercise. Scraping is a software technique to automatically collect information from a webpage. Note: I have provided this example for illustrative purposes. It should be noted, though, that scraping websites is not always allowed.

What will we be doing?

In this post, I will be building a very small program that will scrape the top 250 movies listed on the IMDB website. Luckily there is a URL provided by IMDB that will give us the 250 most popular movies already. This URL is. From that list, we are interested in finding the title and the rating for each movie.

What tools will we use?

Python seems to be the perfect candidate for this, although Ruby could in fact also be used. Since we are using Python, we'll also be using a little tool called BeautifulSoup. This tool provides a couple of methods for navigating, searching and modifying a parse tree. So in other words, you provide the tool with the page you want to get info from, and it will allow you to find the particular piece of information you are searching for.

Installation

Let's start with installing BeautifulSoup. In all honesty, I had some issues installing it on my Mac despite the clear documentation. I tried the following:

    (base) WAUTERW-M-65P7:Python_Scraping_BeautifulSoup wauterw$ python3 -m venv venv
    (base) WAUTERW-M-65P7:Python_Scraping_BeautifulSoup wauterw$ source venv/bin/activate
    (venv) (base) WAUTERW-M-65P7:Python_Scraping_BeautifulSoup wauterw$ pip3 install beautifulsoup4

However, the above did not work. It appeared to work, but I always received a "module not found" error. I then found the following on StackExchange, and luckily that helped:

    (base) WAUTERW-M-65P7:Python_Scraping_BeautifulSoup wauterw$ python3 -m pip install bs4
    WAUTERW-M-65P7:site-packages wauterw$ /Applications/Python\ 3.8/Install\ Certificates.command

With the installation out of the way, let's continue with the code. Create a file called scrape.py (or whatever you feel like). Import the BeautifulSoup tool as well as the urllib library. As mentioned above, BeautifulSoup will provide us all the methods needed for scraping the website, while urllib is a library to open and handle URLs.

    from bs4 import BeautifulSoup
    from urllib.request import urlopen

Obviously, we need to provide the URL we would like to scrape; in our case this is the Top 250 IMDB list. Eventually, the complete HTML page will be loaded into the variable 'soup'. We can then apply some methods to find the particular piece of info we are interested in.

    url=""
    page=urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')

If you look carefully at the HTML code of the page, you will see that all data is in fact part of a table:

    <table class="chart full-width" data-
    <colgroup>
      <col class="chartTableColumnPoster"/>
      <col class="chartTableColumnTitle"/>
      <col class="chartTableColumnIMDbRating"/>
      <col class="chartTableColumnYourRating"/>
      <col class="chartTableColumnWatchlistRibbon"/>
    </colgroup>
    <thead>
      <tr>
        <th></th>
        <th>Rank & Title</th>
        <th>IMDb Rating</th>
        <th>Your Rating</th>
        <th></th>
      </tr>
    </thead>
    <tbody class="lister-list">
      <tr>
        <td class="posterColumn">...</td>
        <td class="titleColumn"><a href></a></td>
        <td class="ratingColumn">...</td>
        <td class="ratingColumn">...</td>
        <td class="watchlistColumn">...</td>
      </tr>
    </tbody>

So with BeautifulSoup, we can find all the relevant data easily. First of all, we will need to find the
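The post is cut off here, but the extraction it describes (pulling the title and rating out of each table row) can be sketched roughly as follows. The inline HTML is a small stand-in for the real IMDB chart page, so the titles and ratings are illustrative only:

```python
from bs4 import BeautifulSoup

# Hedged sketch: a tiny stand-in for the IMDB chart table, so the parsing
# logic can be shown without fetching the real page.
html = """
<table class="chart full-width">
  <tbody class="lister-list">
    <tr>
      <td class="titleColumn"><a href="/title/tt0111161/">The Shawshank Redemption</a></td>
      <td class="ratingColumn imdbRating"><strong>9.2</strong></td>
    </tr>
    <tr>
      <td class="titleColumn"><a href="/title/tt0068646/">The Godfather</a></td>
      <td class="ratingColumn imdbRating"><strong>9.1</strong></td>
    </tr>
  </tbody>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
movies = []
for row in soup.find_all('tr'):
    # class_='ratingColumn' also matches cells with extra classes
    title_cell = row.find('td', class_='titleColumn')
    rating_cell = row.find('td', class_='ratingColumn')
    if title_cell and rating_cell and rating_cell.strong:
        movies.append((title_cell.a.get_text(), rating_cell.strong.get_text()))

print(movies)
```

Against the real page, the same loop would run over the rows of the tbody with class lister-list.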
https://blog.wimwauters.com/webdevelopment/2014-04-12_python_scraping_beautifulsoup/
View all headers

Path: cs.uu.nl!ruu.nl!tudelft.nl!txtfeed1.tudelft.nl!feeder.news-service.com!feeder2.cambrium.nl!feeder3.cambrium.nl!feeder5.cambrium.nl!feed.tweaknews.nl!68.142.88.75.MISMATCH!hwmnpeer01.ams!news.highwinds-media.com!kramikske.telenet-ops.be!nntp.telenet.be!kwabbernoot.telenet-ops.be!phobos.telenet-ops.be.POSTED!not-for-mail
From: stes@pandora.be
Newsgroups: comp.lang.objective-c,comp.answers,news.answers
Subject: Objective-C FAQ
Supersedes: <objcfaq_1190656802@news.pandora.be>
Followup-To: poster
Organization: David Stes
Lines: 440
Sender: stes@d51A5782A.access.telenet.be
Approved: news-answers-request@MIT.EDU
Expires: 25 Feb 2008 19:00:02 GMT
Message-ID: <objcfaq_1200164402@news.pandora.be>
Date: Sat, 12 Jan 2008 18:54:42 GMT
NNTP-Posting-Host: 81.165.120.42
X-Complaints-To: abuse@telenet.be
X-Trace: phobos.telenet-ops.be 1200164082 81.165.120.42 (Sat, 12 Jan 2008 19:54:42 MET)
NNTP-Posting-Date: Sat, 12 Jan 2008 19:54:42 MET
Xref: cs.uu.nl comp.lang.objective-c:27938 comp.answers:65554 news.answers:314504

View main headers

Frequently Asked Questions - comp.lang.objective-c
compiled by David Stes (stes@pandora.be)
January 12 2008

Contents

* 1. About this FAQ

1.1 Where can I find the latest version of the FAQ ?

It's posted once a month to comp.lang.objective-c, comp.answers and news.answers. It is archived at.

2. Objective-C Compiler Commands

2.1 What's the file suffix for Objective-C source ?

It's .m for implementation files, and .h for header files. Objective-C compilers usually also accept .c as a suffix, but compile those files in plain C mode.

2.2 How do I compile .m files with the Stepstone compiler ?

    objcc -c class.m
    objcc -o class class.o

2.3 How do I compile .m files with the Apple compiler ?

    cc -c class.m
    cc -o class class.o

See for more information.

2.4 How do I compile .m files with the GNU C compiler ?

    gcc -c class.m
    gcc -o class class.o -lobjc -lpthread

See for more information.

2.5 How do I compile .m files with the POC ?
    objc -c class.m
    objc -o class class.o

See for more information.

3. Objective-C preprocessor issues

3.1 What's the syntax for comments ?

The Objective-C preprocessor usually supports two styles of comments :

    // this is a BCPL-style comment (extends to end of line)

and

    /* this is a C-style comment */

3.2 How do I include the root class ?

On Stepstone and the POC, the header file to include is :

    <Object.h>

On GNU cc and Apple cc, it's :

    <objc/Object.h>

The root class is located in a directory called runtime for the Stepstone compiler, and in a directory called objcrt for the POC, but because of implicit -I options passed on to the preprocessor, these locations are automatically searched.

3.3 What is #import ?

It's a C preprocessor construct to avoid multiple inclusions of the same file.

    #import <Object.h>

is an alternative to

    #include <Object.h>

where the .h file is itself protected against multiple inclusions :

    #ifndef _OBJECT_H_
    ...
    #define _OBJECT_H_
    #endif

3.4 Why am I lectured about using #import ?

The GNU Objective-C compiler emits a warning when you use #import because some people find using #import poor style. You can turn off the warning by using the -Wno-import option, you could modify the compiler source code and set the variable warn_import (in the file cccp.c), or you could convert your code to use pairs of #ifndef and #endif, as shown above, which makes your code work with all compilers.

4. Object datatype (id)

4.1 What is id ?

It's a generic C type that Objective-C uses for an arbitrary object. For example, a static function that takes one object as argument and returns an object could be declared as :

    static id myfunction(id argument) { ... }

4.2 What is the difference between self and super ?

self is a variable that refers to the object that received a message in a method implementation. super refers to the same variable, but directs the compiler to use a method implementation from the superclass. Using pseudo-code, where copy (from super) is the syntax for the copy implementation of the superclass, the following are equivalent :
Using pseudo-code, where copy (from super) is the syntax for the copy implementation of the superclass, the following are equivalent : myObject = [super copy]; and, myObject = [self copy (from super)]; // pseudo-code 4.3 What is @defs() ? It's a compiler directive to get access to the internal memory layout of instances of a particular class. typedef struct { @defs(MyClass) } *TMyClass; defines a C-type TMyClass with a memory layout that is the same as that of MyClass instances. 5. Message selectors (SEL) 5.1 What is a SEL ? It's the C type of a message selector; it's often defined as a (uniqued) string of characters (the name of the method, including colons), but not all compilers define the type as such. 5.2 What is perform: doing ? perform: is a message to send a message, identified by its message selector (SEL), to an object. 6. Implementation pointers (IMP) 6.1 What is an IMP ? It's the C type of a method implementation pointer, a function pointer to the function that implements an Objective-C method. It is defined to return id and takes two hidden arguments, self and _cmd : typedef id (*IMP)(id self,SEL _cmd,...); 6.2 How do I get an IMP given a SEL ? This can be done by sending a methodFor: message : IMP myImp = [myObject methodFor:mySel]; 6.3 How do I send a message given an IMP ? By dereferencing the function pointer. The following are all equivalent : [myObject myMessage]; or IMP myImp = [myObject methodFor:@selector(myMessage)]; myImp(myObject,@selector(myMessage)); or [myObject perform:@selector(myMessage)]; 6.4 How can I use IMP for methods returning double ? For methods that return a C type such as double instead of id, the IMP function pointer is casted from pointer to a function returning id to pointer to a function returning double : double aDouble = ((double (*) (id,SEL))myImp)(self,_cmd); 6.5 Can I use perform: for a message returning double ? No. The method perform: is for sending messages returning id without any other argument. 
Use perform:with: if the message returns id and takes one argument. Use methodFor: for the general case of any number of arguments and any return type.

7. Copying objects

7.1 What's the difference between copy and deepCopy ?

copy is intended to make a byte copy of the object, sharing pointers with the original, and can be overridden to copy additional memory. deepCopy is intended to make a copy that doesn't share pointers with the original. A deep copy of an object contains copies of its instance variables, while a plain copy is normally just a copy at the first level.

8. Objective-C and C++

8.1 How can I link a C++ library into an Objective-C program ?

You have two options : either use the Apple compiler or use the POC. The former accepts a mix of C++ and Objective-C syntax (called Objective-C++); the latter compiles Objective-C into C and then compiles the intermediate code with a C++ compiler. See the compiler-specific questions for more information.

9. Messages

9.1 How do I make a static method ?

Methods are always implemented in Objective-C as static functions. The only way to obtain the IMP (implementation pointer) of a method is through the runtime (via methodFor: and friends), because the function itself is static to the file that implements the method.

9.2 How do I prevent an object from sending a given message ?

You can't. If your object responds to a message, any other class can send this message. You could add an extra argument sender and check, as in :

    - mymethod:sender { if ([sender isKindOf:..]) ... }

But this still requires cooperation of the sender, to use a correct argument :

    [anObject mymethod:self];

9.3 Do I have to recompile everything if I change the implementation of a method ?

No, you only have to recompile the implementation of the method itself. Files that only send that particular message do not have to be recompiled, because Objective-C has dynamic binding.

10. Instance and Class Variables

10.1 Do I have to recompile everything if I change instance variables of a class ?

You have to recompile that class, all of its subclasses, and those files that use @defs() or use direct access to the instance variables of that class. In short, using @defs() to access instance variables, or accessing instance variables through subclassing, breaks the encapsulation that the Objective-C runtime normally provides for all other files (the files that you do not have to recompile).

11. Objective-C and X-Windows

11.1 How do I include X Intrinsics headers into an Objective-C file ?

To avoid a conflict between Objective-C's Object and the X11/Object, do the following :

    #include <Object.h>
    #define Object XtObject
    #include <X11/Intrinsic.h>
    #include <X11/IntrinsicP.h>
    #undef Object

12. Stepstone Specific Questions

12.1 How do I allocate an object on the stack ?

To allocate an instance of 'MyClass' on the stack :

    MyClass aClass = [MyClass new];

13. GNU Objective-C Specific Questions

13.1 Why do I get a 'floating point exception' ?

This used to happen on some platforms and is described at. A solution was to add -lieee to the command line, so that an invalid floating point operation in the runtime did not send a signal. DJGPP users can consult. AIX users may want to consult. In some cases, you can fix the problem by upgrading to a more recent version of the GNU Objective-C runtime and/or compiler.

14. Apple Objective-C Specific Questions

14.1 What's the class of a constant string ?

It's an NXConstantString.

    NXConstantString *myString = @"my string";

14.2 How can I link a C++ library into an Objective-C program ?

    c++ -c file.m
    c++ file.o -lcpluslib -o myprogram

15. Portable Object Compiler Objective-C Specific Questions

15.1 What's the syntax for class variables ?
List the class variables after the instance variables, and group them together in the same way as instance variables, as follows :

    @implementation MyClass : Object
    {
        id ivar1;
        int ivar2;
    }
    :
    {
        id cvar1;
    }
    @end

15.2 How do I forward messages ?

You have to implement doesNotUnderstand: to send a sentTo: message.

    - doesNotUnderstand:aMsg
    {
        return [aMsg sentTo:aProxy];
    }

15.3 How can I link a C++ library into an Objective-C program ?

    objc -c -cplus file.m
    objc -cplus file.o -lcpluslib -o myprogram

16. Books and further reading

16.1 Object-Oriented Programming : An Evolutionary Approach, 2nd Ed. Brad Cox & Andy Novobilski, ISBN 0201548348.

16.2 An Introduction To Object-Oriented Programming, 2nd Ed. Timothy Budd, ISBN 0201824191.

16.3 Objective-C : Object-Oriented Programming Techniques. Pinson, Lewis J. / Wiener, Richard S., ISBN 0201508281.

16.4 Applications of Object-Oriented Programming; C++ Smalltalk Actor Objective-C Object PASCAL. Pinson, Lewis J. / Wiener, Richard S., ISBN 0201503697.

_________________________________________________________________
http://www.faqs.org/faqs/computer-lang/Objective-C/faq/
In this blog post, we are going to explore Java 9 modules in depth. The latest version of Java was released in September 2017 and is available to be downloaded, installed, and used to make new applications. As with any new Java version, it provides us developers with new features that facilitate our work. Some of these new features are:

- JShell: the Java REPL tool
- New HTTP Client API with full HTTP 2.0 support
- Enhancements in the Process API
- Stream API improvements
- Multi-release jars
- HTML5-compliant Javadoc
- Reactive Streams
- Java modules

While there are many new features, this post will focus on Java modules. Before we start looking at new concepts, examples and code (something that risks giving our minds a short circuit), let's start by answering three questions: What? Why? and How?

What is a Java module?

The new structure level in Java is:

- A class is a container of fields and methods
- A package is a container of classes and interfaces
- A module is a container of packages

Why did they decide to modularize Java?

The JDK is too big to scale down to small devices. JAR files, like rt.jar, are too big to be used on small devices and applications. Plus, Java wants to be a reference in the Internet of Things world. The idea is to encapsulate the Java libraries as an API, allowing access to only the classes we want others to use. There is no strong encapsulation in the current Java system because the public access modifier is too open.

How do I declare and use a Java module?

A Java module is a normal Java library (jar file) with one module descriptor file that specifies the module's dependencies, the packages the module makes available to other modules, and more. So, in order to convert our Java jar project into a Java module project, we only need to add a module-info.java file:

    module com.gorilla.mymodule {
    }

That's all! Our project is already a module, one that doesn't have dependencies and doesn't expose classes to other modules.
However, in the real world, a project or jar file usually depends on many third-party libraries and exposes some of its classes and methods to other libraries. Therefore, a module descriptor will probably look more like this:

    module com.gorilla.mymodule {
        requires other.module.dependency;

        exports some.package.inthe.application;
        exports another.package.inthe.application;
    }

Understanding how Java modules work

At this point, we have already answered the questions What? Why? and How? and have the big picture of this topic. But there are important concepts related to Java modules that we still need to understand.

Types of modules

- Application/named modules: An application or named module is a module created with a module declaration file, module-info.java, in its folder.
- Unnamed module: An unnamed module is a jar built without a module-info.java declaration. Therefore, all current jars built in earlier releases are unnamed modules. Java 9 doesn't allow application/named modules to read unnamed modules. If you start using modules, you should be aware that you must use named modules or convert your dependencies to automatic modules. On the other hand, if you are not using modules, you can use a Java module library as a normal jar archive from an unnamed module. Your current application doesn't have to be modularized before using Java 9.
- Automatic modules: Let's think of the next scenario: you are working on a Java module application, but one of your dependencies is not a module, and you don't have access to that code to convert it into an application module. A normal jar can be converted into a module by putting this library on the module path instead of the class path. In other words, when a jar without a module descriptor (module-info) is put on the module path, it immediately becomes an "automatic module." An automatic module will have a module name derived from its jar name, or, if it has the Manifest entry Automatic-Module-Name, the module name will be its value.
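Concretely, the Automatic-Module-Name entry is a single line in the jar's META-INF/MANIFEST.MF; the module name below reuses the com.gorilla naming from this post's later examples and is purely illustrative:

```
Automatic-Module-Name: com.gorilla.commons
```

With this entry in place, the jar keeps a stable module name when placed on the module path, regardless of how the jar file itself is named.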
Module descriptor directives

When a module descriptor is created, different directives can be used to declare its behavior. Following Oracle's documentation, these are the directives to use:

- requires: Specifies that this module depends on another module.
- requires transitive: Specifies a dependency on another module and ensures that other modules reading your module also read that dependency.
- exports: Specifies one of the module's packages whose public types (classes, interfaces, enums and more) should be accessible to all other modules.
- exports to: Enables you to specify or filter, in a comma-separated list, which modules' code can access the exported package.
- uses: Specifies a service used by this module, making the module a service consumer. "A service is an object of a class that implements the interface or extends the abstract class specified in the uses directive." - Paul Deitel
- provides… with: Specifies that this module provides an implementation of a service, making the module a service provider; the with clause names the implementation class.
- open, opens, opens to: Before Java 9, the reflection API could be used to access all types (classes, interfaces, enums and more) in a package and all members of a type, whether you wanted to allow this capability or not; in this way, nothing was truly encapsulated. Modularity provides strong encapsulation, even from reflection; therefore, with the "open," "opens," and/or "opens to" directives, packages are explicitly allowed to be accessed via reflection.
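Putting the directives together, a hypothetical descriptor might read as follows; every name here (com.gorilla.payments and friends) is invented for illustration:

```java
// Hypothetical module descriptor combining the directives listed above.
module com.gorilla.payments {
    requires java.sql;                      // plain dependency
    requires transitive com.gorilla.money;  // readers of this module also read money

    exports com.gorilla.payments.api;                          // visible to everyone
    exports com.gorilla.payments.spi to com.gorilla.billing;   // visible to one module only

    uses com.gorilla.payments.spi.RateProvider;                // service consumer
    provides com.gorilla.payments.spi.RateProvider
        with com.gorilla.payments.internal.FixedRateProvider;  // service provider

    opens com.gorilla.payments.model;       // reflection-only access (e.g. for ORMs)
}
```

A descriptor like this compiles only inside a full multi-module layout where all the referenced modules and packages exist; it is shown here purely to illustrate the directive syntax.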
If you've modularized a JAR file but, for some reason, are not ready to have your application treat it as a module yet, you can put it on the class path, and it will work as it always has.

The module path is where the named and automatic modules are loaded to work in the Java module system. It contains a list of directories containing modules, or locations directly pointing to modules. When the modularized application starts, the runtime is going to check that every module and class in the module path follows the rules to be a module (checking its dependencies and other module validations).

ServiceLoader and ClassLoader

ClassLoader has usually been used to load classes and create instances of a class at runtime:

    MyObject instance = (MyObject) Class.forName("com.gorilla.project.MyObject", true, getClass().getClassLoader()).newInstance();

But with Java 9 modules and the java.util.ServiceLoader class, we can go one step further. ServiceLoader loads the implementations of a service at runtime while knowing only the interface of that service. Let's see the next scenario:

    public interface MyService {
    }

    ServiceLoader<MyService> services = ServiceLoader.load(MyService.class);
    MyService instance = services.iterator().next();

Then, after compilation, an instance of every implementation of MyService can be loaded at runtime. The service loader will search for every implementation in the different modules of the application using the directive provides … with.

    package api.service.impl1;

    public class MyServiceImpl1 implements MyService {
    }

    module api.service.impl1 {
        provides MyService with api.service.impl1.MyServiceImpl1;
    }

Therefore, developers will be able to load classes and create instances just by adding new modules at runtime, without changing code.

Reflection

In Java 9, under the module system, reflective access across modules does not work by default, because a module should not be able to access another module's packages or classes unless the owner module exports them and the accessing module requires it.
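The two loading mechanisms above can be exercised in a single plain file. All class names here are invented; and since a single file registers no provider via `provides ... with` (or META-INF/services), the service loader simply finds nothing:

```java
import java.util.ServiceLoader;

public class Main {
    interface Greeter { String greet(); }

    public static class EnglishGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    public static void main(String[] args) throws Exception {
        // ClassLoader-style loading: look a class up by name via reflection.
        Class<?> cls = Class.forName("Main$EnglishGreeter", true,
                                     Main.class.getClassLoader());
        Greeter g = (Greeter) cls.getDeclaredConstructor().newInstance();
        System.out.println(g.greet());

        // ServiceLoader finds providers declared with `provides ... with`
        // (or META-INF/services). None are registered in this one file,
        // so the loader is empty here.
        ServiceLoader<Greeter> loader = ServiceLoader.load(Greeter.class);
        System.out.println(loader.iterator().hasNext());
    }
}
```

In a real modular application, the second half is where dropping a new provider module on the module path makes its implementation appear without any code change in the consumer.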
Fortunately, as was mentioned before in the directives section, reflection is possible without declaring all the classes or packages exported to other modules. For that, Java developers can use the "open," "opens," or "opens to" directives. Keeping that in mind, frameworks like Spring Framework or Hibernate, which use reflection calls extensively, might still be able to access the non-exported public classes and packages.

There is a temporary hack in Java 9 to allow illegal access via reflection by using the flag --illegal-access=warn or the flag --permit-illegal-access. However, this will change in the next releases, so it's better not to depend on either of these flags.

Java Core and JDK Are Already Modularized

It will take some time to see most Java applications working on the module system, but the first step is done. The Java core and the JDK are already modularized, so developers can start working with them together with their own modules, third-party modules and automatic modules. To find out which modules are in the Java core and JDK, we can type java --list-modules on the command line. Let's see how it works.

The new project structure

Before we start working with Java modules, we usually have the following Java project structure. This structure can work for a module, but a module-info.java file needs to be added in the /src directory. However, as was said before, modules are a new aggregation group on top of packages; therefore, by convention, the proper way to work is for everything belonging to a module to be within a directory with the same name as the module. A new directory with the name of the module was added, as well as the module-info.java file.

Using some modules: JDK, third-party automatic, and my own

Let's try the next scenario. We will have a main project, "My App," that is my own module.
My App depends on My Common Module - another module I created - and on java.xml, part of the JDK's core modules. My Common Module depends on a third-party library that is not modularized yet, Guava, so we are going to use it as an automatic module.

Project: My Common Module

pom.xml: 4.0.0 com.gorilla commons 1.0 org.apache.maven.plugins maven-compiler-plugin 3.7.0 9 9 true false com.google.guava guava 21.0

module-info.java:

module com.gorilla.commons {
    exports com.gorilla.myapp;
    requires guava;
}

Project: My App

pom.xml: 4.0.0 com.gorilla myapp 1.0 org.apache.maven.plugins maven-compiler-plugin 3.7.0 9 9 true false com.gorilla commons 1.0

module-info.java:

module com.gorilla.myapp {
    exports com.gorilla.myapp;
    requires com.gorilla.commons;
    requires java.xml;
}

Conclusions and Recommendations

- Java modules are a great feature that will help developers organize and maintain APIs, but they are optional in Java. If your code, team, or organization is not ready to start using Java modules, you can continue creating projects and libraries the same way as now, and they will work in Java 9 and upcoming Java versions. You can even start modifying your code to work as a module but run it as a normal library, waiting for the day you feel ready to change to the module approach.
- In addition to organizing and maintaining APIs, modules give Java the opportunity to become very lightweight for small, embedded, and IoT devices. Because the JDK and core functionality have also been modularized, these devices can use only what they need without loading the complete runtime into memory.
- Even though there aren't any restrictions regarding module naming, it is a recommended best practice to follow the package-naming convention (reverse domain name), for example, "com.gorilla.myproject.module1." This is similar to how Maven modules are named.
- It's important to keep in mind that this new feature, Java modularity, is not going to replace Maven modules.
Maven is a tool for building projects and organizing library dependencies, and developers can continue using it with Java modules. Just make sure to use Maven 3.5.0+ and the Maven compiler plugin 3.7.0+.
- It's strongly recommended to add the manifest entry Automatic-Module-Name to our libraries, to support them as automatic modules while they are not modularized yet. This avoids future changes to the requires directives that reference them once the library becomes a real module; sometimes the jar name is not the same as the final module name we want to use.

References: The Java 9 Platform Module System allows Java to move forward by modularizing the JDK as well as adding modules as first-class citizens to Java.
https://gorillalogic.com/blog/understanding-java-9-modules/
I am very new to programming and I have an assignment, but I am not asking for the solution. I just want to understand what is being asked. Our instructor presented two pieces of code, one using a break statement and one using a continue statement. He stated that both methods are bad programming practices - which I think I know the answer to - but what he wants us to do is redo the code, or improve it. Here is the code:

public class Break {
    public static void main( String args[] ) {
        int count;
        for ( count = 1; count <= 10; count++ ) {
            if ( count == 5 )
                break;
            System.out.printf( "%d ", count );
        }
        System.out.printf( "\nBroke out of loop at number: %d\n", count );
    }
}

How can you improve this? The only thing I can think of is that the variable is assigned a value inside the for loop, and if the variable changes it can create an infinite loop. So my solution was to declare the variable as a static variable, to create a value that could never be changed and could not create an infinite loop. Is that what is being asked?
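One standard answer to this kind of assignment is to fold the early-exit test into the loop condition itself, so the loop has exactly one exit point and no break. Here is that idea sketched in Python for illustration (the function name and parameters are mine, not part of the assignment):

```python
# The exit test (count == 5) moves into the loop condition, so no break is
# needed and the loop has a single, explicit exit condition.
def run_loop(limit=10, stop_at=5):
    printed = []
    count = 1
    while count <= limit and count != stop_at:
        printed.append(count)
        count += 1
    # After the loop, count holds the value the original code "broke out" at.
    return printed, count

printed, stopped = run_loop()
print(" ".join(str(n) for n in printed))   # 1 2 3 4
print("Stopped at number:", stopped)       # Stopped at number: 5
```

In the Java original, the same shape amounts to writing for (count = 1; count <= 10 && count != 5; count++) and printing inside the loop body.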
https://www.daniweb.com/programming/software-development/threads/163111/help-understanding-break-and-continue
ES6 is one of the biggest updates to the JavaScript language yet. And JavaScript developers pretty much unanimously love the changes it brings, from template strings to enhanced object literals. Browsers have been implementing support for the ES6 changes in their JavaScript engines for a while now, and if you haven’t started learning yet, now’s the time. After taking this course you’ll be able to competently code JavaScript ES6 code, saving yourself time and frustration, and impressing prospective employers. We’ll walk you through new variable, string and operator handling, changes to conditionals and loops, as well as functions, objects and arrays. We’ll then look at this, bind, call, import and require to wrap things up.
https://www.sitepoint.com/premium/courses/introduction-to-es6-2980
Working with the Django admin and legacy databases, pt. 2
Apr 26, 2009 | Django

By this time, I'm several weeks into the project and it's time to get you all caught up - let's see if I can break this down into digestible bites. (Btw, don't expect it to take you this long - this isn't an officially sanctioned work project for me, so I'm devoting some of my free time to it in hopes of selling the company on adopting Django eventually. It's taking me weeks because I am notoriously selfish about my free time.)

As I mentioned in part 1, I'm running this on mod_python in a dev environment. Installing/configuring mod_python is outside the scope of this post, but there's some helpful information in the official Django documentation. If it helps, here's a sample based on my conf:

ServerRoot "/etc/apache2"
LoadModule dir_module modules/mod_dir.so
LoadModule env_module modules/mod_env.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule mime_module modules/mod_mime.so
LoadModule python_module modules/mod_python.so
LoadModule rewrite_module modules/mod_rewrite.so
KeepAlive On
Listen 80
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog logs/access_log combined
ServerLimit 2
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName myadmindemo.MYHOST.com
    ServerAlias myadmindemo.MYHOST
    ErrorLog /var/log/apache2/myadmindemo.error.log
    LogLevel warn
    CustomLog /var/log/apache2/myadmindemo.access.log combined
    DocumentRoot /web04/myadmindemo
    Alias /media/ /web04/library/Django-1.0.2-final/django/contrib/admin/media/
    <Location "/">
        PythonHandler django.core.handlers.modpython
        PythonPath "['/web04/library/Django-1.0.2-final/django', '/web04/library/lib','/web04/myadmindemo','/web04'] + sys.path"
        SetEnv DJANGO_SETTINGS_MODULE myadmindemo.settings
        SetEnv DJANGO_LOG_DIR '/web04/logs/' ## a setting I use for custom logging - more on that later
        SetHandler python-program
        PythonDebug On
    </Location>
    <Location
    "/media">
        SetHandler none
    </Location>
</VirtualHost>

I'm also assuming you have Django installed already; if not, your next stop should be the official installation docs. Django's legacy database documentation also covers most of these basics.

Go ahead and create your Django project (in my sample, I've already included '/web04/myadmindemo' in my pythonpath, so I'd be running this in '/web04'):

django-admin.py startproject myadmindemo

Start by making the appropriate changes to your settings file:

vi myadmindemo/settings.py

- Configure your project to point to the existing db. (This probably goes without saying, but you probably shouldn't build against your real db. Clone it to a test db and transfer enough sample data into it to be a fair representation.)
- Make sure you have 'django.contrib.admin' in your INSTALLED_APPS.

Next, run the script that crawls the database and generates model code based on that introspection:

python myadmindemo/manage.py inspectdb > myadmindemo/models.py

Finally, run your sync - this walks you through setting up your admin user and creates Django's administrative tables (auth_* and django_*). (Please be sure to run this as a last step. An embarrassing admission: on my first run, I was on autopilot - I ran syncdb before inspectdb and wound up introspecting the django and auth tables along with everything else.)

python myadmindemo/manage.py syncdb

There's one more standard installation bit you'll need to tweak - be sure to enable the admin in the urls.py in your project root. Assuming you've gotten this far without errors, you might want to restart apache. Now go take a look at your shiny new admin - you won't see any data there yet, but you should at least be able to log in. (If you're working locally, on OS X for example, just pop open a browser and go to your local admin URL. Otherwise, look at the value you set for your ServerAlias.)

Now go take a look at the models.py you generated with inspectdb. It's a mess, isn't it?
That's how you know that the real work is about to begin.

Admin.py and urls.py

Here's another embarrassing admission: again, I was on autopilot when I set up my project, and I copied an admin.py and urls.py over from another project that was running on an older version of Django. I was able to log in on my first try, but there were a few places where I got this mystery error:

invalid literal for int() with base 10

Through my old-fangled admin.py, I was re-registering the User and Group classes. Among other things, that kept the change password form from working (there's a little more detail here). I reworked my urls.py like so:

from django.conf.urls.defaults import *
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    (r'^admin/(.*)', admin.site.root),
    (r'^(.*)$', admin.site.root),
)

I also moved the model/admin class registry out of the single admin.py that I generally keep at the project root, into smaller admin.py files within each app:

from myadmindemo.content.models import MyContent
from django.contrib import admin

admin.site.register(MyContent)

Splitting models into app folders

In my case, I'm working with a database with nearly a hundred tables (how important some of those tables are has long been up for debate, but it is what it is). Luckily, there is a clear division between the topics of these tables' content - news, events, products, among others. Although I am only planning to use the Django admin, not create a bunch of new applications, I went ahead and split the models into app folders anyway. I'm not sure that James Bennett (with two T's!) would agree with my decision, but in the long run it's going to make the model classes easier for me to manage.
If you go this route, take all the model classes from:

myadmindemo/models.py

And split them up thusly:

myadmindemo/
    event/
        __init__.py
        admin.py
        models.py
    news/
        __init__.py
        admin.py
        models.py
    product/
        __init__.py
        admin.py
        models.py

Add each app to INSTALLED_APPS in settings.py:

'myadmindemo.news',
'myadmindemo.product',
'myadmindemo.event',

Next up, the stuff I know you're actually looking for: model cleanup.
http://www.mechanicalgirl.com/post/working-django-admin-and-legacy-databases-pt2/
Hi, in RhinoScript (RS) I used the Private variable a lot outside a sub, to store values for the duration of a session. How would I go about doing this in Python?

Private oldRadius
If IsEmpty(oldRadius) Then oldRadius = 1

Thanks -Willem

Hi Willem, you can put data into any module. Or (with that idea), you can also put data into the scriptcontext.sticky dictionary. That's also data in a module.

Giulio Piacentino for Robert McNeel & Associates giulio@mcneel.com

Hi Giulio, thanks for the answer. I tried to get this to work and came up with the following - would you say this is correct and pythonesque?

import scriptcontext as sc
import rhinoscriptsyntax as rs

def storevariable():
    # check for absence of a stored previous/old key-value pair
    if "oldradius" not in sc.sticky:
        # set default value
        sc.sticky["oldradius"] = 1
    # ask user input for a real number, with the default set to oldradius from sticky
    real = rs.GetReal("set real", sc.sticky["oldradius"])
    # return when (probably) cancelled out
    if real is None:
        return
    # set new value for the key in sticky
    sc.sticky["oldradius"] = real
    print sc.sticky.get("oldradius")

storevariable()

Thanks -Willem

PS: I just found I mistakenly asked for storage between sessions, where in fact I want it stored for the duration of the session.

Hi Willem, there is actually an "official" example from Steve somewhere, but I don't remember where it's located - I copied it out for reference a long time ago, see below:

# The scriptcontext module contains a standard python dictionary called
# sticky which "sticks" around during the running of Rhino. This dictionary
# can be used to save settings between execution of your scripts and then
# get at those saved settings the next time you run your script -OR- from
# a completely different script.
import rhinoscriptsyntax as rs
import scriptcontext

stickyval = 0
# restore stickyval if it has been saved
if scriptcontext.sticky.has_key("my_key"):
    stickyval = scriptcontext.sticky["my_key"]
nonstickyval = 12

print "sticky =", stickyval
print "nonsticky =", nonstickyval

val = rs.GetInteger("give me an integer")
if val:
    stickyval = val
    nonstickyval = val
    # save the value for use in the future
    scriptcontext.sticky["my_key"] = stickyval

My own "shorthand" version:

import scriptcontext as sc

# Get
if sc.sticky.has_key("key"):
    value = sc.sticky["key"]
else:
    value = defaultValue

# Set
sc.sticky["key"] = newValue

–Mitch

Hi Mitch, thanks for that confirmation - I enjoy that my solution is so similar to Steve's example. FYI: I just found that has_key() is deprecated in favor of "key in d". Cheers -Willem

found your possible source:
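Since has_key() is deprecated, the whole check-then-read dance can also collapse into dict.get(). A minimal sketch, with a plain dict standing in for scriptcontext.sticky (which is itself just a Python dictionary; the helper names and the key are mine):

```python
# A plain dict standing in for scriptcontext.sticky, which is just a
# dictionary that survives between script runs inside a Rhino session.
sticky = {}

def recall(key, default):
    # dict.get with a default replaces the has_key()/[] two-step
    return sticky.get(key, default)

def remember(key, value):
    sticky[key] = value

print(recall("oldradius", 1.0))   # 1.0 - nothing stored yet, default returned
remember("oldradius", 2.5)
print(recall("oldradius", 1.0))   # 2.5 - stored value wins over the default
```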
https://discourse.mcneel.com/t/python-equivalent-for-vbs-private-to-store-variables-for-a-session/7560
Introduction: Flatulant Boss Detector

The older I get, the smaller my cubicle gets. In fact, I don't even have a cubicle now. But my boss used to walk in undetected and catch me doing research for some assignment (WWW - to the boss it looked like web surfing), and he would tell me to get to work. I wanted to put a cowbell around him, but I'm sure he wouldn't go for it, so I had to come up with something else. (Note - the title should be "Flatulent.")

Step 1: Picked Up This Nifty Noise Maker for About 6 Bucks in the Toy Section of the Food Store.

You can read the package for yourself. It has about 6 different random "tunes." But the key item is the little RF remote button that comes with it.

Step 2: And I Found This Little Gem in WalMart for About $5.00.

Obviously it turns on a little light (LED) when motion is detected and the room is dark. (Hmmmmm, I wonder if I could...)

Step 3: Okay. Let's Crack Open the Motion Sensor and Have a Look-See...

Well, I labeled everything. The photocell (not shown, but trust me, it's there) is there to prevent the light (LED) from coming on during the daytime, and therefore it prolongs the battery life. The Fresnel lens is there to provide a wide field of view for the motion sensor. Fresnel is pronounced "frie-nel" - look it up on Wikipedia for more info.

Step 4: Let's Do Some Hackin'

First, you see that PIR motion detector. PIR means "passive infra-red." Some people call it a "pyro infra-red." I don't know why. Regardless, we won't hack that. We might need it for something later. Next, we've (well, me... but the royal we) covered up that photocell I told you about. You see, I want my boss-detector to be active day and night. So, by covering it up, it thinks it's in the dark even when the lights are on. We've just pulled the wool over its eyes (actually one eye), and it is now kept in the dark. And you can see that we have installed our own photocell right next to the LED.
That little trick lets us know when the LED comes on because motion has been detected. Of course we could have run a wire from the LED to achieve the same purpose, but where's the fun in that? The fun thing about hacking is to hack it differently than other hackers, as in the other electrical engineers. And that makes your hack a true original.

Step 5: The Photocell Circuit.

The photocell, such as you can buy at Radio Shack, has a resistance of about 50k ohms with no light exposure, and about 5k ohms or less when exposed to a bright light. So, if we use a fixed resistor in series with the photocell (which is just a light-dependent resistor) and tie them to a voltage source and ground, then we have a voltage divider. From there, tapping in between the two resistors provides a voltage signal which goes high or low and can be used to trigger a device. In this case the motion detector uses 3xAA batteries, which is 4.5 volts, and this is how the circuit is wired to provide the signal needed to drive some other electronics. With the LED off, the circuit signal is about 1.7 volts; with the LED on, the signal rises to about 3.5 volts, which is enough to trigger a micro-controller.

Step 6: Hacking the Remote Control Whoopee Button.

There's a button, which means somewhere there are two pins that, when short-circuited, cause the whoopee cushion to do its thing. The pins are fairly obvious, so I didn't show that part. But I drilled a little hole and ran a pair of wires to the button pins. And, using a 5V reed relay from Radio Shack, I can connect the two pins by energizing the reed relay.

Step 7: Now for the Tricky Part.

It's not really that tricky if you know a little about electronics, but the deal is that you need to use the trigger signal to activate the system. You can use a one-shot timer, or a comparator, or a 555 timer, but for me the easiest thing is to use an 8-pin micro-controller. I used a PIC Micro 12F675. With that, I could trigger on an input pin change and flash a red LED.
Also, if 5 people walk in I don't want the thing going crazy for 15 seconds, so I put in a 30 second delay so I could hit a kill switch and shut it off. So, I'll just fast-fwd and show the end result of the contraption. Note, I covered the LED so that the ex-boss would not see a light turn on every time he barged into my humble 1/4 of a cubicle. This pic is the end product. I'll leave the electronics as an exercise for the student. Here's the code for the PIC Micro 12F675: ;***************************************************************************** ; File name: Flatulant_Boss ; Processor: 12F675 ; Author: Alan Mollick (alanmollick.com) ; Mode: Interrupt on GP2 change ; ; ~ GPIO REGISTERS ~ ; GP0 = INPUT -- n/c ; GP1 = OUTPUT -- relay ; GP2 = INPUT -- High = motion detected ; GP3 = INPUT -- n/c ; GP4 = OUTPUT -- Red LED ;***************************************************************************** list p=12F675 ; list directive to define processor #include <p12f675.inc> ; processor specific variable definitions errorlevel -302 ; suppress message 302 from list file CONFIG _CP_OFF & _CPD_OFF & _BODEN_OFF & _MCLRE_OFF & _WDT_OFF & _PWRTE_ON & _INTRC_OSC_NOCLKOUT ; ~ Variables ~ w_temp EQU 0x20 ; variable used for context saving status_temp EQU 0x21 ; variable used for context saving hiB EQU 0x21 ; MSByte lowB EQU 0x22 ; LSByte temp EQU 0x23 spare EQU 0x24 temp1 EQU 0x25 ; trigger interrupt flag temp2 EQU 0x26 delay EQU 0x27 ; delay time pins EQU 0x28 ; pin state spare1 EQU 0x29 spare2 EQU 0x2a count EQU 0x2b ; loop count count1 EQU 0x2c ; outer loop count count2 EQU 0x2d ; outer loop count d1 EQU 0x2e ; delay counter d2 EQU 0x2f ; delay counter d3 EQU 0x30 ; delay counter d4 EQU 0x31 ; delay counter ;********************************************************************** RESET_VECTOR ORG 0x000 ; processor reset vector goto main ; go to beginning of program INT_VECTOR ORG 0x004 ; interrupt vector location movwf w_temp ; save off current W register contents movf 
STATUS,w ; move status register into W register movwf status_temp ; save off contents of STATUS register ; isr code call motion_detect ; send alarm signals banksel INTCON bcf INTCON,INTF ; clear GP2/INT flag movf status_temp,w ; retrieve copy of STATUS register movwf STATUS ;restore pre-isr STATUS register contents swapf w_temp,f swapf w_temp,w ; restore pre-isr W register contents retfie ; return from interrupt ;**************************************************************** main: ; main program ; these first 4 instructions are not required if the internal oscillator is not used call 0x3FF ; retrieve factory calibration value bsf STATUS,RP0 ; set file register bank to 1 movwf OSCCAL ; update register with factory cal value bcf STATUS,RP0 ; set file register bank to 0 ;*********************************** ;* Initialization * ;*********************************** ; GP0= not used, GP1=relay output, GP2=input (motion detect), ; GP3=input for cntrl/emergency cutoff, GP4=output to LED indicator, ; GP5= not used ; set up direction of I/O pins banksel TRISIO movlw b'00000101' ; xx------ not implemented ; --0----- 0=output, GP5=n/c ; ---0---- 0=output, GP4=LED ; ----x--- not used, GP3, Dedicated to MCLR ; -----1-- 1=input, GP2 motion detected ; ------0- 0=output, GP1 = solenoid valve ; -------1 1=input GP0=A/D movwf TRISIO ; set up A/D converter banksel ANSEL movlw b'00010000' ; x------- not implemented ; -001---- 001=Focs/8 Conversion Clock ; ----0--- 0=digital I/O, GP4, Fosc/4 clockout for debug purposes. 
; -----0-- 0=digital I/O, GP2 ; ------0- 0=digital I/O, GP1, relay/etc ; -------0 0=digital I/O, 1=analog GP0 movwf ANSEL banksel ADCON0 movlw b'00000000' ; 0------- 1=right justified result ; -0------ 0=Vdd is voltage reference ; --xx---- not implemented ; ----00-- 00=select channel 0 (GP0) ; ------0- 0=A/D conversion not started ; -------0 0=A/D converter module is off movwf ADCON0 ; initialize output pins init banksel GPIO movlw b'00000000' movwf GPIO ; initialize interrupts banksel INTCON movlw b'00000000' ; 0------- 0=global interrupts disabled ; -0------ 1=enable peripheral interrupts ; --0----- 0=disable TMR0 overflow interrupt ; ---1---- 1=enable GP2/INT external interrupt ; ----0--- 0=disable GPIO port change interrupt ; -----0-- 0=no on TMR0 overflow ; ------0- 1= ; -------0 0=no GPIO port change movwf INTCON ; initialize interrupt on pin change GP2 banksel IOC movlw b'00000100' ; x------- not implemented ; -x------ not implemented ; --0----- 0=disable GP5 ; ---0---- 0=disable GP4 ; ----0--- 0=disable GP3 ; -----1-- 1=enable GP2/INTR ***** ; ------0- 0=disable GP1 ; -------0 0=disable GP0 movwf IOC banksel PIE1 movlw b'00000000' ; 0------- 0=disable EE write complete interrupt ; -0------ 0=disable A/D converter interrupt ; --xx---- not implemented ; ----0--- 0=comparator interrupt disabled ; -----xx- not implemented ; -------0 1=enable TMR1 overflow interrupt movwf PIE1 banksel PIR1 movlw b'00000000' ; 0------- 0=no EE write complete ; -0------ 0=no A/D conversion complete ; --xx---- not implemented ; ----0--- 0=no comparator interrupt ; -----xx- not implemented ; -------0 0=no TMR1 overflow movwf PIR1 ;********************************************************** ; GP1=output to relay ; GP4=output to LED ;********************************************************** banksel INTCON bsf INTCON,INTE ; enable GP2 interrupt bsf INTCON,GIE Main_Loop: ; if GP2=1 then output alarm signals on GP1, GP4 via interrupt sleep nop goto Main_Loop 
;**********************************************************
; Motion Detection Interrupt Handler
;
; GP1=output to relay, GP4=output to LED
;**********************************************************
motion_detect:
        bsf     GPIO,1          ; energize relay for 100 msec
        call    pause_100msec
        bcf     GPIO,1          ; de-activate relay
        bsf     GPIO,4          ; activate LED for 0.5 sec.
        call    pause_500msec
        bcf     GPIO,4
        return

;**********************************************************
; online Delay Code Generator
;
;**********************************************************
pause_100msec:
        ; Delay = 0.1 seconds, clock frequency = 4 MHz
        movlw   0x1F            ; 99998 cycles
        movwf   d1
        movlw   0x4F
        movwf   d2
Delay_100
        decfsz  d1, f
        goto    $+2
        decfsz  d2, f
        goto    Delay_100
        goto    $+1             ; 2 cycles
        return

pause_500msec:
        ; Delay = 0.5 seconds, clock frequency = 4 MHz
        movlw   0x03            ; 499994 cycles
        movwf   d1
        movlw   0x18
        movwf   d2
        movlw   0x02
        movwf   d3
Delay_500
        decfsz  d1, f
        goto    $+2
        decfsz  d2, f
        goto    $+2
        decfsz  d3, f
        goto    Delay_500
        goto    $+1             ; 6 cycles
        goto    $+1
        goto    $+1
        return

pause_1sec
        ; Delay = 1 second, clock frequency = 4 MHz
        movlw   0x08            ; 999997 cycles
        movwf   d1
        movlw   0x2F
        movwf   d2
        movlw   0x03
        movwf   d3
Delay_1sec
        decfsz  d1, f
        goto    $+2
        decfsz  d2, f
        goto    $+2
        decfsz  d3, f
        goto    Delay_1sec
        goto    $+1             ; 3 cycles
        nop
        return
;*****************************************************************************

Step 8: Final Words.

This pic is one way to conceal everything. Note - by using a micro-controller, the number of variations on this instructable is unlimited. You can place the speaker so the sound emanates from behind your boss. Or, you can tie it into the company PA system. You can even have the system ping your computer and have a work-related page pop up in 1/10 of a second, so that any time your ex-boss, or anybody else, walks into your perimeter, there's always a spreadsheet or technical document that you should be working on.
And 24/7... anybody walking into your cubicle, or out of it, can say that you have your nose to the grindstone every second of the day. That makes you a highly valued employee. You are a goddam workaholic. Also, you don't really need the Fresnel lens. In fact, for boss-detection, it's better to remove it; otherwise people moving around inside your cubicle will set it off. You can take the Fresnel lens off and put a 1 inch piece of PVC tubing (1/2 inch diameter, from Home Depot) on the PIR detector, and that will give you a very narrow field of view, such as directly at your doorway (assuming you have a door), and the sensor works just as well. Its range is about 5-10 feet without the Fresnel lens. You can also remove the PIR detector and, using 3 wires, place the detector anywhere to keep it concealed. You can even buy a sound module for 6 bucks and record your own sounds. You can use the international signal for "boss is approaching," which is clearing your throat. And you can change it every morning. Or record the sound of you typing feverishly, etc. Here's a sound effect I made from that whoopee cushion: I ran it into my computer, edited it with Audacity, and used it for an Easy Button hack that I might put up one day.

Step 9: A Variation

Here's another boss detector based on the same concept. Also, somebody wanted a video, so I'll put up a video for this soon. The detector for this one is obviously a Robo Sapien mated to a motion detector from Home Depot. When motion is detected, the robot sends an IR signal to the bird cage, where there is a hidden 38 kHz detector. The bird mechanism has several options. All options are individually selectable, but with everything turned on, the bird starts spinning and chirping, with a blinking LED. Also, I added a superbright red LED mounted underneath that flashes 4 times, so that you know somebody is coming without all the racket.
This one also has a 30 second time delay, and you can disable the whole thing simply by lifting the pencil. The pencil has a magnet on the end which, when inserted into the bird feeder, enables the circuitry by way of a small magnetic reed relay. The only real difference in this system is that I didn't use the photocell trick. There's a quad op-amp in the motion detector, and I just tapped off the output pin of the final stage. I bought several of these bird things at a drug store because they were on sale for 5 bucks each. Then I added stones and vegetation in order to hide the IR detector, and made a little box out of cherry wood and varnished it to hide the extra AA battery I needed. The thing runs on 2 AA batteries and is sound activated. I made it less sensitive to sound, and needed the extra battery because the 38 kHz detector I used needed at least 4.5 volts, which means 3 batteries. The motion detector was made to plug into a wall outlet, so I cut the big stuff off the circuit board, and now it runs off of a 9V battery installed where the light bulb was located. Here's a link to a video of this.
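Looping back to the photocell circuit from Step 5, the divider arithmetic can be sanity-checked numerically. The article never gives the fixed resistor's value, so the 30 kΩ below is an assumed figure chosen because it lands near the quoted ~1.7 V (LED off) and ~3.5 V (LED on) readings; the model puts the photocell on the supply side and the fixed resistor to ground:

```python
# Voltage divider: Vcc -> photocell (R_photo) -> tap -> fixed R -> ground.
# Vout = Vcc * R / (R + R_photo): bright light lowers R_photo, raising Vout.
VCC = 4.5          # volts, from the 3 AA cells
R_FIXED = 30_000   # ohms - assumed value, not given in the article

def divider_out(r_photo):
    return VCC * R_FIXED / (R_FIXED + r_photo)

print("LED off, photocell ~50k: %.4f V" % divider_out(50_000))  # 1.6875 V
print("LED on,  photocell ~5k:  %.4f V" % divider_out(5_000))   # 3.8571 V
```

Those two outputs are in the same ballpark as the ~1.7 V and ~3.5 V signals the article reports, which is close enough to trip a micro-controller input.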
https://www.instructables.com/Flatulant-Boss-Detector/
CC-MAIN-2020-45
refinedweb
3,147
68.81
Need help with my snake game: I'll be waiting for you. P.S what the char number for ">"? Cause I want my snake to look like this W...
Need help with my snake game: I seem to be having a problem when I put [code] #include <windows.h> SetConsoleTextAttribute( GetSt...
Need help with my snake game: Thanks for the tip giblit. About the sounds. All I know is the only music type you can use is wav files.
Need help with my snake game: Not really when I saw the code. The guy just also got the code from someone too. He also said it was o...
Need help with my snake game: Hi guys I'm new here and I need help with my code. The code I'm using is already used so I want to ...
This user does not accept Private Messages
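The char-code question in the first post has a one-line answer: '>' is ASCII 62. A quick way to check (shown here in Python for brevity; in the poster's C++, (int)'>' gives the same value):

```python
# ASCII code of the '>' character, e.g. for drawing a snake head like "W>"
head_char = ord('>')
print(head_char)  # 62
```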
http://www.cplusplus.com/user/xavier108/
CC-MAIN-2015-14
refinedweb
140
94.96
select <html:select> - Struts: a question about using the select tag in the Struts HTML FORM tag. "Thanks... I am new to struts. When I execute the following code..." The posted code opens a connection and prepares a statement:

    con = db.getDbConnection();
    PreparedStatement ps = con.prepareStatement("select Dealer_Code from ...

The rest of the page is a listing of related questions and tutorial titles:

- Select Tag (Form Tag) Example: the select tag is a UI tag used to render an HTML input tag of type select.
- struts2 select tag multiple / select tag multiple values: inserting multiple selected values into a database.
- html:select in struts: what is the default value of the HTML select in Struts?
- Validate <select> tag Items: how to validate the addition of two numbers from two different select tag items in JavaScript.
- About Select tag: using the select tag in a JSP page and changing the background color of its label.
- about select tag: two dropdown boxes, where selecting an item in one populates the other.
- Struts Tag: the bean:struts tag retrieves the value of the specified Struts configuration object.
- nested select tag: a country combo box that shows the corresponding states whenever a country is selected.
- validate select tag items in javascript.
- Proplem with select data - Struts: an example that displays all data from a database (Access or MySQL) using Struts.
- struts html tag: the company uses an "id" attribute on their tags; how to do this with Struts.
- Optiontransferselect Tag (Form Tag) Example: a UI tag that creates an option transfer select component, with parameters such as label.
- Struts Tag Lib: a beginner question; the taglib directive defines a tag library and prefix for the custom tags used in a JSP page.
- Select Tag <html:select>: creates an HTML <select> element; the action mapping in struts-config.xml helps select the form bean.
- Struts tag: steps to add new tags to a JSP page in a demo Struts application created in NetBeans.
- JavaScript validate select tag: validating an HTML select tag; two select tags select numbers whose sum is calculated.
- Doubleselect Tag (Form Tag) Example: renders two HTML select elements, the second one depending on the first.
- struts - Development process: Struts select tag example code.
- Select tag to fetch data from oracle database: a select box with regno, address, and name of a student; choosing a regno from the drop-down list loads the matching record.
- Bar Chart: creating a bar chart using HTML tags in JSP.
- Select from select list + display: a select list of EmpCodes; on selection, the corresponding EmpName and DeptName should be displayed in empty text fields (Struts 1.2).
- Struts2.2.1 UI tag example: the use of some UI tags of the Struts framework.
- Struts 2.0: an error using the select tag, "tag 'select', field 'list': The requested list key 'day' could..."
- Select Tag Example.
- struts: how to generate a bar code for continuous numbers in a loop using Struts.
- Struts2.2.1 optiontransferselect tag example: creates an option transfer select menu.
- menu bar in javascript - JSP-Servlet: displaying a menu bar with a background color and cascading submenus.
- Optgroup Tag (Form Tag) Example: the optgroup tag is used inside a select tag <s:select>.
- Combobox Tag (Form Tag) Example: the combo box is basically an HTML INPUT of type text and an HTML SELECT grouped together.
- multiple select values: an example of multiple select values for the html:select tag.
- Updownselect Tag (Form Tag) Example: creates a select component with buttons to move options.
- Struts: using two struts.xml files and specifying their locations; how to write a DAO to select data from the database.
- Problems with dependent <html:select> and AJAX - Struts 1.2.9 (NetBeans 6.1): two selects where one depends on the other; the first is filled from the DB, and the selected value is passed as a parameter with AJAX (maybe jQuery).
- JSF selectManyListbox Tag: lets you select more than one option from a set of available options.
- Struts2.2.1 text Tag Example: the text tag is a generic tag.
- JSF selectOneListbox Tag: used when the user is allowed to select only one option.
- Struts HTML Tags: Struts provides an HTML tag library for easy creation of user interfaces.
- JSP Manager on Tomcat 5 | Developing Struts PlugIn | Struts Nested Tag | Date Tag (Data Tag) | Include Tag (Data Tag) in Struts 2 | Param Tag (Data Tag) | Set Tag (Data Tag) | Text Tag (Data Tag) in Struts 2.
- struts: should all the beans be written in the tag of struts-config.xml?
- Value Retain in struts 2: trying to retain values when getting all records in a tag.
- Struts2.2.1 optiontransferselect tag example (source code download).
- Struts2.2.1 radio Tag Example: the radio tag is a UI tag that renders a radio button.
- struts html tags: creating a horizontal scroll-bar in Struts.
- HTML Textarea Tag <html:textarea>: using the Struts html:textarea tag; the action mapping in struts-config.xml helps select the form bean.
- Example of struts2.2.1 anchor tag: in HTML <a>, in Struts <s:a>; provides a hyperlink from the current page.
- Struts: why Struts rather than other frameworks? Struts2 tag libraries provide several features, and there are several advantages of Struts that make it popular.
- Multiple select box: a multiple select box populated with DB values, whose selections are moved to a text area.
- Struts2.2.1 Debug Tag Example: the debug tag is a very useful debugging tag.
- Struts2.2.1 checkbox tag example: the checkbox tag is a UI tag used to select a value of a check box.
- Rewrite Tag <html:rewrite>: renders a request URI based on exactly the same rules as the link tag does, but without creating the hyperlink.
- Struts2.2.1 doubleselect tag example: the <s:doubleselect> tag, with the second select based on the selected "Degree" or "Diploma".
- Example of struts2.2.1 combobox tag: selecting a date with the combobox, basically an HTML INPUT.
- create bar chart in jsp using msaccess database: a stack trace from bar_jsp.java.
http://www.roseindia.net/tutorialhelp/comment/41566
CC-MAIN-2014-52
refinedweb
1,924
65.83
So I have a list: ['x', 3, 'b'] And I want the output to be: [x, 3, b] How can I do this in python? If I do str(['x', 3, 'b']), I get one with quotes, but I don’t want quotes. mylist = ['x', 3, 'b'] print '[%s]' % ', '.join(map(str, mylist)) If you are using Python3: print('[',end='');print(*L, sep=', ', end='');print(']') Instead of using map, I’d recommend using a generator expression with the capability of join to accept an iterator: def get_nice_string(list_or_iterator): return "[" + ", ".join( str(x) for x in list_or_iterator) + "]" Here, join is a member function of the string class str. It takes one argument: a list (or iterator) of strings, then returns a new string with all of the elements concatenated by, in this case, ,. You can delete all unwanted characters from a string using its translate() method with None for the table argument followed by a string containing the character(s) you want removed for its deletechars argument. lst = ['x', 3, 'b'] print str(lst).translate(None, "'") # [x, 3, b] If you’re using a version of Python before 2.6, you’ll need to use the string module’s translate() function instead because the ability to pass None as the table argument wasn’t added until Python 2.6. Using it looks like this: import string print string.translate(str(lst), None, "'") Using the string.translate() function will also work in 2.6+, so using it might be preferable. This is simple code, so if you are new you should understand it easily enough. mylist = ["x", 3, "b"] for items in mylist: print(items) It prints all of them without quotes, like you wanted. Using only print: >>> l = ['x', 3, 'b'] >>> print(*l, sep='\n') x 3 b >>> print(*l, sep=', ') x, 3, b Here’s an interactive session showing some of the steps in @TokenMacGuy’s one-liner. First he uses the map function to convert each item in the list to a string (actually, he’s making a new list, not converting the items in the old list). 
Then he’s using the string method join to combine those strings with ', ' between them. The rest is just string formatting, which is pretty straightforward. (Edit: this instance is straightforward; string formatting in general can be somewhat complex.) Note that using join is a simple and efficient way to build up a string from several substrings, much more efficient than doing it by successively adding strings to strings, which involves a lot of copying behind the scenes. >>> mylist = ['x', 3, 'b'] >>> m = map(str, mylist) >>> m ['x', '3', 'b'] >>> j = ', '.join(m) >>> j 'x, 3, b' Tags: list, perl, printing, python
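The translate() calls shown in the answers above are Python 2 only. Here is a sketch of the same two approaches in Python 3, where str.maketrans builds the mapping that replaces the old (table, deletechars) arguments:

```python
mylist = ['x', 3, 'b']

# Build the bracketed string by joining the str() of each element.
formatted = "[" + ", ".join(str(x) for x in mylist) + "]"
print(formatted)  # [x, 3, b]

# Python 3's str.translate takes a mapping; str.maketrans with a third
# argument builds one that deletes the quote characters from repr() output.
no_quotes = str(mylist).translate(str.maketrans("", "", "'"))
print(no_quotes)  # [x, 3, b]
```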
https://exceptionshub.com/printing-list-in-python-properly.html
CC-MAIN-2021-21
refinedweb
450
70.94
Thank you Erik. This was the answer I wanted to hear.

db

I use this pattern a lot in a slightly different context. When a thread wants to "wait" for an event in C++ to trigger, it goes to sleep. If the source of the event is deleted, then the waiting thread is effectively terminated, because the source releases its reference to the yielding thread. Likewise, if the C++ object that started the thread is deleted, I disconnect all events, which causes the sleeping thread (and its children) to be collected. I don't get any "attempt to yield across metamethod/C-call boundary" errors. I've watched the GC and it does collect the "dead" yielding threads eventually.

-Erik

> If I lua_yield from the C code, and then remove all references to the
> thread, will it collect OK?
>
> That is, I was thinking of using lua_yield as a means of script
> termination.
>
> In a Lua function
>
> int neverreturn (lua_State* L)
> {
> ...
> return lua_yield(L,n);
> }
>
> db
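The lifecycle being discussed (a coroutine parked in a yield that gets collected once the last reference to it is dropped) has a close analogue in Python generators; the following sketch is illustrative, not Lua semantics:

```python
import gc

log = []

def sleeper(events):
    # A "thread" suspended at a yield, like a Lua coroutine
    # parked inside lua_yield.
    try:
        while True:
            yield
    finally:
        # Runs when the suspended generator is finalized.
        events.append("collected")

g = sleeper(log)
next(g)        # park the generator at the yield
del g          # drop the only reference, as when the event source goes away
gc.collect()   # the dead sleeper is finalized and cleaned up
print(log)     # ['collected']
```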
http://lua-users.org/lists/lua-l/2006-11/msg00034.html
crawl-001
refinedweb
165
74.9
Without STL containers getting in your way, the main processing loop might look like a good 1970s C textbook:

char sms_buffer[161]={0};
for(;;) //main processing loop
{
    read_data_from_client(sms_buffer);
    process_message(sms_buffer);
    memset(sms_buffer, 0, 161); //clear all bytes
}

Calling memset() directly in C++ code is a faux pas because passing the size of the buffer on every memset() call is a security hazard. Ideally, you want the buffer to clear itself. Enter the Good Parasite class! A proper solution to the self-clearing buffer requires an interface with the dual attributes of a low-level char array and the qualities of a high-level C++ class. Why use a class instead of a function? Because a class's constructor is invoked automatically, and that constructor is where the buffer clearing will take place. How will that class keep track of the buffer's size and address? Simple: both the class and the buffer live at the same address! So you monitor the buffer's size by bundling the size into the class's type, that is, you use a template. Bundling the size into the specialization has another advantage: the class will not contain any data members at all. This is crucial for your design. First, consider how to bundle the buffer's size into the class's type. This solution uses a template non-type parameter:

#include <new>     //for overriding new and placement new
#include <cstring> //for memset

template <int N>
class Zeroer //template arguments are int
{
private:
    void* operator new(unsigned int)=delete;
    Zeroer(const Zeroer&)=delete;
    Zeroer& operator=(const Zeroer&)=delete;
    ~Zeroer()=delete;
    //..
};

Because clients are not allowed to create Zeroer objects directly or to copy them, the assignment operator, copy constructor, destructor, and overridden operator new are declared as private deleted functions. You can instantiate a Zeroer object only by using placement new. This will ensure that a Zeroer object is constructed on the buffer's address.
Notice that you need to override placement new (as shown in the Zeroer definition below). Finally, look at the constructor. Every time you create a Zeroer object, its constructor automatically clears the buffer. Here's the complete definition of Zeroer:

template <int N>
class Zeroer
{
private: //disabled operations
    void* operator new(unsigned int)=delete;
    Zeroer(const Zeroer&)=delete;
    Zeroer& operator=(const Zeroer&)=delete;
    ~Zeroer()=delete;
public:
    Zeroer() { memset(this, 0, N); } //clear the buffer
    void* operator new(unsigned int n, char* p) { return p; }
};

Here is Zeroer in action:

int main()
{
    char buff[512];
    for(;;)
    {
        new (buff) Zeroer<512>; //Zeroer ctor clears buff
        strcpy(buff,"hello");   //fill the buffer with data
        new (buff) Zeroer<512>; //clear the buffer again
        strcpy(buff,"world");   //fill the buffer again
    }
}

Zeroer objects are never destroyed because they have nothing to destroy. They don't own the buffer; they merely "iron" it after every use. Every time you want to clear buff, you construct a new instance of Zeroer<512> on top of buff using placement new. In real world code, constants and typedefs will eliminate the use of hard-coded numbers:

const int PAGE_SIZE=512;
typedef Zeroer<PAGE_SIZE> ZP;

int main()
{
    Zeroer<52> z;     //error, destructor is inaccessible
    new Zeroer<1024>; //error, new is inaccessible

    //handle two different buffers at once
    char buff[PAGE_SIZE];
    char sms_buff[161];

    //process and recycle
    new (sms_buff) Zeroer<161>; //clear sms_buff
    new (buff) ZP;              //clear buff
    strcpy(buff,"hello");
    new (buff) ZP;              //clear buff again
    strcpy(sms_buff,"call me");
    new (sms_buff) Zeroer<161>; //clear sms_buff again
}
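The recycling pattern can be sketched outside C++ as well; here is a rough Python analogue (purely conceptual, since Python has no placement new; an in-place bytearray wipe plays the role of constructing a Zeroer over the buffer):

```python
PAGE_SIZE = 512

def zero(buf):
    # Clear every byte of the fixed-size buffer in place,
    # like constructing Zeroer<N> on top of it.
    buf[:] = bytes(len(buf))

buff = bytearray(PAGE_SIZE)
buff[:5] = b"hello"   # fill the buffer with data
zero(buff)            # "iron" the buffer flat again
print(buff == bytearray(PAGE_SIZE))  # True
```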
http://www.devx.com/cplus/Article/41475/0/page/2
CC-MAIN-2014-10
refinedweb
609
50.06
: I am more interested in were the money comes from and who is making all the profits from these loans. So who is lending and how are they profiting/ "Can somebody help me to understand????" I hope I can. Maybe you've misunderstood the whole thing, or I misunderstood your inquiry; but let's go: The interest you fail to pay, will be added to your debt. So if you pay the creditor a sum, it doesn't matter whether you regard that sum as a part of (or all) the interest, or an equal share of the "elementary" debt. But of course; the faster you pay down the debt (including the interest), the less debt you will amass and hence the less you'll have to pay in total. Say you pay the $5 interest on a $100 loan. The debt will still be $100, but then you'll later have to pay interest for only $100, rather than for the $105 you'd have in debt unless you paid the interest. So the answer to "Is it possible that if this paying of interest first would stop then it would be much easier for countries to pay for their debts[?]" will be "no". "In this cońdition there are countries that will never pay their debt, only because it is that way." Yes, countries as well as other entities, like companies or individuals. But in that case they should've thought about that before they took up that debt, shouldn't they? And the creditors should've thought about that prior to underwriting those loans. "All of this debt is secured by sovereign property, and when the debt defaults, sovereign nations forfeit their collateral..mines,forests,water,utilities,and/or anything else of value in exchange for interest on compounding ODIOUS debt!" Yes, they (or we; i.e. the West) are going to forfeit (or surrender) a whole lot to China - world's next superpower... and it'll remain so (being the world's most powerful country) for at least a few decades (perhaps 2020-2080). Golden days are ahead for India, Russia and Brazil as well. Remember where you read it first, folks! 
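The arithmetic in the explanation above can be made concrete with a short sketch (the $100 principal and $5 interest are the commenter's example figures, i.e. a 5% rate per period):

```python
principal = 100.0
rate = 0.05  # 5% per period: the commenter's $5 on $100

# Borrower A pays the interest each period, so the debt stays flat
# and every period's interest charge is the same.
debt_paying = principal
interest_paying = debt_paying * rate          # 5.0 every period

# Borrower B pays nothing, so the unpaid interest is added to the debt
# and next period's interest is charged on the larger balance.
debt_skipping = principal * (1 + rate)        # 105.0
interest_skipping = debt_skipping * rate      # 5.25 next period

print(interest_paying, interest_skipping)     # 5.0 5.25
```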
Well, at least you are lucky enough to have mass immigration and rampant political correctness... which creates a lot of jobs for cleaning up after the results of the fantastic diversity, that is vandalism and violence. In addition, you have the pleasure of being a member in the corrupt, failed and undemocratic EU. Methinks your country is be about to get totally destroyed. gatesofvienna.blogspot.com/2010/09/totalitarian-sweden.html "Considering the above information and the debt per capita Japan, Canada and Norway seems to be managin their public debt better than US and Europe." Oh? When did Norway stop being a part of Europe? "The soeveriegn Kingdom of Brunei has no debt, wow!" Neither has Albania, apparently. "Is it chance that almost every country with no debt faces some kind of a war? Let me give you an example:" Well, what about the USA (and the rest of Nato)? Well, CuAs - I guess that China holds a pretty large stake! "Can someone explain it?" Perhaps because other factors (probably debt per capita) are taken into account. "It,s seems,the developing countries are doing well in public debt crisis." Perhaps that is due to the First World countries being stupid enough to give aid (and lots of it) to Third World countries? "Which one is correct" They may all be correct, in some sence; due to different views of what should be counted as "public debt", as well as when the figures are collected. Maybe some statistics include assets while others do not. Any monetary system that attaches interest to debt is doomed, because servicing the debt consumes ever increasing amounts of GDP. It is no mistake that we find ourselves in this terminal system. Steve Keen of Debtwatch, who predicted 2008, suggests using "total debt" for comparisons. As you have already discovered there are wide ranging estimates of debt. I like... for its visual Total Debt vs. GDP data. It really frames the question, "At what multiple of gdp do we the people say enough?" 
Mike Montagne and his MPE offers hope. Search YouTube for video of his models and to learn more. Let's look at UK TOTAL DEBT and see just where we are. Some say it's 500+% of GDP and some say close to 1,000% of GDP. Which is it? Would that not be the highest in the Western World? In fact so high it makes Greece look prudent? Give us the facts, please. I have a question that we lend Those who owes most of us lend us to live he does not ever want a correct answer please I have a question that we lend Those who owes most of us lend us to live he does not ever want a correct answer please I would Like to introduce the concept of optimal allocation of human and physical capital, about rate of return (for 1970 it was 0.9 for physical capital and 0.33 for human capital, J. Tinbergen verified it almost 40 years ago), also about private and social rate of return ( i worked it With prof. j. Vanek at Cornell University 1976/77). My question for readers: why World bank and all others do not invest much more in human capital? My NeXT question is related With estimation of impact of moral, intellectul, and social capital. My research two years ago confirmed that 85% of varisnce profitbility depends upon moral capital. Bailout from 2008 is solved With wrong strategy, defending financial mafia, not 99% of population. Can we through your portal start discussion about these results and Tudora and methodology behind it? Prof dr Ante Lauc, retired University professor Excellent points. I can make one (narrow) observation on this. Capitalism still uses the 14th century double entry book keeping system to track value creation in global companies. For example, managers are rewarded on EBIT growth. Human capital strategies generally are costed and there is little interest in the Net Present Value targets of intangible asset investment (such as leadership development investment for example). 
The market value to book value ratios will widen (not only due to cheap money), but because of the divergence between cash flow generation and standard outdated accounting measurements. Talent Management Strategies are at a very early stage of development. From a humble layman's point of view, it strikes me that all global wealth is simply being funnelled into fewer and fewer banks, at a geometrical (and completely untenable) rate of increase (thanks to the addition of interest); that all so-called federal reserve, world, and central banks are in fact privately owned!; that tumultuous events such as global financial crises merely serve to 'consolidate' the wealth base of a select handful of individuals & families, who are in effect slowy taking 'legal' (ha ha) possession of the rights and deeds to practically the entire planet's real estate and asset stockpile; and who will in so doing effectively enslave the entire population of mankind - most notably those future generations who, not knowing otherwise, will be born into debt and serdom, and will be 'forced' (I shudder to contemplate how, exactly!) to work back the debts of their profligate forefathers! But then maybe I'm just pessimistic and a $44 Trillion debt is doable. - Frank Genghis The US Debt is 102%/GDP. And you have to add also: - debts of states and municpalities - debts like to Madicare and Medicaid - debts of Fannie Mae and Freddie Mac - Private debts The indication in the Economist's chart of the NET debt for US is WRONG. Specially when the same chart uses the GROSS Debt, not the NET one, for Japan. Appreciating the work you have done with the global debt clock, as it makes visual the current and on-going state of the world debt, I have to make some comments on your article. 1) The Governments DO NOT borrow money from their own citizens. They borrow from the "markets", some non-democratic, speculative, "dark" capital with a fascist ideology. 
The Corporatism within the new world order of the Capitalism of Destruction is boosting the national debts as a means to buy his way into new markets. Greece and the rest of Southern Europe is just the beginning within the Western World. But, the People say otherwise. Democracy is the answer, crisis is just a fiscal matter, while crisis is a political one.
http://www.economist.com/comment/1246444
CC-MAIN-2014-35
refinedweb
1,432
62.98
Lab 3: Recursion

Due at 11:59pm on Friday, 07/05/2019.

Starter Files

Download the lab starter files. Starter code for these questions is in lab03.py.
- Questions 4-6 are optional, but highly recommended if you have the time. Starter code for these questions is in the lab03_extra.py file.

Topics

Consult this section if you are unfamiliar with the material for this lab. It's okay to skip directly to the questions and refer back here if needed.

The recursive implementation for factorial is as follows:

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

We know from its definition that 0! is 1, so we choose n == 0 as the base case.

Required Questions

Q1: Skip Add

Write a function skip_add that takes a single argument n and computes the sum of every other integer between 0 and n. Assume n is non-negative.

def skip_add(n):
    """Takes a number x and returns x + x-2 + x-4 + x-6 + ... + 0.

    >>> skip_add(5)  # 5 + 3 + 1 + 0
    9
    >>> skip_add(10) # 10 + 8 + 6 + 4 + 2 + 0
    30
    >>> # Do not use while/for loops!
    >>> from construct_check import check
    >>> check('lab03.py', 'skip_add',
    ...       ['While', 'For'])
    True
    """
    "*** YOUR CODE HERE ***"
    if n <= 0:
        return 0
    return n + skip_add(n - 2)

Use Ok to test your code:

python3 ok -q skip_add

Q2: Hailstone

Recall the hailstone function from homework 1: pick a positive integer n as the start; if n is even, divide it by 2; if n is odd, multiply it by 3 and add 1; repeat until n is 1.

Hint: When taking the recursive leap of faith, consider both the return value and side effect of this function.

this_file = __file__

def hailstone(n):
    """Print out the hailstone sequence starting at n, and return the
    number of elements in the sequence.

    >>> a = hailstone(10)
    10
    5
    16
    8
    4
    2
    1
    >>> a
    7
    >>> # Do not use while/for loops!
    >>> from construct_check import check
    >>> check(this_file, 'hailstone',
    ...       ['While', 'For'])
    True
    """
    "*** YOUR CODE HERE ***"

Use Ok to test your code:

python3 ok -q hailstone

Q3: Summation

Now, write a recursive function summation that takes two arguments, n and term, and returns the sum term(1) + term(2) + ... + term(n).

def summation(n, term):
    """Return the sum term(1) + term(2) + ... + term(n).

    >>> # Do not use while/for loops!
    >>> from construct_check import check
    >>> check(this_file, 'summation',
    ...       ['While', 'For'])
    True
    """
    assert n >= 1
    "*** YOUR CODE HERE ***"
    if n == 1:
        return term(n)
    else:
        return term(n) + summation(n - 1, term)
    # Base case: only one item to sum, so we return that item.
    # Recursive call: returns the result of summing the numbers up to n-1 using
    # term. All that's missing is term applied to the current value n.

Use Ok to test your code:

python3 ok -q summation

Optional Questions

Note: The following questions are in lab03_extra.py.

More Recursion Practice

Q4: Is Prime

Use Ok to test your code:

python3 ok -q is_prime

Q5: GCD

Use Ok to test your code:

python3 ok -q gcd

Q6: Ten-pairs

Write a function that takes a positive integer n and returns the number of ten-pairs it contains. A ten-pair is a pair of digits within n that sums to 10. Do not use any assignment statements.

The number 7,823,952 has 3 ten-pairs. The first and fourth digits sum to 7+3=10, the second and third digits sum to 8+2=10, and the second and last digit sum to 8+2=10. Note that a digit can be part of more than one ten-pair.
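One possible approach to the ten-pairs question (a sketch, not the official staff solution): peel off the last digit, count the digits in the rest that pair with it to make 10, and recurse; conditional expressions avoid assignment statements.

```python
def ten_pairs(n):
    """Return the number of ten-pairs the positive integer n contains."""
    if n < 10:
        return 0
    # Pairs entirely within the leading digits, plus pairs formed
    # between the last digit and each matching leading digit.
    return ten_pairs(n // 10) + count_digit(n // 10, 10 - n % 10)

def count_digit(n, digit):
    """Count how many times digit appears in n."""
    if n == 0:
        return 0
    return count_digit(n // 10, digit) + (1 if n % 10 == digit else 0)

print(ten_pairs(7823952))  # 3
```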
https://inst.eecs.berkeley.edu/~cs61a/su19/lab/lab03/
CC-MAIN-2020-29
refinedweb
519
71.14
Details

Description
From a discussion in the newsgroup I would like to request a feature. That feature would be to change the way Tapestry embeds its tags from:

<span jwcid="insertStuff">this gets replaced</span>

To (something like this):

<span tap:jwcid="insertStuff">this gets replaced</span>

This allows for several things to happen. The first is that Tapestry can now create XHTML-compliant templates at EDIT time so that developers and HTML coders can validate the XHTML. The second is more nebulous. But I believe the change to using a namespace would allow Tapestry more flexibility as future HTML changes come down the pipe, to ensure Tapestry meets those specifications.

Howard: What we really want is the ability to control the exact id, so that people can use 'id' (if they like), or use whatever namespace is convenient for them.

Activity

We'll see about addressing this in 3.1. It isn't a bug, it's a new feature/enhancement. Just because it doesn't do exactly what you want doesn't make it broken.

All too easily ... even I was surprised that the Tapestry template parser would accept "t:id" as an attribute name without requiring a code change.

if we have a tapestry namespace (which sounds like a good idea to me), the 'jwc' part is a bit redundant isn't it:

<span tap:id="insertStuff">this gets replaced</span>

or, better:

<span jwc:id="insertStuff">this gets replaced</span>
How to find K-th symbol in Grammar using C++

In this problem, we build a table of n rows (1-indexed). We start by writing 0 in the 1st row. In every subsequent row, we look at the preceding row and replace each occurrence of 0 with 01, and each occurrence of 1 with 10. For example, for n = 4: the 1st row is 0, the second row is 01, the 3rd row is 0110, and the 4th row is 01101001. We take n and k as input and return the K-th symbol in the nth row of the table.

K-th symbol in Grammar using C++

For this problem, we take a recursive approach. In each consecutive row we can observe a repeating pattern: the elements before the middle of each row are identical to the entire previous row, and the elements after the middle are the NOT (complement) of the previous row. In each next row, the number of elements doubles.

1-> 0
2-> 0 1
3-> 0 1 1 0
4-> 0 1 1 0 1 0 0 1

So for the nth row, mid (the index of the middle element) = 2^(n-1)/2. If K <= mid, we return the Kth element of the previous row. If K > mid, the NOT of the (K-mid)th element of the previous row is the answer.

#include <bits/stdc++.h>
using namespace std;

int kthGrammar(int n, int k) {
    if (n == 1 && k == 1) return 0;
    int mid = pow(2, n - 1) / 2;
    if (k <= mid) {
        return kthGrammar(n - 1, k);
    }
    return !kthGrammar(n - 1, k - mid);
}

int main() {
    int n, k;
    cin >> n >> k;
    cout << kthGrammar(n, k);
}

input: n=4, k=5
output: 1

Also read: C++ program to find Kth permutation Sequence
Introduction

So what is Tcl?

The name Tcl is derived from "Tool Command Language" and is pronounced "tickle". Tcl is a radically simple open-source interpreted programming language that provides common facilities such as variables, procedures, and control structures, as well as many useful features that are not found in any other major language. Tcl runs on almost all modern operating systems such as Unix, Macintosh, and Windows (including Windows Mobile).

While Tcl is flexible enough to be used in almost any application imaginable, it does excel in a few key areas, including: automated interaction with external programs, embedding as a library into application programs, language design, and general scripting.

Tcl was created in 1988 by John Ousterhout and is distributed under a BSD-style license (which allows you everything GPL does, plus closing your source code). The current stable version, in February 2008, is 8.5.1 (8.4.18 in the older 8.4 branch). The first major GUI extension that works with Tcl is Tk, a toolkit that aims at rapid GUI development. That is why Tcl is now more commonly called Tcl/Tk.

The language features far-reaching introspection, and the syntax, while simple, is very different from the Fortran/Algol/C++/Java world. Although Tcl is a string-based language, there are quite a few object-oriented extensions for it, like Snit, incr Tcl, and XOTcl, to name a few.

Tcl was originally developed as a reusable command language for experimental computer aided design (CAD) tools. The interpreter is implemented as a C library that can be linked into any application. It is very easy to add new functions to the Tcl interpreter, so it is an ideal reusable "macro language" that can be integrated into many applications. However, Tcl is a programming language in its own right, which can be roughly described as a cross-breed between

- LISP/Scheme, to use functional or object-oriented styles of programming very easily.
See "Tcl examples" below for ideas of what one can do. Also, it is very easy to implement other programming languages (be they (reverse) polish notation, or whatever) in Tcl for experimenting. One might call Tcl a "CS Lab". For instance, here's how to compute the average of a list of numbers in Tcl (implementing a J-like functional language - see Tacit programming below):

Def mean = fork /. sum llength

or, in an RPN language similar to FORTH or Postscript,

mean dup sum swap size double / ;

while a more traditional, "procedural" approach would be

proc mean list {
    set sum 0.
    foreach element $list {set sum [expr {$sum + $element}]}
    return [expr {$sum / [llength $list]}]
}

Here is yet another style (not very fast on long lists, but it depends on nothing but Tcl). It works by building up an expression, where the elements of the list are joined with a plus sign, and then evaluating that:

proc mean list {expr double([join $list +])/[llength $list]}

From Tcl 8.5, with math operators exposed as commands and the expand operator, this style is better:

proc mean list {expr {[tcl::mathop::+ {*}$list]/double([llength $list])}}

or, if you have imported the tcl::mathop operators, just

proc mean list {expr {[+ {*}$list]/double([llength $list])}}

Note that all of these are in Tcl, just that the first two require some additional code to implement Def resp. ';'.

A more practical aspect is that Tcl is very open for "language-oriented programming" - when solving a problem, specify a (little) language which most simply describes and solves that problem - then go implement that language...

Why should I use Tcl?

Good question. The general recommendation is: "Use the best tool for the job". A good craftsman has a good set of tools, and knows how to use them best. Tcl is a competitor to other scripting languages like awk, Perl, Python, PHP, Visual Basic, Lua, Ruby, and whatever else will come along.
Each of these has strengths and weaknesses, and when some are similar in suitability, it finally becomes a matter of taste. Points in favour of Tcl are:

- simplest syntax (which can be easily extended)
- cross-platform availability: Mac, Unix, Windows, ...
- strong internationalization support: everything is a Unicode string
- robust, well-tested code base
- the Tk GUI toolkit speaks Tcl natively
- BSD license, which allows open-source use like GPL, as well as closed-source
- a very helpful community, reachable via newsgroup, Wiki, or chat :)

Tcl is not the best solution for every problem. It is however a valuable experience to find out what is possible with Tcl.

Example: a tiny web server

Before spoon-feeding the bits and pieces of Tcl, a slightly longer example might be appropriate, just so you get a feeling for how it looks. The following is, in 41 lines of code, a complete little web server that serves static content (HTML pages, images), but also provides a subset of CGI functionality: if a URL ends with .tcl, a Tcl interpreter is called with it, and the results (a dynamically generated HTML page) are served.

Note that no extension package was needed - Tcl can, with the socket command, do such tasks already pretty nicely. A socket is a channel that can be written to with puts. The fcopy command copies asynchronously (in the background) from one channel to another, where the source is either a process pipe (the "exec tclsh" part) or an open file. This server was tested to work pretty well even on 200MHz Windows 95 over a 56k modem, serving several clients concurrently. Also, because of the brevity of the code, this is an educational example of how (part of) HTTP works.
# DustMotePlus - with a subset of CGI support
set root c:/html
set default index.htm
set port 80
set encoding iso8859-1
proc bgerror msg {puts stdout "bgerror: $msg\n$::errorInfo"}
proc answer {sock host2 port2} {
    fileevent $sock readable [list serve $sock]
}
proc serve sock {
    fconfigure $sock -blocking 0
    gets $sock line
    if {[fblocked $sock]} {
        return
    }
    fileevent $sock readable ""
    set tail /
    regexp {(/[^ ?]*)(\?[^ ]*)?} $line -> tail args
    if {[string match */ $tail]} {
        append tail $::default
    }
    set name [string map {%20 " " .. NOTALLOWED} $::root$tail]
    if {[file readable $name]} {
        puts $sock "HTTP/1.0 200 OK"
        if {[file extension $name] eq ".tcl"} {
            set ::env(QUERY_STRING) [string range $args 1 end]
            set name [list |tclsh $name]
        } else {
            puts $sock "Content-Type: text/html;charset=$::encoding\n"
        }
        set inchan [open $name]
        fconfigure $inchan -translation binary
        fconfigure $sock -translation binary
        fcopy $inchan $sock -command [list done $inchan $sock]
    } else {
        puts $sock "HTTP/1.0 404 Not found\n"
        close $sock
    }
}
proc done {file sock bytes {msg {}}} {
    close $file
    close $sock
}
socket -server answer $port
puts "Server ready..."
vwait forever

And here's a little "CGI" script I tested it with (save as time.tcl):

# time.tcl - tiny CGI script.
if {![info exists env(QUERY_STRING)]} {
    set env(QUERY_STRING) ""
}
puts "Content-type: text/html\n"
puts "<html><head><title>Tiny CGI time server</title></head>
<body><h1>Time server</h1>
Time now is: [clock format [clock seconds]]
<br>
Query was: $env(QUERY_STRING)
<hr>
<a href=index.htm>Index</a>
</body></html>"

Where to get Tcl/Tk

On most Linux systems, Tcl/Tk is already installed. You can find out by typing tclsh at a console prompt (xterm or such). If a "%" prompt appears, you're already set. Just to make sure, type info pa at the % prompt to see the patchlevel (e.g. 8.4.9) and info na to see where the executable is located in the file system. Tcl is an open source project.
The sources are available from if you want to build it yourself. For all major platforms, you can download a binary ActiveTcl distribution from ActiveState. Besides Tcl and Tk, this also contains many popular extensions - it's called the canonical "Batteries Included" distribution. Alternatively, you can get Tclkit: a Tcl/Tk installation wrapped in a single file, which you don't need to unwrap. When run, the file mounts itself as a virtual file system, allowing access to all its parts.

January 2006 saw the release of a new and promising one-file vfs distribution of Tcl: eTcl. Free binaries for Linux, Windows, and Windows Mobile 2003 can be downloaded from . Especially on PocketPCs, this provides several features that have so far been missing from other ports: sockets and window "retreat"; it can also be extended by providing a startup script and by installing pure-Tcl libraries.

First steps

To see whether your installation works, you might save the following text to a file hello.tcl and run it (type tclsh hello.tcl at a console on Linux, double-click on Windows):

package require Tk
pack [label .l -text "Hello world!"]

It should bring up a little grey window with the greeting.

To make a script directly executable (on Unix/Linux, and Cygwin on Windows), use this first line (the # being flush left):

#!/usr/bin/env tclsh

or (in an older, deprecated tricky style):

#! /bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" ${1+"$@"}

This way, the shell can determine which executable to run the script with.

An even simpler way, and highly recommended for beginners as well as experienced users, is to start up tclsh or wish interactively. You will see a % prompt in the console, can type commands to it, and watch its responses. Even error messages are very helpful here, and don't cause a program abort - don't be afraid to try whatever you like!
Example:

$ tclsh
% info patchlevel
8.4.12
% expr 6*7
42
% expr 42/0
divide by zero

You can even write programs interactively, best as one-liners:

% proc ! x {expr {$x<=2? $x: $x*[! [incr x -1]]}}
% ! 5
120

For more examples, see the chapter "A quick tour".

Syntax

Syntax is just the rules of how a language is structured. A simple syntax of English could say (ignoring punctuation for the moment):

- A text consists of one or more sentences
- A sentence consists of one or more words

Simple as this is, it also describes Tcl's syntax very well - if you say "script" for "text", and "command" for "sentence". There's also the difference that a Tcl word can again contain a script or a command. So

if {$x < 0} {set x 0}

is a command consisting of three words: if, a condition in braces, a command (also consisting of three words) in braces.

Take this for example

is a well-formed Tcl command: it calls Take (which must have been defined before) with the three arguments "this", "for", and "example". It is up to the command how it interprets its arguments, e.g.

puts acos(-1)

will write the string "acos(-1)" to the stdout channel and return the empty string "", while

expr acos(-1)

will compute the arc cosine of -1 and return 3.14159265359 (an approximation of Pi), or

string length acos(-1)

will invoke the string command, which again dispatches to its length sub-command, which determines the length of the second argument and returns 8.

Quick summary

A Tcl script is a string that is a sequence of commands, separated by newlines or semicolons. A command is a string that is a list of words, separated by blanks. The first word is the name of the command; the other words are passed to it as its arguments. In Tcl, "everything is a command" - even what in other languages would be called a declaration, definition, or control structure. A command can interpret its arguments in any way it wants - in particular, it can implement a different language, like expr.
A word is a string that is a simple word, or one that begins with { and ends with the matching }.

- (Part of) a word can be an embedded script: a string in [] brackets whose contents are evaluated as a script (see above) before the current command is called.

In short: scripts and commands contain words. Words can again contain scripts and commands. (This can lead to words more than a page long...)

Arithmetic and logic expressions are not part of the Tcl language itself, but the language of the expr command (also used in some arguments of the if, for, while commands) is basically equivalent to C's expressions, with infix operators and functions. See the separate chapter on expr below.

The man page: 11 rules

Here is the complete manpage for Tcl (8.4) with the "endekalogue", the 11 rules. (From 8.5 onward there is a twelfth rule regarding the {*} feature.) The following rules define the syntax and semantics of the Tcl language:

(1) Commands
A Tcl script is a string containing one or more commands. Semi-colons and newlines are command separators unless quoted as described below. Close brackets are command terminators during command substitution (see below) unless quoted.

(2) Evaluation

(3) Words
Words of a command are separated by white space (except for newlines, which are command separators).

(4) Double quotes

(5) Braces

(6) Command substitution

(7) Variable substitution
$name(index): name gives the name of an array variable and index gives the name of an element within that array. Name must contain only letters, digits, underscores, and namespace separators, and may be an empty string.

(8) Backslash substitution
- \\ - Literal backslash (\), no special meaning

(9) Comments

(10) Order of substitution

(11) Substitution and word boundaries
Substitutions do not affect the word boundaries of a command. For example, during variable substitution the entire value of the variable becomes part of a single word, even if the variable's value contains spaces.
The first rule for comments is simple: comments start with # where the first word of a command is expected, and continue to the end of line (which can be extended, by a trailing backslash, to the following line):

# This is a comment \
going over three lines \
with backslash continuation

One of the problems new users of Tcl meet sooner or later is that comments behave in an unexpected way. For example, if you comment out part of code like this:

# if {$condition} {
puts "condition met!"
# }

This happens to work, but any unbalanced braces in comments may lead to unexpected syntax errors. The reason is that Tcl's grouping (determining word boundaries) happens before the # characters are considered.

To add a comment behind a command on the same line, just add a semicolon:

puts "this is the command" ;# that is the comment

Comments are only taken as such where a command is expected. In data (like the comparison values in switch), a # is just a literal character:

if $condition {# good place
    switch -- $x {
        #bad_place {because switch tests against it}
        some_value {do something; # good place again}
    }
}

To comment out multiple lines of code, it is easiest to use "if 0":

if 0 {
    puts "This code will not be executed"
    This block is never parsed, so can contain almost any code - except unbalanced braces :)
}
Quotes (or braces) are rather used for grouping:

set example "this is one word"
set another {this is another}

The difference is that inside quotes, substitutions (like of variables, embedded commands, or backslashes) are performed, while in braces, they are not (similar to single quotes in shells, but nestable):

set amount 42
puts "You owe me $amount" ;#--> You owe me 42
puts {You owe me $amount} ;#--> You owe me $amount

In source code, quoted or braced strings can span multiple lines, and the physical newlines are part of the string too:

set test "hello
world
in three lines"

To reverse a string, we let an index i first point at its end, and, decrementing i until it's zero, append the indexed character to the end of the result res:

proc sreverse str {
    set res ""
    for {set i [string length $str]} {$i > 0} {} {
        append res [string index $str [incr i -1]]
    }
    set res
}

% sreverse "A man, a plan, a canal - Panama"
amanaP - lanac a ,nalp a ,nam A

Hex-dumping a string:

proc hexdump string {
    binary scan $string H* hex
    regexp -all -inline .. $hex
}

% hexdump hello
68 65 6c 6c 6f

Finding a substring in a string can be done in various ways:

string first $substr $str   ;# returns the position from 0, or -1 if not found
string match *$substr* $str ;# returns 1 if found, 0 if not
regexp $substr $str         ;# the same

The matching is done with exact match in string first, with glob-style match in string match, and as a regular expression in regexp. If there are characters in substr that are special to glob or regular expressions, using string first is recommended.

Lists

Many strings are also well-formed lists. Every simple word is a list of length one, and elements of longer lists are separated by whitespace. For instance, a string that corresponds to a list of three elements:

set example {foo bar grill}

Strings with unbalanced quotes or braces, or non-space characters directly following closing braces, cannot be parsed as lists directly. You can explicitly split them to make a list.
The "constructor" for lists is of course called list. It's recommended to use it when elements come from variable or command substitution (braces won't do that). As Tcl commands are lists anyway, the following is a full substitute for the list command:

proc list args {set args}

Lists can contain lists again, to any depth, which makes modelling of matrices and trees easy. Here's a string that represents a 4 x 4 unit matrix as a list of lists. The outer braces group the entire thing into one string, which includes the literal inner braces and whitespace, including the literal newlines. The list parser then interprets the inner braces as delimiting nested lists.

{{1 0 0 0}
 {0 1 0 0}
 {0 0 1 0}
 {0 0 0 1}}

The newlines are valid list element separators, too. Tcl's list operations are demonstrated in some examples:

set x {foo bar}
llength $x           ;#--> 2
lappend x grill      ;#--> foo bar grill
lindex $x 1          ;#--> bar (indexing starts at 0)
lsearch $x grill     ;#--> 2 (the position, counting from 0)
lsort $x             ;#--> bar foo grill
linsert $x 2 and     ;#--> foo bar and grill
lreplace $x 1 1 bar, ;#--> foo bar, and grill

To change an element of a list (of a list...)
in place, the lset command is useful - just give as many indexes as needed:

% set test {{a b} {c d}}
{a b} {c d}
% lset test 1 1 x
{a b} {c x}

The lindex command also takes multiple indexes:

% lindex $test 1 1
x

Example: To find out whether an element is contained in a list (from Tcl 8.5, there's the in operator for that):

proc in {list el} {expr {[lsearch -exact $list $el] >= 0}}

% in {a b c} b
1
% in {a b c} d
0

Example: remove an element from a list variable by value (converse to lappend), if present:

proc lremove {_list el} {
    upvar 1 $_list list
    set pos [lsearch -exact $list $el]
    set list [lreplace $list $pos $pos]
}

% set t {foo bar grill}
foo bar grill
% lremove t bar
foo grill
% set t
foo grill

A simpler alternative, which also removes all occurrences of el:

proc lremove {_list el} {
    upvar 1 $_list list
    set list [lsearch -all -inline -not -exact $list $el]
}

Example: To draw a random element from a list L, we first determine its length (using llength), multiply that with a random number > 0.0 and < 1.0, truncate that to integer (so it lies between 0 and length-1), and use that for indexing (lindex) into the list:

proc ldraw L {
    lindex $L [expr {int(rand()*[llength $L])}]
}

Example: Transposing a matrix (swapping rows and columns), using integers as generated variable names:

proc transpose matrix {
    foreach row $matrix {
        set i 0
        foreach el $row {lappend [incr i] $el}
    }
    set res {}
    set i 0
    foreach e [lindex $matrix 0] {lappend res [set [incr i]]}
    set res
}

% transpose {{1 2} {3 4} {5 6}}
{1 3 5} {2 4 6}

Example: pretty-printing a list of lists which represents a table:

proc fmtable table {
    set maxs {}
    foreach item [lindex $table 0] {
        lappend maxs [string length $item]
    }
    foreach row [lrange $table 1 end] {
        set i 0
        foreach item $row max $maxs {
            if {[string length $item]>$max} {
                lset maxs $i [string length $item]
            }
            incr i
        }
    }
    set head +
    foreach max $maxs {append head -[string repeat - $max]-+}
    set res $head\n
    foreach row $table {
        append res |
        foreach item $row max $maxs {append res [format " %-${max}s |" $item]}
        append res \n
    }
    append res $head
}

Testing:

fmtable {
    {1 short "long field content"}
    {2 "another long one" short}
    {3 "" hello}
}

+---+------------------+--------------------+
| 1 | short            | long field content |
| 2 | another long one | short              |
| 3 |                  | hello              |
+---+------------------+--------------------+

Enumerations: Lists can also be used to implement enumerations (mappings from symbols to non-negative integers). Example of a nice wrapper around lsearch/lindex:

proc makeEnum {name values} {
    interp alias {} $name: {} lsearch $values
    interp alias {} $name@ {} lindex $values
}
makeEnum fruit {apple blueberry cherry date elderberry}

This assigns "apple" to 0, "blueberry" to 1, etc.

% fruit: date
3
% fruit@ 2
cherry

Numbers

Numbers are strings that can be parsed as such. Tcl supports integers (32-bit or even 64-bit wide) and "double" floating-point numbers. From Tcl 8.5 on, bignums (integers of arbitrarily large precision) are supported. Arithmetic is done with the expr command, which takes basically the same syntax of operators (including ternary x?y:z), parens, and math functions as C. See below for a detailed discussion of expr. Control the display format of numbers with the format command, which does appropriate rounding:

% expr 2/3.
0.666666666667
% format %.2f [expr 2/3.]
0.67

Up to the 8.4 version (the present version is 8.5), Tcl honored the C convention that an integer starting with 0 is parsed as octal, so

0377 == 0xFF == 255

This changes in 8.5, though - too often people stumbled over "08", meant as hour or month, raising a syntax error, because 8 is no valid octal digit. In the future you'd have to write 0o377 if you really mean octal.
You can do number base conversions with the format command, where the format is %x for hex, %d for decimal, %o for octal, and the input number should have the C-like markup to indicate its base:

% format %x 255
ff
% format %d 0xff
255
% format %o 255
377
% format %d 0377
255

Variables with integer value can be most efficiently modified with the incr command:

incr i    ;# default increment is 1
incr j 2
incr i -1 ;# decrement with negative value
incr j $j ;# double the value

The maximal positive integer can be determined from the hexadecimal form, with a 7 in front, followed by several "F" characters. Tcl 8.4 can use "wide integers" of 64 bits, and the maximum integer there is

% expr 0x7fffffffffffffff
9223372036854775807

Demonstration: one more, and it turns into the minimum integer:

% expr 0x8000000000000000
-9223372036854775808

Bignums: from Tcl 8.5, integers can be of arbitrary size, so there is no maximum integer anymore. Say, you want a big factorial:

proc tcl::mathfunc::fac x {expr {$x < 2? 1: $x * fac($x-1)}}

IEEE special floating-point values: also from 8.5, Tcl supports a few special values for floating-point numbers, namely Inf (infinity) and NaN (Not a Number):

% set i [expr 1/0.]
Inf
% expr {$i+$i}
Inf
% expr {$i+1 == $i}
1
% set j NaN ;# special because it isn't equal to itself
NaN
% expr {$j == $j}
0

Characters are abstractions of writing elements (e.g. letters, digits, punctuation characters, Chinese ideographs, ligatures...). In Tcl since 8.1, characters are internally represented with Unicode, which can be seen as unsigned integers between 0 and 65535 (recent Unicode versions have even crossed that boundary, but the Tcl implementation currently uses a maximum of 16 bits). Any Unicode U+XXXX can be specified as a character constant with a \uXXXX escape. It is recommended to only use ASCII characters (\u0000-\u007f) in Tcl scripts directly, and escape all others.
Convert between numeric Unicode and characters with

set char [format %c $int]
set int  [scan $char %c]

Watch out that int values above 65535 produce 'decreasing' characters again, while a negative int even produces two bogus characters. format does not warn, so better test before calling it.

Sequences of characters are called strings (see above):

string bytelength $c ;# assuming [string length $c]==1

String routines can be applied to single characters too, e.g. [string toupper] etc. Find out whether a character is in a given set (a character string) with

expr {[string first $char $set]>=0}

As Unicodes for characters fall in distinct ranges, checking whether a character's code lies within a range allows a more-or-less rough classification of its category:

proc inRange {from to char} {
    # generic range checker
    set int [scan $char %c]
    expr {$int>=$from && $int <= $to}
}
interp alias {} isGreek    {} inRange 0x0386 0x03D6
interp alias {} isCyrillic {} inRange 0x0400 0x04F9
interp alias {} isHangul   {} inRange 0xAC00 0xD7A3

This is a useful helper to convert all characters beyond the ASCII set to their \u.... escapes (so the resulting string is strict ASCII):

proc u2x s {
    set res ""
    foreach c [split $s ""] {
        scan $c %c int
        append res [expr {$int<128? $c :"\\u[format %04.4X $int]"}]
    }
    set res
}

Variables can be local or global, and scalar or array. Their names can be any string not containing a colon (which is reserved for use in namespace separators), but for the convenience of $-dereference one usually uses names of the pattern [A-Za-z0-9_]+, i.e. one or more letters, digits, or underscores. Variables need not be declared beforehand.
They are created when first assigned a value, if they did not exist before, and can be unset when no longer needed:

set foo 42        ;# creates the scalar variable foo
set bar(1) grill  ;# creates the array bar and its element 1
set baz $foo      ;# assigns to baz the value of foo
set baz [set foo] ;# the same effect
info exists foo   ;# returns 1 if the variable foo exists, else 0
unset foo         ;# deletes the variable foo

Retrieving a variable's value with the $foo notation is only syntactic sugar for [set foo]. The latter is more powerful though, as it can be nested, for deeper dereferencing:

set foo 42
set bar foo
set grill bar
puts [set [set [set grill]]] ;# gives 42

Some people might expect $$$grill to deliver the same result, but it doesn't, because of the Tcl parser. When it encounters the first and second $ sign, it tries in vain to find a variable name (consisting of one or more letters, digits, or underscores), so these $ signs are left literally as they are. The third $ allows substitution of the variable grill, but no backtracking to the previous $'s takes place. So the evaluation result of $$$grill is $$bar. Nested [set] commands give the user more control.

Local vs. global

A local variable exists only in the procedure where it is defined, and is freed as soon as the procedure finishes. By default, all variables used in a proc are local. Global variables exist outside of procedures, as long as they are not explicitly unset. They may be needed for long-living data, or implicit communication between different procedures, but in general it's safer and more efficient to use globals as sparingly as possible.
Example of a very simple bank with only one account:

set balance 0 ;# this creates and initializes a global variable

proc deposit {amount} {
    global balance
    set balance [expr {$balance + $amount}]
}

proc withdraw {amount} {
    set ::balance [expr {$::balance - $amount}]
}

This illustrates two ways of referring to global variables - either with the global command, or by qualifying the variable name with the :: prefix. The variable amount is local in both procedures, and its value is that of the first argument to the respective procedure.

Introspection:

info vars ;#-- lists all visible variables
info locals
info globals

To make all global variables visible in a procedure (not recommended):

eval global [info globals]

Scalar vs. array

All of the value types discussed above in Data types can be put into a scalar variable, which is the normal kind. Arrays are collections of variables, indexed by a key that can be any string, and in fact implemented as hash tables. What other languages call "arrays" (vectors of values indexed by an integer) would in Tcl rather be lists. Some illustrations:

#-- The key is specified in parens after the array name
set capital(France) Paris

#-- The key can also be substituted from a variable:
set country France
puts $capital($country)

#-- Setting several elements at once:
array set capital {Italy Rome Germany Berlin}

#-- Retrieve all keys:
array names capital ;#-- Germany Italy France -- quasi-random order

#-- Retrieve keys matching a glob pattern:
array names capital F* ;#-- France

A fanciful array name is "" (the empty string, therefore we might call this the "anonymous array" :) which makes for nice reading:

set (example) 1
puts $(example)

Note that arrays themselves are not values. They can be passed in and out of procedures not as $capital (which would try to retrieve the value), but by reference. The dict type (available from Tcl 8.5) might be better suited for these purposes, while otherwise providing hash table functionality, too.
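As a small illustration of the dict alternative just mentioned (a sketch; requires Tcl 8.5 or later): a dict is a value, so unlike an array it can be passed to and returned from procedures directly, with no upvar needed.

    #-- A dict is a value, so it travels through procs like any other value.
    set capital [dict create France Paris Italy Rome]
    dict set capital Germany Berlin ;# add or update an entry
    puts [dict get $capital France] ;#--> Paris
    puts [dict keys $capital]       ;#--> France Italy Germany (insertion order)

    proc shout caps {
        # receives the whole dict by value - no upvar needed
        dict for {country city} $caps {puts "$country: $city"}
    }
    shout $capital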
System variables

At startup, tclsh provides the following global variables:

- argc - number of arguments on the command line
- argv - list of the arguments on the command line
- argv0 - name of the executable or script (first word on command line)
- auto_index - array with instructions from where to load further commands
- auto_oldpath - (same as auto_path ?)
- auto_path - list of paths to search for packages
- env - array, mirrors the environment variables
- errorCode - type of the last error, or {}, e.g. ARITH DIVZERO {divide by zero}
- errorInfo - last error message, or {}
- tcl_interactive - 1 if interpreter is interactive, else 0
- tcl_libPath - list of library paths
- tcl_library - path of the Tcl system library directory
- tcl_patchLevel - detailed version number, e.g. 8.4.11
- tcl_platform - array with information on the operating system
- tcl_rcFileName - name of the initial resource file
- tcl_version - brief version number, e.g. 8.4

One can use temporary environment variables to control a Tcl script from the command line, at least on Unixoid systems including Cygwin. Example scriptlet:

set foo 42
if [info exists env(DO)] {eval $env(DO)}
puts foo=$foo

This script will typically report

foo=42

To remote-control it without editing, set the DO variable before the call:

DO='set foo 4711' tclsh myscript.tcl

which will evidently report

foo=4711

Dereferencing variables

A reference is something that refers, or points, to another something (if you pardon the scientific expression). In C, references are done with *pointers* (memory addresses); in Tcl, references are strings (everything is a string), namely names of variables, which via a hash table can be resolved (dereferenced) to the "other something" they point to:

puts foo       ;# just the string foo
puts $foo      ;# dereference variable with name of foo
puts [set foo] ;# the same

This can be done more than one time with nested set commands.
Compare the following C and Tcl programs, which do the same (trivial) job and exhibit remarkable similarity:

 #include <stdio.h>
 int main(void) {
    int i      = 42;
    int *ip    = &i;
    int **ipp  = &ip;
    int ***ippp = &ipp;
    printf("hello, %d\n", ***ippp);
    return 0;
 }

...and Tcl:

 set i    42
 set ip   i
 set ipp  ip
 set ippp ipp
 puts "hello, [set [set [set [set ippp]]]]"

The asterisks in C correspond to calls to set in Tcl dereferencing. There is no operator corresponding to the C & because, in Tcl, special markup is not needed in declaring references. The correspondence is not perfect; there are four set calls and only three asterisks. This is because mentioning a variable in C is an implicit dereference. In this case, the dereference is used to pass its value into printf. Tcl makes all four dereferences explicit (thus, if you only had 3 set calls, you'd see hello, i). A single dereference is used so frequently that it is typically abbreviated with $varname, e.g.

 puts "hello, [set [set [set $ippp]]]"

has set where C uses asterisks, and $ for the last (default) dereference.

The hash table for variable names is either global, for code evaluated in that scope, or local to a proc. You can still "import" references to variables in scopes that are "higher" in the call stack, with the upvar and global commands. (The latter is automatic in C if the names are unique. If there are identical names in C, the innermost scope wins.)

Variable traces

One special feature of Tcl is that you can associate traces with variables (scalars, arrays, or array elements) that are evaluated optionally when the variable is read, written to, or unset. Debugging is one obvious use for that. But there are more possibilities.
For instance, you can introduce constants where any attempt to change their value raises an error:

 proc const {name value} {
    uplevel 1 [list set $name $value]
    uplevel 1 [list trace var $name w {error constant ;#} ]
 }

 const x 11
 incr x
 can't set "x": constant

The trace callback gets three words appended: the name of the variable; the array key (if the variable is an array, else ""); and the mode:

- r - read
- w - write
- u - unset

If the trace is just a single command like above, and you don't want to handle these three, use a comment ";#" to shield them off.

Another possibility is tying local objects (or procs) with a variable - if the variable is unset, the object/proc is destroyed/renamed away.
http://en.m.wikibooks.org/wiki/Tcl_Programming/Introduction
I’ve touched on the awesomeness of data binding a few times in this blog. Data binding is one of those things that still seems a little bit like magic: tell a control about some data, and the control will automagically display it, reformatting as required. It’s something which works in a very different way from iOS and – for example – UITableView, which uses callbacks you need to deal with in order to generate the data to be displayed, on a row-by-row or cell-by-cell basis. The most magical part of data binding is that the link between control and data can stay in sync, all by itself.

OK, so you might not think of using data binding in that awesome game you are working on in MonoGame, but if you ever need to display a table or other collection of data, I can guarantee that you’ll love it.

I was most recently talking like a giddy schoolgirl on data binding in a blog entry, More on ListBoxes: Using databinding to control appearance, which used data converters to alter the appearance of entries in the XAML ListBox. For example, you can data-bind a Foreground color to a variable (say, ‘deadliness of a given animal’) and use a data converter to swap a value (the deadliness, on a scale of 1 (tickles) to 10 (the remains were never found)) to a color. It’s fun stuff, and only requires you to write a little extra class to handle it.

Unfortunately, although I managed to explain that part pretty well (I thought), I completely messed up something rather important: what do you have to do to make the list view (or other control) magically update itself if one of those properties changes?

As an example, here’s a GridView which displays a list of names. It unhides a gold star when the scores are over 90. What happens when the scores are all increased? Will the stars appear by themselves? At the time I casually said I wasn’t sure, but setting the ItemSource to null and re-assigning it would take care of it. And it did.
What I didn’t notice was that it also caused the control to completely redraw itself, and as I had Windows animation effects turned off, I didn’t realize just how ugly that looks. As in, very ugly indeed. Now is the time to correct this heinous error.

In a nutshell, no: the stars will not appear by themselves. You need to tell the control that when a specific bound piece of data changes, it needs to take another look at the data and redraw it as necessary. And this means you need to implement something called the INotifyPropertyChanged interface on the data that you are binding to the control (drop that sentence into a conversation at a party and you’ll be an instant hero).

Let’s see that in practice, with my amazing app which will display names of students and award them a gold star if they get 90 or higher. First of all, let’s create a class which defines what a Student looks like, and straightaway define that INotifyPropertyChanged stuff.

public class Student : INotifyPropertyChanged
{
    // Pupil's name
    public string Name { get; set; }

    // Pupil's score. This is the property which can trigger the redraw of the control, so
    // its definition is a little different from Name, as it implements the INotifyPropertyChanged interface.
    private int m_score; // Private variable to store the score

    // Some standard code for the INotifyPropertyChanged interface
    public event PropertyChangedEventHandler PropertyChanged;
    private void NotifyPropertyChanged(String updateScore)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new System.ComponentModel.PropertyChangedEventArgs(updateScore));
        }
    }

    public int Score
    {
        get { return m_score; }
        set
        {
            m_score = value;
            NotifyPropertyChanged("Score"); // Trigger the change event if the value is changed!
        }
    }

    public Student(string name, int score)
    {
        Name = name;
        m_score = score;
    }
}

As you can see, we need to do some extra work on the getter/setter of the Score field, as this is the variable that will trigger refreshing the data-bound control.
We also need to implement the PropertyChanged event handler, using some very boilerplate code you'll end up using time and time again.

Now let’s define a class which represents the school of students for our app. We’ll create an ObservableCollection list to store a bunch of students, like this:

public class SchoolDataType
{
    public ObservableCollection<Student> students = new ObservableCollection<Student>();
}

And then, in the App.cs file, let’s declare an instance of that school. Putting this variable in App.cs means that every other class we add to the project can access it.

sealed partial class App : Application
{
    public static SchoolDataType SchoolData { get; set; }

    public App()
    {
        this.InitializeComponent();
        this.Suspending += OnSuspending;
        SchoolData = new SchoolDataType();
        …
    }
}

All we have left is the XAML and the code-behind page. First the XAML. We declare a GridView, and inside it, bind a grid's DataTemplate to the SchoolData’s students' scores. We tweak this binding via the data converter to control the Visibility of the grid, and inside that grid we draw a gold star. The end result is that the star will be visible or hidden depending on the score.
<Page x:Class="GoldStar.MainPage" xmlns="" xmlns:x="" xmlns:local="using:GoldStar" xmlns:d="" xmlns:mc="" mc: <Page.Resources> <local:StarDataConverter x:</local:StarDataConverter> </Page.Resources> <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}" Tapped="Tap_UpdateScores"> <GridView x: <GridView.ItemTemplate> <DataTemplate> <StackPanel Orientation="Vertical" Margin="10,10,0,0" Width="256" Height="256" Background="DarkBlue"> <TextBlock Text="{Binding Name}" FontSize="35" /> <TextBlock Text="{Binding Score}" FontSize="25"/> <Grid Visibility="{Binding Score, Converter={StaticResource DrawStar}}"> <Polygon Points="65,0,90,50,140,55, 105,90,120,140,65,115,20,140,35,90,0,55,50,50"> <Polygon.Fill> <LinearGradientBrush StartPoint="0,0" EndPoint="0,1"> <GradientStop Color="Yellow" Offset="0.0" /> <GradientStop Color="Orange" Offset="1.0" /> </LinearGradientBrush> </Polygon.Fill> </Polygon> </Grid> </StackPanel> </DataTemplate> </GridView.ItemTemplate> </GridView> </Grid> </Page> The XAML is quite straightforward. The DataTemplate is the design of the cell that will be displayed, containing two text boxes (one for the name, one for the score) and an extra grid which contains the gold star image. Rather than using a bitmap, I use XAML's drawing abilities to quickly create a filled shape. This star's parent grid is the part that's bound to the data converter that can show it, or hide it. I think you will have to agree there's actually a surprisingly small amount of XAML required to create a complete grid control and populate it! Obviously you would spend more time on appearance, and thus the XAML would quadruple in size :-) The code behind page is very simple. 
There are two parts. First the data initialization:

public MainPage()
{
    App.SchoolData.students.Add(new Student("Adam", 86));
    App.SchoolData.students.Add(new Student("Brian", 80));
    App.SchoolData.students.Add(new Student("Charlie", 50));
    App.SchoolData.students.Add(new Student("Dave", 78));
    App.SchoolData.students.Add(new Student("Eve", 95));
    App.SchoolData.students.Add(new Student("Francesca", 95));
    App.SchoolData.students.Add(new Student("George", 72));
    App.SchoolData.students.Add(new Student("Harry", 51));
    myGridView.ItemsSource = App.SchoolData.students;
}

Notice how we can access the SchoolData by prepending it with App, as in App.SchoolData. This is a good way to deal with the evil that is global variables, IMHO.

And now the touch event. This is added purely to demonstrate that the GridView will update itself when the values change. If you touch anywhere on the grid, this method will increase all the scores and so display more stars. The foreach loop makes quick work of cycling through all the student objects in the list. You could of course use LINQ if you wanted to apply changes to only specific pupils.

private void Tap_UpdateScores(object sender, TappedRoutedEventArgs e)
{
    // Go through all the students and increase their scores.
    // Watch in wonder as the stars appear automatically!
    foreach (Student pupil in App.SchoolData.students)
    {
        pupil.Score += 5;
    }
}

And here's what it looks like. After a couple of taps... and some more...

There you go: it's practically magic!
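The change-notification pattern described in this post is not specific to XAML: the same observer mechanism ships in Java's standard library as java.beans.PropertyChangeSupport. As a point of comparison, here is a hedged sketch of a Java analogue of the Student class above. There is no UI binding here, the class and method names are just illustrative, and the listener simply receives the event the way a bound control would.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Sketch: a Java analogue of the C# Student class, using the standard
// java.beans observer support instead of INotifyPropertyChanged.
public class Student {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private final String name;
    private int score;

    public Student(String name, int score) {
        this.name = name;
        this.score = score; // set directly: no event on construction, like the C# version
    }

    public String getName() { return name; }
    public int getScore() { return score; }

    public void setScore(int newScore) {
        int old = score;
        score = newScore;
        // fires listeners only when the value actually changed
        pcs.firePropertyChange("score", old, newScore);
    }

    public void addListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public static void main(String[] args) {
        Student s = new Student("Eve", 86);
        s.addListener(e -> System.out.println(
            e.getPropertyName() + ": " + e.getOldValue() + " -> " + e.getNewValue()));
        s.setScore(95); // prints "score: 86 -> 95"
    }
}
```

In both worlds the idea is the same: the setter, not the caller, is responsible for announcing the change, so anything bound to the object stays in sync automatically.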
http://blogs.msdn.com/b/johnkenn/archive/2014/01/30/yet-more-on-data-binding-refreshing-your-views.aspx
Array Usage Guidelines

For a general description of arrays and array usage, see.

using System;
using System.Collections;

public class ExampleClass
{
    public sealed class Path
    {
        private Path() {}
        private static char[] badChars = {'\"', '<', '>'};
        public static char[] GetInvalidPathChars()
        {
            return badChars;
        }
    }

    public static void Main()
    {
        // The following code displays the elements of the
        // array as expected.
        foreach (char c in Path.GetInvalidPathChars())
        {
            Console.Write(c);
        }
        Console.WriteLine();

        // The following code sets all the values to A.
        Path.GetInvalidPathChars()[0] = 'A';
        Path.GetInvalidPathChars()[1] = 'A';
        Path.GetInvalidPathChars()[2] = 'A';

        // The following code displays the elements of the array to the
        // console. Note that the values have changed.
        foreach (char c in Path.GetInvalidPathChars())
        {
            Console.Write(c);
        }
    }
}
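The pitfall the example above demonstrates, where a method that returns its internal array lets callers mutate private state, applies in Java just as it does in C#. A hedged sketch of the usual fix (the class and method names here are made up for illustration): return a defensive copy, so the caller's writes never reach the original.

```java
import java.util.Arrays;

// Sketch: returning a clone protects the internal array from callers.
class SafePath {
    private static final char[] BAD_CHARS = {'"', '<', '>'};

    // Defensive copy: each caller gets its own array to scribble on.
    static char[] getInvalidPathChars() {
        return BAD_CHARS.clone();
    }

    public static void main(String[] args) {
        char[] chars = getInvalidPathChars();
        chars[0] = 'A'; // mutates only the caller's copy
        // the internal array is unchanged
        System.out.println(Arrays.toString(getInvalidPathChars()));
    }
}
```

The trade-off is an allocation per call, which is why guidelines in both ecosystems also suggest not exposing mutable collections as properties in the first place.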
http://msdn.microsoft.com/en-us/library/k2604h5s(v=vs.100).aspx
Java Reference In-Depth Information Many people, from the everyman to the super famous, use Twitter. Reports have it as the third biggest social networking site as of this writing. Some people (presidents, celebrities, and so on) have hundreds of thousands of followers, and at least one has in excess of a million followers. As you can imagine, the difficulties and logistics of managing a graph as complex and sprawling as the one Twitter manages can be frustrating and has been the source of many well publicized outages. Twitter furnishes a REST API through which users can interact with the system. The API lets one do anything she might do from the web site: follow users, stop following users, update status, and so on. The API is concise and has many language bindings already available. In the example, you'll use one project's API, called Twitter4J, which nicely wraps the API in simple approachable API calls. Twitter4J was created by Yusuke Yamamoto and is available under the BSD license. It's available in the Maven repositories, and it has a fairly active support mailing list. If you want to find more about Twitter4J, visit . Twitter Messages In the first example, you'll build support for receiving messages, not for sending them. The second example will feature support for outbound messages. In particular, you'll build support for receiving the status updates of the people to which a particular account is subscribed, or following . There are other types of Twitter messages. Although you won't build adapters for every type, it won't be difficult to imagine how it's done once you've completed the examples. Twitter supports direct messaging, in which you can specify that one recipient only sees the contents of the message; this is peer-to-peer messaging , roughly analogous to using SMS messaging. Twitter also supports receiving messages in which your screen handle was mentioned. 
These messages can often be messages directed to you and others, messages discussing you, or messages reposting (retweeting) something you said already.

A Simple MessageSource

There are two ways to build an adapter, using principally the same technique. You can create a class that implements MessageSource, or you can configure a method that should be invoked, effectively letting Spring Integration coerce a class into behaving like an implementation of a MessageSource. In this example, you'll build a MessageSource, which is very succinct:

package org.springframework.integration.message;

import org.springframework.integration.core.Message;

public interface MessageSource<T> {
    Message<T> receive();
}

In the example, you're building a solution that can pull status updates and return them in a simple POJO object called Tweet.

package com.apress.springenterpriserecipes.springintegration.twitter;

import java.io.Serializable;
import java.util.Date;

// …
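To make the pull-style contract of MessageSource concrete before wiring in Twitter4J, here is a hedged sketch of a source that drains an in-memory queue of status texts. It deliberately has no Spring or Twitter dependency: the Message class below is a stand-in for Spring Integration's message wrapper, not the real class, and QueueTweetSource is a made-up name.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Stand-in for org.springframework.integration.core.Message: just a payload holder.
class Message<T> {
    private final T payload;
    Message(T payload) { this.payload = payload; }
    T getPayload() { return payload; }
}

// The same single-method contract the Spring interface expresses.
interface MessageSource<T> {
    // Returns the next message, or null when nothing is currently available.
    Message<T> receive();
}

// A toy source that hands out pre-loaded "tweets" one at a time.
class QueueTweetSource implements MessageSource<String> {
    private final Queue<String> updates = new ArrayDeque<>();

    void add(String statusText) { updates.add(statusText); }

    @Override
    public Message<String> receive() {
        String next = updates.poll();
        return next == null ? null : new Message<>(next);
    }
}
```

A real adapter would replace the queue with a call to the Twitter API and a Tweet payload type, but the shape of receive() - return a message or null - stays the same.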
http://what-when-how.com/Tutorial/topic-105fbg1u/Spring-Enterprise-Recipes-A-Problem-Solution-Approach-354.html
Listening for connections

Moin Phillips (Greenhorn), posted 14 years ago:
Hi, I'm new to these forums and came across them through the Head First books. Anyway, I'm creating a simple chat program and I've coded all the client-side files, placed them in a jar and signed them. This part works perfectly fine on my local machine and from a remote server (at the moment it only shows the GUI of the chat room). I'm having trouble with the next bit, that is, how do I listen for connections 24/7? I know that I need some form of loop that listens for connections. Btw, I'm using sockets and I know all about them, and I have even coded a few parts of this. So, if I have a class file with a main method that continuously listens for connections, how do I get it to run on a server 24/7? I'd prefer to stay away from Servlets and JSP as I am still reading the Head First book on that, but I know you can do it through Jar files, can't you?

Chris Beckey (Ranch Hand), posted 14 years ago:
java.net.ServerSocket may be what you are looking for.

Moin Phillips (Greenhorn), posted 14 years ago:
Yes, I know about ServerSockets, but what I'm trying to say is how do you run a main method on a server? Do servers have command lines from where you can run a server.java file and let it run continuously?

Chris Beckey (Ranch Hand), posted 14 years ago:
Here is, I think, basically what you want:
1.) from main, create an instance of a service thread
2.) the service thread opens the server socket and waits for client connections
3.) the main thread just waits for "administrative" input
4.)
spawn off, or grab a thread from a pool, to service requests from within the service thread.

The code below is basically it, but take it with a grain of salt as I wrote it in about 5 minutes. It will keep running as long as nothing is available on System.in.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server implements Runnable {
    private ServerSocket serverSocket = null;
    private boolean terminated = false;

    private Server(int port) {
        try {
            serverSocket = new ServerSocket(port);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private boolean isTerminated() {
        return terminated;
    }

    public void terminate() {
        terminated = true;
    }

    public void run() {
        System.out.println("Listening on " + serverSocket.getLocalPort());
        while (!isTerminated()) {
            try {
                // this will wait indefinitely for a client connection
                Socket socket = serverSocket.accept();
                // start a thread, or preferably get a pooled thread, to service the request;
                // don't do it on this thread or scalability will be an issue
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Terminated");
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        Server server = new Server(1963);
        Thread serverThread = new Thread(server);
        serverThread.setDaemon(true);
        serverThread.start(); // the server socket servicing is now running on a separate thread
        try {
            System.in.read(); // will wait for input from the keyboard
            server.terminate();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Chris Beckey (Ranch Hand), posted 14 years ago:
Start it with "java Server" from a command line. If you want to run it as a service (on Windows), Google for "java windows service"; there are a number of apps that will help there.

Moin Phillips (Greenhorn), posted 14 years ago:
Thanks for the code, but I've already got most of that done.
I'm not talking about running a service on windows, I'm talking about an actual Web Server (something you can buy commercially, the thing that this forum is running on). How do I get my Server.java file running on there, where there is no Windows? Plus whats the deal with these damn applets ? I've made mine into JAR files and signed them and done pretty much everything by the book and I still get SocketPermission errors! There are no files being accessed nothing being changed, only messages sent forth and back. I dont want to mess around with policy files as my chatroom will have hundreds of clients and I dont want to be giving them directions to changing their policy files. Thank you for you help so far. Mo Chris Beckey Ranch Hand Posts: 116 I like... posted 14 years ago Number of slices to send: Optional 'thank-you' note: Send >>I'm not talking about running a service on windows, I'm talking about an actual Web Server (something you can buy commercially, the thing that this forum is running on). How do I get my Server.java file running on there, where there is no Windows? Well that does change things rather significantly and unfortunately complicates things also. What platform (web server, servlet container, application server, etc...) do you have in mind (Apache, Tomcat , JBoss , WebSphere, IIS, ...)? Is it something you have control over or are you stuck with what an ISP provides? ========== The rest of this reply is sorta' random stuff until the above question is answered. The first question is why? it doesn't look like you are using what a web server provides you (i.e. HTTP protocol support) so why do you want to run on one? Also, can a chat application be adequately implemented given the limitations of HTTP (i.e. request/response pattern )? Speaking rather generally, web servers don't provide for their hosted applications to be starting service sockets because that is what the web server does. 
It is at least theoretically possible to write a connector (for Tomcat) that would do what you want but that is forcing a square peg into a round hole. For other web/app servers I don't know. You might try looking for "startup" classes in the documentation. What app/web server do you have in mind to run on? and again do you really have to? Could you run a standalone chat server on the same box as the web server? Another question, would you expect the clients to receive un-initiated messages? that is, not a response to a request that the applet made? Here is a link to an article on tunneling RMI over HTTP: Moin Phillips Greenhorn Posts: 4 posted 14 years ago Number of slices to send: Optional 'thank-you' note: Send The only reason I want to use a web server is to make my chatroom live, i.e. make it available to clients on the Internet. I would stick with the free webspace from Geocities if that worked! I been researching a lot about this and think these problems are due to my lack of knowledge of servers. I designed my chatroom on a client-server model, I have completed the client side and the server side only works on my local machine. The problem now arises when I try to make my chatroom 'public' or live on the Internet. There are so many technologies flying around, I've tried to experiment with PHP and PostGres and Java itself has so many solutions JSP/Servlets, RMI, Web Start which all sound confusing to me at the moment. Therefore i decided on just putting the class files in a JAR and on a server and hope they work without having a container like apache or tomcat or anything, btw I'm using my University's server at the moment which has java installed on it. Once this project is complete I plan on getting a commercial one. The client side works but when connecting to the server I get socketpermission errors. Basically, do you think that I'am going about this the right way or do you think I should use some other technology? 
Aaron Shaw Greenhorn Posts: 9 posted 14 years ago Number of slices to send: Optional 'thank-you' note: Send I think i know what you mean, Moin. I run a java MUD, which is a text based game where many players can connect. I have the main game loop run like this: while(shutdown == false) { blah blah blah.... Thread.sleep(50); } The game runs in this loop forever, until the boolean 'shutdown' is set to true by an admin. The thread.sleep(50) makes the thread pause for 50 miliseconds each loop, in an attempt to stop the program from locking up completely any other programs i need to run. Whether this is good practise or not, im not sure. Anyone feel free to provide a more elegant solution. [ January 23, 2007: Message edited by: Aaron Shaw ] The part that listens for incoming connections runs in another thread, but the loop is the same, and the thread is lower priority. You dont need a web server. You're not serving up html. Just run you program on whatever port you like, such as 6666, and connect using telnet or any custom client you made, etc. [ January 23, 2007: Message edited by: Aaron Shaw ] Chris Beckey Ranch Hand Posts: 116 I like... posted 14 years ago Number of slices to send: Optional 'thank-you' note: Send This response may be a bit pedantic, but will hopefully clear up the issue. Web Server 101 ... with vague generalities that will probably get me flamed.<g> A web server, in its most elemental form, simply listens for and responds to HTTP requests. Basically that means it gets an HTTP request (a string in a specified form), determines what resource (think HTML file) the request is for and sends the content of the resource (file) back to the requestor, formatted as an HTTP response. That is basically all a web server does. Other than the specific of parsing/assembling the HTTP requests/responses and locating resources, (involved and initially trivial, respectively) the code I posted earler, and that you have written, is the core of a web server. 
So a web server is an application that understands HTTP protocol. Fine ... call 'em "HTTP servers", its more accurate, more precise and avoids HTML involvement that the server doesn't really care about. In your case you have an HTML file that contains an applet tag. The HTML file is one resource, the applet is another. All the web server does is respond to requests for those resources (from a browser) and return the bytes that make up the page and the applet. Effectively this allows you to download code to the client. The client (the browser) then runs that code because it knows that it Java byte code (the response includes type information). In this example, the browser is far more sophisticated than the server. All fine up to now, you have gotten code loaded on a client and it is running. Now the applet wants to communicate with a server. Assuming it is going to be through a socket, the first question is what protocol? That is, what is the client going to send to the server and what is the server going to respond with? Also, what is the messaging pattern? Will the client always initiate a request, followed by a server response? Can the server send an unsolicited message to the client? ==>Draw boxes for the client(s) and servers, connect them with lines and then determine what goes from the client to the server, to the other client, etc ... Then think about the perspective of each box (i.e. server waits for messages and responds, client sends message and waits for response, etc ...) The applet is not restricted to HTTP or any other defined protocol, it can send whatever it wants down the wire. BUT, it is restricted to talking to only the server it was loaded from, that is a function of Java sandbox security (and maybe why you are getting the exception, see references below). 
Once you answer all that stuff about messaging pattern and content then you can determine if HTTP is a valid choice for the protocol and if it is then you may be able to use a web server as your chat server. Read on ... => The short answer is that it will work but not particularly well, which implies that a web server is not the optimal solution. The bit about drawing boxes and the perspective of each box should illustrate why. Now to slightly more complex stuff. In HTTP the requested resource does not have to be a static file. For example a request for an HTML page with the the current time must change constantly. That is where active server content in one form or another comes in, that may be servlet/JSP, PHP, Ruby, ASP, ASP.net, CGI, ISAPI DLL, etc. But basically the interface is the same, that is: Request - the name of the resource and possibly some parameters Result - the resource contents as a stream of bytes (with some type information) Note that the fact that the protocol is HTTP has not changed. The browser knows nothing about how the content is generated, it just gets back a stream of bytes and either displays it or executes it (again depending on type, which is part of the response). If active content on an HTTP server is the route you decide to take, then pick a technology and then pick a server that implements it. ==> Summary You're doing this for the education, right? If not, find an existing IRC server/client implementation and install it. An HTTP server is probably not the best fit. The ability to download and run Java code can be addressed with WebStart without the limitations on server communication of an applet. This problem has already been solved and codified (see below for IRC), implement to that spec, or at least read enough of it to understand the rationale. Deploying a generic Java (server) application on a hosted system is gonna' be half a step from impossible. 
If you do have the intention of deploying this commercially, you may have to host it yourself. Despite all that, it does sound like you are on the right track. The problem may be more complex than originally thought.

References:
- HTTP protocol: you don't have to read it from cover to cover, just the basics of request and response format.
- IRC (chat) protocol: this problem has been solved; even if you don't implement the entire spec, the basics of the messaging will be the same.
- Java tutorial on applets/server communication: ... and maybe find your exception answer here.
- Run TCP Monitor and watch HTTP browser/server interaction (you must download Axis to get it). This is most illuminating ...
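The "Web Server 101" description earlier in the thread - read an HTTP request, locate the resource, write back an HTTP response - can be sketched in a few lines of plain Java. This is a toy, not a real server (one request, no keep-alive, no threading of client handling), and every name in it is illustrative; it just shows that the socket loop plus the HTTP framing really is the core of the thing.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Toy HTTP responder: accepts one connection, reads the request line,
// and answers with a minimal HTTP/1.0 response.
public class TinyHttpServer {
    public static void serveOnce(ServerSocket server) throws IOException {
        try (Socket client = server.accept();
             BufferedReader in = new BufferedReader(new InputStreamReader(
                     client.getInputStream(), StandardCharsets.US_ASCII))) {
            String requestLine = in.readLine(); // e.g. "GET /index.html HTTP/1.0"
            String body = "you asked for: " + requestLine + "\n";
            String response = "HTTP/1.0 200 OK\r\n"
                    + "Content-Type: text/plain\r\n"
                    + "Content-Length: " + body.length() + "\r\n"
                    + "\r\n" + body;
            OutputStream out = client.getOutputStream();
            out.write(response.getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            int port = server.getLocalPort();
            // self-connect so the demo terminates on its own
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port)) {
                    s.getOutputStream().write(
                            "GET /hello HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                    s.getOutputStream().flush();
                    BufferedReader r = new BufferedReader(new InputStreamReader(
                            s.getInputStream(), StandardCharsets.US_ASCII));
                    for (String line; (line = r.readLine()) != null; ) {
                        System.out.println(line);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            client.start();
            serveOnce(server); // handle that one request
            client.join();
        }
    }
}
```

Everything a production server adds (thread pools, resource lookup, keep-alive, active content) layers on top of exactly this accept/parse/respond cycle.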
https://coderanch.com/t/405932/java/Listening-connections
#include <deal.II/fe/fe_raviart_thomas.h>

Implementation of Raviart-Thomas (RT) elements.

The Raviart-Thomas space is designed to solve problems in which the solution only lives in the space \(H^\text{div}=\{ {\mathbf u} \in L_2: \text{div}\, {\mathbf u} \in L_2\}\), rather than in the more commonly used space \(H^1=\{ u \in L_2: \nabla u \in L_2\}\). In other words, the solution must be a vector field whose divergence is square integrable, but for which the gradient may not be square integrable. The typical application for this space (and these elements) is to the mixed formulation of the Laplace equation and related situations, see for example step-20.

The defining characteristic of functions in \(H^\text{div}\) is that they are in general discontinuous – but that if you draw a line in 2d (or a surface in 3d), then the normal component of the vector field must be continuous across the line (or surface) even though the tangential component may not be. As a consequence, the Raviart-Thomas element is constructed in such a way that (i) it is vector-valued, (ii) the shape functions are discontinuous, but (iii) the normal component of the vector field represented by each shape function is continuous across the faces of cells.

Other properties of the Raviart-Thomas element are that (i) it is not a primitive element; (ii) the shape functions are defined so that certain integrals over the faces are either zero or one, rather than the common case of certain point values being either zero or one. (There is, however, the FE_RaviartThomasNodal element that uses point values.)

We follow the commonly used – though confusing – definition of the "degree" of RT elements. Specifically, the "degree" of the element denotes the polynomial degree of the largest complete polynomial subspace contained in the finite element space, even if the space may contain shape functions of higher polynomial degree. The lowest order element is consequently FE_RaviartThomas(0).
face as well as QGauss(k+1) in the interior of the cell (or none for RT0).

Definition at line 130 of file fe_raviart_thomas.h.

Constructor for the Raviart-Thomas element of degree p.

Definition at line 47 of file fe_raviart_thomas.cc.

Return a string that uniquely identifies a finite element. This class returns FE_RaviartThomas<dim>(degree), with dim and degree replaced by appropriate values.

Implements FiniteElement< dim, dim >.

Definition at line 115 of file fe_raviart_thomas 137 442 of file fe_raviart_thomas 484 of file fe_raviart_thomas.cc.

Return a list of constant modes of the element. This method is currently not correctly implemented because it returns ones for all components.

Reimplemented from FiniteElement< dim, dim >.

Definition at line 42027 398 of file fe_raviart_thomas.cc.

Initialize the generalized_support_points field of the FiniteElement class and fill the tables with interpolation weights (boundary_weights and interior_weights). Called from the constructor.

Definition at line 150 of file fe_raviart_thomas.cc.

Initialize the interpolation from functions on refined mesh cells onto the father cell. According to the philosophy of the Raviart-Thomas element, this restriction operator preserves the divergence of a function weakly.

Definition at line 266 of file fe_raviart_thomas.cc.

Definition at line 243 of file fe_raviart_thomas.cc.

Definition at line 226 214 222 of file fe_raviart_thomas.h.
https://dealii.org/current/doxygen/deal.II/classFE__RaviartThomas.html
Well, I just went on and it allowed me to open a problem just fine. I have no idea what I might have done. The message when I was not able to open a problem said something about my id not belonging to the Jython group or project or something like that. Probably user error.

- Bob

-----Original Message-----
From: Charlie Groves <charlie.groves@...>
To: boblusebob@...
Cc: jython-dev@...
Sent: Sun, 25 Nov 2007 2:12 pm
Subject: Re: [Jython-dev] Fwd: Jython Clock Resolution
On Nov 24, 2007 1:14 AM, <boblusebob@...>

________________________________________________________________________
Check Out the new free AIM(R) Mail -- Unlimited storage and industry-leading spam and email virus protection.

Great! A 15 millisecond to 5 microsecond improvement in time.clock() resolution is fantastic!

- Bob

-----Original Message-----
From: Charlie Groves <charlie.groves@...>
To: boblusebob@...
Cc: jython-dev@...
Sent: Sun, 25 Nov 2007 3:03 pm
Subject: Re: [Jython-dev] Jython Clock Resolution
On Nov 24, 2007 1:29 AM, <boblusebob@...> wrote:
> 1) Java System.nanoTime() needs java 1.5 so you have to check if you are
> going to put it in a release that allows < Java 1.5.

Yep, this is only going in on trunk which is for 1.5+.

> 2) // A suggested solution
>
> private static double __initialclock__ = 0.0;
> public static double clock() {
>     if (__initialclock__ == 0.0) {
>         // set on first call
>         __initialclock__ = System.nanoTime(); // keep __initialclock__ in nanoseconds
>         return 0.0; // I would add this return here - it could improve accuracy by a few microseconds
>     }
>     return (System.nanoTime() - __initialclock__) / 1000000000.0; // convert the time to seconds here
> }

That's pretty much what I did and it's committed in r3723. It gets a resolution of 0.005 ms with your script.
Charlie
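For readers following along, the scheme committed in r3723 can be transcribed to present-day Python, with time.monotonic_ns() playing the role of System.nanoTime(). This is an illustrative sketch of the same idea, not Jython's actual implementation:

```python
import time

_initial_clock_ns = None  # nanosecond baseline recorded on the first call


def clock() -> float:
    """Seconds elapsed since the first call to clock().

    Mirrors the nanoTime-based suggestion from the thread: record a
    nanosecond baseline on the first call and return 0.0 immediately
    (skipping the subtraction on that call), then report
    (now - baseline) / 1e9 on subsequent calls.  A None sentinel is used
    instead of the Java sketch's `== 0.0` comparison, so a baseline value
    can never be mistaken for "not yet initialized".
    """
    global _initial_clock_ns
    if _initial_clock_ns is None:
        _initial_clock_ns = time.monotonic_ns()
        return 0.0
    return (time.monotonic_ns() - _initial_clock_ns) / 1_000_000_000.0
```

On most platforms a monotonic nanosecond counter resolves far below the roughly 15 ms granularity the thread set out to beat, which is the same kind of improvement the System.nanoTime() change achieved.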
http://sourceforge.net/p/jython/mailman/jython-dev/?viewmonth=200711&viewday=26
Yeah, I really don't think you have anything to back up your claim that Apple's rep was hurt by the "month" of Apple "bugs". Yeah, I guess you are right. Remote exploits, kernel buffer overflows, and DoS attacks really enhance your reputation. In the case of OpenDarwin, I can't imagine that it would be a #1 priority to spend money on a fork of something you're already spending money on (i.e. Darwin itself). @ Fraser The point is to make money. If you understand Open Source you can see how it can help you leverage your community for better, more secure, more usable products. If you think of it as a fork you are missing the point of Apple's own declaration regarding Open Source: "Apple believes that using Open Source methodology makes Mac OS X a more robust, secure operating system". Plus you SAVE money on development, you add users with new tools, you create new business, etc. This is "getting" Open Source and improving shareholder value. After all, I hold Apple shares and I want them to go up. I agree with Jeremiah and I think Apple's problems can be summed up with, "When you're riding high you think you know it all, you don't need anybody else, and no one can tell you anything." On another note, instead of developing its own browser, e-mail client, word processor, etc., why doesn't Apple just work with the open source community to make sure that Firefox, Thunderbird, OpenOffice, etc., are the best choices for the Mac? I'm a Mac user and right now don't have any plans to switch away, but if it weren't for the iApps (iTunes, iPhoto, iMovie), I'd be buying AMD (another note to Apple - AMD blows Intel out of the water - give us a choice) based machines and putting Linux on them in a heartbeat... Kevin. What exactly could be Apple's interest in having an implementation of GTK+ besides Aqua? Why should they invest their development resources in something decidedly cross-platformish like Firefox, Thunderbird or the bloated OpenOffice?
The Mac is all about its platform experience. Therefore they focus on improving their own frameworks (all these Core Thingies). The Unix underpinnings are of course a different story. But one should not forget to mention that Apple made some contributions like Bonjour, launchd and the rest of the posse on macosforge. And yes, Adobe can be open (but still keep PDF and Flash closed!), cause there's not a single serious challenger left ... @A1lias What exactly could be Apple's interest in having an implementation of GTK+ besides Aqua? Apple states them here: I'll quote, "Cross-platform development environment Integrates easily with C/C++ code Robust feature set" Helping Gtk+2 will allow a huge collection of already written applications to run natively on the Mac, expanding the usefulness of OS X. Think tools like the GIMP for example, Evolution, or nearly any Gnome tool. There are lots of compelling reasons to help Gtk. Jeremiah, yes, Apple invites developers to port whatever they have (Java-Gui, X11, Qt, Tcl, GTK+) to Mac OS X - but they address the drawbacks in the same document too: However, that cross-platform nature comes with a price in flexibility. In essence, to be cross-platform, they can only support capabilities that are generic to all of the potential operating environments, and as such, they tend to provide only the lowest common denominator in terms of functionality. The Mac way is to separate itself through innovative and/or unique features and deep integration. Really, I'm not looking forward to working with Evolution on a Mac lacking Spotlight capabilities and bringing its own address book and calendar. Otherwise I'd use a Linux box as it's free.
(Which still beats the experience of most GTK apps) Just because Apple doesn't support every crummy project out there doesn't mean they are anti-open-source. Witness Webkit - they sorted out their issues with the KHTML guys and have become very open. Because KHTML/Webkit is actually a decent piece of sw, as opposed to many other open source projects. Are you kidding? George Ou as a reliable source on Apple? "Month of Apple Bugs and in its attempt to whitewash security issues that were published last summer." Whitewash? You mean that phony Ou article? TUAW and other blogs that were supposedly conspiring with Apple have already called his claims bs. Why is everyone regurgitating this non-story? Before Apple bought them, NeXT already fought and lost the GNU battle; they were put in a position to rip out their GNU foundations or to return their modifications to the community. They chose to rip out the GNU foundations. The NeXT alumni have been the most aware of what commercial obligations they have: zilch. In terms of community, you can look to the Safari and webkit operations to see how Apple continues to try working with the free communities. They contribute back code improvements. They participate in standards development. And they namespace prefix their non-standard innovations rather than just forcing them on the community. Apple has been severely criticized for all of these things ("dumping code", "nagging", "forcing their way"). And this is in an area that isn't critical or differentiating Apple in the marketplace. My guess is that there is discomfort between a closed source, commercial software company and the free / open communities. What else is new? @ Matthew - TUAW seems to take the bugs seriously, look here, and they link to Landon Fuller, who patches those bugs here. I have no idea about the truthfulness of Ou.
@ William Moss - I find it remarkable that no one sees Apple's choice of BSD/NEXTSTEP platform as a differentiator or competitive advantage, despite Apple's claims to the contrary. It seems Apple customers just want Free Software people to go away so Apple can make new, shiny toys. Problem is, underneath those shiny toys runs a sophisticated OS that needs work. Many companies have figured out how to get that work done for free. Not Apple. Apple needs to contribute more than just updates to their fork of KHTML. I wonder if it's not you that misunderstands OpenSource. As long as you are not violating the license agreements under which the software is released, you can do whatever the hell you want with it, and are under NO obligation to contribute back to the community. I have worked with OS software for over 10 years, and let me tell you that it's a very, VERY rare thing to find a company that uses it and contributes in any way back to the community. If you're pissed that people are taking freely available software and making money off of it, then I'm afraid you've been labouring under some kind of misapprehension all this time. They are, so long as they don't violate the Ts&Cs, perfectly entitled to do so. As it is, Apple DOES contribute significantly back to the OS community. I really can't see any case for complaint. I don't think Apple is failing at anything. Rather it is other companies who are failing to realize that Open Source is communism. "Free" is not a business model. Sun is going GPL because Sun is at death's door and trying to figure out how to stay in business. Pretty sad when your products are so bad that you have to give them away free to get people to accept them. The goal of Open Source, like all communist movements, is to cheapen and degrade the value of products. By degrading the value of software, Open Source attempts to turn software from a creative endeavor into a cheap commodity - like a factory stamping out a bunch of uniform pencils.
That is not where business profits lie. Apple is doing so well precisely *because* it is not embracing open source. The big profits lie in high-value creative work and in creating beauty, not in creating cheap commodities. As for MOAB, the communists who were apparently running it got angry because no one cared and stormed off in a huff. The relentless media attacks on Apple (including MOAB) can't seem to bring down this company which the communists hate so much. Please, before posting again, learn how to write properly. Just repeating your claims again and again will not provide any proof. How can you point to Adobe as a good citizen and not realize that e.g. Apollo is built upon common efforts by Apple and the Konqueror team? Is it too hard to find macosforge? Or don't you realize that OSS is not about hyped bullshit like creative commons licensed support forums but about contributing code? Thank god you can blame all criticism above on zealotry and don't have to stop and think. Can I become a writer for DevCenter just by repeating my claims over and over again without even minimal proof? First, I can't believe you're still repeating the non-story driven by Ou's ephemeral allegations. That issue has been thoroughly debunked, again and again. Second, your main evidence that Apple doesn't "understand" OSS is that they don't throw all their resources at every project trying to compile against OS X. You've got to be kidding me. Apple, more than any other major commercial OS maker, has encouraged OSS development and made available information regarding the system to encourage it. It's not their obligation, nor, IMO, in their best interest, to spend significant resources on these projects. They make what they can available, but the point of OSS community-driven development is that others develop the applications, not the corporation. I would say that the MOAB enhanced OS X's reputation.
Look at the severity and frequency of bugs found in a whole month of concentration compared to what is found monthly in Windows just in the normal course of business. @A1ias: I know most pro-opensource guys will go balls to the wall defending anything that is open source, and claiming it is better than all alternatives, but just because a program is open source doesn't make it better. Firefox is a nifty program, I like it and if you are on Windows you are an idiot to not use it, but the WebKit rendering engine underneath Safari (which is open source, isn't it?) loads pages faster than Firefox. Firefox's engine is extremely bloated by comparison. The nice thing about Firefox is its extensibility through plug-ins, but Safari is a perfectly valid choice. The same could be said about Mail (to a lesser extent) and all the other iApps for one reason or another. I agree with Jeremiah on a lot of points, and Apple could leverage open source a bit more, and be a better open source community member, but utilizing open source for every app would be wasting a whole lot of really good software engineers they have hired over there at Apple. @Kevin: Seriously, I don't mean to start up a totally different debate, but you have to drop your blind loyalties here, people. They don't serve you. I like AMD, I really do. They are the scrappy underdog. And a couple of years ago when Intel was still trying to push their decrepit NetBurst architecture, AMD had the faster chips, easily. Then Intel pushed its new Core architecture and dropped NetBurst like the junk it was. Look at any benchmark, Intel's current chips are faster across the board. When Apple first switched and went 100% Intel I had no idea why they would do such a thing. NetBurst was slower than AMD and slower than PowerPC. But Intel switched to Core and we all saw why Apple made the choices it did. @Mike Pick up one of these, it might help... "Apple needs to contribute more than just updates to their fork of KHTML." Wow.
What a disingenuous thing to say. "TUAW seems to take the bugs seriously, look here, and they link to Landon Fuller, who patches those bugs here. I have no idea about the truthfulness of Ou." I didn't say anything about TUAW not reporting on MOAB. What I said was that they and many other bloggers have come forward to say that Ou's claim of Apple somehow conspiring with bloggers to spread a misinformation campaign is completely false. It would have taken all of three seconds of a Google search to learn about this sinister "whitewash." Then you say that fixing these bugs is what the security updates were about. While that may be true for MOAB, it's not necessarily true for the "Macbook wireless hack" referenced in the Ou article you pointed to. John Gruber gets it right: The GTK part is funny. It sounds like Apple should have used something open instead of Cocoa. But if you go to Wikipedia you'll see that's what they did. It's OpenStep, which existed long before GTK, and the GIMP developers could have used it with GNUstep instead of creating a new GUI lib. With a simple recompile the same source could run with GNUstep, Cocoa or any other OpenStep implementation. Someone mentioned Apple could have used Firefox/Phoenix/Chimera or Gecko instead of creating Safari and Webkit. Now that's also funny, because Dave Hyatt worked on Phoenix, Chimera and Mozilla at Netscape. I guess with someone like him on their team the Webkit team had a reason why they created something based on KHTML. Open Source isn't something you do just because it's cool - that's what many don't understand. It's like saying the MOAB had a real impact ... oops. What a masterpiece of journalism. Do O'Reilly let just anybody post? Here's a paraphrasing of your article: Paragraph 1: Apple are failing at OSS. They don't get it, like I do. They are losing goodwill, I'm sure of that. Goodwill is important. Apple don't realize how much they need people like me.
A controversial article I read claimed to discover a security exploit. (This has nothing to do with OSS, but I wanted to mention it anyway.) I think Apple covered it up. Security experts are wary of Apple. At least that's what I've heard. Paragraph 2: Apple needs to realize that I think OSS can strengthen their business. Look at SUN! They OS'd Java. And look how good they're doing now! I'm sure every day Steve Jobs wakes up wondering, "What can I do today to get Apple closer to SUN." Look at Adobe! They added a Creative Commons tag to their forums. Apple didn't DO THAT. Don't you see, they don't GET IT, like I do. Paragraph 3: Apple talks the talk, but don't walk the walk. That's exactly what I'm pretty sure the community is saying. Paragraph 4: I've found many OSS projects that Apple haven't contributed to. Look at Gtk! Why is Jobs focusing on crap like Apple TV, iPhones and the lot, when GIMP still doesn't run well on OS X!!! Paragraph 5: I'll conclude my well-evidenced article by restating that there is a cost for not being a good Open Source citizen, and that is loss of goodwill. I personally happen to know what this cost is, but as I have clearly argued and demonstrated above, Apple don't. (Now, where's my Pullizer?) Lots of assertions in this rant but very little substance. Using MoAB and the Krebs debacle as support for your argument effectively reduced your credibility to zero. You'll need to do a lot better than that. For instance, you say that Apple's reputation was "badly damaged" by MoAB. Show me an objective measurement of that assertion. If anything, MoAB _strengthened_ the reputation of OS X since after the month was over I think only one was serious enough to worry about (RTSP). The rest were for services that no one uses (AppleTalk) or for third-party products (Rumpus FTP). And for Krebs? He and Ou have almost no credibility left. @Anonymous - I think you meant Pulitzer. Just to clarify, NeXT was never open-source. GNUstep, OK, OpenSTEP, maybe.
But NeXTStep, nope. Check your facts before writing an article... Jeremiah - Please explain why you think that WebKit, Darwin, Darwin Streaming Server, Bonjour, and Calendar Server, all projects for which Apple provides the source code at no obligation, are not enough to generate good will. The list on macosforge of projects Apple has contributed back is a lot more than KHTML - a calendar server seems pretty handy to me, as does an open source Quicktime streaming server - then there is Chris Lattner's work with LLVM - an open source project in which he is one of the lead developers - on Apple time. Might be a bit techie but it's suspected the LLVM work was quite integral in making OS X portable enough to run on the iPhone. Not sure how much truth there is to that, but it's obviously important to Apple at some level. They also employ people who contribute changes back into BSD as part of their work through Apple - the kind of low-level work that advances BSD along. I presume there are contributions back to gcc too. It's not the kind of stuff that gets 'goodwill' and publicity like opening up Java, but it's the kind of low-key work, also engaged in by staff at IBM, Oracle and Novell, that makes open source as significant as it is. It's just that it suits IBM's business model to court 'the community' a lot more than Apple (who sell hardware and software, not development services and consultancy). It does strike me that as closed as Apple are, the Open Source community is equally prone to hissy fits in the way it expects to be treated - all those posts about Apple not releasing the Intel kernel source - then it went up, and you didn't see 100 blogs about it. The Open Source community is also often guilty of Not Invented Here syndrome too, and in fact here's a great example of how Apple does understand open source. Sun produced dTrace as part of Solaris and released it as open source.
Apple assisted in porting it to BSD - then developed a proprietary OS X based diagnostic tool - X-Ray - to add value on top of a core FreeBSD operating system. The Linux community decided they wanted something similar - and developed SystemTap rather than porting dTrace from the off. It doesn't seem to have embraced Bonjour or launchd either despite those being useful services - there's a suspicion about launchd that seems almost wholly based on the fact it came from Apple. We have init and cron already, and they are the Unix way. You get a lot of talk about how Apple should open up Cocoa, while GnuStep barely gets any attention, so there can't really be that many open source developers who actually want to develop in Obj-C. The other 'problem' with community/standards based development (whether open source, or simply between vendors) is that it is often paralysed against innovation. For instance, OS X delivered a GPU accelerated desktop and graphically rich environment by breaking the Unix windowing architecture, while everyone else pursued getting acceleration working under X. And it was largely achieved by one guy working on a 'closed' project then donating it at the end. When people promptly bitched about it. Browser technology is in an even worse state - look at the timescales for HTML 5.0 - and compare them with progress in the proprietary Flash plug-in. Apple were criticised for pushing the canvas tag (now standard on non-IE browsers) rather than the correct SVG approach. Another example - Tapestry has recently been getting a lot of attention as a simpler alternative to Struts - yet the basic concepts of Tapestry are based on WebObjects. Now I can see why people didn't use WebObjects (it was incredibly expensive when it was x-platform, and Mac only after that) BUT the basic ideas could have been copied a lot earlier. Not invented here strikes again. 
>On another note, instead of developing its' own browser, e-mail client, word processor, etc., > why doesn't Apple just work with the open source community to make sure that Firefox, >Thunderbird, OpenOffice, etc., are the best choices for the Mac? Because Apple's software vision is very different from the community - it's Pages rather than Word. Mail rather than Outlook. It's about doing 80% of what 90% of people need in the simplest way - and forgetting the other 10% (that's what AppleScript is for). It's why Macs are popular with the over 50s. It would be very difficult to impose that vision onto community developed software, short of parachuting staff into the Mozilla Foundation (who do a great job of keeping Firefox a focused project, and pushing feature requests out into extensions). Look at the problems with GNOME and KDE - GNOME started pushing guidelines at developers, and the reaction has been to go over to KDE (try searching on HCI guidelines for both projects). Herding cats springs to mind. That's without even considering the fact that the strength of Apple's own apps lies in building on top of Cocoa, which they don't want to open up. I largely use Safari simply because the ability to use the system level dictionary while typing in web forms is a godsend. Ditto Mail. I'd say their open source strategy is quite sound - they're open at the points where there is no competitive advantage in being closed. What sucks is their community relations, but I guess that's a reflection of the whole company (no staff blogs except WebKit, 'no comment' on security issues), NDA on Leopard vs public beta at Microsoft and Adobe. As for MOAB - only one issue raised could have actually affected me. MOAB actually gave me a lot of confidence in the base level security of the Mac, just as I don't get alarmed about software quality every time Firefox or Apache issue security updates. Although it is notable that Quicktime seems the single worst component on the Mac. 
I'll refrain from commenting much about the only security researcher I personally know, but how do you think these guys make a living? (Clue: notoriety is a good form of publicity.) Oops - that shouldn't have read FreeBSD. Darwin isn't FreeBSD. Brain error. It's not clear to me what you're asking for. The kernel and core OS is open source, along with WebKit (which Adobe and Nokia are using), most of CoreFoundation, and all of their changes to gcc and such. I don't think Apple intends to position Mac OS X as a flavor of a Unix in the traditional sense. It's an operating system which uses Unix as a foundation. More on LLVM - check the front page to see Apple credited as a significant industrial funder - and check the documents on what it does - essentially you can use gcc to compile C, C++, Obj-C, and Obj-C++ down into byte-code for a virtual machine that already runs on a wide variety of CPUs. Now that's what I call an interesting project! Yeah, I guess you are right. Remote exploits, kernel buffer overflows, and DoS attacks really enhance your reputation. I see nothing anywhere that would lead me to believe that Apple's rep was hurt. It came and went and people aren't even really talking about it. There are so many philosophies on what open source is and does and should be it is ridiculous. Apple is doing just fine. Could it do better? Sure it could. So could every other company that is using open source software. @ JulesLt - Great comment, thanks. I agree with your criticisms of Open Source as well. @ worm eater - I think they are not enough. I agree with you that they do engender goodwill and that webkit in particular is an excellent project, but compare that to OpenDarwin. Sad. "Availability of sources, interaction with Apple representatives, difficulty building and tracking sources, and a lack of interest from the community have all contributed to this (the closure of OpenDarwin)." Furthermore, what about the threats of legal action from European governments?
There is a tangible consequence of loss of goodwill. Another further tangible loss is the Symantec report which lists Microsoft as the most secure OS over the last six months; Apple came in third. While this does not paint an accurate picture of overall security, Apple now has to spend treasure in defending their image to the press. It also has to take security more seriously as it gains market share. This lack of openness to the community in the form of excessive secrecy, lack of Open Source licensed products, and unwillingness to be involved in significant projects will continue to cost Apple not only goodwill but real money. What more do you expect from a business? Any source code governed by the BSD license doesn't require Apple to give back to the community. If you are unhappy with the license... change it. Jeremiah, are you even reading the stuff you're claiming as evidence? That Symantec report has already been outed as FUD. Your article consists of gross generalizations about what various unidentified "communities" are thinking and feeling about Apple, you are setting up straw men left and right, you are forced to use the passive voice thanks to the lack of evidence, you are using things like MOAB that don't have anything to do with Open Source as evidence of something or other, and you are making claims about Apple's entire corporate well-being using such nebulous concepts as "goodwill" — can you tell me what this sentence you wrote even means in the real world? "If you think loss of goodwill is negligible, as apparently Apple does, you should consider its implications, many of which are already being realized in the ecosystem that Apple lives in." Okey dokey. Nobody invested in creative Adobe products has any incentive to switch to a platform that can't even run Photoshop. You bring up the presence of Creative Commons licensed documents on that Adobe page — did you happen to read the footer on the front page there?
“Copyright © 2006.” Funny, that doesn’t look like a Creative Commons license. Try checking out their Terms of Use while you’re at it. Others here have done a good job of poking holes in what arguments you have, so I won’t repeat them. However, I will pull out one more example: “Merely the fact that Open Darwin was allowed to wither on the vine is direct evidence that Apple says one thing about Open Source but does completely another.” Again the passive voice: “was allowed to.” The stock answer from Open Source forum, mailing list, and IRC channel denizens seems particularly apropos here: Don’t like it? Fix it yourself. I said it before and I'll say it again - you don't use open source just to be cool. OpenDarwin was a nice idea, but without users it's useless. The only users of Darwin use it as part of Mac OS X. Everyone else is using Linux or BSD. For users of Mac OS X OpenDarwin isn't important. You don't build the system from scratch, you replace, improve, .. parts of it. And that's still possible because XNU is open source, launchd is open source and the other unix parts too. "So-called creatives...have incentive to switch to Linux," just because Adobe has opened up a few chunks of its code? Excuse me? Why on earth would a creative who has gotten used to using Mac OS want to throw everything (s)he knows UI-wise down the crapper just to use the same software the same way on what is probably an unfamiliar OS, and the apps aren't even the ones (s)he uses for work (read: Creative Suite CS* & Adobe Studio)? Folks, I like FOSS as much as everyone else, but let's get real here: every FOSS victory is not going to result in people flocking to Linux. I don't know where this logic originated, but it really needs to pack its bags and go away. Seriously, FOSS is available to ALL OSes, not just Linux. That means when Adobe opens up its source code, it's going to be compiled for Macs, too. Again, no reason to switch. P.S.: I'm one of your "so-called creatives." 
I used Linux for 14 years -- including GIMP, which is not a Photoshop killer by any stretch of the definition -- before switching to Macs for desktop action and Adobe's graphic design apps. I don't care if Adobe DOES port CS* and Studio to Linux, I'll STILL have no reason to switch back. Sorry, but I'm comfortable right where I am. Sorry, hastily edited post. I meant to say "Adobe CS*," not "Creative Suite CS*." I realize that people are talking about different forms of Free / Open licensing. The story I was making reference to happened long ago and not much can be found on the internet about it these days. But it was significant since it was one of the first times the GPL forced a commercial software company (in this case NeXT) to change its ways. Some people even like to claim that Steve Jobs was the Free Software Foundation's first GPL violator. Richard Stallman writes about the incident in a pdf essay at (Look at the end of page 91). The lawyers and technologists at NeXT learned quickly under threat of legal action decades ago that the Free / Open communities aren't just development farms. There are some big companies today that don't quite get this, but Apple appears to have understood it for some time. If you give acknowledgment to the developers of the code, you can link to and use LGPL software libraries if you want. If you want to use GPL code, be prepared to open up the source of your program, too. If you want to make something private, use a license like FreeBSD or the MIT license that permits anything but taking over the copyright. If you want to offer a standard to the public, offer a liberal open-source license to the public. If you just want free development for a closed-source product, the free / open community isn't going to help much. This is still a battle that gets hashed out in different ways inside Apple.
You can bet Apple would love to include GNU readline actually compiled into some of their command-line tools, but they understand what they'd be obligated to if they did that (disclosure of some of their source code). Apple meets the obligations of the open source licenses and they do return code that they can't maintain or want other people to adopt as standard to the community to use. Microsoft doesn't understand community standards. Why invent a whole new dialect of C and keep it closed source? Microsoft isn't in the business of making (much) money off of software developers. When Apple looked at the limiting features of Objective-C, they could have decided to build a new language like C# that didn't have those limitations. Instead, they've decided to give back the changes they want to make to the standards to the Objective-C community. They want to push the standards to evolve, true, but they don't appear to be taking them over. At least one Apple open source project is world class; see Squeak - Mmmhhh...how about giving us some concrete examples of how, exactly, Apple is failing to be a good open source citizen instead of just making blanket statements? Frankly, whether Apple uses Creative Commons licenses for its support forums is irrelevant. Yes, there are a lot of abandoned open-source projects. So what? That's very common. And please, citing Adobe as a model for open source collaboration is plain ridiculous. This lack of openness to the community in the form of excessive secrecy, lack of Open Source licensed products, and unwillingness to be involved in significant projects will continue to cost Apple not only goodwill but real money. Apple hasn't exactly made using its source code easy, nor has it nourished a community who could really leverage the code (see comments by willbb over at Ars). However, I find the argument that, because Apple isn't creating Open Source licensed products, it will lose significant sales, unconvincing.
This is absolutely the last thing on the users' minds in Apple's core markets. Apple's design philosophy is all about tyrannical vision and focus to achieve greatness - and influence every aspect of the user experience. I'm amazed that they've done any open source work at all, actually (perhaps a legacy of the Tevanian era, I don't know). This reminds me of the argument that because Macs don't use open formats for everything, switching to Linux (on that merit alone) is better. Didn't that guy move back over, or something? You say that Mac OS X was based "on Free Software in the form of NeXT/FreeBSD" - in what way was NeXT's software, acquired by Apple in 1997, Free? Also, why do you write Sun all in capitals, as in SUN? The claims about OS X insecurity are academically and theoretically interesting, but I'm still waiting for reports from actual humans who have become infected. As for open source, Apple does a nice job of integrating open source and proprietary code to make a smooth product, and product is what they are in business to create. What about this? @jeremiah "Another further tangible loss is the Symantec report which lists Microsoft as the most secure OS over the last six months, Apple came in third." Question for Jeremiah - where in Symantec's report does it state Microsoft (or "Windows") as the most secure OS? Because it sure wasn't Symantec that declared "Microsoft" as the most secure OS. Secondly, since when is "average time to patch vulnerabilities" considered the primary criterion for determining whether an OS is secure? Because that seems to be the gist of Symantec's report. There's no recognition of how many vulnerabilities were critical (12 by Microsoft, 1 for Apple) as opposed to "it's a little drafty around the windows" type of vulnerability? You really do yourself a disservice when you repeat someone else's arguments that "conclude" something a report doesn't even state.
Makes it seem like you're being disingenuous just to advance your own agenda. Boo-hoo... GPL != BSD Lic Apple is under NO obligation to give anything back. If you use the BSD Lic for your code you can't really complain. And for what possible reason would Apple want to contribute to any GTK port or even Open Darwin for that matter? AFAICT Open Darwin was a fun project that never got any significant community built around it. Why would it? Linux scratches any itch that OD would have. There's a cost for being a good GPL OSS citizen. Kind of the whole point of using the BSD Lic. Great, another "Free" Source Fascist bitching because Apple opens the source on dozens of its own internally created projects, while also giving back to the community thousands of bug fixes for open source products that they distribute with their operating system. Ok, Linux weenies and Free Software Fascists can be expected on oreillynet... but please keep these clueless dolts out of the MacDevCenter blog. Bottom line is, Apple makes great products, while Linux and especially "GNU" haven't managed to ship something generally usable by the public--- and thus, the Free Software Fascists are quite jealous and annoyed. But the inadequacy is their own, not Apple's. Apple has done more for open source than the Free Software Foundation... Real Artists Ship! "Apple's reputation has already been damaged by the Month of Apple Bugs and in its attempt to whitewash security issues that were published last summer." This phrase proves the author is just an attack dog, who will use any excuse to bash Apple -- no matter what the facts are. (The facts being that the "month of apple bugs" was a month of exploits in software NOT published by Apple, and the "security breach" was in another company's product, not the Macs they were being used in.) The level that people will stoop to -- and for what did you sell out your integrity? So cheap!
"Open Darwin was allowed to wither on the vine" Yeah, it's Apple's fault that some people didn't get a successful distribution together.... but that doesn't mean it's withered on the vine... Darwin source is still made available, people are using it in a variety of ways, and Apple -- who you claim offers no support -- is hosting the MacPorts successor to the Open Darwin site. Lie much? This article is really... rubbish. You don't even make sense or explain what the hell you are talking about... Move on, ppl, this guy is clearly paid to spread BS. You state that open source is good, the only reason you give, to help x company generate good code for free. You say other stuff as well, but I'll ignore that. Okay, so, here is Apple, all open source and GNU loving, with a 5% or whatever a 50% market share. Apple's technology is updated by the 'community', not by themselves. Any claims that Apple makes about their system will have to mention the 'community', otherwise lawsuit. Apple has to develop an OS around the 'community's' wishes, features are thus released prior to Gold, and sucky Microsoft absorbs them and pushes them into Microsoft Windows, good plan. Apple does little innovation, cannot easily develop their OS, without having to consider stakeholder (not SHAREHOLDER) wishes. Any 'negative' change can be overruled by the 'community'. The aspect that the software is free, is a good incentive. But quality is never achieved by 'free', stingy poor linux c**ts may like their system because it is free, but Apple will not be able to pull all the profit otherwise, no free developers, Apple does not want to share these profits either, to a large unquantifiable group of people doing unquantifiable amounts of unquantifiably useful work on the OS, giving people salaries (or facing lawsuits) is an impossible amount of work, instantly making it not free.
Then quality issues arise, how can we be sure that sneaky developer x is not in fact a hacker making 'beneficial' changes on behalf of the 'community' but is in fact also producing back-doors and exploits? Quality on the non-malicious level can also not be achieved, all these people with too much time, and little incentive to code, may not be producing good code. As seen by the general lack of adoption of these generally lacking and frankly useless OS's (except for in some environments). You spew all this c**p about the 'people' and the 'community' and 'gnu' and how amazing and useful it all is. But you're thinking from YOUR perspective, not from a business perspective!!!! There is no business model for linux, and because of this, linux is going NOWHERE. There is no way that Open Source will ever work unless the idiots who run the thing think like businessmen, it doesn't work, yet, and only when there is a proper business model, which there isn't, yet, nothing is going to help it. Totally agree with you. Both Safari and Apple's Aperture could definitely use some Open Source plug-ins, etc. Hi, I don't think Apple or Microsoft will ever switch to Open Source like Sun did after being established as a proprietary company. But if they switch to an Open Source model, they definitely save lots of money and they get better testing and bug fixing than now. Let's hope. Adios. @ Q - Who states: "There is no business model for linux, and because of this, linux is going NOWHERE" Um, maybe look at RHAT. It is listed on the New York Stock Exchange and has a total revenue of 105 million dollars. They sell nothing but linux. I own stock in them and I own stock in Apple. I want both those stocks to go up. I see how RHAT uses the GPL to gain market share and make money. I want AAPL to do the same thing. There is one thing I do understand about Open Source.
If you want to download, install and use a particular software application or development tool, in most cases, it doesn't work for one reason or another. Either a library is missing or it is a newer version. After you try to fix it by downloading the correct version of the library, you'll find another library that has issues. Then the operating system hangs up once in a while (yes, it does happen to Linux operating systems as well). Apple somehow managed to make their software work without any issues. Will you please leave them alone and let users have the comfort of a working Linux Desktop? This is something all Open Source enthusiasts and Linux enthusiasts wanted for a long time. When it comes to making money, Apple is quite good at it. Please check their stock price and their annual reports. Perhaps this will help stop a groundswell of lies being spread that on one hand "Linux is MS code" and on the other that "Mac runs Linux". Comments like this are spread by foolish people who mostly have never run a GNU+Linux box. People whom I would just as soon ignore as listen to or explain the facts to. @ Jeremiah @ Q You say revenue, wow, that's a completely misleading statistic for a business. Oh look, their profit is $11 million, jeez, amazing, oh and it fell 34% last year. Red Hat may be an established business, with a business model, but that doesn't mean open source, which is Q's point, will succeed, because Open Source doesn't have a model. There are very few companies that run solely as open source development operations, and Red Hat is the only notable one, with an even less notable product portfolio. I state the point again, open source has no business model, is going NOWHERE. Red Hat has a business model, runs open source, is going NOWHERE. Analyze properly, you have given no real reasons for the profitability of open source.
I think the comments about Adobe are pretty interesting, because Adobe really are courting goodwill in the FOSS community, with things like Tamarin, the Flex SDK, the Flash player on Linux, and increasingly releasing free versions of their server software for single-CPU systems. Of course, a lot of this stuff is just Free - as in beer - not actually Open, but that probably counts for a lot more for people who want to use these tools, without actually being concerned with the politics of free software. It also means Adobe don't get the benefits of open source - i.e. contributions back. It's more like Sun's courting of a development community for Java. I bet if you actually looked into it, Apple probably do contribute more back into FOSS development than Adobe, but 'goodwill' does count for a lot in some places. Apple's above-weight presence in education and design can be largely attributed to goodwill, for instance, rather than teachers and creatives appreciating the technical superiority of OS X to Windows. Microsoft have discovered the same with security - they don't buy out security researchers, but they have discovered that it pays to listen to them, and make them feel worthwhile, rather than treat them as enemies to be silenced. In fact Microsoft have been very good at cultivating the appearance of openness over the last few years, with developer blogs, etc. What I take from Jeremiah's post is more a plea for Apple to improve their community relationship than to actually open up more software. I also agree with many posters who suggested OpenDarwin died through lack of interest more than anything else. It didn't add any significant value over OpenBSD or FreeBSD. The issue with the Intel kernel was probably the last straw for a dying project. >how can we be sure that sneaky developer x is not in fact a >hacker making 'beneficial' changes on behalf of the 'community' but is >in fact also producing back-doors and exploits Well, you look at their code.
And any serious major Open Source project has a lot of people involved, a lot of reviewers, and usually an identifiable project lead employed by someone like Novell, IBM, Mozilla or Apache foundation, University teams, etc. Sure, if you download some DVD-ripping software from some guy called aPuckerLypz then you might be downloading a trojan horse, but I'd have no worry about mySQL or Apache. Otherwise I'd be worrying about the security of OS X, which as you may have forgotten, has an open source foundation. You need to get away from the idea that GNU/Linux was something knocked up by hackers in their bedrooms. >There is no business model for linux, and because of this, linux is going NOWHERE That's why IBM, Oracle, Novell and Red Hat all support it? And why Linux servers are pushing out ALL flavours of proprietary Unix? Or why Shake was available on Linux before Apple bought it and canned future Linux development? Or Linux is used inside a huge number of CE devices? There's a definite business model to open source - basically it's about sharing costs on something that has a near-zero business value IN ITSELF in order to make money by adding value to that commodity. Even rival car manufacturers do that these days (develop shared chassis / platforms, that they then customise into specific brands and models) - because the cost of doing it from scratch is too high to produce budget models. If you wanted to produce an Apple-TV like device, would you (a) write an operating system from scratch (b) buy one from Microsoft, tying you in to supported architectures (c) use Linux or BSD, even if you were obliged to contribute back any modifications you made (but NOT the front-end application the end user sees). And indeed that's exactly what Apple did (first with GPL'd then BSD code). On the other hand, Apple sees a distinct business value in keeping the graphics and GUI subsystems proprietary (aside from OpenGL of course). 
I'd wager that as open source components (such as XGL and Cairo) catch up, you will see Apple switch to endorse and use them, licence permitting, because there is no business value in NOT doing so. Even on the desktop, Novell expect to make millions. Now someone above complains about downloading FOSS tools and them not working. That's the value Linux distro companies add - they package up a supported release, out of free components and applications, that will work. And they do managed point releases like Apple and Microsoft do. Companies are happy to pay for that, rather than employ someone to roll their own. > > Sorry, but the Open Source community was desperate for validation by businesses like Apple. In fact the *REAL Open Source community* never did and still doesn't give a *DAMN* about Apple or its userbase. It's the Apple fanboy posers like *YOU* who run around cheerleading for Apple and its fraudulent "Open source" efforts. My God, get a clue. Stop spreading FUD. Supposedly, you research before you write, you know. @ Rhonald :- > But if they switch to OpenSource model, definitely they save lots > of money and they get better testing and bug fixing than now. Yeah right. Nearly every piece of Open Source software that I've used has had far more bugs (excepting Firefox) than ANY Apple app I've used. Scruff - read the above. The foundation of OS X is open source BSD. Seems to run pretty well. Most Mac software is compiled using an open source compiler. Major banks run Linux-based servers. Don't confuse applications knocked up by hobbyists with commercial and academically funded open source. What is it with Apple users? The fact they have an operating system is completely due to free and open source software. Without them there would be no Mac OS X. Apple have stated as much themselves. So when people want to port Unix/Linux software to run on Mac OS X it is wrong? Because some free software might turn out to be better than what Apple itself provides?
Such as Jahshaka's problems getting help from Apple. I think it's an arrogant stance to take, just like Microsoft does. Even Sun has a lot more respect nowadays because they don't only talk the talk but they walk the walk as well, as can be seen in the open sourcing of Solaris and Java. And for that matter OS X's XNU (Mach/BSD) kernel foundation is "utter crap" as Linus Torvalds has noted himself. Until they adopt a pure FreeBSD/DragonFlyBSD or Solaris kernel it can only be considered a toy, just like NT, only even slower. Vista has changed this, now both of them are slow! As a Unix/Linux consultant I don't have to do much as the competition is killing themselves. Many customers and acquaintances have now switched to a Slackware GNU/Linux-based Linux desktop that can compete with Solaris for stability and (stock) Windows XP for speed. That is not to say that OS X is not an intuitive user-friendly operating system. But it is certainly not as stable and performant as GNU/Linux, real/pure BSDs and Solaris. With Apple free and open source is a one-way street. Real communities share. This is something Apple will never understand nor practice. I would be installing more OS X desktops if only Apple were willing to do their part in making open source run well on OS X and reaching out to the community like Sun does. So besides spreading FUD you enjoy censoring comments? Great! You do deserve a jackass of the week prize, jackass... @zac, You say, "Look at any benchmark, Intel's current chips are faster across the board." I would remind you of a quote by the late Seymour Cray: "Anyone can build a fast CPU. The trick is to build a fast system." I work in an academic High Performance Computing Center. Let me assure you, our AMD-based systems blow anything Intel puts out out of the water. Kevin Huh? Where did that come from? You are seriously linking to Ou for commentary on that security thing? Seriously? Huh. Okay. So... What about Open Source?
What exactly are you claiming Apple isn't doing right, or isn't getting? Truly a weird blog entry. What's your point? This article brings up some good and noteworthy subjects. Apple has made a great OS that's easy to use and creative. They do tend towards the 'money-making' over quality and users' freedoms sometimes though. The iTunes store is one example, iPods that don't let users move their music around as they see fit, etc.. the examples are there. Many Mac programs could be a lot better from an open source community approach. I love my Mac but some of the bundled programs are very bare-bones, and I don't use iPhoto simply due to its obtuse photo-filing directory structure, and slowness. someone said: "What I take from Jeremiah's post is more a plea for Apple to improve their community relationship than to actually open up more software." I agree but think they could probably benefit from opening some things to more community participation with their devices and software. And perhaps allowing more of their code to be used on non-Apple devices. Apple-centric thinking is not the way to go. Jeremiah, people are being too hard on you. You can't write, you can't read, and you don't know much about what you're talking about. But there's no need for people to point that out. It's mean-spirited. It would have been enough for them to call attention to your inability to recognize whether something counts as evidence in favor of a conclusion. You know Jeremiah, sometimes if people hear a particularly bad argument for a view they'll walk away thinking it's false. Of course that would be a mistake because as we all know there can be bad arguments for true conclusions. Still, it happens. It's akin to losing goodwill, which could be more expensive than you realize. Dear foo fighter, Normally I do not respond to anonymous posters who do not stand up for what they post; I find it cowardly and contemptible, and to respond is to feed the troll, but here I go anyway.
Your comment is mean-spirited and ad hominem. Recently there has been a pushback against such negative commentary here at O'Reilly, and I am going to follow Tim O'Reilly's advice in this regard and start to delete anonymous postings of the kind you wrote. So, my misguided friend, enjoy your notoriety while it lasts; soon your post will find its proper home.
http://www.oreillynet.com/mac/blog/2007/03/apple_failing_to_understand_op.html
Firebase lets you store and sync data across multiple clients and platforms. The Firebase website is available at: In the following you'll learn how to set up a Firebase realtime database and bind to real-time data in your Vue application by using this library step by step. What We're Going To Build The application we're going to build in this tutorial can be seen in the following: That's a simple book manager application printing out a list of books (with a link to the corresponding website). The book data is retrieved from the Firebase database. The user is able to add new books to the list and delete existing entries. The data is stored inside a Firebase database and all database operations are synced in real time across all application instances. To build up the user interface of the application we're using the Bootstrap framework. To show a toast notification to the user after a book has been deleted successfully, the Toastr JavaScript library is used: Setting Up Firebase Before starting to implement the Vue application, we first need to set up the Firebase realtime database. To set up Firebase you first need to go to the Firebase website, create an account and log in to the Firebase console. The Firebase console gives you access to all Firebase services and configuration options. First, we need to create a new Firebase project: Once you have created a project, you can click on the project tile and you'll be redirected to the Firebase project console: The project console is the central place for the Firebase configuration settings. From the menu on the left side you can access the various Firebase services. Click on the link Database to access the realtime database view. You can use the online editor to add content to the database, e.g.: As you can see, the data in the realtime database is organized in a hierarchical tree which contains nodes and key-value pairs. This is similar to a JSON data structure. Under the root node add a sub node items.
Under items create a few key-value pairs like you can see in the screenshot. We'll use these data elements in our sample application later on. Finally, we need to edit the security rules of the database, so that we're able to access data without authentication later on in our Vue.js 2 sample application. Switch to the tab Rules and edit the given security rules to correspond to the following: { "rules": { ".read": true, ".write": true } } Now read and write database access is possible without authentication. Setting Up a Vue.js 2 Project (With Vue CLI) With the Firebase realtime database prepared, we're now ready to go and start the implementation of the view layer. To initiate a new Vue.js 2 project the easiest way is to use Vue CLI: $ vue init webpack vuejs-firebase-01 A new project directory vuejs-firebase-01 is created and the Vue Webpack template is downloaded into that directory. We need to complete the project setup by changing into that directory and executing the following command: $ npm install This makes sure that all dependencies are downloaded and installed into the node_modules subfolder. Adding Firebase And VueFire Library To The Project The VueFire library () makes it easy to bind Firebase data to Vue.js data properties. To add VueFire and the Firebase core library to the project, execute the following command within the project directory: $ npm install firebase vuefire --save In file main.js we're now able to import VueFire. We also need to call Vue.use(VueFire) to make the library available in the project: import Vue from 'vue' import VueFire from 'vuefire' import App from './App' Vue.use(VueFire) new Vue({ el: '#app', template: '<App/>', components: { App } }) Adding Bootstrap To The Project Because we'd like to use Bootstrap CSS and components within our sample app, we also need to add this framework to the project. To add the framework to our project we need to include the corresponding CSS and JavaScript files in index.html.
The resources can be retrieved from a CDN as you can see in the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>vuejs-firebase-01</title> <!-- Bootstrap CSS and JavaScript files from a CDN go here --> </head> <body> <div id="app"></div> <!-- built files will be auto injected --> </body> </html> Connecting To Firebase With all libraries added to the project, we're now able to establish a connection to the Firebase database. Let's add the following code inside the script section in file App.vue: import Firebase from 'firebase' let config = { apiKey: "...", authDomain: "...", databaseURL: "...", storageBucket: "...", messagingSenderId: "..." }; let app = Firebase.initializeApp(config) let db = app.database() let booksRef = db.ref('books') First, we're importing Firebase from the core firebase library. To create a Firebase instance we're using the factory method initializeApp. This method expects a configuration object containing the following properties: apiKey, authDomain, databaseURL, storageBucket and messagingSenderId. The property values need to be set according to the Firebase project which has been created. You can retrieve the values from the Firebase console very easily. Open the project overview page and click on the link Add Firebase to your web app. In the opening pop-up window you get a prefilled config object in JSON notation. With the Firebase instance stored in app we're now able to retrieve a database reference by using the app.database() call. With the database reference available, we're finally able to retrieve a reference to the books node in our database: let booksRef = db.ref('books') Implementing Book List Output The VueFire library makes it very easy to bind Vue.js properties to Firebase data.
Just add the following to the component configuration object: firebase: { books: booksRef }, Now the books variable gives us access to the book items from the Firebase database, so that we can output the data in the template: <div class="panel-body"> <table class="table table-striped"> <thead> <tr> <th>Title</th> <th>Author</th> </tr> </thead> <tbody> <tr v-for="book in books"> <td><a v-bind:href="book.url">{{book.title}}</a></td> <td>{{book.author}}</td> </tr> </tbody> </table> </div> Just use the v-for directive to iterate over the elements available in books and generate a new table row for each book item. Implementing Book Input Form Next, let's see how new elements can be added to the realtime database. First we're defining the data model: data () { return { newBook: { title: '', author: '', url: 'http://' } } }, Next, the corresponding input form is implemented in the component's template: <form v-on:submit.prevent="addBook"> <input v-model="newBook.title"> <input v-model="newBook.author"> <input v-model="newBook.url"> <button type="submit">Add Book</button> </form> The HTML form consists of three input elements for the book properties title, author and url. By using the v-model directive the newBook properties are bound to the corresponding input controls. By using the v-on directive the submit event is bound to the event handler method addBook. To implement addBook, add a methods property to the component's configuration object and assign an object containing the implementation of addBook: methods: { addBook: function () { booksRef.push(this.newBook); this.newBook.title = ''; this.newBook.author = ''; this.newBook.url = 'http://'; }, }, As we've used data binding to sync the value of the form input controls with the newBook object, we're able to use this object within addBook and pass it to the call of booksRef.push. This inserts the new book object into the Firebase books node and syncs the data across all connected client instances of the application automatically. Implementing Delete Function In the next step, let's extend the implementation and add a delete function, so that the user is able to delete entries from the books table.
First add another column to the table, displaying an icon on which the user can click to delete the entry: <td><span class="glyphicon glyphicon-trash" aria-hidden="true" v-on:click="removeBook(book)"></span></td> Again, we're using the v-on directive to connect the click event to the event handler method removeBook. The method implementation can be seen in the following: removeBook: function (book) { booksRef.child(book['.key']).remove() } To delete a book element from the Firebase database we're first selecting the book node by using the child method. This method expects to get the key of the book element which should be selected. We already have access to this information via book['.key']. Finally we need to call remove() on the returned book object to delete it from the database. Adding Toast Notifications For the final touch we're adding the Toastr library to our application to present a toast notification each time a book is deleted. Add the Toastr CSS file from a CDN in the head section of index.html: <link href="" rel="stylesheet"/> Next, add the toastr NPM package to the project and save that dependency in package.json: $ npm install toastr --save Now we're able to add the following import statement on top of the script section in App.vue: import toastr from 'toastr' Inside of removeBook the call of method toastr.success makes sure that a notification is shown: removeBook: function (book) { booksRef.child(book['.key']).remove() toastr.success('Book removed successfully') } Summary Adding all pieces together you'll end up with the following code inside of App.vue: <template> <div id="app" class="container"> <div class="page-header"> <h1>Vue.js 2 & Firebase <small>Sample Application by CodingTheSmartWay.com</small></h1> </div> <form v-on:submit.prevent="addBook"> <input v-model="newBook.title"> <input v-model="newBook.author"> <input v-model="newBook.url"> <button type="submit">Add Book</button> </form> <div class="panel panel-default"> <div class="panel-heading"> <h3 class="panel-title">Book List</h3> </div> <div class="panel-body"> <table class="table table-striped"> <thead> <tr> <th>Title</th> <th>Author</th> <th></th> </tr> </thead> <tbody> <tr v-for="book in books"> <td><a v-bind:href="book.url">{{book.title}}</a></td> <td>{{book.author}}</td> <td><span class="glyphicon glyphicon-trash" aria-hidden="true" v-on:click="removeBook(book)"></span></td> </tr> </tbody> </table> </div> </div> </div> </template> <script> import Hello from './components/Hello' import Firebase from 'firebase' import toastr from 'toastr' let config = { apiKey: "...", authDomain: "...", databaseURL: "...", storageBucket: "...", messagingSenderId: "..." }; let app = Firebase.initializeApp(config) let db = app.database() let booksRef = db.ref('books') export default { name: 'app', firebase: { books: booksRef }, data () { return { newBook: { title: '', author: '', url: 'http://' } } }, methods: { addBook: function () { booksRef.push(this.newBook); this.newBook.title = ''; this.newBook.author = ''; this.newBook.url = 'http://'; }, removeBook: function (book) { booksRef.child(book['.key']).remove() toastr.success('Book removed successfully') } }, components: { Hello } } </script> <style> #app { font-family: 'Avenir', Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; color: #2c3e50; margin-top: 20px; } </style> Summary Vue.js 2 makes it easy to implement the view layer of a single page web application. As Vue makes no assumption about the backend technology, it can be used together with different application stacks. If you want to have real-time data synchronisation in your application, Firebase is a great option for building your backend.
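The add and delete operations above follow a small pattern: push() stores a value under a generated key, child(key).remove() deletes that entry, and the bound books array exposes each record's key via the special '.key' property. The following plain-JavaScript sketch mimics that pattern with a mock reference class so the component logic can be reasoned about (or unit-tested) without a live database. Note that MockRef is invented here purely for illustration; the real API comes from the firebase package:

```javascript
// Minimal stand-in for a Firebase database reference (hypothetical,
// for illustration only -- not part of the firebase package).
class MockRef {
  constructor() {
    this.items = new Map();
    this.counter = 0;
  }
  // push() stores the value under a generated key, like Firebase does.
  push(value) {
    const key = 'key-' + this.counter++;
    this.items.set(key, Object.assign({}, value));
    return { key };
  }
  // child(key) returns a handle whose remove() deletes that entry.
  child(key) {
    return { remove: () => this.items.delete(key) };
  }
  // Helper mirroring VueFire's array binding: each record gets a '.key'.
  toArray() {
    return Array.from(this.items, ([key, value]) =>
      Object.assign({ '.key': key }, value));
  }
}

const booksRef = new MockRef();
booksRef.push({ title: 'Vue Guide', author: 'You', url: 'http://example.com' });
const books = booksRef.toArray();
console.log(books.length);              // 1
booksRef.child(books[0]['.key']).remove();
console.log(booksRef.toArray().length); // 0
```

Because the component methods only ever call push, child and remove, a stand-in like this could be injected in place of booksRef when testing addBook and removeBook.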
https://codingthesmartway.com/vue-js-2-and-firebase/
I am trying to do a program where it draws a green square within a SimpleWindow. The square has to have 1.5 cm sides and be centered 3.5 cm from the left edge and 2.5 cm from the top edge of the window. I am required to do this with a square.h class that I have included with this post. I am getting the following message when attempting to compile it (with VC++ 6.0): c:\program files\microsoft visual studio\myprojects\ex_712a\ex_712a.cpp(27) : fatal error C1010: unexpected end of file while looking for precompiled header directive Error executing cl.exe. ex_712a.exe - 1 error(s), 0 warning(s) Here is the code for my main source and my square class. main source:Code:#include <iostream> #include "square.h" using namespace std; int ApiMain() { SimpleWindow Test; Test.Open(); Square GreenSquare(Test); GreenSquare.SetColor(Green); GreenSquare.SetPosition(3.5,2.5); GreenSquare.SetSize(1.5); GreenSquare.Draw(); cout << "Type a character followed by a\n" << "return to remove the display and exit" << endl; char AnyChar; cin >> AnyChar; Test.Close(); return 0; } Square.h (class):Code:#ifndef SQUARE_H #define SQUARE_H #include "ezwin.h" class Square { public: Square(SimpleWindow &W); void Draw(); void SetColor(const color &Color); void SetPosition(float XCoord, float YCoord); void SetSize(float Length); private: color Color; float XCenter; float YCenter; float SideLength; SimpleWindow &Window; }; #endif I did also have the following enum in the square.h, but the compiler was complaining about it, so I thought that it might not be necessary:Code:enum color {Red, Green, Blue, Yellow, Cyan, Magenta}; Any help or pointers that someone may have would be great. Thanks!!
http://cboard.cprogramming.com/cplusplus-programming/38271-probably-something-simple-but.html
User Details
User Since: Dec 2 2018, 12:02 AM (24 w, 9 h)

Mar 18 2019
Of course, sorry. The script is functional.

Mar 16 2019
import bpy

Mar 15 2019
There is one specialty compared to the classic animated cube and camera ... I had to use the length of the curve (to compute the rolling) and that is possible only through Animation Nodes ... so maybe Animation Nodes is the problem. I do not know exactly. Or is there another possibility to measure the length through an expression? Because dimensions can't be used in an expression.

Mar 14 2019
Working perfectly, thank you very much again. You are the best, and you know it 8-)
Perfect, I will try it in the evening, thank you very much.

Feb 26 2019
@Daniel Dietrich (daniel) Salazar (zanqdo) and did you try it in Eevee?
No no, it's not working with autorefresh enabled, nor without it.

Feb 25 2019
I don't know the internal architecture of Blender, but maybe ... the time slider could be connected with every texture node in the scene, but only if the node is set to movie or image sequence. Then after moving the time slider the nodes could be updated... But it's only one way, maybe a bad way ... just trying to help.
But it is the same problem as before ... If we create a new report we will wait for enough people with the same problem before it goes in progress, so it may be a long story.
@Daniel Dietrich (daniel) Salazar (zanqdo) Downloaded the latest build as before, but now it is 893fa598319e. But it is not working - blend file inside. And the technique I am using is there:

Feb 24 2019
Is it functional in 2.8 - 6bab905c9d40? If so, it is only working with an open node editor and an active texture node - with properties. Is that right?

Feb 18 2019

Feb 7 2019
I would like to send some resources for this problem specifically, to speed up the process. Is that possible?

Dec 11 2018
And the offset is calculated differently from 2.79b .. the offset is added to the frame number, not to the current frame .. so it can't be one number and it can't be changed by a slider or any number control.
I think if you have such nice animation support it would be a pity not to support flat-style puppets, which depend on correct alpha and sequence preview in the working 3D view or Eevee :) I love Blender and this is the only function I am missing to completely switch my whole pipeline to Blender.

Dec 10 2018

Dec 5 2018
Yes sir, when I opened the latest version of the 2.80 beta the segments were functional - and I can't delete this task. I am sorry for opening it. I will take more care next time.
https://developer.blender.org/p/spawn_x/
Xbox One Reputation System Penalizes Gamers Who Behave Badly
Soulskill posted about 7 months ago | from the good-news-for-everyone's-mothers dept.

(5, Insightful) ruir (2709173) | about 7 months ago | (#46591099)

OMG FAG LOL (4, Interesting) Anonymous Coward | about 7 months ago | (#46591117)

(0) Anonymous Coward | about 7 months ago | (#46591137)
Meh, big data is the answer obviously, they already have plenty of data on their players, should be easy enough to crunch some numbers and figure out who's an asshole.

Re:OMG FAG LOL (4, Interesting) Joce640k (829181) | about 7 months ago | (#46592361)
It's even simpler than that. All you need to do is separate them by age. Put all the 13-year-olds on their own server (separated from the under-20s and over-20s servers).

Re:OMG FAG LOL (5, Insightful) JaredOfEuropa (526365) | about 7 months ago | (#46591163)

(5, Informative) Sneftel (15416) | about 7 months ago | (#46591181)

(1) Anonymous Coward | about 7 months ago | (#46591349)
It's actually extremely easy to tell the difference between a good player and a cheater. It's just hard to tell the difference between two good players, one of which is cheating. A bad player who scores highly thanks to cheats is very easy to spot.

Lame players (1) Etherwalk (681268) | about 7 months ago | (#46591743).

Re:OMG FAG LOL (4, Informative) Frobnicator (565869) | about 7 months ago | (#46591505).

Re:OMG FAG LOL (0) AmiMoJo (196126) | about 7 months ago | (#46591807)
With a real live human involved they can nicely handle people who were wrongly accused. That's the theory, but in practice there are far too few moderators to properly investigate the tidal wave of complaints that roll in. I know someone who had their internet connection go on the fritz for a month. First Capcom marked them as a "rage quitter" and now only pairs them up with other rage quitters.
After a while he started getting warnings from Microsoft about complaints from other players, so he had to stop playing online until he was sure the problem was fixed. Even now there is no appeal process or consideration given, no way to remove the black marks from his name. There was a story in the news a while back about an autistic kid who was banned because he "cheated" by loading his friend's save game to unlock stuff he couldn't access. His mother contacted Microsoft but they told her to fuck off and buy a new console. To be honest I might have done that myself because in the past I couldn't be bothered to unlock every god damn thing on Tekken Tag Tournament just to play the bowling mini-game either, but now it isn't about fun, it's about leaderboards and rankings.

Buy a new console? (2) twocows (1216842) | about 7 months ago | (#46592007)

Re:OMG FAG LOL (2) cbhacking (979169) | about 7 months ago | (#46592013)

(0) Anonymous Coward | about 7 months ago | (#46592295)
I never, ever, access the leaderboards in games. That keeps it simple. I play for the moment, for fun, really. And being able to say in the chat channel that "no, I have never looked at the leaderboard" validates the way I play.

Re:OMG FAG LOL (1) PPalmgren (1009823) | about 7 months ago | (#46591735)
and the dota 2 report system is useless given you can make a new steam account and it's free to play, but at least it solves part of the problem.

Re:OMG FAG LOL (1) Knightman (142928) | about 7 months ago | (#46591185)

Re:OMG FAG LOL (0) Anonymous Coward | about 7 months ago | (#46591369)
Yes, it will magically "be handled", do you not recognize marketspeak when you see it?

Re:OMG FAG LOL (2) TheDarkMaster (1292526) | about 7 months ago | (#46591451)
I prefer to just not play online. I've tried a few times and concluded that it is not worth it.

Offline matchmaking (1) tepples (727027) | about 7 months ago | (#46592133)

Re:Offline matchmaking (0) Anonymous Coward | about 7 months ago | (#46592299)
Facebook?
Social networking? Oh wait, forgot all the nerds here are against that...

Re:Offline matchmaking (2) TheDarkMaster (1292526) | about 7 months ago | (#46592301)

Thankyou for calling (0) Anonymous Coward | about 7 months ago | (#46591193)
and I doubt whether an operator could make the distinction between a rage-report and an inaccurate report made in good faith. Me too. Especially if the operator is yet another well-meaning but overworked Indian call-center guy who doesn't fully grasp the culture of the persons involved. So the real alternative is to suck it up and deal with trolls ala 4chan? I find the idea appealing because it ceases to try to use technology to fix human problems, just like humans should not be used to fix technical problems.

Re:OMG FAG LOL (1, Troll) pla (258480) | about 7 months ago | (#46591773)
compete for the worst ratings possible. Go ahead and refuse to play with 95% of the userbase - which rating will end up with reduced matchmaking possibilities?

Animal Crossing (1) tepples (727027) | about 7 months ago | (#46592179)

Re:OMG FAG LOL (1) Grizzley9 (1407005) | about 7 months ago | (#46591821)
I play BF4 some (unfortunately). If I start getting tagged by the same guy over and over, and he's taking out multiples at a time when they are shooting him w/ a gun/tank/whatever, you can sure bet I'll be the one "abusing the system" by reporting them. Whether his fault or Dice's or MS's, if enough do it then hopefully some attention will be brought to the issue.

Value the reports (1) Anonymous Coward | about 7 months ago | (#46591223)
Increase and decrease the value of reports, depending on metadata:
+ Person has been reported by others
+ reporter is an ally
-- reporter is an opponent
-- reporter reports often (multiple players every game)
+/- whatever else they can think of based on a LOT of experience
Remember: they are not going to block players or anything like that.
It's probably mostly feedback to the player themselves that their behavior is not appreciated by the community.

Re:Value the reports (0) Anonymous Coward | about 7 months ago | (#46591377)
reporter reports often (multiple players every game)
This one is likely to solve most of the problems with some adaptation. There is a saying that goes along the lines of "If you meet an asshole you met an asshole, if you meet assholes all the time you are the asshole." A thing that could work is if the reported and the reporter are treated equally. If you report someone you get a mark and the reported person gets a mark. Just a few marks is irrelevant, everyone is going to piss someone off. If you meet someone who is a dick you report them if you think they are bad enough to get a mark over. This also means that it will be OK to joke around without having to fear overly sensitive people. Still doesn't prevent bullying if many assholes decide to gang up on one person and report them into oblivion.

Re:OMG FAG LOL (2) Wootery (1087023) | about 7 months ago | (#46591423)

Re:OMG FAG LOL (2) arth1 (260657) | about 7 months ago | (#46591541)
or accepting that the SNR is higher from trolls ala 4chan.
If the signal to noise ratio is high, it means there's far more signal than noise. Either say "lower", or use "noise ratio".

Re:OMG FAG LOL (0) Anonymous Coward | about 7 months ago | (#46591871)
I don't know how they implemented it, but I would leave aimbot reports out of the reputation. The reputation score seems to be a score that says "Is this kid a douche or not?", not "Does this kid use hacks or not?". The two can be mutually exclusive. Besides, Microsoft runs the anti-cheat system on XBox. Surely their perfect system catches all attempts to use hacks. /s
Assuming the cheat detection worked flawlessly, the *only* aimbot report that should matter is from the cheat detection system, not the individual players who have no idea. Steam on PC already does this.
Their cheat detection is known as VAC (Valve Anti-Cheat), and if it catches you using hacks on one game, it can affect you on other games, and it will be visible in your player profile.

Re:OMG FAG LOL (1) phishen (1044934) | about 7 months ago | (#46591881)
And not to mention anyone who beats you in-game is CLEARLY cheating.
Preface: I know that was a joke =) +1 funny
However, I am primarily a PC gamer, but I have a PS3 for Final Fantasy, etc (never shooters, I just prefer those on PC). I have always wondered, how does one actually cheat on a console? You can't install auto-aim programs or anything else like you can on PC, or am I mistaken?

Re:OMG FAG LOL (1) cbhacking (979169) | about 7 months ago | (#46592059)
jumps or the like.

Re:Bullying (1) gl4ss (559668) | about 7 months ago | (#46591133)
and uh wtf twitch broadcasting a privilege? sure, twitch could ban broadcasters.. but fuck.. what's next, can't view youtube with an account that 10 of your classmates decided to give bad ratings to??

I agree (0) Anonymous Coward | about 7 months ago | (#46591173)
I agree. Xbox Live is one of the most abusive social media platforms in existence. What's to stop people from abusing this just like they abuse everything else?

Re:Bullying (0) Anonymous Coward | about 7 months ago | (#46591189)
You mean like on reddit? Downvote people or opinions you don't like?

Re:Bullying (4, Insightful) Chrisq (894406) | about 7 months ago | (#46591287)
You mean like on reddit? Downvote people or opinions you don't like?
You mean like on slashdot?

Re:Bullying (2, Funny) Anonymous Coward | about 7 months ago | (#46591329)
Sorry, I would mod you down if I could...

Re:Bullying (5, Interesting) Swistak (899225) | about 7 months ago | (#46591235)

(5, Interesting) Swistak (899225) | about 7 months ago | (#46591419).

Re:Bullying (-1, Troll) Lumpy (12016) | about 7 months ago | (#46591521)
You are expecting that Microsoft programmers are competent and actually can figure out something that complex.
They are doing a simple "-" system: no positives and all negatives, and you crater quickly when someone mods you down because they disagree or suck at a game and are whiny babies.

Re:Bullying (1) Thruen (753567) | about 7 months ago | (#46591607)
Is the current reputation system just as bad? I don't know, I hope not. But I'm not willing to risk losing access to things I pay for based on a system like this. I'm certainly never going to agree to any terms of service that suggest my ability to fully use the service depends on what other users think of me. It reminds me a bit of DRM, the trolls are going to have no trouble getting around this by changing accounts when their rep gets bad or just using trial accounts, the people who suffer will be regular players like us who aren't gaming the system.

compared to forums (5, Interesting) kevlar_rat (995996) | about 7 months ago | (#46591801)

Re:compared to forums (2) Swistak (899225) | about 7 months ago | (#46592283)

Re:Bullying (0) Anonymous Coward | about 7 months ago | (#46591815)
give me gold or i will report you? I've seen that happening elsewhere...

Re:Bullying (1) ErroneousBee (611028) | about 7 months ago | (#46591245)

just a small problem (0) Anonymous Coward | about 7 months ago | (#46591255)
I like this one: "the system will adjust for false reports." As if that's just a petty little detail, something the engineers can work out.

Re:Bullying (0) Anonymous Coward | about 7 months ago | (#46591283)
Everywhere I see a Report button, I hit it. I fake report all the time and even use multiple accounts to fake report too. No not really. I don't really do that but I know trolls who do. I have seen many of my comments deleted and sometimes even my account gets banned. Somehow being anti-Arabic is good but being anti-semite earns you a ban. Go figure! Will Microsoft stop their dumbassery? Or will they say they have a sophisticated system which can detect fraudulent reporting, i.e. all reports they get are legit!
Re:Bullying (0) Anonymous Coward | about 7 months ago | (#46591321)
It is actually a way of allowing bullied classmates to give bad scores to the bully. Usually when 20 people complain about 1 person, it's the single person that is an asshole, not the other way around.

Re:Bullying (1) Millennium (2451) | about 7 months ago | (#46591591).

Re:Bullying (5, Interesting) pehrs (690959) | about 7 months ago | (#46591361)

Re:Bullying (0) Anonymous Coward | about 7 months ago | (#46591441)
A new form of teen bullying, giving bad scores to the classmate you do not like... Or, if the damn thing were to actually work properly, a new form of punishment that creates a better gaming community. Yeah, yeah I know...we all thought the Internet was going to be like Disneyland too, a magical place. I don't know what Microsoft was thinking either...this will be hacked in a week.

Re:Bullying (1) Chas (5144) | about 7 months ago | (#46591533)
Exactly. These types of rep systems exist already. Pretty much NONE of them work as intended, and devolve into griefing tools.

Re:Bullying (1) Bobakitoo (1814374) | about 7 months ago | (#46591855)

Hm (-1) Anonymous Coward | about 7 months ago | (#46591101)
Sounds fucked up to me.

Obligatory.. (-1, Troll) Travis Mansbridge (830557) | about 7 months ago | (#46591103)
+1 if you agree

Re:Obligatory.. (5, Insightful) Joce640k (829181) | about 7 months ago | (#46591129)
-1 (just because I can...)

Re:Obligatory.. (0) Anonymous Coward | about 7 months ago | (#46591247)
-1 (just because I can...)
Look at the mod points coming in to reward this guy for being a dick. When did Slashdot become Reddit?

Re:Obligatory.. (1) Joce640k (829181) | about 7 months ago | (#46591299)
-1 irrelevant.

Re:Obligatory.. (1) fellip_nectar (777092) | about 7 months ago | (#46591139)

Re:Obligatory.. (0) Anonymous Coward | about 7 months ago | (#46591191)
Moderation systems are stupid!
+1 if you agree
You're joking, but moderation systems *only* work when most of the community aren't assholes. Taking a random survey of Xbox Live, I'm not sure you'd conclude it meets that criterion.

Karma? (1) ls671 (1122017) | about 7 months ago | (#46591143)
They just added karma to xbox accounts?

Re:Karma? (1) master5o1 (1068594) | about 7 months ago | (#46591149)
Xbox One Colourful Karma.

Re:Karma? (1) RogueyWon (735973) | about 7 months ago | (#46591213)
experience so far has been that it tends to average out reasonably well over time. I'm sat on a reputation of around 4.7/5.0 and most people on my friends list are in similar positions. The only guy who is significantly lower (just under 3.5) plays a lot of Call of Duty. My experience is that spending any significant amount of time playing the big spunkgargleweewee games is a good way to get karma-bombed even if you are the most charming player in the world, due to the general level of anger and immaturity in the communities for those games.

Re: Karma? (0) Anonymous Coward | about 7 months ago | (#46591339)
I don't play online, do not have an active account to play online, and have a 3 out of 5 star rating. I should be thankful I haven't been banned. How much do they charge for each of those stars?

Re: Karma? (1) RogueyWon (735973) | about 7 months ago | (#46591515)

Re:Karma? (1) nospam007 (722110) | about 7 months ago | (#46591263)
"They just added karma to xbox accounts?"
Yes. If thieves and rogues steal too much from Paladins, they get a record.

No way soar losers will abuse his... (2) captainpanic (1173915) | about 7 months ago | (#46591151)

... (3, Interesting) captainpanic (1173915) | about 7 months ago | (#46591159)
Also: I get the feeling that European English-speaking people swear a lot more than in the USA, and I wonder if this will be reflected in the moderation.

Re:No way sore losers will abuse his...
(2) fey000 (1374173) | about 7 months ago | (#46591177)
Also: I get the feeling that European English speaking people swear a lot more than in the USA, and I wonder if this will be reflected in the moderation.
I too %*&!#$! wonder if this will be *(@&#$&%@ reflected in the @$&!%(#!%$&! moderation.

Re:No way soar losers will abuse his... (1) Anonymous Coward | about 7 months ago | (#46591201)
Also: I get the feeling that European English speaking people swear a lot more than in the USA, and I wonder if this will be reflected in the moderation.
I have no idea where you got that impression. Also, those are two massive generalizations about two entire continents worth of people. I guess what I'm saying is, umm, what?

Re:No way soar losers will abuse his... (2) TapeCutter (624760) | about 7 months ago | (#46591697)

Re:No way soar losers will abuse his... (3, Funny) Anonymous Coward | about 7 months ago | (#46591241)

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591257)
You think American TV standards (which are decided by very few people based on threat of lawsuits from a few other people) are representative of the culture? I think that's downright ridiculous. First of all, it's not as if people in coastal California even have the same culture as people in rural Indiana. They might as well be different countries.

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591427)
woooosh

Re:No way soar losers will abuse his... (1) arth1 (260657) | about 7 months ago | (#46591603)

Re:No way soar losers will abuse his... (1) SuricouRaven (1897204) | about 7 months ago | (#46591639)
The media reflects society, and society reflects the media.

Re:No way soar losers will abuse his... (1) coinreturn (617535) | about 7 months ago | (#46591477)

Re:No way soar losers will abuse his...
(2) CrimsonAvenger (580665) | about 7 months ago | (#46591535)
Obviously, I live a sheltered life, since I have never heard "knob-gobbler" before... That said, knob-gobbler is too funny to be a swear word, and I think everyone should use it instead of cocksucker.

Re:No way soar losers will abuse his... (2) gstoddart (321705) | about 7 months ago | (#46592277)
Yes, yes you do ... I think I've known of that one for at least 30 years. And Tits [segall.net], wow. Tits doesn't even belong on the list, you know. It's such a friendly sounding word.

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591563)
>US vs UK language
To-may-toe, to-mah-toe; po-tay-toe, po-tah-toe; cunt, cunt. It's all the same. Also, I believe you mean "bollocks". [wikipedia.org]

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46592237)
no one says po-tah-toe

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46592045)
the brits use "bollocks, bugger, bloody, bullshit, fuck and fucking" as well as many more colourful and wonderful swear words.

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591771)
Have you *used* Xbox Live lately?

Re:No way soar losers will abuse his... (4, Funny) AmiMoJo (196126) | about 7 months ago | (#46591847)

Re:No way soar losers will abuse his... (1) jellomizer (103300) | about 7 months ago | (#46591273)

... (5, Funny) korbulon (2792438) | about 7 months ago | (#46591425)

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591633)
Soar losers won't give a flying fuck.
I see what you did there.

Re:No way soar losers will abuse his... (0) Anonymous Coward | about 7 months ago | (#46591833)
soar losers... are those that have their heads in the clouds?

Finally!
(0) Anonymous Coward | about 7 months ago | (#46591337)
Finally a relevant game publisher adopts an obvious idea that has been standard in sites like eBay (or Slashdot, for that matter) for over a decade. I have to say I'm a bit surprised it was Microsoft, I've been expecting Valve to add something like this to Steam for the past four or five years. I don't expect companies like Blizzard, CCP or Arena Net to ever adopt this, because they are notorious for their internal policies designed to protect jerks and cheaters (including some of their own employees), but Valve has always made their bans public, Gabe Newell has talked about rewarding well-liked players, etc.. For those saying it will be abused, the same goes for eBay's reputation system. It still works in 99% of cases, and greatly reduces other types of abuse. It's not hard to attribute each vote a "confidence level", or to take action only when enough different (and reliable) ratings have been given.

Good (0) Anonymous Coward | about 7 months ago | (#46591351)
Fuck the gamers with horrible attitudes.

So it's a little different to Steam then? (2) dohzer (867770) | about 7 months ago | (#46591363)

Re:So it's a little different to Steam then? (0) Anonymous Coward | about 7 months ago | (#46591529)
If you think changing your name somehow reverts bans, you don't have a clue how Steam works.

Excellent! (0) Anonymous Coward | about 7 months ago | (#46591469)
I have some experience in this area, not as admin, but as a gamer a few years ago (mainly Quake) I've seen my share of bots and folks who were so good they looked like cheaters though one could not dismiss the possibility they weren't. I expect this service to have the same quality as other products they offer like Windows and Office. Of course, nothing close to the excellent system we have here at /. which works flawlessly. (preemptively whoosh!)

Oh this wont be abused.... (1) Lumpy (12016) | about 7 months ago | (#46591503).

Re:Oh this wont be abused....
(0) Anonymous Coward | about 7 months ago | (#46591557)
Nothing of value was lost.

It's about time... (4, Insightful) egarland (120202) | about 7 months ago | (#46591593)
XBox has long been known as the most potent example of the Greater Internet Fuckwad Theory [penny-arcade.com]. Adding a bit of accountability for being a horrible person is overdue.

Gee! Why is it so difficult? (0) 140Mandak262Jamuna (970587) | about 7 months ago | (#46591619)
Is it any wonder the gamers extend the "boundaries" of the game to include the entire gaming infrastructure? The gaming companies say, "these are the games, here you can hit other cars, or trick another user into losing their cyberberries. But this part is not a game, and you should behave honestly". The gamers see it holistically, they would steal the game from the gaming companies if they can, they will steal the cybergoodies from the companies if they can, they will run the game under virtual machines and use software to change the game state and cheat. After encouraging and rewarding such behavior "inside the game", trying to discourage it "outside the game" is not going to work. The gamers do not agree with the gaming companies on what "inside" and "outside" of the game are.

The usual problems with such a system... (0) Anonymous Coward | about 7 months ago | (#46591673)
Problem is that griefer-type players will quickly figure out how to abuse it to attack other players. Such a system should NOT be made automatic and should require proof (recording of something built into the mechanism). And since that then usually requires manual judgement to review the evidence, additional company dollars then need to be paid to hire employees to do THAT (which they are loath to do, so either there will be too few or none at all). Such problems are hard to solve properly, and the troublemakers always will figure out how to abuse a weak system.
behavior modification (1) PopeRatzo (965947) | about 7 months ago | (#46591695)
When I read the headline, I was kind of hoping the XBone reputation system was going to give little electroshocks to kids when they act out in front of company, pick on their sister or don't lift the toilet seat.

Giving "The Bully" Another Tool (3, Interesting) EXTomar (78739) | about 7 months ago | (#46592177).

Revenue Source (0) Anonymous Coward | about 7 months ago | (#46592329)
If you see this as anything other than a revenue source, you're a fool. Good players have always been good, but now they have a "legitimate" way to force people to open new Live Gold accounts. They can always ban the worst offenders and hackers, but now, for those who are a nuisance but not bannable, they can simply degrade your matchmaking and access to services to a point where you'll pay for a new account.
http://beta.slashdot.org/story/199953
by Zoran Horvat
Apr 15, 2013

Given an array of N integers (N > 1), write the function that finds the two smallest numbers.

Example: Suppose that the array is: 9, 4, 5, 3, 2, 7, 6, 1, 8. The two smallest values in the array are 1 and 2.

Keywords: Array, two smallest, minimum.

Finding the smallest element in an array is fairly straightforward: just traverse the array, keeping the current minimum value; every time an element is encountered whose value is smaller than the current minimum, simply update the current minimum and continue. Once the end of the array is reached, the current minimum will be the overall minimum of the array. Initially, the first element of the array is taken as the minimum. Should the array have only one element, that element would naturally be the overall minimum. The following picture shows the process of selecting the minimum value from an example array. This solution requires N-1 comparisons for an array of N elements.

Looking for the second smallest value means tracking two minimum values: the overall minimum and the runner-up. Of course, we need to take the first two values in the array as initial values for the current pair of minimums, but everything else remains almost the same. This process is demonstrated in the following picture.

The process of selecting two values rather than one goes by iterating through the rest of the array (skipping the leading two values) and trying to insert each value into the two entries that are kept aside. In the example above, the initial pair was (4, 9). Once value 5 has been encountered, value 9 drops out. The same thing happens to value 5 as soon as value 3 has been reached. And so the process goes on, until all values in the array have been exhausted, leaving (1, 2) as the final pair of smallest values from the array. We can now estimate the number of comparisons required to solve the problem using this algorithm.
Initializing the pair requires one comparison, just to find whether the first element is smaller than the second or vice versa. Further on, for each of the remaining N-2 elements, we have one or two comparisons (with two comparisons being the worst case) to find out whether the current element should go into the current smallest pair and, if yes, at which position. This makes a total of 2N-1 comparisons in the worst case, and constant space, to solve the problem.

Out of the 2N-1 comparisons, N-1 were spent to find the smallest value and an additional N comparisons to find the second smallest. This raises the question whether there is a better way to find the runner-up. And there is one: it is a solution to the problem known for ages, and it has to do with tennis tournaments. The question was, knowing the outcome of a tennis tournament, how can we tell which player was the second best? The defeated finalist is a good candidate, but there are other players that were defeated directly by the tournament winner, and any of them could also be a good candidate for second best. So the solution to the problem is quite simple: once the tournament finishes, pick the logN competitors that were beaten by the tournament winner and hold a mini-tournament to find which one is the best among them.

If we imagine that better players correspond with smaller numbers, the algorithm now goes like this. Hold the tournament to find the smallest number (this requires N-1 comparisons). During this step, for each number construct the list of numbers it was smaller than. Finally, pick the list of numbers associated with the smallest number and find their minimum in logN-1 steps. This algorithm requires N+logN-2 comparisons to complete, but unfortunately it requires additional space proportional to N (each element except the winner will ultimately be added to someone's list); it also requires more time per step because of the relatively complex enlisting logic involved in each comparison.
When this optimized algorithm is applied to the example array, we get the following figure. The tournament held among the numbers promotes value 1 as the smallest number. That operation, performed on an array with nine numbers, requires exactly eight comparisons. While promoting the smallest number, this operation has also flagged four numbers that were removed from the competition by direct comparison with the future winner: 6, 2, 3 and 8, in that order. Another sequence of three comparisons is required to promote number 2 as the second-smallest number in the array. This totals 11 comparisons, while the naive algorithm requires 17 comparisons to come up with the same result. All in all, the algorithm that minimizes the number of comparisons looks to be good only for real tournaments, while number-crunching algorithms should keep to the simple logic explained above.

Implementation of the simple algorithm may look like this:

a – array containing n elements
min1 = a[0] – candidate for the smallest value
min2 = a[1] – candidate for the second smallest value
if min2 < min1
    min1 = a[1]
    min2 = a[0]
for i = 2 to n-1
    if a[i] < min1
        min2 = min1
        min1 = a[i]
    else if a[i] < min2
        min2 = a[i]

Here is the C# implementation of the TwoSmallestValues function, with surrounding testing code.
using System;

namespace TwoSmallestNumbers
{
    public class Program
    {
        static Random _rnd = new Random();

        static int[] Generate(int n)
        {
            int[] a = new int[n];
            for (int i = 0; i < n; i++)
                a[i] = _rnd.Next(100);
            return a;
        }

        static void TwoSmallestValues(int[] a, out int min1, out int min2)
        {
            min1 = a[0];
            min2 = a[1];
            if (min2 < min1)
            {
                min1 = a[1];
                min2 = a[0];
            }
            for (int i = 2; i < a.Length; i++)
                if (a[i] < min1)
                {
                    min2 = min1;
                    min1 = a[i];
                }
                else if (a[i] < min2)
                {
                    min2 = a[i];
                }
        }

        static void Main(string[] args)
        {
            while (true)
            {
                Console.Write("n=");
                int n = int.Parse(Console.ReadLine());
                if (n < 2)
                    break;
                int[] a = Generate(n);
                int min1, min2;
                TwoSmallestValues(a, out min1, out min2);
                Console.Write("{0,4} and {1,4} are smallest in:", min1, min2);
                for (int i = 0; i < a.Length; i++)
                    Console.Write("{0,4}", a[i]);
                Console.WriteLine();
                Console.WriteLine();
            }
            Console.Write("Press ENTER to continue... ");
            Console.ReadLine();
        }
    }
}

Here is the output produced by the test application:

n=10
16 and 41 are smallest in: 44 85 41 95 70 81 77 70 16 52
n=10
0 and 16 are smallest in: 87 27 89 16 55 62 34 18 63 0
n=10
9 and 17 are smallest in: 68 29 89 49 89 17 19 57 33 9
n=17
6 and 6 are smallest in: 38 94 58 65 9 15 95 91 6 53 62 19 37 6 28 77 79
n=5
5 and 10 are smallest in: 10 28 5 64 13
n=3
28 and 35 are smallest in: 35 58 28
n=2
51 and 59 are smallest in: 51 59
n=2
70 and 77 are smallest in: 77 70
n=0
Press ENTER to continue....
http://codinghelmet.com/exercises/two-smallest
NAME
io_cancel - cancel an outstanding asynchronous I/O operation

SYNOPSIS
#include <libaio.h>

int io_cancel(aio_context_t ctx_id, struct iocb *iocb, struct io_event *result);

Link with -laio.

DESCRIPTION
io_cancel() attempts to cancel an asynchronous I/O operation previously submitted with io_submit(2). ctx_id is the AIO context ID of the operation to be canceled. If the AIO context is found, the event will be canceled and then copied into the memory pointed to by result without being placed into the completion queue.

RETURN VALUE
On success, io_cancel() returns 0. For the failure return, see NOTES.

ERRORS

VERSIONS
The asynchronous I/O system calls first appeared in Linux 2.5, August 2002.

CONFORMING TO
io_cancel() is Linux-specific and should not be used in programs that are intended to be portable.

NOTES
Glibc does not provide a wrapper function for this system call. The wrapper provided in libaio for io_cancel() does not follow the usual C library conventions for indicating error: on error it returns a negated error number (the value that would otherwise be stored in errno).

COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
https://linux.fm4dd.com/en/man2/io_cancel.htm
The proper approach to designing a solution is one that meets business objectives and protects confidentiality, integrity, and availability against identified risks, with controls that are transparent to the user. The approach sounds simple enough; the challenge is defining what needs to be protected, what the risks and types of controls needed are, and how to implement them in a cost-effective way. There is no such thing as a security mechanism that will protect your secrets for eternity. If the security mechanism itself is strong, attacks can instead be mounted against the environment, against vulnerabilities in key management, or against a person's willingness to be helpful. Aside from secure design and coding practices, you need to consider policies, setting expectations, environmental control, and training. Developing a secure solution requires creating a layered security strategy with the support of policies, controls, and training. Writing secure code and knowing how the environment impacts security is important to designing secure software. Solutions extend beyond writing secure software; they involve designing a system that will likely interact with users and legacy systems to meet some business objective. This requires understanding the business objectives and mitigating any risks through policy and controls. To begin with, security needs to be considered from the onset of a design rather than as an afterthought. You need to know your risks, which comes with awareness and with knowing your adversaries and their capabilities. Mitigating risks will involve building controls and security mechanisms that affect policies, the environment, and coding. If there are any weaknesses in the implementation, security becomes ineffective. Security need not be complex, but failing to understand the impact of your choices will mean the difference between what is secure and what is just obscure.
We will take a detailed look at how to define security, and at how policies, expectations, environment, and coding decisions can impact our security stance. Through awareness and knowledge we begin designing secure solutions. Before digging into development and implementation, we need to have some understanding of the requirements for security mechanisms. This will ultimately have an impact on everything else that follows. In particular we need to document the business objectives; the sensitive data and its useful lifespan; performance constraints; security goals; and the risks. These questions will dictate the security stance and influence our design decisions. Let's take a look at each of these in more detail. Knowing the business objective is important because without a business objective there is no need to develop a solution. The business objectives will dictate the complexity of the system and the level of interaction with users and other legacy systems. Identifying sensitive data and its useful lifespan is one of the most important exercises in designing a secure solution. Useful lifespan can be measured by the data's economic value over time. Its value can be a hard dollar amount; an economic advantage to the company, such as a secret recipe or trade secret; or a potential liability, such as a patient's medical history in the wrong hands. Over time, the economic value diminishes until it becomes worthless. The lifespan, measured in time, can be as little as a few hours or as long as centuries. The lifespan of data will determine the type of security mechanisms required to protect it. Performance constraints can have a big impact on the confidentiality of data and the usability of the system. For real-time, high-volume data stream processing, performance may be a major criterion for the success of a solution. Other performance measurements cover reliability and availability. Performance constraints will be a major influence on the cost of the system. The performance of the system extends beyond the costs of development to also encompass hardware and infrastructure.
Security goals deal with the balance of data confidentiality, integrity, and availability. If the availability of the data is reduced, or if the security mechanisms are not transparent, workarounds end up being created by users at the expense of security. The demand for increased availability comes at the price of equipment redundancy. Risk analysis determines threat exposure to our assets, operations, and reputation. During risk analysis you would do a threat and risk assessment to discover the types of threats, their likelihood of occurrence, and their impact on assets, operations, and reputation. Part of risk analysis is knowing your adversary and their capabilities and resources. Areas that would need to be reviewed are the environment, software design and construction, operations, controls, and policies. There are three approaches to risk management: we can accept the risk as a normal part of the operation; mitigate the risk by reducing potential impacts; or transfer the risk to a third party by insuring against it. It is possible to have security goals, risks, performance constraints, and business objectives at odds with each other. In such cases you need to do a cost/benefit analysis and review your risk management strategy. Once a balance has been struck you can begin designing security mechanisms to mitigate risks. Even with a proper implementation of security mechanisms, data is only as secure as the weakest link. Other factors that have an effect on security are described below. Organizational policies give guidance on where responsibilities lie, and outline procedures and guidelines on how the organization should deal with information security. They are tools used to control the environment and management of information security. Policies should be individually targeted to all levels of personnel dealing with sensitive information, and all personnel should be aware of these policies, why they are in place, and how they affect their work.
Good policies are relevant and easy to understand, and clearly cover the procedures for protecting information and the responsibilities expected of employees. Policies should be revised to reflect changes in the environment. Changes should be integrated back into the employee manual and training curriculum. An example would be a policy on how to label sensitive information [4]. Education and setting the right expectations are very important within the context of security. All of us have certain expectations of the products and services we use. Not all of us have the same expectations; some are warranted and others need to be reevaluated. Employees and contractors need to be educated on policies and procedures and have the right expectations of what encryption offers, including where, when, and how it should be used, and what the risks and threats are. Key management policies are very important when implementing confidentiality through encryption [10]. Without proper key management, encryption could easily be ineffective at protecting secrets. Key management policies should describe procedures and guidelines on key labeling, generation, storage, distribution, and destruction. Although keys are not physical like the keys for your home and car, they are nonetheless used for specific roles and applications. Keys should be labeled, indicating when they were created, their purpose, target application, owner, access restrictions, and privileges. Keying material includes the encryption keys and initialization vectors that are used to support encryption. The generation of keying material should be done using true random numbers or, at minimum, a cryptographic random number generator. Both are explained in the random numbers section of this paper. The storage and distribution of keying material must be secure from tampering until it is destroyed. Distributing in electronic form must be done with key wrapping, that is, encrypting the keying material with a master key or using a public-key encryption scheme.
Once the keying material is no longer needed it must be destroyed. Environmental control is an important issue when it comes to security. The environment that the code runs under determines some of the vulnerabilities it is susceptible to. In most cases we will never have full control of the environment, so we risk attacks through reverse engineering, Trojan horses, key-press loggers [9], and not-quite-destroyed secrets; packet sniffers can be used to attack, if not bypass, security mechanisms altogether. When implementing security, we must assume that attackers have detailed knowledge of the algorithms used, as well as copies of the source code, components, frameworks, and libraries. With this assumption we must place the strength of confidentiality in the encryption key and the management of those keys, and not have it depend on the secrecy of code. This is perhaps one of the most important points I could make regarding security. Because of this lack of foresight, the DVD encryption scheme was cracked and unauthorized decryption software floats around the Internet [17]. Understanding the issues and challenges that face programmers is important to reducing the threat of someone coming along and exploiting a common vulnerability. Let us take a look at some coding pitfalls that we should avoid and at some design strategies that could be implemented to improve security. Awareness and good design and coding practices are vital to implementing secure software. All software solutions follow a life cycle known as the Software Development Life Cycle (SDLC). The stages of the SDLC are discussed below. Gathering requirements and business rules is not a vulnerability in and of itself; yet when there are problems with requirements, the developer usually shoulders the brunt of retrofitting changes as the product nears the final stages of development. The cost of changes in the development process grows exponentially as the product nears the delivery date.
The risks also increase if changes are not handled through change management processes. During this stage, assumptions should be documented. Depending on whom you ask, assumptions on any given topic can vary. Documenting assumptions brings everyone onto the same page. Testing is an important phase in the SDLC that is used for quality assurance of accuracy, reliability, and performance. Defects will always exist in large systems; this is partly due to the fact that as the quality of the system improves, the cost of testing increases. To measure quality we need to collect metrics on the number and types of defects found. Developers should not be the ones testing their own software, both because they tend to test for scenarios they have already considered and because of a potential conflict of interest when recording their software defects for quality control metrics. Before any testing can begin, there needs to be a document that defines the testing scope; the expected production environment and its state; the testing environment and its state, if it differs from the production environment; the expected system behavior and specification; and the bug fix turnaround time. It is important that all assumptions are recorded, since they will also form the basis for test cases [13]. There are various types of testing. Unit testing tests the functionality of a procedure and how well it conforms to the design specification. Integration testing takes the various procedures together to test how well they integrate with each other. User acceptance testing looks at the final product to see how well it performs in the user's environment. The inventory of test cases should include stress testing, in which computing resources are gradually reduced until the procedure breaks. Stress testing should also be done over an extended period of time to measure resource usage and any memory leaks. Input parameter bounds testing checks for buffer overflows and input that falls outside specifications.
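The bounds testing just described can be automated as a small, self-checking harness. A minimal sketch in Python (the parse_account function and its 1-to-6-digit specification are invented for the illustration):

```python
def parse_account(text):
    """Accept a 1-6 digit account number per the (invented) spec; reject all else."""
    if not text.isdigit() or not (1 <= len(text) <= 6):
        raise ValueError("input outside specification")
    return int(text)

# Test cases drawn from the specification, deliberately including the bounds
# and inputs that fall outside them (empty, too long, non-numeric, injection-like).
assert parse_account("1") == 1            # lower bound on length
assert parse_account("999999") == 999999  # upper bound on length
for bad in ["", "1234567", "12a4", "-5", "3455; drop table"]:
    try:
        parse_account(bad)
        raise AssertionError("accepted out-of-spec input: " + bad)
    except ValueError:
        pass  # rejection is the expected behavior

print("all bounds tests passed")
```

Exercising the boundary values explicitly is what distinguishes bounds testing from ordinary happy-path testing.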
Without going too deep into the SDLC, it is important to formalize and follow some design, development, and testing methodology. The benefits of formalizing a development methodology are that there is less misinterpretation, the scope of work is documented, and the design is better, which results in fewer revisions. From a coding perspective there are a number of vulnerabilities that seem to recur and weaken the security of a solution. These vulnerabilities include the following. Security through obscurity and secrecy is a bad approach to information security. The idea is that if security mechanisms are kept secret, no one will know of any vulnerability. The idea is interesting, but it is a little naïve to think that attackers do not have any skills in reverse engineering code to analyze security mechanisms. A buffer overflow exploits vulnerabilities in unchecked buffer boundaries where buffers use static allocation. The threat is writing past the buffer's boundaries into sections of code or data. Usually the overflow contains malicious code that gets injected into a process's address space and eventually executes. A format string attack is similar to a buffer overflow but exploits vulnerabilities in formatted I/O constructs, such as the printf family of commands. Buffer overflow and format string threats have been around for many years; validating user input is the only way of dealing with these types of threats. In database environments, locking is used to protect the atomicity, consistency, isolation, and durability (ACID) of changes to a database. The ACID properties protect the integrity of the data and resources during updates and querying of resources. Locking mechanisms are used to implement the ACID properties of a transaction. A transaction contains one or more data manipulation language (DML) set commands that manipulate records in tables. A transactional lock protects all the set commands in the transaction, or all the changes are rolled back.
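The all-or-nothing behavior of a transaction can be demonstrated with a small sketch. Python's standard-library sqlite3 module is used here purely as an illustration (the tables, columns, and order/payment scenario are invented for the example):

```python
import sqlite3

# In-memory database with two tables whose integrity must move together:
# every recorded payment must correspond to a recorded order.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (order_id INTEGER, amount REAL)")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, item TEXT)")

def place_order(order_id, item, amount):
    """Record payment and order as one transaction; roll back both on failure."""
    try:
        with con:  # opens a transaction; commits on success, rolls back on exception
            con.execute("INSERT INTO payments VALUES (?, ?)", (order_id, amount))
            con.execute("INSERT INTO orders VALUES (?, ?)", (order_id, item))
    except sqlite3.IntegrityError:
        return False
    return True

place_order(1, "widget", 9.99)   # succeeds, both rows committed
place_order(1, "gadget", 5.00)   # duplicate order_id: the whole transaction rolls back

payments = con.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
orders = con.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(payments, orders)  # 1 1 — no orphaned payment survived the failed transaction
```

Without the transaction, the failed second order would have left a payment row behind with no matching order, which is exactly the integrity break described above.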
Various lock types are escalated on resources until the transaction is complete. Some types of locks are not compatible with each other. This ensures that transactions complete in their entirety by blocking other transactions that might conflict with the transaction's integrity. The risk is that operations occurring outside a transaction lock can break the database's integrity. Take the example of making a purchase at an online store. When an order is placed, payment is accepted, inventory is changed, and shipping is arranged. Payment, inventory change, and shipping together form a transaction, with each task a DML set command. If the inventory of an item shows one unit in stock, and multiple customers attempt to initiate a transaction to purchase that unit at the same time, one of the transactions will fail. The set command that fails will cause the entire transaction to roll back any changes made during that transaction. Without transaction locks, the payment table would contain more payments received than orders recorded, and the integrity of the database would be shot. A deadlock is a bad locking situation that can occur when locks are not properly placed in a transaction. Deadlocks occur when a transaction has placed locks on records but is blocked from completing by locks held by another transaction, which is in turn blocked by locks held by the first transaction. This situation can be avoided by consistently placing locks on resources in the same order. In a deadlock situation, the database system usually decides on a victim, allowing the other transaction to complete. The victim transaction rolls back any changes it started making during the transaction. Locks need to be considered as part of an overall security strategy because they are a mechanism for protecting the integrity of data. Avoiding deadlock situations should be in the back of your mind when developing.
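Consistent lock ordering, as recommended above, can be sketched with two threads that each need the same pair of locks. A minimal Python illustration (the lock and thread names are invented):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transfer(name):
    # Both threads acquire the locks in the same fixed order (a before b),
    # so neither can hold one lock while waiting on the other thread's lock.
    # Acquiring in opposite orders is the classic recipe for deadlock.
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=transfer, args=("debit",))
t2 = threading.Thread(target=transfer, args=("credit",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['credit', 'debit'] — both completed, no deadlock
```

If one thread took lock_a then lock_b while the other took lock_b then lock_a, each could end up holding one lock and waiting forever for the other, which is the circular wait described above.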
How a database server handles deadlocks should also be considered when deciding on a database system. Deadlocks affect availability, and database systems that do not handle deadlocks could be vulnerable to a denial of service by having vital records locked up, blocking legitimate transactions from completing. Race conditions take on many forms, but all can be characterized by scheduling dependencies between multiple threads that are not properly synchronized, causing an undesirable timing of events. As an example of how a race condition could have a devastating outcome on security, consider a multithreaded system with one thread handling security and another thread handling storage. If the thread handling storage works faster than the thread handling security, you could be faced with a scenario that stores sensitive information unsecured. Race conditions usually do not manifest themselves until it is too late. There are a number of programming constructs that can be used to control the synchronization of threads: semaphores, mutexes, and critical sections. In the .NET world there are thread management classes that can be used to handle thread synchronization and scheduling. Trusting user and parameter input can lead to disasters. Even if code is protected against buffer overflows, vulnerabilities such as semantic or SQL injection can still pose risks to confidentiality and integrity. Semantic injection is well-formed user input that, when processed, produces unexpected results. SQL injection is similar to semantic injection in that well-formed SQL syntax is passed on to a database server for processing. Here is an example of an SQL injection. A user is prompted for a branch and account number to view an account summary. The specification of the procedure expects an account number and a branch number. The procedure takes the two arguments and constructs a SQL command to execute against an SQL Server.
The line that is used to construct the SQL command is as follows:

SQLCmd = "select * from tblaccounts where branch = " + branch + " and account = " + account

If branch = 277 and account = 3455, then the value of SQLCmd will be:

select * from tblaccounts where branch = 277 and account = 3455

The SQL command is a well-formed select command. However, the procedure has a flaw in that there is a potential for the user to enter more than an account number. What if branch = 277 and account = 3455; select * from tblaccounts? Then the value of SQLCmd will be:

select * from tblaccounts where branch = 277 and account = 3455; select * from tblaccounts

The above SQL select commands are also well formed. The injection of ;select * from tblaccounts causes the procedure to return all the accounts in the database table. Exceptions are events which disrupt the normal flow of code. Exception handling is used for trapping exceptions that would otherwise crash the program or, worse, let it continue executing in an unstable state and produce unexpected results. Although unhandled exceptions are bad, exception handling is essential for keeping control of the code flowing through the program. Depending on the severity of the exception, you may want a mechanism to record the exception and perform a graceful shutdown. Exceptions can be avoided by testing for conditions that could lead to an exception. For example, before opening a file for reading, the presence of the file should be tested first. In Visual C++ .NET, exception handling is done through the try, catch, and finally code blocks. When an exception occurs inside a try block, such as a read command to a missing file, an exception is thrown and caught by the nearest catch block designed to catch that specific exception. The flow of code then continues with the finally block, regardless of an exception, to do some cleanup before continuing with the rest of the code.
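Returning to the SQL injection example above: the standard defense is a parameterized query, in which the branch and account values are bound as data rather than spliced into the command text. A hedged sketch using Python's standard-library sqlite3 as a stand-in for the SQL Server code (the table and column names follow the article's example; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblaccounts (branch INTEGER, account INTEGER)")
con.executemany("INSERT INTO tblaccounts VALUES (?, ?)",
                [(277, 3455), (277, 9999), (301, 1111)])

def account_summary(branch, account):
    # The ? placeholders bind the values as data, so an input like
    # "3455; select * from tblaccounts" is treated as a literal value,
    # never as additional SQL to execute.
    return con.execute(
        "SELECT * FROM tblaccounts WHERE branch = ? AND account = ?",
        (branch, account)).fetchall()

print(account_summary(277, 3455))
# the injection string from the article simply matches no rows:
print(account_summary(277, "3455; select * from tblaccounts"))
```

The same principle applies to SQL Server via parameterized commands or stored procedures: the query text and the user-supplied data travel to the server separately, so the attacker's semicolon never reaches the SQL parser as syntax.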
If there are no catch blocks designed to catch the exception, the program risks crashing. Complexity is a big threat to security because it can easily mask logic errors and can be used to hide logic bombs. Complex procedures make it more difficult to spot vulnerabilities. The definition and measurement of complexity can differ from programmer to programmer. I would measure complexity by the number of logic conditions and the nesting depth in a procedure. Sometimes it is impossible to avoid writing complex procedures, but by implementing coding and documentation conventions, and hiding or segregating complexity from other code, complex procedures become less of a threat to security. Software that processes user input may need various levels of control to match the separation of duties for users on the system. The privileges that management requires will be greater than those for users entering data. Management may request a higher granularity of control based on the levels of management or the functional roles users play on the system. Systems with a low level of granularity may lack the controls to implement proper segregation of duties and therefore fail to prevent abuse or misuse of the system. An exclusive reliance on software controls is bad because you are placing all your security measures within the software. The risk is that if a software defect crashes or causes the software to become unstable, the confidentiality and integrity of the system or data could be lost. Software controls should be balanced with system privileges and native control mechanisms. Privileges are the authorities that a system grants to users to complete a task. Generally, code should inherit the privileges of the user on the system. Excessive privileges are privileges that code offers to a user that they would not otherwise have on the system.
The risk of excess privileges is that if the user is able to escape the controls of the application, data would be vulnerable to abuse, misuse, or corruption. If excessive privileges are required, the privileged code should be kept to a minimum, with controls in place to track usage and misuse. Compiler intricacies are important to know because the code you develop may be optimized to an extent that it is not implemented the way it was intended in the executable. Take the following three lines of pseudocode: compiled with the Visual C++ .NET compiler with optimization turned on, the third line, which scrubs the secret from memory, would be omitted from the executable. Since MemoryBuffer is no longer read from memory after the second line, the compiler ignores the memory scrubbing on the third line. The executable will resemble the code on the first and second lines only [11].

1: MemoryBuffer = Validate (Secret_User_Input)
2: SecretMemoryBuffer = Encrypt (MemoryBuffer)
3: MemoryBuffer = RandomData

A workaround would be to turn off optimization or to read the MemoryBuffer variable after scrubbing it. The following couple of lines demonstrate how to read the MemoryBuffer:

if MemoryBuffer = SecretMemoryBuffer then
    Remark do nothing

The above two lines are meaningless and may even be optimized out with future compiler patches or revisions. Compiler intricacies can vary with compiler build types, compiler patches, and from compiler to compiler. This underlines the importance of keeping compiler patches up to date. Build types such as the debug build contain debug information that is used to help the developer trace through code, and contain source code comments. Usually debug builds have optimization turned off, while release builds remove debug info and source code comments from the executable with optimization turned on. Because the compiler treats code differently across build types, it is important to test the build you plan on using in production.
This underlines the importance of testing and peer review, as well as possibly considering an open design. In addition to avoiding the vulnerabilities we have just talked about, we should build the following design strategies into our software. Designing for fail-safe operation is defensive programming. It involves validating user input and parameters, setting defaults, and handling exceptions to protect the confidentiality, integrity, and availability of data and resources when a failure occurs or when garbage is processed. Generally this means that a safe state should always be maintained through the course of code execution. Variables should be initialized with safe defaults; input/parameters must be validated; error handling should be in place to trap potential exceptions; and return parameters must conform to the design specification. In the event of a failure, resource locks must be released and an error code returned to the calling routine. Input and parameter validation verifies that user input and parameters conform to the design specification for the procedure. Procedures with parameters that have little to no checking are tightly coupled with the calling routine. Tightly coupled procedures are dependent on the calling routine to provide proper and well-formed parameters. Loosely coupled procedures do not assume the quality of parameters and will verify that they conform to specification. Any variances from specification should be corrected, or the procedure should return an error code or throw an exception. The risk of not validating input data or verifying parameters is having the procedure run outside of specification. Secrets in memory or on storage need to be protected. Encrypting secrets, or wiping memory after you are done with a secret, protects it from ending up in a crash dump file or in a page file. The process of encrypting secrets should be done using a proven cryptographic primitive.
Ideally, you should implement a double buffer, one for the plaintext secret and the other for the ciphertext. This protects against any possible race conditions that could use the secret before it is fully secured. Any secrets in memory should be scrubbed, and secrets persisting on disk should be wiped before deletion. Scrubbing memory requires changing its value to a safe default before you are done with the variable. The code required to scrub the memory will depend on the data type. The code snippet below demonstrates scrubbing a secret from an integer data type.

m_SecretRounds = 19630213;
ProcessSecret (m_SecretRounds);
// Scrub secret from memory using a safe default
m_SecretRounds = 0;

The code sample below demonstrates an example of wiping the contents of a file before deleting it from the filesystem. The name of the file would still be recoverable, but the contents would not be. The code is written for the .NET environment using Visual C++ .NET.

bool WipeFile (String * inFilePath, int nWipes)
{
    bool status = false;
    FileStream * sw;
    Byte Patterns __gc[];
    __int64 fileLength = 0;
    int patIndex = 0;

    if (nWipes < 1)
        nWipes = 1;
    else if (nWipes > 200)
        nWipes = 200;

    try
    {
        // Open the file with exclusive access
        sw = new FileStream (inFilePath, FileMode::Open,
                             FileAccess::Write, FileShare::None);

        Patterns = new Byte[nWipes];

        // Create a pattern index
        for (patIndex = 0; patIndex < Patterns->Length; patIndex++)
            Patterns[patIndex] = (Byte)patIndex;

        fileLength = sw->Length;

        // Overwrite the whole file once per pattern
        for (patIndex = 0; patIndex < Patterns->Length; patIndex++)
        {
            sw->Seek (0, SeekOrigin::Begin);
            for (__int64 i = 0; i < fileLength; i++)
                sw->Write (Patterns, patIndex, 1);
        }

        sw->Flush ();
        sw->Close ();

        // Safe to remove the file now
        File::Delete (inFilePath);
        status = true;
    }
    catch (Exception * e)
    {
        Console::WriteLine (e->ToString ());
    }

    return status;
}

Security and control measures should be placed closest to sensitive data, either by implementing native access control mechanisms provided by the operating system or
database server. The further the control measures are from the data, the more opportunity there is to circumvent them. An exclusive reliance on software controls can potentially be bypassed altogether by accessing the sensitive data using another program without controls. Restricting the accessibility scope of an object interface offers better control over sensitive methods, properties, and attributes of the object. Control mechanisms within objects, unless well defined, could potentially be bypassed or escaped by using object inheritance, thus avoiding any authorization or authentication checks [12]. Control mechanisms provided by the operating system or database server have little to no chance of being escaped. Privileges to the data should be the minimal set required to perform a task. The risk with privileged code is that it may be vulnerable to luring attacks, in which a user or another procedure with fewer privileges makes use of a privileged procedure to gain access to methods or attributes they would not otherwise have. Code that must be privileged to perform some task must be kept as short as possible. Reducing the complexity and size of privileged code makes it easier to audit and track. An example would be a user who normally has read-only access to a database using a system with full access to make modifications. Or, a developer could implement a component that could be used to gain greater access to data. Loose coupling and high cohesion describe how procedures relate to each other. Loosely coupled procedures are largely independent of other procedures. High cohesion describes procedures that perform a single function. An example of a function with high cohesion would be CalculatePi(); poor cohesion would be a function CalculatePiAndSin(). A procedure that is loosely coupled and has high cohesion is more reliable and is usually easier to maintain and read.
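The CalculatePi()/CalculatePiAndSin() contrast can be made concrete with a minimal Python sketch (math.pi and math.sin stand in for real computations; the function names mirror the text):

```python
import math

# High cohesion: each function does exactly one thing, so each can be
# tested, audited, and reused independently.
def calculate_pi():
    return math.pi

def calculate_sin(x):
    return math.sin(x)

# Poor cohesion: one procedure with two unrelated jobs forces every caller
# to depend on both and couples their failure modes together.
def calculate_pi_and_sin(x):
    return math.pi, math.sin(x)

pi = calculate_pi()
s = calculate_sin(0.0)
print(pi, s)  # 3.141592653589793 0.0
```

A caller that only needs the sine still drags in the pi computation when using the low-cohesion version, which is the maintenance and reliability cost the text describes.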
Configuration management is a well-defined process to control the integrity of changes to a system during its lifetime. It allows all the stakeholders of the system and solution to know of upcoming proposed changes and to evaluate the impact on their environment, priorities, and processes. From a business perspective, change management controls development and integration costs, and since changes are known by all the stakeholders, changes are better integrated into the production environment. When changes to code are required, there is a potential that vulnerabilities will be introduced during development or rollout. Usually programmers are under pressure to complete changes as quickly as possible and have little time to analyze how a change may affect security. Change requirements are unavoidable, but their effects can be limited by anticipating them. Change requirements can be brought about by changes to the business climate, environment, or policies, or by new opportunities. During the initial design of a system, requirements and business rules that are susceptible to change should be identified so that they can be segregated from the rest of the code by placing the unstable procedures into a module or assembly. Layers of abstraction should be considered to separate the stable code from the unstable code. For example, if the database environment is susceptible to change over the next year, database access procedures should be placed in a separate assembly and a layer of abstraction created to separate code specific to the database environment from the business rules. The list of programming pitfalls grows larger as new vulnerabilities are discovered. Staying ahead requires staying on top of vulnerabilities and changes. You can never be fully assured that your code is secure from implementation flaws. Peer review helps uncover any glaring logic flaws and weaknesses.
Open design offers the same benefits through public scrutiny, though there is debate over whether open design promotes security through public review or makes it easier for attackers to find vulnerabilities to exploit.

.NET is an initiative from Microsoft that changes the software development paradigm [6]. The new paradigm, similar to Java's, focuses on security by creating an abstraction layer that sits between system resources and the program code. The difference between .NET and Java is that .NET also creates a language abstraction: you can mix various languages that conform to the .NET language specification within an assembly. The abstraction layer, known as the common language runtime (CLR), provides a controlled environment and is responsible for security enforcement.

The CLR treats code as having varying degrees of trust in the system, dictated by security policies and based on evidence that the CLR collects to determine whether the code is trustworthy before executing it. Some of the technology behind the CLR is based on Microsoft's Authenticode technology, which is designed to identify the publisher of the code and verify that no one has tampered with it.

Code developed strictly for the CLR using a .NET language is known as managed code. Both managed and traditional code form an assembly, which can be an EXE or a DLL. Regardless of the file extension, code developed for the .NET environment is compiled to Microsoft Intermediate Language (MSIL) by the compiler. The MSIL then needs to be compiled to native code, which the CLR does on the fly at runtime while also performing security checks. .NET applications run in what amounts to a silo called an application domain. Application domains are where .NET applications are initiated and controlled from; it is at this point that code is loaded from an assembly and begins execution.
The types of hosts that can create an application domain include the IE browser, server hosts such as ASP.NET, and the command shell. The CLR does a just-in-time (JIT) compile of the managed code sections in the MSIL, creating native code that is executed in the domain. Unmanaged code runs outside of the CLR and can therefore still pose a security risk even when it is part of an assembly.

During the JIT compile of managed code, code verification occurs to ensure type safety; bounds checking, which avoids buffer overflows; proper initialization of objects; and protection against stack frame corruption, to avoid attacks that transfer execution to an arbitrary memory location. As you can see, the verification process adds a lot of security without having to program the constraints in, and these constraints prevent most common types of attacks on code. Unfortunately, when it comes to unmanaged code, the CLR does not do any security checks or policy enforcement; a recent attack on ASP.NET [8] using a buffer overflow in an unmanaged section of code can attest to that.

Role-based security allows a developer to build constraints into the application that modify the flow of code based on the user's role on the system or in the enterprise. The roles can be taken from group membership in the domain or Active Directory. Code access security prevents luring attacks, in which code with a lower set of privileges calls on code with a higher set of privileges to invoke methods that the original code otherwise could not. When methods are invoked or .NET objects constructed, the CLR does a stack walk to see whether every caller along the call chain has the security privilege to instantiate the object or call its methods. Code access security can also restrict access to unmanaged code within the assembly. Security policy management defines the set of rules that are applied to code groups.
The CLR analyses the evidence in the assembly to determine which code group the assembly falls under [7], then applies that group's rules to the assembly before executing the code. Examples of evidence that the CLR looks for in an assembly include publisher details, the digital signature, the location of the installation directory, and the type of host that created the application domain.

Isolated storage is a mechanism that allows .NET applications to use a Framework-controlled file storage system isolated from the operating system's file system. [16] Isolated storage protects the file structure of the operating system without blocking file IO from applications that would not otherwise have the permissions or privileges on the file system. Although the increased security built into the .NET environment comes with a slight performance penalty, the performance of managed code is very good. However, even though a lot of work has been done to secure the environment, a poor security design can still leak sensitive data.

We will take a look at implementing data confidentiality and integrity using the .NET Framework's Cryptographic Services namespace, with code samples written in Managed Extensions for C++. The language semantics differ from other .NET languages, but the interaction of the classes and method calls within the .NET Framework is the same.

The components of a symmetric cipher are the algorithm, the initialization vector, the key, and the stream. A discussion of the encryption key cannot be had without also talking about the effective key space, entropy, modes of operation, hashing, and the random number generator. The .NET Framework base class library (BCL) includes four symmetric encryption algorithms. Each of the algorithms has had public peer review and has had research papers published on the Internet. The security of each of the algorithms lies in the size and quality of the key.
The key is nothing more than a very large binary number to the encryption algorithm. The strength of an encryption scheme is based on the algorithm and the effective key size. Each of the symmetric algorithms has different performance and encryption-strength properties that are best suited to specific scenarios. To begin using the cryptographic services found in the .NET Framework we need to reference the namespace:

#using <system.dll>
using namespace System::Security::Cryptography ;

You may ask which of these algorithms is best. The answer is that it all depends on what you define as best: your business objectives, performance requirements, the value of your data over time, and the sensitivity of the information. Although each of these algorithms has had expert peer review and has been extensively tested over time, there is no guarantee of everlasting security. The selection of a security mechanism will depend on your adversaries and their perceived technical abilities and financial resources. Let's look at each of the algorithms and when and how to use them.

RC2 was developed in 1987 by Ron Rivest of RSA Security. RC2 is a block cipher with a variable key and block length, and is used in many commercial products. To create an RC2 encryption object from the class:

RC2CryptoServiceProvider * myRC2 = new RC2CryptoServiceProvider();

DES was developed by IBM in 1974 and finally adopted by the National Institute of Standards and Technology in 1976. [1] DES uses a 64-bit key, but since eight of those bits are used for parity, the effective key strength is 56 bits. A DES key is known to be brute-forceable in a few days. The ideal application for this algorithm is one that needs to be backwards compatible with DES, or one in which the information becomes worthless within a day. An example of data with such a short lifespan would be Internet/intranet session tokens. Here is an example of creating a DES object from the class:
DESCryptoServiceProvider * myDES = new DESCryptoServiceProvider();

The 3DES algorithm was also developed by IBM. [2] 3DES is essentially three iterations of DES, each iteration with a different key. The sequence is:

E ( D ( E (m) ) )

That is: encrypt - decrypt - encrypt. With 3DES you can use either two keys or three keys; the .NET Framework uses the three-key version. In a two-key version you would have:

Ek1 (Dk2 (Ek1 (m) ) )

In a three-key version all keys are different:

Ek1 (Dk2 (Ek3 (m) ) )

The key space of 3DES is 168 bits (56 bits x 3), but because of the theoretical "meet in the middle" attack the effective key size is 112 bits (56 bits x 2). 3DES is slower than DES since it has to go through the extra iterations. To create a 3DES object from the class:

TripleDESCryptoServiceProvider * myTDes = new TripleDESCryptoServiceProvider();

The Rijndael algorithm was developed by Joan Daemen and Vincent Rijmen as a candidate for the U.S. Advanced Encryption Standard (AES), for which it was selected on October 2, 2000 [3]. It uses key sizes of 128, 192, or 256 bits and block lengths of 128, 192, or 256 bits, and you can use any combination of key size and block length. The design goals of the algorithm were that it must resist all known attacks and offer design simplicity, code compactness, and speed on a wide spectrum of platforms. To create a RijndaelManaged object from the class:

RijndaelManaged * Rm = new RijndaelManaged();

Of the block ciphers listed, Rijndael is the fastest by far, followed by RC2, DES, and 3DES. Rijndael also has the largest key space of the algorithms. To put the size of the key space into perspective, if there were a machine fast enough to brute force a DES key in one second, it would take roughly 149 trillion years to brute force a 128-bit key for the Rijndael algorithm.

The symmetric encryption primitives in .NET are all block oriented but use data streams for the transformation.
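The key-size claims above can be checked with a little arithmetic, assuming (as the article does) a machine that brute-forces a 56-bit DES key in one second and that each extra key bit doubles the work:

```python
# 3DES nominal vs. effective key size
nominal_3des = 56 * 3     # 168 bits
effective_3des = 56 * 2   # 112 bits after the meet-in-the-middle attack

# Scaling a 1-second 56-bit search up to a 128-bit key:
seconds = 2 ** (128 - 56)
years = seconds / (60 * 60 * 24 * 365.25)

print(nominal_3des, effective_3des)   # 168 112
print(round(years / 1e12))            # ~150 trillion years
```

The result is about 1.5e14 years; the article's "149 trillion" figure comes out of the same calculation with slightly different rounding.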
The transformations pass through a CryptoStream that links a regular data stream object to the transformation. A block is taken from the data stream and transformed based on the properties of the algorithm. This approach is different from a traditional stream cipher, which transforms bits as they travel through the stream. From the .NET Framework perspective, a stream represents an abstract sequence of bytes that flows through a device. Some examples of derived stream objects that could be used are file IO, memory, network IO, and buffered IO streams.

The CryptoStream class constructor is as follows:

CryptoStream (Stream*, ICryptoTransform*, CryptoStreamMode );

The ICryptoTransform interface is implemented by the derived symmetric decryptor or encryptor object. Use the encryption algorithm's CreateDecryptor() method to create a decryptor object for the decryption transformation, and the CreateEncryptor() method for the encryption transformation.

The CryptoStreamMode sets the direction of the stream and can be read or write, depending on whether you are encrypting or decrypting. If you are encrypting, set the CryptoStreamMode to write mode; otherwise set it to read mode for decrypting.

An example of creating a CryptoStream for the RC2 encryption algorithm to encrypt a memory stream is as follows:

1: MemoryStream * myMemoryStream = new MemoryStream;
2: RC2CryptoServiceProvider * myRC2 = new RC2CryptoServiceProvider();
3: ICryptoTransform * myEncryptor = myRC2->CreateEncryptor ();
4: // Create an encryption stream that will use the memory stream for
5: // a storage area.
6: CryptoStream * myEncryptStream = new CryptoStream(myMemoryStream, myEncryptor, CryptoStreamMode::Write );

If we need to decrypt a memory stream we would change lines 3 and 6 to read:

3: ICryptoTransform * myDecryptor = myRC2->CreateDecryptor ();
6: CryptoStream * myDecryptStream = new CryptoStream(myEncryptedMemoryStream, myDecryptor, CryptoStreamMode::Read );

Since block ciphers encrypt a block at a time, two identical plaintext blocks would result in identical ciphertext blocks. This pattern could be used to recover the keys. To avoid this, the previous ciphertext block is chained back into the encryption process, modifying the next cipher block; this continues until the entire plaintext is encrypted. There are different chaining modes that can be used, known by the acronyms CBC, ECB, CFB, CTS, and OFB.

myRC2->Mode = CipherMode::CFB; //Sets mode to CFB

Cipher block chaining (CBC) is the default mode for the encryption algorithms included with the .NET Framework, and it is also one of the most secure. It takes the previous ciphertext block and XORs it with the current plaintext block before encryption to produce the next ciphertext block. Initially, the initialization vector is XORed with the first plaintext block before it is encrypted. If the plaintext always begins the same way ("Dear Sir:") and the initialization vector never changes, the beginning of the ciphertext will also always be the same; this is why the IV should change from session to session.

Electronic code book (ECB) mode encrypts each block independently of the previous block. This creates a one-to-one relationship between plaintext blocks and ciphertext blocks: if there are duplicate blocks in the plaintext, there will be duplicate ciphertext blocks. The independence from the previous block makes this the highest-performance mode, but also the weakest mode of operation in terms of data security. Note that duplicate blocks can only occur when the plaintext is larger than the block size.
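The pattern-leakage argument can be demonstrated with a toy, hash-based stand-in for a block cipher. To be clear about the assumptions: the "cipher" below is not invertible and not secure; it is only a deterministic block function, which is all that is needed to show how ECB preserves repetition while CBC chaining (and a fresh IV) destroys it.

```python
import hashlib

BLOCK = 8  # toy 64-bit block size

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher: deterministic per (key, block).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    return b"".join(toy_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        c = toy_block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(c)
        prev = c
    return b"".join(out)

key, iv = b"k" * BLOCK, b"i" * BLOCK
pt = b"DearSir:" * 2    # two identical plaintext blocks

ecb = ecb_encrypt(key, pt)
cbc = cbc_encrypt(key, iv, pt)
cbc2 = cbc_encrypt(key, b"j" * BLOCK, pt)   # same message, fresh IV

print(ecb[:BLOCK] == ecb[BLOCK:])    # True: ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])    # False: chaining hides it
print(cbc[:BLOCK] == cbc2[:BLOCK])   # False: a new IV changes everything
```

The third comparison is exactly the article's "Dear Sir:" point: with a fixed IV the first ciphertext block would repeat across sessions, so the IV must vary.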
Cipher feedback (CFB) is similar to CBC except that it begins encryption with a single byte rather than an entire block, which makes this mode ideal for streaming data. If there is an error in the encryption of a byte, the remainder of that plaintext block will be corrupted.

Ciphertext stealing (CTS) produces ciphertext that is the same size as the plaintext when the plaintext is larger than the block size. If the plaintext is smaller than the block size, padding is added to the message before it is encrypted. CTS works like CBC until the second-to-last block, at which point the last block and the second-to-last block are XORed with each other to produce the final encrypted block. The CTS mode is not supported by any of the symmetric encryption algorithms currently shipped with the .NET Framework BCL; it is included to support new symmetric algorithms that might derive from the SymmetricAlgorithm class at a later time.

Output feedback (OFB) is similar to CFB except that if an error occurs in the encryption, the remainder of the ciphertext will be corrupted.

The legal key sizes are the key spaces that the algorithm supports; keys that do not match any of the legal key sizes will throw an exception. The block size is the number of bits of data that is encrypted at a time, and the legal block sizes list your block-size options. If the amount being encrypted is less than the block size, the plaintext is padded. The default block size is set to the largest legal block size.

The initialization vector (IV) is a random sequence of bytes applied to the plaintext before the initial block is encrypted. The IV plays a big role in reducing the chances of successfully recovering the key using a chosen-plaintext attack. The IV does not need to be secret, but it should vary from session to session. Although I believe that security should depend on the quality of the key alone, the U.S.
Federal Government states that, with regard to government usage, the IV should also be encrypted if the data encryption uses the CBC mode and the IV must be transmitted over an unsecured channel. [5] I can't argue against the claim that encrypting the IV makes the data encryption more secure; however, the added complexity of encrypting an IV before transmitting it may not be worth the effort when the data is already encrypted and the key is secure. The same IV is required to recover the plaintext from the ciphertext.

To produce the random sequence for an IV you call the GenerateIV() method of the encryption object. The IV property will then contain the generated value, which we need to store for later use when decrypting the message.

RijndaelManaged * Rm = new RijndaelManaged();
Rm->GenerateIV();

If the IV is retrieved from the encryption stream or some other communication channel, you set the IV property directly with the IV value:

Rm->IV = myIV; // myIV is a Byte array

The effective key space is one of the determining factors of encryption strength. The difference between effective key space and key space is that the effective key space represents the maximum work effort needed to brute-force the keys. The key space of DES is 64 bits; however, since eight bits are used for parity, the maximum work effort to recover the key is based on an effective key space of 56 bits. Regardless of effective key space, if the method used to generate keys is predictable (that is, it has little entropy), recovering the keys could be relatively easy using statistical analysis.

Entropy describes the amount of disorder in a sequence or system: higher entropy means greater disorder. A 128-bit key may be equivalent to only 30 bits of entropy if we base the key on a 20-character password or phrase entered by the user, because entropy measures the amount of true randomness in the bits.
In this case the effective key size is 30 bits even though the key space is 128 bits. The problem with relying on a user to generate a good passphrase is that the more difficult the password or phrase is, the harder it is to remember; the passwords we can remember are too small to be considered safe. In a standard English passphrase, with each letter taking a full byte, there are about 1.3 bits of entropy per byte. This follows from the fact that in standard English some letters (e, r, s, t) occur statistically more often than others. [18] A passphrase would have to be 85 characters long to have 110 bits of entropy, and you couldn't use those 110 bits directly without first distilling them to match the encryption key requirements. To produce enough entropy for a hash function such as MD5 to be considered random, you would need a passphrase at least 99 characters long:

(8 bits to a byte / 1.3 entropy bits to a byte) * (128 bits in MD5 / 8 bits to a byte) = 98.5 bytes

If each letter had a statistically equal chance of occurring in a password (any letter occurs as often as any other), the entropy per letter would be 4.7 bits per byte. The increased entropy would require as little as 27 random letters, hashed with MD5, to produce a good random key:

log (26) / log (2) = 4.7 bits of entropy with random letter occurrences
(8 bits to a byte / 4.7 entropy bits to a byte) * (128 bits in MD5 / 8 bits to a byte) = 27 bytes

One of the problems with hashing a password or passphrase into a key is that we have to assume that whoever is attacking your software also has a copy of the source code and knows the hashing algorithm used to generate the key. With this knowledge an attacker can easily download a very large dictionary of common English phrases and hash each entry to try to recover your key. The selection of a hashing function for distilling entropy for encryption depends on the encryption algorithm's key space requirement.
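The arithmetic above, and the distilling step itself, can be checked directly in Python. The passphrase is the one used in the article's later RC2 example; the UTF-16-LE encoding mirrors .NET's UnicodeEncoding.

```python
import hashlib
import math

# Entropy per character, in bits
english = 1.3               # typical English text
uniform = math.log2(26)     # all 26 letters equally likely -> ~4.7

# Characters needed to distill a 128-bit (MD5-sized) key
chars_english = 128 / english   # ~98.5
chars_uniform = 128 / uniform   # ~27

print(round(uniform, 1), round(chars_english, 1), round(chars_uniform))
# 4.7 98.5 27

# Distilling: MD5 of the UTF-16 passphrase bytes yields a 16-byte
# (128-bit) value, exactly the size RC2's default key requires.
passphrase = "Bill Ferreira can not think of a good passphrase"
key = hashlib.md5(passphrase.encode("utf-16-le")).digest()
print(len(key) * 8)   # 128
```

Note that the derivation is deterministic: anyone with the same passphrase and algorithm gets the same key, which is precisely why the dictionary attack described above works.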
For RC2, which has a default key space of 128 bits, you would use MD5 to create the message digest of the entropy pool for the key. Your choice of encryption algorithms becomes limited to ones with a key space of 128, 160, 256, 384, or 512 bits, since these are the digest sizes of the hashing functions available in the .NET Framework. The preferable way to generate keys is to use a true random number generator.

A greater effective key space makes a brute force attack less feasible. The effective key space also depends largely on the encryption algorithm being used and the amount of entropy in your keys. For private key encryption, an effective key size of 56 bits is considered weak; similarly, 512-bit keys are weak for public key encryption systems. For each bit added to the key, the work effort required to brute-force it doubles. It is preferable to rely on the .NET Framework role-based authentication model to supply user authentication and authorization; the .NET Framework BCL contains a number of classes that support role-based security.

Hashing algorithms are generally used to assure data integrity by producing a unique numerical message digest, or fingerprint, that represents the data being hashed. Hashing takes an arbitrary amount of data and produces a message digest of fixed length. Hashing is one way: you can't reproduce the data given the message digest, and it is computationally infeasible to produce two documents that yield the same digest. This type of hashing is known as a Message Detection Code (MDC). Different hashing algorithms produce message digests of different lengths; the greater the length, the less likely it is that two documents will collide by producing the same message digest.
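The fixed-length and change-detection properties are easy to observe with Python's hashlib (the message contents here are arbitrary examples):

```python
import hashlib

# Fixed-length output: SHA1 always yields 20 bytes (160 bits),
# whatever the input size.
short = hashlib.sha1(b"a").digest()
long_ = hashlib.sha1(b"a" * 1_000_000).digest()
print(len(short), len(long_))   # 20 20

# A one-character change produces a completely different digest,
# which is what makes the digest useful as an integrity fingerprint.
d1 = hashlib.sha1(b"transfer $100").hexdigest()
d2 = hashlib.sha1(b"transfer $900").hexdigest()
print(d1 == d2)                 # False

# Digest sizes of the two families discussed: MD5 128 bits, SHA1 160 bits.
print(hashlib.md5(b"x").digest_size * 8,
      hashlib.sha1(b"x").digest_size * 8)   # 128 160
```

The one-way property cannot be "demonstrated" by running code, but the fixed-size output already shows why inversion is impossible in general: infinitely many inputs map to each 160-bit digest.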
Although MDC primitives can be used to detect changes to data, if the message digest is sent along with the data, both pieces of information could be intercepted and altered in transit. A solution to this is to use a keyed hash primitive. The .NET Framework cryptography namespace contains a number of MDC primitives with varying hash sizes. The MD5 algorithm was developed by Rivest to succeed his previous version, MD4; MD5 is a little slower but more secure. The MD5 algorithm produces a hash of 128 bits and SHA1 produces hashes of 160 bits, so a collision is less likely with SHA1 than with MD5. There are also variations of SHA1 which create message digests of 256, 384, and 512 bits.

Here is a code sample written in Visual C++ .NET that creates a message digest of a Unicode message using SHA1 and writes the digest to the console:

int _tmain(void)
{
    String * myMessage = S"...";
    SHA1Managed * mySHA1 = new SHA1Managed();
    UnicodeEncoding * myUE = new UnicodeEncoding();
    try
    {
        Byte myMessageBytes[] = myUE->GetBytes(myMessage);
        // Generate a 160 bit hash
        Byte myHash[] = mySHA1->ComputeHash (myMessageBytes);
        String * myHashResult = BitConverter::ToString (myHash);
        // Remove the "-" from the hash
        Trace::WriteLine (myHashResult->Replace ("-",""));
    }
    catch (Exception * e)
    {
        Trace::WriteLine (e->ToString());
    }
    return 0;
}

To hash a file, we open a FileStream on the file and pass the stream to the ComputeHash() method. Here is an example of creating a message digest of a file.
int _tmain(void)
{
    FileStream * sw;
    Byte calculatedhash[];
    SHA1Managed * mySHA1 = new SHA1Managed();
    calculatedhash = new Byte[mySHA1->HashSize / 8];
    try
    {
        sw = new FileStream ("c:\\myfile.exe", FileMode::Open, FileAccess::Read, FileShare::None);
        calculatedhash = mySHA1->ComputeHash (sw);
    }
    catch (Exception * e)
    {
        Trace::WriteLine (e->ToString());
    }
    __finally
    {
        sw->Close ();
    }
    return 0;
}

Keyed hash primitives are hashes that produce a message digest based on the data and a secret key. Keyed hash algorithms are known as Message Authentication Codes (MACs). A MAC serves two purposes: assuring data integrity and providing authentication. There are two types of MAC: ones based on hash algorithms such as SHA1, and ones based on encryption algorithms such as TripleDES. The .NET Framework BCL includes both types, and both derive from the abstract KeyedHashAlgorithm class: HMACSHA1 is based on the SHA1 hash algorithm, and MACTripleDES is based on the TripleDES algorithm.

The main difference between the two keyed hash algorithms is the restriction on the key and the size of the message digest. HMACSHA1 takes a key of any size and produces a 20-byte message digest; MACTripleDES is restricted to key sizes of 8, 16, or 24 bytes and produces an 8-byte message digest.

The code below is an example of creating a message digest using the keyed hash algorithm HMACSHA1. The key used in the code comes from the cryptographic random number generator; to reproduce the same message digest for the data you would need to use the same key.

int _tmain(void)
{
    String * myMessage = S"...";
    HMACSHA1 * myhmac = new HMACSHA1 ();
    RNGCryptoServiceProvider * rng = new RNGCryptoServiceProvider;
    Byte rndhashkey[] = new Byte[10]; // allocate a 10 byte array.
    // Array gets filled with random numbers.
    rng->GetNonZeroBytes (rndhashkey);
    myhmac->Key = rndhashkey; // sets the hash key
    UnicodeEncoding * myUE = new UnicodeEncoding();
    try
    {
        Byte myMessageBytes[] = myUE->GetBytes(myMessage);
        // Generate a 160 bit hash
        Byte myHash[] = myhmac->ComputeHash (myMessageBytes);
        String * myHashResult = BitConverter::ToString (myHash);
        // Remove the "-" from the hash
        Trace::WriteLine (myHashResult->Replace ("-",""));
    }
    catch (Exception * e)
    {
        Trace::WriteLine (e->ToString());
    }
    return 0;
}

At this point we have covered enough to begin encrypting and decrypting secrets. Let's take a look at the following source code, which encrypts and decrypts a secret in memory. The example uses the 128-bit RC2 encryption algorithm, and the 128-bit MD5 hash to convert the passphrase into a usable encryption key for RC2. Since we are encrypting and decrypting within the same module there is no need to store the IV; if we were to transmit the encrypted message we would also need to transmit the IV with it.

using namespace System::Diagnostics ;
using namespace System::IO ;
using namespace System::Security::Cryptography ;
using namespace System::Text ;

// This is the entry point for this application
int _tmain(void)
{
    MD5CryptoServiceProvider * myMD5 = new MD5CryptoServiceProvider();
    RC2CryptoServiceProvider * myRC2 = new RC2CryptoServiceProvider();
    MemoryStream * myMemoryStream = new MemoryStream;
    UnicodeEncoding * myUnicodeEncoding = new UnicodeEncoding();
    Encoding * myEncodingMethod = new UnicodeEncoding();
    String * myPassphrase = S"Bill Ferreira can not think of a good passphrase";
    String * myMessage = S"This message will be encrypted!";
    try
    {
        Byte myPassphraseBytes[] = myUnicodeEncoding->GetBytes( myPassphrase );
        Byte myMessageBytes[] = myUnicodeEncoding->GetBytes( myMessage );
        // Create a 128 bit hash of the passphrase to be used as a key
        Byte myHash[] = myMD5->ComputeHash ( myPassphraseBytes );
        // Use the initialization vector and passphrase for both
        // encrypting and decrypting functions
        myRC2->GenerateIV ();
        myRC2->Key = myHash ;
        ICryptoTransform * myDecryptor = myRC2->CreateDecryptor ();
        ICryptoTransform * myEncryptor = myRC2->CreateEncryptor ();
        // Create an encryption stream that will use the
        // memory stream for a storage area.
        CryptoStream * myEncryptStream = new CryptoStream(myMemoryStream, myEncryptor, CryptoStreamMode::Write );
        myEncryptStream->Write (myMessageBytes, 0, myMessageBytes->Length );
        myEncryptStream->FlushFinalBlock ();
        // The memory stream now holds an encrypted message.
        Byte myEncryptedMessage[] = myMemoryStream->ToArray ();
        Trace::WriteLine (Convert::ToBase64String ( myEncryptedMessage ));
        myEncryptStream->Close ();
        myMemoryStream->Close ();

        /////////////////////////////////////////
        // Decrypt an encrypted memory stream
        /////////////////////////////////////////
        MemoryStream * myEncryptedMemoryStream = new MemoryStream;
        myEncryptedMemoryStream->Write (myEncryptedMessage, 0, myEncryptedMessage->Length );
        // Reset stream pointer to the beginning of the memory stream
        myEncryptedMemoryStream->Seek (0, SeekOrigin::Begin );
        // Create a decryption stream that will read from the
        // memory stream to decrypt.
        CryptoStream * myDecryptStream = new CryptoStream(myEncryptedMemoryStream, myDecryptor, CryptoStreamMode::Read );
        // The stream reader will pull data through the myDecryptStream
        // CryptoStream using the Unicode character set.
        StreamReader * sr = new StreamReader ( myDecryptStream, myEncodingMethod, false );
        Trace::WriteLine ( sr->ReadToEnd () );
        myDecryptStream->Close ();
        myEncryptedMemoryStream->Close ();
    }
    catch (Exception * e)
    {
        Trace::WriteLine ( e );
    }
    return 0;
}

Generating random numbers is not as easy as it sounds. Random numbers are, in effect, a series of numbers that are arbitrary, unknowable, and unpredictable: every number has an equal probability of coming up. [14]
Because of these characteristics, using a predictable, deterministic machine such as a computer as a source of random data is not a preferable way to generate cryptographic key material. Random numbers generated by a computer come from pseudorandom number generators (PRNGs), which use a mathematical formula and an initial seed value. The preferable way is to use a non-deterministic source that produces randomness outside the control of humans; examples would be sampling atmospheric noise from a radio or measuring radioactive decay. Such sources produce genuine random numbers.

Cryptographic PRNG algorithms use cryptographic hash or encryption functions to produce the random data, with a seed used to initialize the initialization vector and key. The problem with using a mathematical algorithm is that if you use the same seed value you will reproduce the same series of numbers. Sampling the computer environment, such as typing rates and mouse movements, can potentially be tampered with by programs that take control of the keyboard or mouse buffer.

The .NET Cryptographic Services include the PRNG class RNGCryptoServiceProvider that you can use for your keying material. Generating random bytes is as easy as calling the GetBytes() method, passing in an array. The GetNonZeroBytes() method can be called instead to return a random sequence without any zero-valued bytes; it is preferable, since a byte value of zero wastes eight bits. This example shows generating 16 bytes of random data:

RNGCryptoServiceProvider * prng = new RNGCryptoServiceProvider;
Byte rndbytes[] = new Byte[16]; // 16 bytes or 128 bits
prng->GetNonZeroBytes (rndbytes); // Byte array filled with random data.

As you have seen, there are a few things that you need to consider when developing secure solutions.
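As an aside, the GetNonZeroBytes behaviour just shown can be sketched in Python on top of the operating system's CSPRNG via os.urandom. This is a stand-in for illustration, not the .NET class itself: draw random bytes and keep only the non-zero ones until enough have accumulated.

```python
import os

def get_nonzero_bytes(n: int) -> bytes:
    """Return n cryptographically random bytes, none of them zero
    (a sketch of RNGCryptoServiceProvider::GetNonZeroBytes)."""
    out = bytearray()
    while len(out) < n:
        # Filter out zero bytes and keep drawing until we have enough.
        out += bytes(b for b in os.urandom(n) if b != 0)
    return bytes(out[:n])

rndbytes = get_nonzero_bytes(16)   # 16 bytes = 128 bits of keying material
print(len(rndbytes), 0 in rndbytes)   # 16 False
```

os.urandom draws from the OS entropy pool, so unlike a seeded PRNG there is no seed value an attacker could learn to reproduce the sequence.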
The .NET environment helps with a lot of the security plumbing, but you still have to design and code the solution. Regardless of how securely you develop your software, you will always be susceptible to a clever attack. If there is just one thing to take away from this article, it should be this: whatever you design, develop, and implement, regardless of security measures, your code will be reverse engineered and your security mechanisms will be analyzed. Because of this, secrecy should depend entirely on a proven encryption algorithm and on the size and quality of the encryption key.
http://www.codeproject.com/Articles/3435/Secure-Architecture-Design-Methodologies?fid=13952&df=90&mpp=50&noise=3&prof=True&sort=Position&view=Expanded&spc=Relaxed
Vincent.

> I had changed the MPFR code to:
> ...
> #if defined(MPFR_HAVE_NORETURN)
> /* _Noreturn is specified by ISO C11 (Section 6.7.4);
>    in GCC, it is supported as of version 4.7. */
> # define MPFR_NORETURN _Noreturn
> #elif __MPFR_GNUC(3,0) || __MPFR_ICC(8,1,0)
> # define MPFR_NORETURN __attribute__ ((noreturn))
> #else
> # define MPFR_NORETURN
> #endif
>
> I think that something like
>
> #elif 1200 <= _MSC_VER
> # define MPFR_NORETURN __declspec (noreturn)
>
> could be added.

Sure. What you do here is reasonable. However, for gnulib the goal is different: in programs that use gnulib we don't want the user (developer) to have to learn a new set of macros, different from what ISO C or POSIX specify. Rather, we want the user to be able to use *exactly* the syntax that is specified in ISO C or POSIX. And it happens that the task of defining an MPFR_NORETURN macro that works like 'noreturn' is easier than defining a 'noreturn' macro itself. The reason is that with MSVC the compiler wants to see __declspec (noreturn), but this is hardly possible if 'noreturn' is at the same time defined as a macro without arguments.

Bruno
https://lists.gnu.org/archive/html/autoconf/2012-04/msg00065.html
CC-MAIN-2017-17
refinedweb
185
68.7
Name: jk109818 Date: 07/09/2002

FULL PRODUCT VERSION:
java version "1.4.1-beta"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-beta-b14)
Java HotSpot(TM) Client VM (build 1.4.1-beta-b14, mixed mode)

FULL OPERATING SYSTEM VERSION:
Windows 98 [Version 4.10.2222]

A DESCRIPTION OF THE PROBLEM:
Using the file chooser, browse to a directory with many files; the dialog takes a long time to show. After it shows, it takes a long time to show the files. The folder I tested with has 1545 files and 49 folders. Note that Notepad.exe opens this instantly. There is a related bug that is closed: 4621272. It says that it fixes the bug in Hopper; unfortunately, I have no way of knowing if j2sdk1.4.1 beta is "hopper" or not. Also, note that the two bugs are different. The related bug shows slowness when selecting many files. This bug shows slowness by merely opening a directory. Also note that with j2sdk1.4.1 beta, the previous bug is still there: select one file, then select another file, and there is a long pause in between (inconsistently though, and it only happens in a directory with many files; note: just selecting one file, not many files).

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM:
1) Run the application
2) Open a JFileChooser
3) Browse to a directory that has many files (it's even better to have the first directory of the JFileChooser have many files, to show the effect of slowness).

EXPECTED VERSUS ACTUAL BEHAVIOR:
Things should work faster.

REPRODUCIBILITY:
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
import javax.swing.*;
import java.io.*;

public class BugDemonstration {
    public static void main(String args[]) {
        final JFrame frame = new JFrame("The Frame");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(200, 200);
        frame.setVisible(true);
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFileChooser file = new JFileChooser();
                file.setMultiSelectionEnabled(true);
                file.setDialogTitle("Select lots of files...");
                file.showDialog(frame.getContentPane(), "Demonstrate problem");
                File[] selected = file.getSelectedFiles();
                System.out.println(selected.length + " files selected.");
            }
        });
    }
}
---------- END SOURCE ----------
(Review ID: 158588)
======================================================================
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4712307
CC-MAIN-2018-05
refinedweb
349
59.6
Understanding Scope in VB.net
By: Steven Holzner

The scope of a variable or constant is the set of all code that can refer to it without qualifying its name. A variable's scope is determined by where the variable is declared. It's usually a good idea to make the scope of variables or constants as narrow as possible (block scope is the narrowest). This helps conserve memory and minimizes the chances of your code referring to the wrong item. I'll take a look at the different kinds of scope in VB .NET here.

Block Scope

A block is a series of statements terminated by an End, Else, Loop, or Next statement, and an element declared within a block can be used only within that block. Here's what block scope looks like in an example.

Procedure Scope

An element declared within a procedure is not available outside that procedure, and only the procedure that contains the declaration can use it. Elements at this level are also called local elements, and you declare them with the Dim or Static statement. Note also that if an element is declared inside a procedure but outside any block within that procedure, the element can be thought of as having block scope, where the block is the entire procedure.

Module Scope

When discussing scope, Visual Basic uses the term module level to apply equally to modules, classes, and structures. You can declare elements at this level by placing the declaration statement outside of any procedure or block within the module, class, or structure. When you make a declaration at the module level, the accessibility you choose determines the scope. The namespace that contains the module, class, or structure also affects the scope. Elements for which you declare Private accessibility are available for reference to every procedure in that module, but not to any code in a different module. The Dim statement at module level defaults to Private accessibility, so it is equivalent to using the Private statement.
However, you can make the scope and accessibility more obvious by using Private. In this example, I've declared Function1 as private to Module2, so it's inaccessible in Module1.

Namespace Scope

If you declare an element at module level using the Friend or Public statement, it becomes available to all procedures throughout the entire namespace in which it is declared. Note that an element accessible in a namespace is also accessible from inside any namespace nested inside that namespace.
https://java-samples.com/showtutorial.php?tutorialid=1274
CC-MAIN-2019-47
refinedweb
418
50.57
In learning React (as a .NET developer) I suggested using React and TypeScript together. I'm generally a fan of TypeScript (I think, for now at least) and appreciate how it catches typos/errors which might otherwise slip through the net (and into production). But it's important to realise that learning React using TypeScript is potentially harder than learning React using JavaScript. Not least because most of the React docs, tutorials, and blog posts are written using JavaScript. This means you have to figure out how the TypeScript parts work, and adjust the examples you see accordingly. Chances are high you'll run into TypeScript errors which leave you scratching your head, and one in particular comes early and is a bit confusing if you're just starting out.

STOP! Read this first...

The rest of this article refers to an issue you may encounter if you spin up a new React TypeScript project using the template found here. But, as of "Create React App" 2.1, TypeScript is "baked in", so you can create a new TypeScript React application with this command instead...

npx create-react-app app-name --typescript

This approach avoids the issues mentioned in the rest of this article! Check out this brief video I made, showing the new CRA template in action! But if you're interested in TSLint and how it works anyway, read on...

JSX No Lambda

When you start following the tutorials, on reactjs.org for example, you'll invariably see things like this...

<button className="square" onClick={() => alert('click')}>
  {this.props.value}
</button>

Note the onClick event handler uses the ES2015 arrow syntax (if you're a C# developer you're used to this being referred to as a lambda). Now, if you try to declare an inline function like this in your React TypeScript project (created with the wmonk template) you'll run into an error.

Lambdas are forbidden in JSX attributes due to their rendering performance impact

So what's going on?
The wmonk React TypeScript template employs TSLint to enforce certain rules. You can think of "linters" as tools which check your code for potential problems before you compile and run it. They enforce any number of rules which are considered "good practice" for specific programming languages. One of those rules is called jsx-no-lambda. Here's the explanation (taken from the GitHub repo):

Creating new anonymous functions (with either the function syntax or ES2015 arrow syntax) inside the render call stack works against pure component rendering. When doing an equality check between two lambdas, React will always consider them unequal values and force the component to re-render more often than necessary.

Now there are a number of workarounds for this. You can create a named function and reference it instead...

public render() {
  return (
    <button className="square" onClick={this.buttonClicked}>
      {this.props.value}
    </button>
  );
}

private buttonClicked = () => {
  alert('click');
}

But if you just want to follow along with the tutorials, and would like TSLint to keep out of your way while you do so, you can always disable rules in the tslint.json config file.

{
  "extends": [
    "tslint:recommended",
    "tslint-react",
    "tslint-config-prettier"
  ],
  "rules": {
    "jsx-no-lambda": false
  }
}

Here you can turn rules off as you wish. Naturally, this demands a word of caution! If you turn all the rules off there would be no point having TSLint in the first place! So it pays to think twice before just switching them off willy-nilly! But (as Mark Erikson helpfully noted in the comments) the jsx-no-lambda rule is generally unnecessary, and if you're just starting out, it pays to remove any friction which might get in the way. So turn the rule off (or use the official Create React TypeScript template), go forth and follow the tutorials!
https://jonhilton.net/typescript-and-react-forbidden-lambdas/
CC-MAIN-2020-40
refinedweb
631
61.77
I should be able to export my App component file and import it into my index.js. I get the following error:

React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: object

My index.js:

const React = require('react');
const ReactDOM = require('react-dom');
const App = require('./components/App');
require('./index.css');

ReactDOM.render(
  <App />,
  document.getElementById('app')
);

Then in my components/App.js:

const React = require('react');

export default class App extends React.Component {
  render() {
    return (
      <div>
        Hell World! Wasabi Sauce!
      </div>
    );
  }
}
// module.exports = App;

If I uncomment module.exports = App; it will work, but I'm trying to use the export syntax. In another project I am doing the exact same thing and it's working fine.

Hello @kartik, I was facing the same issue, and this trick worked for me: you just need to put export before class App (export class App). You can also check which router you are using. I have used this, it works for me: npm install react-router-dom@next --save. Hope it works!

Hello @Kanishk, you can understand that by following this: the issue you encountered was caused by mixing two different module systems, which differ in the way they are resolved and implemented. CommonJS modules are dynamic; contrary to that, ES6 modules are statically analyzable. Tools like Babel transpile ES6 modules to CommonJS for now because native support is not ready yet. But there are subtle differences. By using a default export (export default) the transpiler emits a CommonJS module with a { default } property, as there can be named and default exports alongside each other in an ES6 module.
The following example is a perfectly valid ES6 module:

export default class Test1 { }
export class Test2 { }

This would result in a { default, Test2 } CommonJS module after transpiling, and by using require you get that object as a return value. In order to import an ES6 module's default export in CommonJS you must use the require(module).default syntax, for the reasons mentioned above.
https://www.edureka.co/community/71293/createelement-expected-components-composite-components
CC-MAIN-2021-49
refinedweb
475
57.67
OK, with some delicate soldering I can reclaim the two LED pins -- just remove the LEDs and solder wires in their place. That gives me 20 pins to work with. My keyboard has 88 keys. That means that if I make an 8×11 matrix, I can support them all with 19 pins, and even have one pin left for a LED or something. Yay. But 8×11 is not exactly how the keyboard looks physically -- it's more like 5.5×16 (some columns have 5 rows, some have 6). So, to get 8×11, I will have to transpose it and merge every two neighboring columns together. That's doable, it just means I will have a fun time converting the layouts.

Now, let's look for some ready-to-use firmware, so that I don't have to do all this coding myself (not that it's very complicated, but I'm lazy). For that chip, this seems to be pretty popular:

First, I burned one of the example keyboards to the board with avrdude:

avrdude -p atmega32u4 -P /dev/ttyACM0 -c avr109 -U flash:w:gh60.hex

(you have to get it into boot mode first by pressing reset right when it boots). Then I connected some of the switches to some of the column/row pins, and pressed them -- and voila, it typed some letters! So the firmware works great. Next, I will have to modify it to support my particular keyboard layout, with this almost square matrix. Looking at the matrix.c file in the examples, you can see code like:

/* Row pin configuration
 * row: 0  1  2  3  4
 * pin: D0 D1 D2 D3 D5
 */
static void unselect_rows(void)
{
    // Hi-Z(DDR:0, PORT:0) to unselect
    DDRD  &= ~0b00101111;
    PORTD &= ~0b00101111;
}

Hmmm.... Does that mean I need to have all row pins on the same port? I need 8 rows, so that would be doable... Let's see... Nope. Whoever designed the Pro Micro, he or she left out a single pin from each port, so that no port has a complete set of pins broken out. Splendid. Let's look at the other examples...
OK, I can pretty much write anything I want in those functions; all I need is to initialize the pins I want and read them all into a single number with something like:

static uint8_t read_rows(void)
{
    return (PIND&(1<<0) ? (1<<0) : 0) |
           (PIND&(1<<1) ? (1<<1) : 0) |
           (PIND&(1<<2) ? (1<<2) : 0) |
           (PIND&(1<<3) ? (1<<3) : 0) |
           (PIND&(1<<5) ? (1<<4) : 0) |
           (PINB&(1<<7) ? (1<<5) : 0);
}

Not very pretty, I bet I could write it nicer, but that should work. OK, writing it all and writing the layout definition is going to take some time, but at least I know how to proceed. See you at the other end.
https://hackaday.io/project/8282-alpen-clack/log/27395-matrix
CC-MAIN-2022-05
refinedweb
499
82.44
AutoMapper is a "convention-based object-to-object mapper". I've been using AutoMapper to map entity objects between data and service layers, as well as service and UI layers. What I mean by mapping is doing stuff such as:

public TDestination Map(TSource src)
{
    TDestination dest = new TDestination();
    dest.Field1 = src.Field1;
    dest.Field2 = src.Field2 + src.Field3;
    dest.Field3 = src.Field10;
    // ...
    return dest;
}

This is boring and repetitive code, and it's what AutoMapper wants to save you from writing. In its simplest usage, we set up the mapper with:

using AutoMapper;
[...]
Mapper.CreateMap<TSource, TDestination>();

and then convert by calling:

TDestination dest = Mapper.Map<TSource, TDestination>(srcObject);

By default, AutoMapper only maps properties that have the same names on the source and destination; however, you can parameterize the mapper differently. For example, assuming that Field10 in TSource becomes Field3 in TDestination, you can write:

Mapper.CreateMap<TSource, TDestination>()
    .ForMember(
        dst => dst.Field3,
        options => options.MapFrom(src => src.Field10));

You can add as many ForMember clauses as you want. Also note that dst, options and src are not variable names, but are part of the lambda function definitions. After the map is set up like this, the call to Map will now convert the objects correctly. So AutoMapper is an extremely convenient way to map between objects, and the more similarity there is in the names of the properties of the objects being mapped, the more convenient it is. However, and quite obviously, AutoMapper does this by using reflection, and I wanted to measure the impact of doing these conversions this way, compared with the hand-coded assignments shown at the top. I created two examples: in the first, the two objects have exactly the same structure, so AutoMapper does all the work.
In the second, the destination type has the same fields, but with different names, so I had to use ForMember once for each of the 6 fields in my test classes. I then created a loop that converted, using either of the two methods, a number of times and printed out the elapsed time. Here are the results: the first time was according to what I expected, but the second was actually much larger. Getting these results, I then tried an optimization, which was to create the map outside of the test loop (but still after the timer start, assuming it could be created and stored in some in-memory cache), and re-ran the tests. This time the results were much better, especially in the renames case, showing that setting up the map with renames can have a large impact on the execution times. The following table shows the results. Quite surprising that the renames option is now actually faster than the automatic direct conversion. My conclusion: I'll go on using AutoMapper for its convenience when writing code, but if performance is an issue, I'll just directly hand-code the mapping. Pre-creating and populating a cache of Mappers would also be a viable alternative, but hard to justify in terms of architecture. Check out the CodePlex site for more features of AutoMapper, such as Flattening or Projection, or contact me if you want the source code I used for these tests.
http://blog.joaopedromartins.net/2011/08/
CC-MAIN-2018-39
refinedweb
540
52.29
Add elements to a Java vector using an index: Vector is a good replacement for an array in Java if you want to add elements dynamically. We can add elements dynamically to a vector and it will increase its size, unlike arrays. Previously we have learned different examples of vectors, like how to create vectors, how to add elements to a vector and how to clear a vector. In this tutorial, we will learn how to add elements to a vector at a specific position, i.e. using index numbers.

add() method:

The following is the method we are going to use for adding new elements to a vector using the index:

public void add(int index, E element)

The add method takes two parameters: the first one is the index where we are adding the element and the second parameter is the element to be inserted. This method will add the element at the specified index and move all other elements to the right if needed. Be careful to use a valid index while using this method. If the index is not valid, it will throw an exception. For example, if you try to add an element at the 2nd index of an empty vector, it will throw ArrayIndexOutOfBoundsException.

Java Example:

import java.util.Vector;

public class Example {
    public static void main(String[] args) {
        Vector<String> strVector = new Vector<>(); // 1
        strVector.add(0, "one");
        strVector.add(1, "two");
        strVector.add(2, "three");
        // 2
        System.out.println(strVector);
        // 3
        strVector.add(1, "four");
        // 4
        System.out.println(strVector);
    }
}

Output:

[one, two, three]
[one, four, two, three]

Explanation:

The commented numbers in the above program denote the step numbers below:
- Add three elements to the vector strVector. The elements are added at positions 0, 1 and 2.
- Print out the vector. It will print [one, two, three].
- Now add one more element 'four' at position '1' of the vector.
- We already have the element 'two' at position '1'. So, all elements from that position will move to the right and the new element will be added at position '1'.
It will print [one, four, two, three].

This program is shared on GitHub.

Conclusion:

We have learned how to use the add method to add elements to a vector in Java. This method comes in handy if you need to add an element to the middle of the vector. Try to run the example above and drop a comment below if you have any queries.

Similar tutorials:
- Java program to remove element from an ArrayList of a specific index
- Java program to find the duplicate elements in an array of Strings
- Java program to print the boundary elements of a matrix
- How to remove elements of Java ArrayList using removeIf() method
- How to read elements of a Java Vector using iterable
- Java program to clear a vector or delete all elements of a vector
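To see both behaviors from the tutorial in one runnable place, here is a short sketch of mine (the class and method names VectorAddDemo, buildSample and invalidIndexThrows are hypothetical, not from the tutorial): it inserts at an index, shows the right-shift, and demonstrates the exception thrown for an invalid index.

```java
import java.util.Vector;

public class VectorAddDemo {

    // Builds [one, two], then inserts "four" at index 1,
    // shifting "two" one position to the right.
    static Vector<String> buildSample() {
        Vector<String> v = new Vector<>();
        v.add(0, "one");
        v.add(1, "two");
        v.add(1, "four");
        return v;
    }

    // Returns true if add(index, element) rejects an index greater than size().
    static boolean invalidIndexThrows() {
        try {
            new Vector<String>().add(2, "oops"); // size() is 0, so index 2 is invalid
            return false;
        } catch (ArrayIndexOutOfBoundsException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(buildSample());        // [one, four, two]
        System.out.println(invalidIndexThrows()); // true
    }
}
```

Note that add(index, element) accepts index == size() (an append), but anything larger throws, which is why the empty-vector case fails.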
https://www.codevscolor.com/add-elements-to-java-vector-using-index/
CC-MAIN-2020-29
refinedweb
478
63.59
LilyGO, the Chinese manufacturer of the TTGO brand, is very prolific. After the ESP32 and ESP8266 development boards, it offers us the T-Watch series, which concentrates in a 20mm-thick mini case an ESP32 battery-powered development board equipped with a color touch screen (or not). The case can accommodate a specialized expansion board (LoRa, GPS, GPRS...). The T-Watch 2020 is a real connected watch with a design reminiscent of the Apple Watch. As it is above all an ESP32 development board, it can be entirely programmed with Arduino code!

T-Block and T-Bot

The LilyGO T-Block is an attractive small plastic case, ideal for children. The PCB has an ESP32, 16MB of flash memory, 8MB of PSRAM, an RTC clock (PCF8563), a 3-axis accelerometer (MPU6050) and an AXP202 power controller that can be driven via the I2C bus with C++ code. The display is entrusted to a 1.54" e-Paper screen. The T-Bot is a version that integrates an expansion board on which an HC-SR04 ultrasonic proximity sensor is placed. The T-Block is then installed on a base equipped with a motor to transform the T-Bot into a mini robot! Up to 3 analog sensors can be installed on the expansion board in addition to the HC-SR04. The T-Block is clearly intended for beginners and offers a fairly limited number of expansion boards (keyboard, LED matrix, e-Paper). Past the playful aspect of the case, you quickly risk being limited in your projects.

T-Watch TOUCH and NO TOUCH (2019)

The T-Watch Touch (2019) is an improved version of the T-Block. The e-Paper screen has been replaced by a 1.54" diagonal color TFT screen, available in a touch version or not (the non-touch version, called T-Watch-N, is sold in a yellow packaging). The case measures 40 x 38mm and is 20mm thick. It comes with a strap that allows it to be worn on the wrist. The T-Watch case is clearly not a smart watch, but it has many advantages. LilyGO has concentrated a real ESP32 development board in this mini case.
On the Core PCB, the motherboard, we find:
- The ESP32 SoC backed by 16MB of flash memory and 8MB of PSRAM
- A 1.54" color TFT touch screen (or not) driven by an ST7789V controller, supported by the excellent LVGL graphics library
- A 300mAh LiPo battery, sufficient for most applications
- One user-programmable button
- An external I2C connector
- An AXP202 power controller that takes care of charging the LiPo battery. The AXP202 has several I/Os that can be used to power accessories via the I2C bus!
- A 3-axis accelerometer BMA423
- A PCF8563 RTC clock
- A connector for an expansion board (mandatory for the LiPo battery)

An expansion board is superimposed on the Core PCB. It is needed to connect the LiPo battery. LilyGO offers 13 specialized expansion boards (in addition to the Basic Expansion Board):
- Basic Expansion Board (supplied with each T-Watch), 2 x 8-pin expansion connectors providing access to pins 33, 34, 21 (SDA), 22 (SCL) of the ESP32 and IO0, IO1, IO2 and IO3 of the AXP202 (power manager). This is the board delivered as standard
- T-Fork, connector for breadboard
- GPS, a u-blox M8N
- Handle, joystick and 4 buttons to transform the T-Watch into a Gameboy!
- Motor & Speaker (Pack H329), vibrator + speaker
- MPR121, external touch interface
- SIM800L, GPRS modem
- MP3, MP3 player
- NFC, contactless reader (note, this is not an RFID reader)
- T-Car, can drive up to 3 servo motors via the 1-Wire bus
- T-Quick, can control up to 2 motors over I2C
- MAX98357 (Pack H328), I2S audio output
- S76G, LoRa + GPS (Pack H327)
- S78G, LoRa (433 to 470 MHz) + GPS (Pack H397) LoRa modem

The list is constantly evolving.

Most expansion boards have a micro SD card reader accessible without opening the case via a side slot. It is difficult to find something so compact among traditional ESP32 development boards! See more expansion boards for the T-Watch. Here, the T-Watch Touch with a LoRa + GPS S78G expansion board.
The LoRa antenna unfortunately comes out of the case via the slot for the strap. It can be glued under the case... but be careful when removing it to replace the battery.

The T-Watch case can replace a classic ESP32 development board in many cases. The price / function integration ratio is unbeatable. The color touch screen is a real plus.

Pros:
- Ultra compact format
- Good quality color touch screen
- Unbeatable compactness / functionality ratio
- Easy-access I2C connector
- LiPo battery
- AXP202 power controller

Cons:
- No access to Reset without opening the case (replaced by holding the main button for 6s)
- Molex connector not very widespread and bulky
- Protective cap of the I2C connector does not hold well

T-Watch 2020, a DIY Apple Watch!

The T-Watch 2020 is a compact version that adopts the design of the Apple Watch. The T-Watch 2020 is above all designed as a connected watch. It adopts exactly the same internal architecture as the T-Watch Touch, which means that the code will work on both cases. However, it does not offer any possibility of extension, nor a GPRS modem. It will therefore be impossible to connect the T-Watch 2020 to the internet. It only has one standard USB-C port for recharging the Li-Ion battery and programming. This is protected by a removable cover (quite difficult to open, moreover). Small regret: the rotation of the crown of the main power button is not functional. It would have been a real plus for navigating the menus. The bottom cover (very easy to remove) allows access to the 3.7V Li-Ion battery with a capacity of 380mAh for its replacement. Compatible replacement batteries are fairly easy to find. LilyGO references two versions of the T-Watch 2020 in the technical documentation on this GitHub page.
Version 2 does not appear to have been released yet at the time of this writing. In the event that version 2 is marketed, it should include a GPS receiver (Air 530) as well as a microSD card reader (internal or external; for the moment nothing has been indicated). Here is a summary of the main technical characteristics.

Pros:
- Design of a connected watch
- ESP32
- Available memory (16MB + 8MB)
- Li-Ion battery easy to replace
- Excellent quality / price ratio

Cons:
- Crown not functional
- No GPRS modem
- No GPS in v1

K210 AIOT with OV2640 camera

Unlike the other cases developed on the basis of the ESP32 from Espressif, the K210 AIOT uses a K210 processor from Kendryte. On paper, the K210 looked promising, but few manufacturers adopted it. Seeed Studio integrated it into several development boards last year, the Sipeed M1 and Sipeed Maixduino in particular. The easiest way to use the camera module is to use LilyGO's fork of the MaixPy project. MaixPy uses MicroPython for programming. Here is an example to take a snapshot and display it on the LCD screen of the case:

import sensor
import image
import lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

while True:
    img = sensor.snapshot()
    lcd.display(img)

It is also possible to develop in C++ using the library available for the Arduino IDE. The project is enticing on paper; it remains to be seen whether it will continue to be maintained and supported by Seeed Studio and LilyGO. At the moment this does not appear to be the case, based on the state of the official documentation and the closure of the official site of Kendryte, the maker of the K210. Currently, I advise you instead to turn to an M5Stack case and the OV2640 camera module compatible with all the examples offered for ESP32-CAM modules.

Comparison of models

Here is a comparative table of the main technical characteristics.
(*) requires a cable fitted with a Molex 51065-0400 connector.
https://diyprojects.io/ttgo-t-watch-esp32-which-model-choose-available-expansion-boards/?noamp=mobile
CC-MAIN-2022-40
refinedweb
1,374
61.46
Hi guys, I have a problem with arranging my components in a GridBagLayout. The frame is 600x600. Inside it, there are two pairs of (Label, TextField). They are not where I want them to be. What I would like is to have them all at the top of the frame, one after the other. Moreover, I would like all the labels aligned together (easy) but also all the text fields aligned together... Right now, my pairs of (Label, TextField) are spread all over the panel; they aren't all at the top. The second pair is in the middle of the frame, and I would like it to be right after the first pair, just following it. And of course, the frame has to stay 600x600. I would like to be able to do that using the GridBagLayout because it will be useful later if I need to do any modification. I probably got the anchors all wrong. Any idea how to get this result? Thanks heaps!

Code:

import java.awt.*;
import javax.swing.*;

public class GUI_example extends JFrame {

    private final static long serialVersionUID = 1L;

    JFrame frame;
    JLabel labelIndex;
    JLabel labelCategory;
    JLabel labelUnicode;
    JTextField textFieldIndex;
    JTextField textFieldCategory;
    JTextField textFieldUnicode;

    public GUI_example() {
        super();
        createGUI();
    }

    public void createGUI() {
        frame = new JFrame();
        frame.setPreferredSize(new Dimension(600, 600));
        frame.setTitle("Hello!");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        Container pane = frame.getContentPane();
        pane.setLayout(new GridBagLayout());
        GridBagConstraints c1 = new GridBagConstraints();

        labelIndex = new JLabel("Index: ");
        c1.anchor = GridBagConstraints.NORTHWEST;
        c1.weightx = 0.5;
        c1.weighty = 0.5;
        c1.ipady = 10;
        c1.gridx = 0;
        c1.gridy = 0;
        pane.add(labelIndex, c1);

        textFieldIndex = new JTextField("Some index...");
        c1.gridx = 1;
        c1.gridy = 0;
        pane.add(textFieldIndex, c1);

        labelCategory = new JLabel("Category: ");
        c1.anchor = GridBagConstraints.NORTHWEST;
        c1.weightx = 0.5;
        c1.weighty = 0.5;
        c1.gridx = 0;
        c1.gridy = 1;
        pane.add(labelCategory, c1);

        textFieldCategory = new JTextField("Some category...");
        c1.gridx = 1;
        c1.gridy = 1;
        pane.add(textFieldCategory, c1);

        frame.pack();
        frame.setVisible(true);
    }

    public static void main(String[] args) {
        new GUI_example();
    }
}
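One common way to pin all the rows to the top with GridBagLayout is to give the component rows weighty = 0 and add an invisible filler component below them with weighty = 1, so the filler absorbs all the leftover vertical space. The following is a sketch of my own (not from this thread; the class name TopAlignedForm and the buildForm helper are hypothetical):

```java
import java.awt.*;
import javax.swing.*;

public class TopAlignedForm {

    // Lays out (label, text field) rows pinned to the top-left of the pane.
    static void buildForm(Container pane, String[][] rows) {
        pane.setLayout(new GridBagLayout());
        GridBagConstraints c = new GridBagConstraints();
        c.anchor = GridBagConstraints.NORTHWEST;
        c.insets = new Insets(4, 4, 4, 4);

        for (int y = 0; y < rows.length; y++) {
            c.gridx = 0; c.gridy = y;
            c.weightx = 0; c.fill = GridBagConstraints.NONE;
            pane.add(new JLabel(rows[y][0]), c);   // labels align in column 0

            c.gridx = 1;
            c.weightx = 1; c.fill = GridBagConstraints.HORIZONTAL;
            pane.add(new JTextField(rows[y][1], 15), c); // fields align in column 1
        }

        // Filler row below the pairs with all the vertical weight:
        // it soaks up the extra height, pushing every pair to the top.
        c.gridx = 0; c.gridy = rows.length;
        c.weightx = 0; c.weighty = 1; c.fill = GridBagConstraints.NONE;
        pane.add(Box.createGlue(), c);
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Hello!");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        buildForm(frame.getContentPane(),
                  new String[][] {{"Index: ", "Some index..."},
                                  {"Category: ", "Some category..."}});
        frame.setSize(600, 600);
        frame.setVisible(true);
    }
}
```

The key difference from the original code is that the label/field rows carry no weighty of their own; only the glue row does, so the rows stack directly under one another at the top.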
http://www.javaprogrammingforums.com/%20awt-java-swing/3504-problem-gridbaglayout-printingthethread.html
CC-MAIN-2018-17
refinedweb
331
53.78
You can relax, because software told you the name of the song, and you know that you can hear it again and again until it becomes a part of you... or you get sick of it. Mobile technologies, along with the huge progress in audio signal processing, have given us algorithm developers the ability to create music recognizers. One of the most popular music recognition apps is Shazam. If you capture 20 seconds of a song, no matter if it's the intro, verse, or chorus, it will create a fingerprint for the recorded sample, consult the database, and use its music recognition algorithm to tell you exactly which song you are listening to. But how does Shazam work? Shazam's algorithm was revealed to the world by its inventor Avery Li-Chung Wang in 2003. In this article we'll go over the fundamentals of Shazam's music recognition algorithm.

Analog to Digital - Sampling a Signal

What is sound, really? Is it some sort of mystical material that we cannot touch but which flies into our ears and makes us hear things? Of course, this is not quite the case. We know that in reality, sound is a vibration that propagates as a mechanical wave of pressure and displacement, through a medium such as air or water. When that vibration reaches our ears, particularly the eardrum, it moves small bones which transmit the vibration further to little hair cells deep in our inner ear. Finally, the little hair cells produce electrical impulses, which are transmitted to our brain through the auditory nerve. Recording devices mimic this process fairly closely, using the pressure of the sound wave to convert it into an electrical signal. An actual sound wave in air is a continuous pressure signal. In a microphone, the first electrical component to encounter this signal translates it into an analog voltage signal - again, continuous. This continuous signal is not so useful in the digital world, so before it can be processed, it must be translated into a discrete signal that can be stored digitally.
This is done by capturing a digital value that represents the amplitude of the signal. The conversion involves quantization of the input, and it necessarily introduces a small amount of error. Therefore, instead of a single conversion, an analog-to-digital converter performs many conversions on very small pieces of the signal - a process known as sampling. The Nyquist-Shannon theorem tells us what sampling rate is necessary to capture a certain frequency in a continuous signal. In particular, to capture all of the frequencies that a human can hear in an audio signal, we must sample the signal at a frequency twice that of the human hearing range. The human ear can detect frequencies roughly between 20 Hz and 20,000 Hz. As a result, audio is most often recorded at a sampling rate of 44,100 Hz. This is the sampling rate of Compact Discs, and is also the most commonly used rate with MPEG-1 audio (VCD, SVCD, MP3). (This specific rate was originally chosen by Sony because it could be recorded on modified video equipment running at either 25 frames per second (PAL) or 30 frames per second (using an NTSC monochrome video recorder) and cover the 20,000 Hz bandwidth thought necessary to match professional analog recording equipment of the time.) So, when choosing a sampling rate for your recording, you will probably want to go with 44,100 Hz.

Recording - Capturing the Sound

Recording a sampled audio signal is easy. Since modern sound cards already come with analog-to-digital converters, just pick a programming language, find an appropriate library, and set the sampling frequency, the number of channels (typically mono or stereo), and the sample size (e.g. 16-bit samples). Then open the line from your sound card just like any input stream, and write to a byte array.
Here is how that can be done in Java:

    private AudioFormat getFormat() {
        float sampleRate = 44100;
        int sampleSizeInBits = 16;
        int channels = 1;          // mono
        boolean signed = true;     // indicates whether the data is signed or unsigned
        boolean bigEndian = true;  // indicates whether the audio data is stored in big-endian or little-endian order
        return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
    }

    final AudioFormat format = getFormat(); // fill AudioFormat with the settings
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    final TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
    line.open(format);
    line.start();

Just read the data from TargetDataLine. (In this example, the running flag is a global variable which is stopped by another thread - for example, if we have a GUI with a STOP button.)

    OutputStream out = new ByteArrayOutputStream();
    running = true;

    try {
        while (running) {
            int count = line.read(buffer, 0, buffer.length);
            if (count > 0) {
                out.write(buffer, 0, count);
            }
        }
        out.close();
    } catch (IOException e) {
        System.err.println("I/O problems: " + e);
        System.exit(-1);
    }

Time-Domain and Frequency-Domain

What we have in this byte array is a signal recorded in the time domain. The time-domain signal represents the amplitude change of the signal over time. In the early 1800s, Jean-Baptiste Joseph Fourier made the remarkable discovery that any signal in the time domain is equivalent to the sum of some (possibly infinite) number of simple sinusoidal signals, given that each component sinusoid has a certain frequency, amplitude, and phase. The series of sinusoids that together form the original time-domain signal is known as its Fourier series. In other words, it is possible to represent any time-domain signal by simply giving the set of frequencies, amplitudes, and phases corresponding to each sinusoid that makes up the signal. This representation of the signal is known as the frequency domain.
In some ways, the frequency domain acts as a type of fingerprint or signature for the time-domain signal, providing a static representation of a dynamic signal. The following animation demonstrates the Fourier series of a 1 Hz square wave, and how an (approximate) square wave can be generated out of sinusoidal components. The signal is shown in the time domain above, and the frequency domain below. Source: René Schwarz

Analyzing a signal in the frequency domain simplifies many things immensely. It is more convenient in the world of digital signal processing because the engineer can study the spectrum (the representation of the signal in the frequency domain) and determine which frequencies are present, and which are missing. After that, one can do filtering, increase or decrease some frequencies, or just recognize the exact tone from the given frequencies.

The Discrete Fourier Transform

So we need to find a way to convert our signal from the time domain to the frequency domain. Here we call on the Discrete Fourier Transform (DFT) for help. The DFT is a mathematical methodology for performing Fourier analysis on a discrete (sampled) signal. It converts a finite list of equally spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered by their frequencies, as if those sinusoids had been sampled at the same rate. One of the most popular numerical algorithms for the calculation of the DFT is the Fast Fourier Transform (FFT). By far the most commonly used variation of the FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively divides a DFT into many smaller DFTs. Whereas evaluating a DFT directly requires O(n²) operations, with a Cooley-Tukey FFT the same result is computed in O(n log n) operations. It's not hard to find an appropriate library for FFT.
Here are a few of them:

- C – FFTW
- C++ – EigenFFT
- Java – JTransform
- Python – NumPy
- Ruby – Ruby-FFTW3 (Interface to FFTW)

Below is an example of an FFT function written in Java. (FFT takes complex numbers as input. To understand the relationship between complex numbers and trigonometric functions, read about Euler's formula.)

    public static Complex[] fft(Complex[] x) {
        int N = x.length;

        // base case - without it the recursion never terminates
        if (N == 1) {
            return new Complex[] { x[0] };
        }

        // fft of even terms
        Complex[] even = new Complex[N / 2];
        for (int k = 0; k < N / 2; k++) {
            even[k] = x[2 * k];
        }
        Complex[] q = fft(even);

        // fft of odd terms
        Complex[] odd = even; // reuse the array
        for (int k = 0; k < N / 2; k++) {
            odd[k] = x[2 * k + 1];
        }
        Complex[] r = fft(odd);

        // combine
        Complex[] y = new Complex[N];
        for (int k = 0; k < N / 2; k++) {
            double kth = -2 * k * Math.PI / N;
            Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
            y[k] = q[k].plus(wk.times(r[k]));
            y[k + N / 2] = q[k].minus(wk.times(r[k]));
        }
        return y;
    }

And here is an example of a signal before and after FFT analysis:

Music Recognition: Fingerprinting a Song

One unfortunate side effect of the FFT is that we lose a great deal of information about timing. (Although theoretically this can be avoided, the performance overheads are enormous.) For a three-minute song, we see all the frequencies and their magnitudes, but we don't have a clue when in the song they appeared. But this is the key information that makes the song what it is! Somehow we need to know at what point in time each frequency appeared. That's why we introduce a kind of sliding window, or chunk of data, and transform just this part of the information. The size of each chunk can be determined in a few different ways. For example, if we record the sound, in stereo, with 16-bit samples, at 44,100 Hz, one second of such sound will be 44,100 samples * 2 bytes * 2 channels ≈ 176 kB. If we pick 4 kB for the size of a chunk, we will have 44 chunks of data to analyze in every second of the song.
That's good enough density for the detailed analysis needed for audio identification. Now back to programming:

    byte audio[] = out.toByteArray();
    int totalSize = audio.length;
    int sampledChunkSize = totalSize / chunkSize;
    Complex[][] result = new Complex[sampledChunkSize][];

    for (int j = 0; j < sampledChunkSize; j++) {
        Complex[] complexArray = new Complex[chunkSize];
        for (int i = 0; i < chunkSize; i++) {
            complexArray[i] = new Complex(audio[(j * chunkSize) + i], 0);
        }
        result[j] = FFT.fft(complexArray);
    }

In the inner loop we are putting the time-domain data (the samples) into a complex number with imaginary part 0. In the outer loop, we iterate through all the chunks and perform FFT analysis on each. Once we have information about the frequency makeup of the signal, we can start forming our digital fingerprint of the song. This is the most important part of the entire Shazam audio recognition process. The main challenge here is how to distinguish, in the ocean of frequencies captured, which frequencies are the most important. Intuitively, we search for the frequencies with the highest magnitude (commonly called peaks). However, in one song the range of strong frequencies might vary between low C - C1 (32.70 Hz) and high C - C8 (4,186.01 Hz). This is a huge interval to cover. So instead of analyzing the entire frequency range at once, we can choose several smaller intervals, chosen based on the common frequencies of important musical components, and analyze each separately. For example, we might use the intervals this guy chose for his implementation of the Shazam algorithm. These are 30 Hz - 40 Hz, 40 Hz - 80 Hz and 80 Hz - 120 Hz for the low tones (covering bass guitar, for example), and 120 Hz - 180 Hz and 180 Hz - 300 Hz for the middle and higher tones (covering vocals and most other instruments). Now within each interval, we can simply identify the frequency with the highest magnitude.
This information forms a signature for this chunk of the song, and this signature becomes part of the fingerprint of the song as a whole.

    public final int[] RANGE = new int[] { 40, 80, 120, 180, 300 };

    // find out in which range the frequency falls
    public int getIndex(int freq) {
        int i = 0;
        while (RANGE[i] < freq) i++;
        return i;
    }

    // result is the complex matrix obtained in the previous step
    for (int t = 0; t < result.length; t++) {
        for (int freq = 40; freq < 300; freq++) {
            // Get the magnitude:
            double mag = Math.log(result[t][freq].abs() + 1);

            // Find out which range we are in:
            int index = getIndex(freq);

            // Save the highest magnitude and corresponding frequency:
            if (mag > highscores[t][index]) {
                highscores[t][index] = mag;
                points[t][index] = freq;
            }
        }

        // form hash tag
        long h = hash(points[t][0], points[t][1], points[t][2], points[t][3]);
    }

    private static final int FUZ_FACTOR = 2;

    private long hash(long p1, long p2, long p3, long p4) {
        return (p4 - (p4 % FUZ_FACTOR)) * 100000000
             + (p3 - (p3 % FUZ_FACTOR)) * 100000
             + (p2 - (p2 % FUZ_FACTOR)) * 100
             + (p1 - (p1 % FUZ_FACTOR));
    }

Note that we must assume that the recording is not done in perfect conditions (i.e., a "deaf room"), and as a result we must include a fuzz factor. Fuzz factor analysis should be taken seriously, and in a real system, the program should have an option to set this parameter based on the conditions of the recording. To make for easy audio searching, this signature becomes the key in a hash table. The corresponding value is the time this set of frequencies appeared in the song, along with the song ID (song title and artist). Here's an example of how these records might appear in the database. If we run a whole library of songs through this music identification process, we can build up a database with a complete fingerprint of every song in the library.
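A minimal in-memory version of such a fingerprint database can be sketched as follows. The class and method names here are invented for illustration, and a production system would use a persistent, scalable store instead of a HashMap:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Maps a fingerprint hash to every (songId, time) pair where it occurred.
public class FingerprintIndex {
    private final Map<Long, List<long[]>> index = new HashMap<>();

    // Record that hash h occurred in song songId at chunk offset time.
    public void add(long h, long songId, long time) {
        index.computeIfAbsent(h, k -> new ArrayList<>())
             .add(new long[] { songId, time });
    }

    // All (songId, time) pairs matching a hash computed from a recorded sample.
    public List<long[]> lookup(long h) {
        return index.getOrDefault(h, new ArrayList<>());
    }
}
```

Indexing by hash keeps lookups O(1) on average, which matters because even a short recorded sample contributes many hashes, and every one of them has to be checked against the whole library.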
The Music Algorithm: Song Identification

To identify a song that is currently playing in the club, we record the song with our phone, and run the recording through the same audio fingerprinting process as above. Then we can start searching the database for matching hash tags. As it happens, many of the hash tags will correspond to the music identifier of multiple songs. For example, it may be that some piece of song A sounds exactly like some piece of song E. Of course, this is not surprising - musicians have always "borrowed" licks and riffs from each other, and these days producers sample other songs all the time. Each time we match a hash tag, the number of possible matches gets smaller, but it is likely that this information alone will not narrow the match down to a single song. So there is one more thing that we need to check with our music recognition algorithm, and that is the timing. The sample we recorded in the club might be from any point in the song, so we cannot simply match the timestamp of a matched hash with the timestamp of our sample. However, with multiple matched hashes, we can analyze the relative timing of the matches, and thereby increase our certainty. For example, if you look in the table above, you will see that hash tag 30 51 99 121 195 corresponds to both Song A and Song E. If, one second later, we match the hash 34 57 95 111 200, that's one more match for Song A, but in this case we know that both the hashes and the time differences match.

    // Class that represents a specific moment in a song
    private class DataPoint {

        private int time;
        private int songId;

        public DataPoint(int songId, int time) {
            this.songId = songId;
            this.time = time;
        }

        public int getTime() {
            return time;
        }

        public int getSongId() {
            return songId;
        }
    }

Let's take i1 and i2 as moments in the recorded song, and j1 and j2 as moments in the song from the database.
We can say that we have two matches with a matching time difference if:

    RecordedHash(i1) = SongInDBHash(j1) AND RecordedHash(i2) = SongInDBHash(j2)
    AND
    abs(i1 - i2) = abs(j1 - j2)

This gives us the flexibility to record the song from the beginning, middle, or end. Finally, it is unlikely that every single moment of the song we record in the club will match every corresponding moment of the same song in our library, recorded in the studio. The recording will include a lot of noise that will introduce some error in the matches. So instead of trying to eliminate all but the correct song from our list of matches, at the very end, we sort all the matched songs in descending order of likelihood, and our favorite is the first song on the ranking list.

From Top to Bottom

To answer the question, "How does Shazam work?" here's an overview of the entire music recognition and matching process, from top to bottom: For this kind of system, the database can get pretty huge, so it is important to use some kind of scalable database. There is no special need for relations, and the data model ends up being pretty simple, so it is a good case for using some kind of NoSQL database.

How Does Shazam Work? Now You Know

This kind of song recognition software can be used for finding the similarities between songs. Now that you understand how Shazam works, you can see how this can have applications beyond simply Shazaming that nostalgic song playing on the taxi radio. For example, it can help to identify plagiarism in music, or to find out who was the initial inspiration to some pioneers of blues, jazz, rock, pop or any other genre. Maybe a good experiment would be to fill up the song sample database with the classical music of Bach, Beethoven, Vivaldi, Wagner, Chopin and Mozart and try finding the similarities between songs.
You would think that even Bob Dylan, Elvis Presley and Robert Johnson were plagiarists! But still we cannot convict them, because music is just a wave that we hear, memorize and repeat in our heads, where it evolves and changes until we record it in the studio and pass it on to the next great musical genius.

Understanding the basics

How does the Shazam algorithm work?
The Shazam algorithm distills samples of a song into fingerprints, and matches these fingerprints against fingerprints from known songs, taking into account their timing relative to each other within a song.

What is an audio fingerprint?
An audio fingerprint is a collection of hash tags, or signatures, of a song's samples. They measure which frequencies in each sample are the strongest.

How does Shazam find music?
Shazam finds music by comparing the audio fingerprint of a user-supplied recording with fingerprints of known songs from its database.
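The timing-consistency idea described above can be sketched as a small scoring routine. The method below is an invented illustration, not Shazam's actual code, and it uses a signed-offset variant of the check: for each matched hash it histograms the offset (database time minus sample time) and scores a candidate song by its largest offset-consistent group of matches:

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetScorer {
    // Each row of matches is { timeInSample, timeInDbSong } for one matched hash.
    // A true match produces many pairs that share the same offset.
    public static int score(int[][] matches) {
        Map<Integer, Integer> histogram = new HashMap<>();
        int best = 0;
        for (int[] m : matches) {
            int offset = m[1] - m[0]; // where in the DB song our sample starts
            int count = histogram.merge(offset, 1, Integer::sum);
            best = Math.max(best, count);
        }
        return best; // size of the largest offset-consistent group
    }
}
```

Running this per candidate song and sorting the candidates by score in descending order yields exactly the ranking list described in the article.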
https://www.toptal.com/algorithms/shazam-it-music-processing-fingerprinting-and-recognition
CC-MAIN-2020-29
refinedweb
3,074
58.82
C++ is not a purely object-oriented language but a hybrid that contains the functionality of the C programming language. This means that you have all the features that are available in C. Central to C++ is object-oriented programming (OOP). As just explained, OOP was the impetus for the creation of C++. Because of this, it is useful to understand OOP's basic principles before you write even a simple C++ program. Object-oriented programming offers several major advantages to software development.

    /* This is a simple C++ program.
       Call this file Sample.cpp. */
    #include <iostream>

    using namespace std;

    // A C++ program begins at main().
    int main()
    {
        cout << "C++ is power programming.";
        return 0;
    }

When run, the program displays the following output:

    C++ is power programming.
https://www.cseworldonline.com/cplusplus/cpp_A_Brief_History_of_CPP.php
CC-MAIN-2021-39
refinedweb
126
57.98
Amanda Silver - Getting into Visual Basic.NET - Posted: Oct 22, 2004 at 6:40 PM - 82,634 programming VB since version 3. When .Net came around and I couldn't show Form2 I just closed Visual Studio and went back to VB6. It really was frustrating. If it wasn't for the "Student Evangelist" program, I think I'd still have VB6 laying around. Now I love VB.Net and I'm really appreciating Object Oriented Programming! It's too damn cool! I would love Amanda to expand on how to really make use of the framework, something she just mentioned. I remember the days of going through the Win32API definitions trying to make use of all those cool APIs. I use the Object Browser tool with .Net when I look for those cool features you don't find readily available, but I still find myself bringing some old WIN32APIs into .Net to do things because I really can't find them. Such things as Windows Handles of Open Processes is what I can think of right now when I needed to automate an external program. (Heh, reminds me of SendKeys.) Great interview, really confirmed my views. That is very insulting. VB6 was a full programming language that can do almost anything C++ can do. The few limitations that it does have can be programmed around. I can build Win32 and MFC (shudder) applications in VC++ now but yet I find that when I need to write an application for my own consumption I will start VB(6) and it would have been written and debugged in half the time - with better results normally. VB(6) does not stifle creative freedom which is something lower level languages are very good at, even if you get more freedom to do other things (most of which programmers don't bother with). VB.net is not very good, it doesn't feel the same. Simple things that worked in VB6 and you would expect to work in its 'big brother' don't work any more. For example = Right() Left() Len().
Not to mention that VB.net has moved from a base of functions over to a base of classes, so for new things now you would never consider typing NetFunction(Thing) you would be doing Thing.NetFunction(). Which although in theory simpler it is not what VB is about it is what C++ is about. VB6 - Day to day, '5min' programs (these can be more complicated than SOME people's C++ apps). C# - Stuff? - Too time consuming for quick write-out apps however has advance features such a multi-threading.. so I would use this for 'something' .. ? VB.net - Compiling Other people's code. VC++ - When I am learning about memory pointers and the fun and wonder of how to cause a memory leak by mis-typing a single letter. I very very rarely write non-learning applications in VC++. I also rarely set my self on fire.. Java - They make me.. VBScript - When I need it.. More than you might expect. About the Win32 thing, you know that the average VB6 programmer knows MORE about the Win32 system than those that do C++. I also love to hear the C++ bitching about how lame it is calling an API directly when they are doing the exact same only via a library call! (or a .Net call) You learn a great deal about Windows and how things work. Video: I find it ironic that she uses the Win32 as an example for why VB6 is worse than .net - but once you figure out the pattern I find finding a Win32 call (using the API-List tool) hell of a lot faster than searching though libraries of libraries of libraries of the .Net framework. Anyway you get more freedom with direct Win32 calls. VB.net only allows you to do what Microsoft wants you to do.. No problem: I still use VB6 because 1) I feel that .Net built desktop applications are resource hogs and 2) All of my tools and code modules are already written in VB6 and I don't want to do them over again. I'll stop using VB6 when it stops working. I know writing console apps in VB6 can be done, just not without some serious weird stuff. That was my point. 
It DOES make sense what she says, really couldn't have said it better myself Amanda, please continue the great work you and your team do with VB! Anyway, I don't agree with the "VB is for beginners thing". Yes, VB is relatively easy to learn, but that doesn't mean it's just for beginners. I never understand programmers that seem to think that something that isn't complex can appeal to experienced programmers as much as to beginners. I know C++. In fact, I think I would not be exaggerating if I say that I know C++ better than at least any other student (and quite possibly most of the teaching staff too) at Leiden University. I certainly know it better than the teacher that taught it to me (no disrespect to him, he's a great guy, and he knows what he needs to know to teach it to first year, but when you get to the really interesting stuff, he's not the one to ask). Still, I'll greatly prefer VB.NET for 95% of the projects I do. I have created high-grade applications in VB.NET that would've taken me 3 times as much work (if not more) if I'd had to do them in C++. Currently I'm working on a project for Uni, the Data Integration Wizard, which we have to do in C++, simply because we have to seamlessly integrate with the existing DataConversionTool, which was written in Visual C++ 6 and Borland C++Builder 5 (and don't get me started on the horrors of using C++Builder...). During the course of developing this, I can't remember how many times I've thought "if this was VB.NET, it'd take 10 times less code." I've firmly rediscovered what a pain it is to do COM from C++, which I'd previously managed to lock away in my subconscious. Granted, C# would have exactly the same benefits over C++ as VB.NET. C# has a few advantages over VB.NET, and VB.NET has a few advantages over C#, in the end it's purely a matter of style and preference which you choose. Still a lot of people see C# as a much more viable "professional" language than VB.NET. But that wasn't what we were talking about.
We were talking about C++ vs. .Net. I will not say that .Net is more powerful than C++. Being a so-called lower language, C++ is closer to the metal, and therefore you can do more with it. But for a lot of real-world projects, higher languages like VB.NET or C# can get the job done just as well as C++, but with less development time, less lines of code, and generally also less bugs. I don't even see the two as competing. I would never, ever pick C++ to do a typical GUI or data-driven app (unless forced to do so by outside forces). If I need performance, or system-level code, then I'll pick C++. In the end, it's all a matter of preference, and if any of you like programming complex GUIs using nothing but the bare Windows API, that's your prerogative. I just don't agree that just because it's not as complex as C++, stuff like VB.NET, C#, or even Java is necessarily less powerful or "for beginners". End of rant. Why does it always have to be a man thing to get distracted by appearances and why don't women make comments on how Scoble's warm voice make them weak in the knees? ! Tell you what. Show me your C/C++/MFC/ATL/WTL code for even a simple data-entry form and I'll show you why VB isn't for beginners.. Once procedure programming began its descent with the onset of "OBJ", the fundamentals of BASIC as a language have stayed the same. Even in BASIC's most evolved state, .NET, there are trace similarities between GW-BASIC and .NET. Obviously the Syntax and paradigm has changed but the core structure has remained the same. For that I am grateful. i) Ideas, a recognized spiritual presence. ii) The ability to place these ideas in an orderly arrangement, a design. iii) The ability to communicate these designed ideas within the artificial environment of "The Tool." Maybe a new category should be added for old VB6 programmers: OO wannabe using System;); } } Cool video... does anyone experience that problem and know that problem? please let me know...
thank u so much God Bless U All Ferry Avianto ferry@aisin-indonesia.co.id PT. Aisin Indonesia
http://channel9.msdn.com/Blogs/TheChannel9Team/Amanda-Silver-Getting-into-Visual-BasicNET
CC-MAIN-2014-52
refinedweb
1,516
73.58
I have just started learning python (I have learned how to open the python console) so that I can do the following: Code: Select all //Loop through each .psk file in a folder //import current .psk file into scene //Get the name of the .psk file for export //export as .fbx into another folder and with name of the .psk file //clear the scene for the next .psk file ... in_Blender ... 23590.html So far I have: Code: Select all import bpy.ops #Import a .psk file bpy.ops.import_scene.psk(filepath="C:/BlenderImport/Example_01.psk") #Export as .fbx file bpy.ops.export_scene.fbx(filepath="C:/BlenderExport/Example_01.fbx") #Clear existing bpy.ops.wm.read_homefile()
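One way to complete the loop the comments describe is to glob the import folder, derive each export name from the .psk filename, and reset the scene between files. This is a sketch, not a tested Blender add-on: it assumes the PSK importer add-on from the linked thread is enabled (it provides `bpy.ops.import_scene.psk`), and the `fbx_path_for` helper is an invented name. `bpy.ops.wm.read_homefile()` resets to the default startup scene, so it is called before each import:

```python
import glob
import os

def fbx_path_for(psk_path, export_dir):
    """Map e.g. C:/BlenderImport/Example_01.psk -> <export_dir>/Example_01.fbx."""
    stem = os.path.splitext(os.path.basename(psk_path))[0]
    return os.path.join(export_dir, stem + ".fbx")

def batch_convert(import_dir, export_dir):
    import bpy  # only available when run inside Blender

    for psk in sorted(glob.glob(os.path.join(import_dir, "*.psk"))):
        # Clear the scene left over from the previous file
        bpy.ops.wm.read_homefile()
        # Import the current .psk file into the scene
        bpy.ops.import_scene.psk(filepath=psk)
        # Export as .fbx under the same base name
        bpy.ops.export_scene.fbx(filepath=fbx_path_for(psk, export_dir))

# batch_convert("C:/BlenderImport", "C:/BlenderExport")
```

Run it from Blender's text editor or with `blender --background --python script.py`; note that `read_homefile` may also reload the default cube and camera, which you may want to delete before importing.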
https://www.blender.org/forum/viewtopic.php?t=21606
CC-MAIN-2017-43
refinedweb
114
72.02
Catalyst::View::Tenjin - Tenjin view class for Catalyst.

version 0.050001

    # create your view
    script/myapp_create.pl view Tenjin Tenjin

    # check your new view's configuration
    __PACKAGE__->config(
        USE_STRICT => 1, # false by default
        INCLUDE_PATH => [ MyApp->path_to('root', 'templates') ],
        TEMPLATE_EXTENSION => '.html',
        ENCODING => 'UTF-8', # this is the default
    );

    # render view from lib/MyApp.pm or lib/MyApp::C::SomeController.pm
    sub message : Global {
        my ($self, $c) = @_;
        $c->stash->{template} = 'message.html';
        $c->stash->{message} = 'Hello World!';
        $c->forward('MyApp::View::Tenjin');
    }

    # access variables from template
    The message is: [== $message =].

    # example when CATALYST_VAR is set to 'Catalyst'
    Context is [== $Catalyst =]
    The base is [== $Catalyst->req->base =]
    The name is [== $Catalyst->config->name =]

    # example when CATALYST_VAR isn't set
    Context is [== $c =]
    The base is [== $base =]
    The name is [== $name =]

    # you can also embed Perl
    <?pl if ($c->action->namespace eq 'admin') { ?>
    <h1>admin is not implemented yet</h1>
    <?pl } ?>

This is the Catalyst view class for the Tenjin template engine. Your application should define a view class which is a subclass of this module. There is no helper script to create this class automatically, but you can do so easily as described in the synopsis. Once you've created the view class, you can modify your action handlers in the main application and/or controllers to forward to your view class. You might choose to do this in the end() method, for example, to automatically forward all actions to the Tenjin view class.

    # In MyApp or MyApp::Controller::SomeController
    sub end : Private {
        my( $self, $c ) = @_;
        $c->forward('MyApp::View::Tenjin');
    }

This module is now Moose-based, so you can use method modifiers. For example, you can perform some operation after or before this module begins processing the request or rendering the template. This method is automatically called by Catalyst when creating the view.
The method creates an instance of Tenjin using the configuration options set in the view.

Renders the template specified in $c->stash->{template} or $c->action (the private name of the matched action), with the default extension specified by the TEMPLATE_EXTENSION configuration item. Calls render to perform the actual rendering. Output is stored in $c->response->body.

Checks if a template named $template_name was already registered with the view. Returns 1 if yes, undef if no.

Registers a template with the view from an arbitrary source, for immediate usage in the application. $tmpl_name is the name of the template, used to distinguish it from others. $tmpl_content is the body of the template. Templates are registered in memory, so don't expect them to remain registered between application restarts.

Renders the given template and returns output, or throws an exception if an error was encountered. $template is the name of the template you wish to render. If this template was not registered with the view yet, it will be searched for in the directories set in the INCLUDE_PATH configuration item. The template variables are set to %$args if $args is a hashref, or %{$c->stash} otherwise. In either case the variables are augmented with $base set to $c->req->base, $name set to $c->config->{name} and the Catalyst context, which will be set to $c unless the CATALYST_VAR configuration item is set to a different name. If so, the $c, $base and $name variables are omitted.

Returns a list of key-value pairs to be used as the context variables (i.e. the context object) in the Tenjin templates.

To configure your view class, you can call the config() method in the view subclass. This happens when the module is first loaded.
    package MyApp::View::Tenjin;

    use strict;
    use base 'Catalyst::View::Tenjin';

    __PACKAGE__->config(
        USE_STRICT => 1,
        INCLUDE_PATH => [ MyApp->path_to('root', 'templates') ],
        TEMPLATE_EXTENSION => '.html',
        ENCODING => 'utf8',
    );

You can also specify the configuration when defining the application class:

    MyApp->config({
        name => 'MyApp',
        root => MyApp->path_to('root'),
        'View::Tenjin' => {
            USE_STRICT => 1,
            INCLUDE_PATH => [ MyApp->path_to('root', 'templates') ],
            TEMPLATE_EXTENSION => '.html',
            ENCODING => 'utf8',
        },
    });

The USE_STRICT configuration option determines if Tenjin will use strict when evaluating the embedded Perl code inside your templates. If USE_STRICT is set to a true value (1), strict will be used. This is recommended, but if you're having trouble using strict, you can set it to 0, or just not set it at all (by default, Tenjin will not use strict on embedded Perl code). The ENCODING configuration option tells Tenjin how your template files are encoded. By default, Tenjin will try to decode your templates as utf8. If you set TEMPLATE_EXTENSION, this extension will be automatically appended to $c->stash->{template} before the template is searched for in the INCLUDE_PATH. For example:

    sub message : Global {
        my ($self, $c) = @_;
        $c->stash->{template} = 'message'; # '.html' is appended automatically
        $c->forward('MyApp::View::Tenjin');
    }

If a stash item isn't defined, then it instead uses the stringification of the action dispatched to (as defined by $c->action). In the above example, this would be message, but because the default is to append '.html', it would load root/message.html. The items defined in the stash are passed to Tenjin for use as template variables.
    sub default : Private {
        my ($self, $c) = @_;
        $c->stash->{template} = 'message.html';
        $c->stash->{message} = 'Hello World!';
        $c->forward('MyApp::View::Tenjin');
    }

A number of other template variables are also added:

$c - A reference to the context object, $c
$base - The URL base, from $c->req->base()
$name - The application name, from $c->config->{name}

These can be accessed from the template in the usual way:

    # message.html
    The message is: [== $message =]
    The base is [== $base =]
    The name is [== $name =]

The output generated by the template is stored in $c->response->body.

Catalyst::View::Tenjin adds an easy method for providing your own templates, such that you do not have to use template files stored on the file system. For example, you can use templates stored on a DBIx::Class schema. This is similar to Template Toolkit's provider modules, which for some reason I never managed to get working. You can register templates with your application, and use them on the fly. For example:

    # check if the template was already registered
    unless ($c->view('Tenjin')->check_tmpl($template_name)) {
        # Load the template
        my $tmpl = $c->model('DB::Templates')->find($template_name);
        $c->view('Tenjin')->register($template_name, $tmpl->content);
    }

If you wish to use the output of a template for some other purpose than displaying in the response, you can use the render method. For example, you can use it with Catalyst::Plugin::Email:

    sub send_email : Local {
        my ($self, $c) = @_;

        $c->email(
            header => [
                To      => 'me@localhost',
                Subject => 'A TT Email',
            ],
            body => $c->view('Tenjin')->render($c, 'email.html', {
                additional_template_paths => [ $c->config->{root} . '/email_templates' ],
                email_tmpl_param1 => 'foo',
            }),
        );
        # Redirect or display a message
    }

Please report any bugs or feature requests to bug-tenjin::Tenjin. You can also look for information at:

Tenjin, Catalyst, Catalyst::View::TT

Ido Perlmuter <ido at ido50.net>.
This module was adapted from Catalyst::View::TT, so most of the code and even the documentation belongs to the authors of Catalyst::View::TT. Development of this module is tracked via GitHub. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Bigtop::Docs::Modules - An annotated list of modules in the Bigtop distribution

This document goes into some depth on each piece of the Bigtop distribution. Some of the details are left for the POD of the pieces themselves. If you want to know exactly what's legal in a bigtop file, look in Bigtop::Docs::Syntax or the more concise (and less complete) Bigtop::Docs::AutoKeywords. Or, you could look where tentmaker looks: Bigtop::Keywords.

Bigtop.pm is primarily a documentation module. It does provide two useful functions for backend authors: one writes files to the disk, the other makes directory paths. See its docs for details.

Bigtop::Parser is the real workhorse of Bigtop. It is a grammar driven parser for Bigtop files. Interactions with this parser are usually indirect. End users use the bigtop script, which in turn uses the parser to first build an abstract syntax tree (AST) and then to generate output by passing the AST to the backends. Developers should write backends which receive the AST in methods named for what they should produce (see "Backends" below). If you have a file on the disk and want to parse it into an abstract syntax tree (AST), call Bigtop::Parser->parse_file( $file_name ). This returns the AST.

The bigtop script is quite simple. (It relies on Bigtop::ScriptHelp when it needs to manufacture or modify bigtop source files.) Mostly, it handles command line options, then directly passes the rest of its command line arguments to gen_from_file in Bigtop::Parser. gen_from_file reads the file into memory and passes it and the other command line arguments to gen_from_string. gen_from_string first parses the config section of the Bigtop file to find the backends. It requires each of those (using Bigtop::Parser->import), then calls gen_YourType on each one that is not marked no_gen. The backend's gen_YourType is called as a class method.
It receives the base directory of the build (where the user wants files to end up), the AST, and the input file name (if one is available).

Once you have an AST, you can call methods on it. Most of these will return lists (whose elements are most often strings). The most useful method provided is walk_postorder. It takes care of walking the tree. For each element, it calls walk_postorder for all of the children, pushing their output lists into a meta-list. Then it passes that result to the action in the current class. This is a depth first traversal. (If there is no action for the current class, walk_postorder returns the collection of child output unmodified; except that if the array of such output is empty, it returns undef and not an empty array.)

You can pass a single item of data (one scalar) to walk_postorder. That item (which is usually a hash reference or object) is in turn passed to all walk_postorder methods in the descendants as they are called. All of the walk_postorder methods pass this item on to the action methods when they call them.

To make this concrete, consider what Bigtop::Backend::Control::Gantry does in its gen_Control method:

    my $sub_modules = $bigtop_tree->walk_postorder(
        'output_controller',
        {
            module_dir => $module_dir,
            app_name   => $app_name,
            lookup     => $wadl_tree->{application}{lookup},
            # ... more hash keys
        }
    );

This specifies the callback action as output_controller. Each output_controller has a definition like this:

    sub output_controller {
        my $self         = shift;
        my $child_output = shift;
        my $data         = shift;
        # ...
    }

Where $self is the current tree element, $child_output is the result returned from all of this element's children (as an array reference), and $data is the hash originally passed to walk_postorder (the one that contains module_dir, app_name, lookup, etc.). Keep in mind that this is a post order (depth first) traversal, so children finish making their output before parents are called on to make output.
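The traversal contract just described can be sketched in plain Perl, independent of Bigtop. The node shape and the demo_action name below are illustrative only -- this is not Bigtop's actual class hierarchy -- but it shows the key property: children produce their output first, and the parent's action combines it.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy postorder walk in the style described above. Each node may carry
# an action callback; nodes without one pass child output up unchanged
# (or undef when there was none, mirroring Bigtop's rule).
sub walk_postorder {
    my ( $node, $action, $data ) = @_;

    my @child_output;
    for my $child ( @{ $node->{children} || [] } ) {
        my $output = walk_postorder( $child, $action, $data );
        push @child_output, @{ $output } if defined $output;
    }

    if ( my $callback = $node->{ $action } ) {
        return $callback->( $node, \@child_output, $data );
    }

    return @child_output ? \@child_output : undef;
}

my $tree = {
    demo_action => sub {
        my ( $self, $kids ) = @_;
        return [ 'CREATE TABLE t (' . join( ', ', @{ $kids } ) . ');' ];
    },
    children => [
        { demo_action => sub { [ 'id INTEGER PRIMARY KEY' ] } },
        { demo_action => sub { [ 'name VARCHAR' ] } },
    ],
};

my $sql = walk_postorder( $tree, 'demo_action', {} );
print $sql->[0], "\n";
# CREATE TABLE t (id INTEGER PRIMARY KEY, name VARCHAR);
```

The children ran first and handed their column definitions up; the parent only had to wrap them, which is exactly the division of labor the real backends rely on.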
In particular, this means you can't feed your children, or prune off their behavior (though you could discard their output). The initial caller must pre-feed all the children. Children must prune for themselves. [footnote: If you do want to avoid child behavior in a parent, you can change the name of the action method in the child classes. This makes the parent the end of the first recursion, allowing it to decide whether or not to start a new recursion on its subtree. Upon deciding to initiate a new recursion, it can feed the children whatever they need. This technique is less useful in generators, where the child output is usually straightforward (like a set of column definitions for the body of a CREATE TABLE statement in SQL). Where it shines is in the methods of Bigtop::Parser which manipulate already parsed trees on behalf of tentmaker.] The output_controller action methods live in packages named for rules in the grammar. See directly below for the package names and how to implement them. The AST lives in a hash. This section explains the anatomy of that hash. It is presented as a nested list so you can see the tree structure by indentation. The top level element is blessed into the bigtop_file package. It responds to these methods: get_config (returns the config subtree) and get_appname (among others). It has two children. The first is configuration (available through get_config) which is a hash reference representing the config section of the Bigtop file. The configuration child is not part of the AST you walk when generating output, but its info can be essential to your backend. Since this description was written, the tree has grown. I decided to leave this section as is, rather than increase its already somewhat daunting complexity. Mainly this reflects my laziness, but I think it will aid your laziness as well. You can gain an initial understanding without as much detail. 
Further, most of the new things the tree node classes do are in support of tentmaker, which I hope you don't need to work on.

The tree continues with the other child. First I will show a simple outline of the whole tree as generated by outliner, a script in the lib/Bigtop/Docs directory of the distribution. I've compressed the output of outliner to compress the tree vertically. This means that attributes which are not themselves AST nodes are listed in line with their parent. Below the summary, I will show it again with discussion. Note that in both the summary and the full discussion, nodes appear in logical order as they normally would in a Bigtop source file (modulo placing rare nodes near the bottom). This is not the same order as the productions in the grammar.

In summary:

    application
        __NAME__
        __BODY__: block(s?):
            app_config_block
                __BODY__: app_config_statement
                    __KEYWORD__ __ARGS__
            app_statement
                __KEYWORD__ __ARGS__
            table_block
                __IDENT__ __NAME__ __TYPE__
                __BODY__:
                    __IDENT__ __NAME__ __ARGS__ __TYPE__
                    __BODY__: field_statement
                        __KEYWORD__
                        __DEF__: field_statement_def
                            __ARGS__
            controller_block
                __IDENT__ __NAME__ __TYPE__
                __BODY__:
                    controller_method
                        __IDENT__ __NAME__ __TYPE__
                        __BODY__: method_body:
                            __KEYWORD__ __ARGS__
                    controller_config_block
                        __BODY__: controller_config_statement
                            __KEYWORD__ __ARGS__
                    controller_literal_block
                        __IDENT__ __BACKEND__ __BODY__
                    controller_statement
                        __KEYWORD__ __ARGS__
            join_table
                __IDENT__ __NAME__
                __BODY__: join_table_statement
                    __KEYWORD__ __DEF__
            literal_block
                __IDENT__ __BACKEND__ __BODY__
            seq_block
                __IDENT__ __NAME__ __TYPE__ __BODY__

Responds to these methods: get_name returns the app name; show_idents dumps out the name, type, and ident of every ident bearing node (useful when building tests of tree manipulations). There are many other methods, most of which support tentmaker.

Has these children:

A string with the app name in it. This is available through get_appname on the whole tree or through get_name on the application subtree.
Created by Parse::RecDescent's autotree scheme. Has one child: This child is an array (ref) of objects, each blessed into the block class. Since autotree builds this for us, there is some litter. We are only concerned with children whose package names end with _block or _statement. These children are: Represents a simple statement at the app level. Has two keys: The statement's keyword (like authors). An arg_list (see below). Represents an app level config block. Has one child: An array (possibly undef) of objects blessed into: Has two attributes: The name of the set var. An arg_list (see below). Responds to get_name which returns the name of the block's table. The attributes of a table node are: The internal and unchanging name of the node. The name of the constructed sequence or table. As string, either sequences or tables. The body of the block. This is an array (ref) of nodes blessed into: There are two types of these: statements and field blocks. Both are blessed into the table_element_block class. They have the following keys: For field blocks only. The internal and unchanging name of the node. For field blocks only. The name of the field (and its SQL column). For statements only, the arguments of the statement. This is an arg_list, see below. Either 'field' for field blocks or the statement keyword for statements. Either the statement keyword for statements, or an array (ref) of nodes blessed into: The class for field blocks. These nodes have the following keys: The keyword of the statement. A node blessed into the field_statement_def package, which has a single key: An arg list, see below. Responds to get_name which returns the name of the controller. Has these children: The internal and unchanging name of the node. The name of the controller relative to the app name, available through get_name. Controllers are specified as: controller Name is type {...} This attribute is the controller's type. 
Note that if the type is base_controller, the controller cannot have an explicit name, but must be written as: controller is base_controller {...} This is an array (ref) of nodes blessed into one of these classes: controller_method, controller_statement, controller_config_block, controller_literal_block. The first two are the most common. Controller config blocks are quite rare. They specify controller level adjustments to the apps top level config block. These are either new variables only this controller wants, or replacement values this controller needs in place of global values. Controller literal blocks allow placement of literal text into the httpd.conf Location for this controller. All of these types are described further below: Represents a method. Responds to get_name which returns the method's name. Has these children: Unique and unchanging internal name. A string attribute. The name of the method available through get_name. A string attribute. As with controllers, methods have types: method name is type { ... } This is the type name. There should probably be an accessor for this. The body of the method, including all of its statements. Responds to these methods: get_method_name, get_controller_name, and get_table_name (which works if the controller has a controls_table statement). Blessed into: An array (ref) of nodes blessed into the method_statement class, whose keys are: The statement's keyword. An arg_list (see below). Has a single key: An array (ref) of nodes blessed into: Each of these is a leaf with two attributes: The config variable's name. An arg_list (see below). This is really a statement, not a block (the name stuck before I decided statements would be easier to work with). Responds to make_output which is similar to the method of that name in the literal_block package. The key difference is that this one does not handle multiple backend types gracefully. If the backend type you ask for matches, you get the output. 
No hash keyed by backend type is available. (Trailing new lines are supplied exactly as for make_output in the literal_block package.) A leaf with two attributes: Internal and unchanging name. The backend which the user wants to handle the literal. A string to put literally in the __BACKEND__'s output. It's easier to call make_output than to fish in these manually. These are the simple statements in the controller block (like controls_table). They have two keys: The statement name. An arg list (see below). This is optional and may therefore be undef. Represents a many-to-many relationship between two tables and the implicit table which goes between them. Has three keys: The internal invariant name of the block. These are used by tentmaker to make updates to the existing tree and may vary from parse to parse. The name of the implicit table. The SQL backend will make SQL statements to generate this table in the schema. An array (ref) of statements in the block. There must be a joins statement. There may be an optional names statement. Each array element blessed into: These are somewhat like field_statements, but they are simpler since both legal statements expect exactly one pair. The statement keyword, must be either joins or names. Exactly one joins statement must be present (if the parse is valid). At most one names statement may be present. These rules are enforced by the backend. An arg_list containing a single pair. This is a leaf node. It responds to one highly useful method: make_output. Backends call it on the current subtree (remember it's a leaf) passing in their backend type. If the current literal block has the same type, the text of the backquoted string in the Bigtop file is returned. (A trailing new line is added to the user's input unless that input already had trailing whitespace.) If the current node is of a different type, undef is returned. There is an optional additional parameter: want_hash. 
Pass a true value if you need the output as

    [ { $backend_type => $output } ]

instead of the default:

    [ $output ]

This is useful if your backend handles multiple literal blocks in different ways. For example, PerlTop and PerlBlock literals are both handled by Bigtop::Backend::Gantry::HttpdConf. It needs the hash form to know where to put the literal output.

literal_blocks have three attributes: the internal unchanging name, the name of the backend this literal is intended for, and the literal text for the backend. Usually it is easier to call make_output than to fish for these.

Responds to get_name. Represents a sequence block. Only the Postgres SQL backend understands sequence blocks. All other backends ignore them completely, even if a table includes a sequence statement. Has the following keys:

The internal and unchanging name of the node.

The name of the sequence.

Hold over from when sequences were blessed into the same package as tables. Deprecated and may be removed.

If there were any legal sequence statements (which there aren't), this would be an array ref holding the statements in the sequence block. As it is, you can't use this.

In addition to those packages, there is one which is a frequent leaf:

An arg_list is an array whose elements are either single strings or pairs. They look like this:

    [
        'value1',
        { key => 'value2' },
        'value3',
    ]

While the items in an arg_list are not blessed, the whole list is. The arg_list package in Bigtop::Parser provides many convenience methods for getting and setting data in the list. Here are some highlights:

When you know that your statement only uses one arg, call get_first_arg to get it. It saves you having to fish in the array for the first arg. If the first arg is a pair, you will receive a hash with one key.

Returns all of the args as valid Bigtop input.
The above example would come back as:

    value1, key, value2, value3

This is useful when you don't want quoted values and you don't expect pairs. Primarily useful when deparsing.

Returns the arg list as a string which is valid bigtop input. This adds all needed backquotes (but none that aren't required).

Returns an array of the args, but with pairs converted to strings like so:

    key => value2

Only occasionally useful by backends. Most backends either use get_first_arg, or walk the list themselves.

To see what the current backends do, consult Bigtop::Docs::Syntax. That is also where you will see a list of the keywords they understand and what values those keywords take. To write your own, keep reading.

Each backend should have a generation method called gen_BackendType (where BackendType is part of the package name: Bigtop::Backend::BackendType::Backend). These are called as class methods with the build directory, the AST generated in Bigtop::Parser, and the source file name (if one is available). In addition to the generation methods, if your backend wants to work with the TentMaker, you must also implement what_do_you_make and backend_block_keywords. See the example below for what these should do.

The gen_* methods produce output on the disk. For testing, you can call the methods that the gen_* methods call. Usually these are prefixed with output_, but that is not enforced. Or you can call the gen_* method and test the generated files (say with Test::Files), as the Bigtop test suite tends to do. To know what a particular backend will do, see Bigtop::Docs::Keywords or Bigtop::Docs::Syntax.

To write a backend, you need to write the gen_* method and have one package for each AST element type you care about. It is easiest to see this by example. A good example is Bigtop::Backend::SQL::SQLite. I'll show it here so you can see how it goes, with commentary interspersed amongst the code. To see the whole of it, look for lib/Bigtop/Backend/SQL/SQLite.pm in the Bigtop distribution. (Note that I have removed some details to make this presentation easier, and the real version may have been updated more recently than this discussion.)
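First, though, a quick standalone illustration of the arg_list shape described earlier. This is a plain-Perl sketch of the flattening behavior, not Bigtop's actual arg_list class:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# An arg_list is an array of plain strings and single-pair hashes.
# Flattening it in order reproduces the comma-joined form shown above.
sub flatten_args {
    my ( $args ) = @_;

    my @flat;
    for my $arg ( @{ $args } ) {
        if ( ref $arg eq 'HASH' ) {
            my ( $key, $value ) = %{ $arg };   # single-pair hash
            push @flat, $key, $value;
        }
        else {
            push @flat, $arg;
        }
    }
    return join ', ', @flat;
}

my $arg_list = [ 'value1', { key => 'value2' }, 'value3' ];

print flatten_args( $arg_list ), "\n";
# value1, key, value2, value3
```

Since each pair hash holds exactly one key, taking %{ $arg } in list context is safe; the real arg_list package wraps this sort of fishing in named methods so backends don't have to.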
To see the whole of it, look for lib/Bigtop/Backend/SQL/SQLite.pm in the Bigtop distribution. (Note that I have removed some details to make this presentation easier, and the real version may have been updated more recently than this discussion.) There is nothing really fancy about the start of a backend: package Bigtop::Backend::SQL::SQLite; use strict; use warnings; use Bigtop::Backend::SQL; use Inline; Note that the package name must begin with Bigtop::Backend:: in order for the bigtop and tentmaker scripts to find it. I use Bigtop::Backend::SQL, which registers the SQL keywords with the Bigtop parser. Actually, Bigtop::Backend::SQL uses Bigtop::Keywords which is a central repository of all keywords any backend could want. It is really best to add the keywords there. Among other things it makes maintenance easier. But this is not a requirement (even for proper tentmaker functioning). In all of my backends I use Inline::TT to aid in generating the output. It needs Inline loaded. (See setup_template below for how templates are installed for use.) sub what_do_you_make { return [ [ 'docs/schema.sqlite' => 'SQLite database schema' ], ]; } what_do_you_make should return an array reference describing the things your backend writes on the disk. Each array element is also an array reference with two entries. First is the name of something made by the module, second is a brief description of what that piece has in it. These appear as documentation in the tentmaker application. sub backend_block_keywords { return [ { keyword => 'no_gen', label => 'No Gen', descr => 'Skip everything for this backend', type => 'boolean' }, { keyword => 'template', label => 'Alternate Template', descr => 'A custom TT template.', type => 'text' }, ]; } backend_block_keywords is similar to what_do_you_make. It lists all the valid keywords which can go in the backend's block in the config section at the top of the Bigtop file. 
These appear in order in the far right column of the Backends tab of tentmaker. The above keys are required; if you need a default, use the default key. If the type is boolean, spell out true or false as the default value (these are going to HTML and/or Javascript as strings). If you don't specify a default, you get false (unchecked) for booleans and blank for strings.

    sub gen_SQL {
        shift;
        my $base_dir = shift;
        my $tree     = shift;

The bigtop script will call gen_SQL (via gen_from_string) when the user has this backend in their config section and invokes bigtop with SQL or all in the list of build items. The class name is not needed, so I shifted it into the ether. The $base_dir is where the output goes. The $tree is the full AST (see above for details).

        # walk tree generating sql
        my $lookup = $tree->{application}{lookup};
        my $sql    = $tree->walk_postorder( 'output_sql_lite', $lookup );

        my $sql_output = join '', @{ $sql };

The lookup subtree of the application subtree provides easier access to the data in the tree (though it doesn't have all the connectors the AST has for parsing use; in particular, it uses hashes exclusively, so it never intentionally preserves order). I let Bigtop::Parser's walk_postorder do the visiting of tree nodes for me. It will call 'output_sql_lite' on each of them. I implement that on the packages my SQL generator cares about below. I pass the lookup hash to walk_postorder so it will be available to the callbacks.

Note that the name of the walk_postorder action needs to be unique among all Bigtop::Backend::* modules. This prevents subroutine redefinitions (and their warnings) when multiple SQL backends are in use. It also makes tentmaker run more quietly in all cases. Choose names with some tag relating to your backend to avoid namespace collisions.

The output of walk_postorder is always an array reference. I join it together and store it in $sql_output.
        # write the schema.sqlite
        my $docs_dir = File::Spec->catdir( $base_dir, 'docs' );
        mkdir $docs_dir;

By the convention of our shop, the schema.sqlite file lives in the docs directory of the generated distribution. Here, I make that directory (if that fails we'll hear loud screaming shortly). All that remains is to put the output into the file:

        my $sql_file = File::Spec->catfile( $docs_dir, 'schema.sqlite' );

        open my $SQL, '>', $sql_file or die "Couldn't write $sql_file: $!\n";
        print $SQL $sql_output;
        close $SQL or die "Couldn't close $sql_file: $!\n";
    }

So, the whole generation method is only 22 lines. Except for the specific use of 'sqlite' or 'lite', this method is the same for the other SQL backends. Of course, there is still a lot left for me to do.

Like most backends, this one uses Inline::TT to control the appearance of the output. If users don't like the appearance, they have only to copy the template into another file, edit it to suit them, and tell the module by including a template statement in the config block for the backend:

    config {
        SQL Postgres { template `my.tt`; }
        # ...
} Here is my default template: our $template_is_setup = 0; our $default_template_text = <<'EO_TT_blocks'; [% BLOCK sql_block %] CREATE [% keyword %] [% name %][% child_output %] [% END %] [% BLOCK table_body %] ( [% FOREACH child_element IN child_output %] [% child_element +%][% UNLESS loop.last %],[% END %] [% END %] ); [% END %] [% BLOCK table_element_block %] [% name %] [% child_output %][% END %] [% BLOCK field_statement %] [% keywords.join( ' ' ) %] [% END %] [% BLOCK insert_statement %] INSERT INTO [% table %] ( [% columns.join( ', ' ) %] ) VALUES ( [% values.join( ', ' ) %] ); [% END %] [% BLOCK three_way %] CREATE TABLE [% table_name %] ( id INTEGER PRIMARY KEY AUTOINCREMENT, [% FOREACH foreign_key IN foreign_keys %] [% foreign_key %] INTEGER[% UNLESS loop.last %],[% END +%] [% END %] ); [% END %] EO_TT_blocks There are six blocks -- whose names usually correspond to grammar rules -- each of which may be used repeatedly while generating output: Wraps the body of an SQL CREATE statement with 'CREATE name'. Wraps all of the column definitions in each CREATE TABLE statement with parentheses. Makes the column definition statements for CREATE TABLE bodies. Concatenates the individual definition clauses for a column definition statement. Makes the INSERT statements which correspond to data statements in the Bigtop file. Makes the CREATE TABLE block for implicit tables which join other tables. These come from join_table blocks in the Bigtop file which in turn come from a<->b in ASCII art passed to bigtop or tentmaker at the command line. To make the template operative, requires implementing setup_template: sub setup_template { my $class = shift; my $template_text = shift || $default_template_text; return if ( $template_is_setup ); Inline->bind( TT => $template_text, POST_CHOMP => 1, TRIM_LEADING_SPACE => 0, TRIM_TRAILING_SPACE => 0, ); $template_is_setup = 1; } The parser calls this (if the package can respond to it) prior to calling gen_SQL. 
If the user has supplied an alernate template, it is passed to setup_template. To avoid bad template binding, $template_is_setup keeps track of whether we've been here before. Inline's bind method creates subs in the current name space for callbacks to use. Note that if $template_text is a file name, that file will be bound correctly. I've tried to abstract out this code so all backends can share it, but the nature of Inline bindings makes that difficult, so I gave up. All that remains is the real work. We need to implement output_sql in about half a dozen packages. package # table_block table_block; use strict; use warnings; sub output_sql_lite { my $self = shift; my $child_output = shift; return if ( $self->_skip_this_block ); my %output; foreach my $statement ( @{ $child_output } ) { my ( $type, $output ) = @{ $statement }; push @{ $output{ $type } }, $output; } my $child_out_str = Bigtop::Backend::SQL::SQLite::table_body( { child_output => $output{table_body} } ); if ( defined $output{insert_statements} ) { $child_out_str .= "\n" . join "\n", @{ $output{insert_statements} }; } my $output = Bigtop::Backend::SQL::SQLite::sql_block( { keyword => $self->get_create_keyword(), child_output => $child_out_str, name => $self->get_name(), } ); return [ $output ]; } As all callbacks do, this one receives the current tree node as its invocant and the output of its children as parameters. (It also receives the data passed to walk_postorder, but this method doesn't need it.) The child output comes from the walk_postorder method of this package. It is always an array reference. In this case, that array has one or more subarrays. Each of those has two elements: type and text. The types are: table_body and insert_statements. The table_body elements must go inside the body of the CREATE TABLE statement. The insert_statements must be placed after the CREATE TABLE statement. There is one of those for each data statement in the table block of the Bigtop file. 
While the child output is always an array reference, its contents are up to you. We'll see how I formed the child output for this package below, when we walk into the child packages. Usually I only put arrays in it, to avoid confusion.

Once the child output is divided into two pieces, I first call the table_body template BLOCK. Then I join the insert statements together as strings. Finally, I call the sql_block template BLOCK. I've used the get_name provided by the sql_block package in Bigtop::Parser. The get_create_keyword sub is defined in Bigtop::Backend::SQL. The same method (with different output) is defined there for the seq_block package used by Bigtop::Backend::SQL::Postgres to build sequence statements. Note that I am careful to give my output back in an array, even though I have only one item. This is required by all walk_postorder actions.

    package # table_element_block
        table_element_block;
    use strict;
    use warnings;

    sub output_sql_lite {
        my $self         = shift;
        my $child_output = shift;

        if ( defined $child_output ) {
            my $child_out_str = join "\n", @{ $child_output };
            my $output = Bigtop::Backend::SQL::SQLite::table_element_block(
                { name => $self->get_name(), child_output => $child_out_str }
            );
            return [ [ table_body => $output ] ];
        }

There are two kinds of children for the table_element_block: fields (which are themselves blocks) and statements (which are leaves in the AST). So, if there is child output, we can safely assume that this node is a field block. In which case, we join the child output, send it to the table_element_block TT BLOCK, and return the output, being careful to note that it belongs in the table_body. Note that I always return arrays, even if I want to return a hash. This avoids rare but nasty bugs when the returned values pass through packages which aren't responding to the current action.
else { return unless ( $self->{__TYPE__} eq 'data' ); my @columns; my @values; foreach my $insertion ( @{ $self->{__ARGS__} } ) { my ( $column, $value ) = %{ $insertion }; $value = "'$value'" unless $value =~ /^\d+$/; push @columns, $column; push @values, $value; } my $output = Bigtop::Backend::SQL::SQLite::insert_statement( { table => $self->get_table_name, columns => \@columns, values => \@values, } ); return [ [ insert_statements => $output ] ]; } } If there is no child output, we must be working on a statement. But, in this backend, the only statements I care about are data statements. So, I return unless this is one of those, which I know by checking the __TYPE__ key of the node's hash. Recall that data statements look like this: data f_name => Phil, l_name => Crow; The __ARGS__ for the node is an arg_list (which is a blessed array). The foreach walks that array hashifying each entry, since the user provided these as pairs. The value is quoted to keep SQL happy, unless it is an integer. Both key and value are pushed into arrays for easy use by the insert_statement TT BLOCK. The result is returned with a note that the output is for the insert_statements list. Now we are arriving at the most intricate piece. It handles the only statement in the field block we care about: is. 
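(A brief aside before that: the value-quoting rule used in the data handler above -- quote every value unless it is a plain integer -- is easy to exercise in isolation. This is a standalone sketch: the bird table and its columns are made up for illustration, and the real backend feeds a TT block rather than sprintf.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Mimics the data-statement handling above: each single-pair hash
# becomes a column/value, and values are quoted unless purely numeric.
sub insert_statement {
    my ( $table, @pairs ) = @_;

    my ( @columns, @values );
    for my $pair ( @pairs ) {
        my ( $column, $value ) = %{ $pair };
        $value = "'$value'" unless $value =~ /^\d+$/;
        push @columns, $column;
        push @values,  $value;
    }

    return sprintf "INSERT INTO %s ( %s ) VALUES ( %s );",
        $table, join( ', ', @columns ), join( ', ', @values );
}

print insert_statement(
    'bird', { f_name => 'Phil' }, { l_name => 'Crow' }, { id => 42 }
), "\n";
# INSERT INTO bird ( f_name, l_name, id ) VALUES ( 'Phil', 'Crow', 42 );
```

Because each pair lives in its own one-key hash, the user's column order survives the trip through %{ $pair }, which a single multi-key hash would not guarantee.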
package # field_statement field_statement; use strict; use warnings; my %expansion_for = ( int4 => 'INTEGER', primary_key => 'PRIMARY KEY', assign_by_sequence => 'AUTOINCREMENT', auto => 'AUTOINCREMENT', ); sub output_sql_lite { my $self = shift; shift; # there is no child output my $lookup = shift; return unless $self->get_name() eq 'is'; my @keywords; foreach my $arg ( @{ $self->{__DEF__}{__ARGS__} } ) { my $expanded_form = $expansion_for{$arg}; if ( defined $expanded_form ) { push @keywords, $expanded_form; } else { push @keywords, $arg; } } my $output = Bigtop::Backend::SQL::SQLite::field_statement( { keywords => \@keywords } ); return [ $output ]; } Now we see $lookup coming into the sub. I gave it to the original call to walk_postorder, which has been dutifully passing it to all the output_sql subs it calls. It's finally come to the place that needs it (see below). If the statement's keyword is not 'is', output_sql returns undef. For 'is' statements, it loops through the __DEF__ __ARGS__. Each of those is one of the comma separated clauses or clause abbreviations in the 'is' statement. For example: is int4, primary_key, auto; has three items in its list: int4, primary_key, and auto. For each of those, output_sql looks in the expansion_for hash to see if there is alternate text for the input word. If it finds alternate text, it uses it. Otherwise, it merely uses the arg directly. The input order is preserved in the output. This is the mechanism that allows all Bigtop input files to use int4, primary_key, and auto. Each backend uses a scheme like this one (though Postgres' is more complex) to generate the SQL for its database engine. Once the proper clauses have all been pushed into the keywords array, it is passed to the field_statement TT BLOCK. To allow you to put additional SQL statements into the schema.* file, Bigtop provides the literal SQL statement. 
This package handles it:

    package literal_block;
    use strict;
    use warnings;

    sub output_sql {
        my $self = shift;

        return $self->make_output( 'SQL' );
    }

Bigtop::Parser provides make_output in its literal_block package to facilitate literal statements of all sorts. Simply tell it what backend type you are interested in. If the node you're working on is of that type, it makes meaningful output (giving back an array reference with one element containing the full literal string from the user's statement). Otherwise, it returns undef, which is discarded by the proper walk_postorder method.

Join tables are manufactured tables whose only purpose is to embody a many-to-many relationship between other tables.

    sub output_sql {
        my $self         = shift;
        my $child_output = shift;

        my $three_way = Bigtop::Backend::SQL::SQLite::three_way(
            {
                table_name   => $self->{__NAME__},
                foreign_keys => $child_output,
            }
        );

        return [ $three_way ];
    }

This method just passes the buck to the three_way TT block.

    sub output_sql {
        my $self         = shift;
        my $child_output = shift;

        return unless $self->{__KEYWORD__} eq 'joins';

        my @tables = %{ $self->{__DEF__}->get_first_arg() };

        return \@tables;
    }
    1;

The __DEF__ key stores the joins or names statement pair; it is an arg_list object. These respond to get_first_arg and other methods. Since there is only ever one pair allowed for either of these statements, we just want that one pair. It comes back as a hash, but we must once again make it an array to comply with walk_postorder's return value API. The rest of the module is all POD.

The lookup hash is stored inside the AST. You can get it out like so:

    my $lookup = $tree->{application}{lookup};

as shown in the gen_SQL method above. The keys in the lookup hash are (these are optional, so only some of them might appear in your hash):

This subhash represents all the simple statements at the app level. Its keys are the statement keywords. The values are arg_lists of the values for the statement.
arg_lists are the only arrays in the lookup hash. This subhash represents all the controllers in the Bigtop app section. Its keys are the controller names. Each value is a hash with these keys (some of which are optional): This subhash represents all the methods defined for the controller. It is keyed by method name. The value is a hash with two keys: Just like the app_statements subhash at the top level. (Always present and defined.) A string with the type supplied in the Bigtop file. Just like the configs subhash at the top level (see below). Just like the app_statements subhash at the top level. These are the simple statements of the controller. (Always present, but could be undef.) This is the controller's type. It will be a string storing the type from the Bigtop file for this controller or undef if no type was given by the user. The keys are config names. The values are arg_lists with a single element. If the config is marked no_accessor, that element will be a hash keyed by the value storing no_accessor. Otherwise, that element will be a simple string. This subhash represents all the tables in the Bigtop file. It is keyed by table name and has these subkeys (which are optional): (Useless.) This has a single key __ARGS__ which stores the arg_list for the last data statement in the table. This is a subhash keyed by the name of the field storing subhashes of simple statements keyed by the statement keyword and storing a hash with a single key 'args' whose value is an arg_list. This has a single key __ARGS__ which is an arg_list whose single element is the text of the foreign_display template for this table. This has a single key __ARGS__ which is an arg_list whose single element is the sequence name for this table. This subhash represents all join_tables in the Bigtop file, but each one appears twice -- once for each table involved in the many-to-many relationship. The keys of the hash are the names of the tables on either side of the many-to-many. 
The values are hashes, which are slightly complex. Here is an example:

    join_tables => {
        skill => { joins => { job   => 'job_skill' } },
        job   => { joins => { skill => 'job_skill' } },
        fox   => { joins => { sock  => 'fox_sock' }, name => 'socks' },
        sock  => { joins => { fox   => 'fox_sock' }, name => 'foxes' },
    }

which corresponds to these join_table blocks:

    join_table job_skill {
        joins job => skill;
    }

    join_table fox_sock {
        joins fox => sock;
        names foxes => socks;
    }

Note that I said that the lookup hash is easier to use than direct AST walking. But I never said it was trivial or even well designed. It was easy to build.

Here is the gist of the lookup hash in summary (as generated by outliner):

    app_statements
    controllers
        methods
            statements
            type
        configs
        statements
        type
    configs
    tables
        data
        fields
        foreign_display
        sequence
    join_tables

Phil Crow <crow.phil@gmail.com>
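The keyword-expansion scheme described in this section — look each input word up in a hash of engine-specific replacements and fall back to the word itself, preserving input order — can be sketched in Python. This is an illustrative analogue, not Bigtop code; the names are mine:

```python
# Engine-specific replacements for Bigtop's abbreviated column keywords
# (mirrors the %expansion_for hash shown above for SQLite).
EXPANSION_FOR = {
    "int4": "INTEGER",
    "primary_key": "PRIMARY KEY",
    "assign_by_sequence": "AUTOINCREMENT",
    "auto": "AUTOINCREMENT",
}

def expand_is_statement(args):
    """Expand each clause of an 'is' statement, passing unknown words
    through unchanged and preserving the input order."""
    return [EXPANSION_FOR.get(arg, arg) for arg in args]

print(expand_is_statement(["int4", "primary_key", "auto"]))
# -> ['INTEGER', 'PRIMARY KEY', 'AUTOINCREMENT']
```

Another backend would supply a different replacement table while keeping the same pass-through logic, which is exactly how every Bigtop input file gets to use the same abbreviations.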
http://search.cpan.org/dist/Bigtop/lib/Bigtop/Docs/Modules.pod
operator new in custom namespace Discussion in 'C++' started by dirk@dirkgregorius.de,35 - William F. Robertson, Jr. - Jul 29, 2003 - Replies: - 8 - Views: - 563 - Neil Cerutti - Dec 22, 2005 Stream operator in namespace masks global stream operatormrstephengross, May 9, 2007, in forum: C++ - Replies: - 3 - Views: - 527 - James Kanze - May 10, 2007 Custom Controls: Import a custom namespace and use its functions withinuser, Jul 18, 2007, in forum: ASP .Net - Replies: - 1 - Views: - 531 - Kevin Spencer - Jul 19, 2007 What are the key differences between operator new and operator new[]?xmllmx, Feb 3, 2010, in forum: C++ - Replies: - 6 - Views: - 597 - xmllmx - Feb 3, 2010
http://www.thecodingforums.com/threads/operator-new-in-custom-namespace.501536/
Hello, I've got a problem with SoftReferences. According to the specification, all soft references must be cleared before OutOfMemoryError is thrown. But that is not the case. The code below throws OutOfMemoryError with -Xmx 32Mb. When I change to WeakReference, everything works fine. Does anybody know whether it is a bug in the JRE or there is something wrong with my code?

Code:

    import java.lang.ref.Reference;
    import java.lang.ref.SoftReference;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.Map;

    public class Test {
        private byte[] buf = new byte[100000];
        private Integer key;
        private Map map;

        protected void finalize() throws Throwable {
            map.remove(key);
            super.finalize();
        }

        public static void main(String[] args) throws InterruptedException {
            Map map = new HashMap();
            for (int i = 0; i < 1000; i++) {
                Test test = new Test();
                test.key = new Integer(i);
                test.map = map;
                map.put(test.key, new SoftReference(test));
            }
            LinkedList keyList = new LinkedList(map.keySet());
            Collections.sort(keyList);
            Iterator it = keyList.iterator();
            while (it.hasNext()) {
                Object o = it.next();
                synchronized (map) {
                    if (map.containsKey(o)) {
                        System.out.println(o + " -> " + ((Reference) map.get(o)).get());
                    }
                }
            }
        }
    }

Bug in SoftReference? (6 messages)
- Posted by: Pavel Vlasov
- Posted on: May 19 2004 12:06 EDT

Threaded Messages (6)
- Bug in SoftReference? by Mircea Crisan on May 20 2004 04:26 EDT
- Bug in SoftReference? by Pavel Vlasov on May 20 2004 13:29 EDT
- Bug in SoftReference? by Lars Stitz on May 21 2004 10:14 EDT
- Bug in SoftReference by Pavel Vlasov on May 21 2004 12:29 EDT
- Sun confirmed that this is a bug by Pavel Vlasov on June 19 2004 15:12 EDT
- Sun confirmed that this is a bug by sy man on September 03 2004 12:39 EDT

Bug in SoftReference? [ Go to top ]

Hi,
- Posted by: Mircea Crisan
- Posted on: May 20 2004 04:26 EDT
- in response to Pavel Vlasov

It might be a JVM thing, because it works on some JVMs and doesn't on others.
Here are the results:

- Sun JDK 1.4.2_03 Linux, Sun JDK 1.5.0 Linux, Blackdown-1.4.2-rc1 Linux, Sun JDK 1.4.2_02 Windows: it does not work.
- BEA JRockit 1.4.2_03-b02 Linux and IBM 1.2.2 Windows: it works.

However, if you comment out the finalize method it works on all the platforms I have mentioned. I don't know if this is a bug or a feature; I am looking forward to finding out.

Best regards,
Mircea

Bug in SoftReference? [ Go to top ]

Hello,
- Posted by: Pavel Vlasov
- Posted on: May 20 2004 13:29 EDT
- in response to Mircea Crisan

If you change 100000 to 1000000 in

    private byte[] buf = new byte[100000];

it should fail even with finalize() commented out. I'm under the impression that garbage collection of SoftReferences doesn't suspend the thread which creates them, and this is why it fails with OutOfMemoryError. The main thread pollutes the heap so fast that garbage collection fails to clean it in time.

---
Regards, Pavel.

Bug in SoftReference? [ Go to top ]

- Posted by: Lars Stitz
- Posted on: May 21 2004 10:14 EDT
- in response to Pavel Vlasov

> I'm under the impression that garbage collection of SoftReferences doesn't suspend the
> thread which creates them, and this is why it fails with OutOfMemoryError. The main
> thread pollutes the heap so fast that garbage collection fails to clean it in time.

That's not a bug, it's a feature. I'm sure you do not want the garbage collector to stop your worker threads completely when it is not necessary (like for minor garbage collections). If you need this behaviour, use System.gc() (which triggers full garbage collections in a "stop-the-world" mode), but then you can stop using SoftReferences altogether and watch your application's performance degrade to death. Is your interest in this subject purely of an academic nature or do you have a business case where you need a solution to this problem?
Cheers, Lars

Bug in SoftReference [ Go to top ]

- Posted by: Pavel Vlasov
- Posted on: May 21 2004 12:29 EDT
- in response to Lars Stitz

If it is a feature, then the Java specification should not say: "All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError." I don't want the GC to stop my threads every time, but I want it to stop my thread and clean memory before throwing OutOfMemoryError, as promised. Calling gc manually kills performance.

I have a business case. I need to have a cache of parsed Java files which can be reparsed from disk, but I want to keep them in memory as long as possible. I used SoftReferences and ran into OutOfMemoryError on projects bigger than one thousand files. Now I have to use WeakReference, but the GC doesn't make any effort to keep WeakReferences in memory as long as it can. I posted a bug report to Sun, but I don't expect them to fix it in the nearest years.

Sun confirmed that this is a bug [ Go to top ]

I got a confirmation from Sun that this is a bug.
- Posted by: Pavel Vlasov
- Posted on: June 19 2004 15:12 EDT
- in response to Pavel Vlasov

Sun confirmed that this is a bug [ Go to top ]

Is there a formal report of the bug? I am currently trying to pursue several different paths for implementing a memory cache. We are using JDK 1.3, so I would like to see a formal acceptance from Sun acknowledging the bug so that I could document it as a justification in my report for the case not to use it.
- Posted by: sy man
- Posted on: September 03 2004 12:39 EDT
- in response to Pavel Vlasov

Thanks Sy
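For readers following along in Python: the language offers weak references (via the stdlib weakref module) but no soft references, and the clearing guarantee the poster expected from SoftReference is the normal behaviour of a weak reference once its referent becomes unreachable. An illustrative sketch:

```python
import gc
import weakref

class Payload(object):
    """Stand-in for the cached object held through a reference."""
    def __init__(self, key):
        self.key = key

cache = {}
obj = Payload(1)
cache[1] = weakref.ref(obj)

assert cache[1]() is obj  # referent is still strongly reachable
del obj                   # drop the last strong reference
gc.collect()              # make sure the collector has run
print(cache[1]())         # the reference has been cleared
```

Calling the weakref returns None once the referent is gone, so the cache entry no longer keeps the object alive; that is the analogue of the "cleared before OutOfMemoryError" guarantee under discussion, minus the soft reference's "keep as long as memory allows" policy.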
http://www.theserverside.com/discussions/thread.tss?thread_id=26023
The Qt namespace contains miscellaneous identifiers used throughout the Qt library. More...

    #include <Qt>

The Qt namespace contains miscellaneous identifiers used throughout the Qt library.

This enum type is used to describe alignment. It contains horizontal and vertical flags.

type defines what happens to the aspect ratio when scaling a rectangle. See also QSize::scale() and QImage::scaled().

Background mode.

The ItemFlags type is a typedef for QFlags<ItemFlag>. It stores an OR combination of ItemFlag values.

The key names used by Qt. See also QKeyEvent::key().

This enum describes the modifier keys. See also QAbstractItemDelegate::elidedText().

This enum type is used to define some modifier flags. Some of these flags only make sense in the context of printing.

Obsolete flags.

The WindowFlags type is a typedef for QFlags<WindowType>. It stores an OR combination of WindowType values.

Auxiliary function. Converts the plain text string plain to a rich text formatted string with any HTML meta-characters escaped.
http://doc.qt.digia.com/4.0/qt.html
Hi all,

I would like to know the way of getting a list of broadband PPPoE connections (using C#). Actually, what I would like to do is pick a random connection from the list of multiple PPPoE connections already set up in Windows XP, and connect to and disconnect from the internet through it. Calling the RasDevice.GetDevices() method doesn't return the list of PPPoE connections. Is there any way? Thanks. Also, I know the way of using the rasdial command through the DOS prompt.

RasDevice.GetDevices is probably reporting actual PPPoE hardware connected to the machine, and it sounds like you're trying to retrieve broadband entries from within a phone book. In which case you'll want to use the RasPhoneBook and RasEntry classes (for more information on usage, see the DotRas SDK Help Documentation). Here's a very basic example:

    using System.Collections.Generic;
    using System.Linq;
    using DotRas;

    RasPhoneBook pbk = new RasPhoneBook();
    pbk.Open();

    IEnumerable<RasEntry> broadbandEntries =
        from entry in pbk.Entries
        where entry.EntryType == RasEntryType.Broadband
        select entry;

Keep in mind, your application will only support Windows XP or later since you're accessing broadband entries, which means you need to use the WINXP or later build type.

- Jeff

Hi Jeff, thanks for your reply. I managed to solve the problem the same way you told me. I tried "RasPhoneBook" first but did not get anything; I should have known about the Open method. Anyway, it's working fine for me. Jeff, thank you for your reply again; hope this discussion helps others. Thanks
http://dotras.codeplex.com/discussions/231904
Set field value in structure array, given index and field name

    #include "matrix.h"
    void mxSetField(mxArray *pm, mwIndex index,
        const char *fieldname, mxArray *pvalue);

    subroutine mxSetField(pm, index, fieldname, pvalue)
    mwPointer pm, pvalue
    mwIndex index
    character*(*) fieldname

pm
Pointer to a structure mxArray. Call mxIsStruct to determine whether pm points to a structure mxArray.

index
Index of an element in the array. In C, the first element of an mxArray has an index of 0. The index of the last element is N-1, where N is the number of elements in the array. In Fortran, the first element of an mxArray has an index of 1. The index of the last element is N, where N is the number of elements in the array. See mxCalcSingleSubscript for details on calculating an index.

fieldname
Name of a field in the structure. The field must exist in the structure. Call mxGetFieldNameByNumber or mxGetFieldNumber to determine existing field names.

pvalue
Pointer to an mxArray containing the data you want to assign to fieldname.

Use mxSetField to assign the contents of pvalue to the field fieldname of element index.

If you want to replace the contents of fieldname, you must first free the memory of the existing data. Use the mxGetField function to get a pointer to the field, call mxDestroyArray on the pointer, then call mxSetField to assign the new value.

You cannot assign pvalue to more than one field in a structure or to more than one element in the mxArray. If you want to assign the contents of pvalue to multiple fields, use the mxDuplicateArray function to make copies of the data then call mxSetField on each copy.

To free memory for structures created using this function, call mxDestroyArray only on the structure array. Do not call mxDestroyArray on the array pvalue points to. If you do, MATLAB® attempts to free the same memory twice, which can corrupt memory.
In C, you can replace the statements:

    field_num = mxGetFieldNumber(pa, "fieldname");
    mxSetFieldByNumber(pa, index, field_num, new_value_pa);

with a call to mxSetField:

    mxSetField(pa, index, "fieldname", new_value_pa);

In Fortran, you can replace the statements:

    fieldnum = mxGetFieldNumber(pm, 'fieldname')
    mxSetFieldByNumber(pm, index, fieldnum, newvalue)

with a call to mxSetField:

    mxSetField(pm, index, 'fieldname', newvalue)

See the following example in matlabroot/extern/examples/mx: mxcreatestructarray.c
http://www.mathworks.com/help/matlab/apiref/mxsetfield.html?nocookie=true
The process is:

1. resize to a smaller size for faster processing
2. blur to reduce the effect of noise
3. morphology to make blobs and remove noise
4. findContours to draw blob rectangles

Example movie of the result is below. For more detail, refer to the source code.

    #include <stdio.h>
    #include <iostream>
    #include <opencv2\opencv.hpp>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/video/background_segm.hpp>

    #ifdef _DEBUG
    #pragma comment(lib, "opencv_core247d.lib")
    #pragma comment(lib, "opencv_imgproc247d.lib")   //MAT processing
    #pragma comment(lib, "opencv_objdetect247d.lib") //HOGDescriptor
    //#pragma comment(lib, "opencv_gpu247d.lib")
    //#pragma comment(lib, "opencv_features2d247d.lib")
    #pragma comment(lib, "opencv_highgui247d.lib")
    #pragma comment(lib, "opencv_ml247d.lib")
    //#pragma comment(lib, "opencv_stitching247d.lib");
    //#pragma comment(lib, "opencv_nonfree247d.lib");
    #pragma comment(lib, "opencv_video247d.lib")
    #else
    #pragma comment(lib, "opencv_core247.lib")
    #pragma comment(lib, "opencv_imgproc247.lib")
    #pragma comment(lib, "opencv_objdetect247.lib")
    //#pragma comment(lib, "opencv_gpu247.lib")
    //#pragma comment(lib, "opencv_features2d247.lib")
    #pragma comment(lib, "opencv_highgui247.lib")
    #pragma comment(lib, "opencv_ml247.lib")
    //#pragma comment(lib, "opencv_stitching247.lib");
    //#pragma comment(lib, "opencv_nonfree247.lib");
    #pragma comment(lib, "opencv_video247d.lib")
    #endif

    using namespace cv;
    using namespace std;

    int main()
    {
        //global variables
        Mat frame;            //current frame
        Mat resize_blur_Img;
        Mat fgMaskMOG2;       //fg mask generated by MOG2 method
        Mat binaryImg;
        //Mat TestImg;
        Mat ContourImg;       //fg mask generated by MOG2 method

        Ptr< BackgroundSubtractor> pMOG2; //MOG2 Background subtractor
        pMOG2 = new BackgroundSubtractorMOG2(300, 32, true); //300,0.0);

        char fileName[100] = "mm2.avi"; //video\\mm2.avi"; //cctv 2.mov"; //_p1.avi";
        VideoCapture stream1(fileName); //0 is the id of video device. 0 if you have only one camera

        //morphology element
        Mat element = getStructuringElement(MORPH_RECT, Size(7, 7), Point(3, 3));

        //unconditional loop
        while (true) {
            Mat cameraFrame;
            if (!(stream1.read(frame))) //get one frame from video
                break;

            //Resize
            resize(frame, resize_blur_Img,
                   Size(frame.size().width / 3, frame.size().height / 3));
            //Blur
            blur(resize_blur_Img, resize_blur_Img, Size(4, 4));
            //Background subtraction
            pMOG2->operator()(resize_blur_Img, fgMaskMOG2, -1); //,-0.5);

            ///////////////////////////////////////////////////////////////////
            //pre processing
            //1 point delete
            //morphologyEx(fgMaskMOG2, fgMaskMOG2, CV_MOP_ERODE, element);
            morphologyEx(fgMaskMOG2, binaryImg, CV_MOP_CLOSE, element);
            //morphologyEx(fgMaskMOG2, testImg, CV_MOP_OPEN, element);

            //Shadow delete
            //Binary
            threshold(binaryImg, binaryImg, 128, 255, CV_THRESH_BINARY);

            //Find contour
            ContourImg = binaryImg.clone();
            //less blob delete
            vector< vector< Point> > contours;
            findContours(ContourImg, contours,  // a vector of contours
                         CV_RETR_EXTERNAL,      // retrieve the external contours
                         CV_CHAIN_APPROX_NONE); // all pixels of each contours

            vector< Rect > output;
            vector< vector< Point> >::iterator itc = contours.begin();
            while (itc != contours.end()) {
                //Create bounding rect of object
                //rect draw on origin image
                Rect mr = boundingRect(Mat(*itc));
                rectangle(resize_blur_Img, mr, CV_RGB(255, 0, 0));
                ++itc;
            }

            ///////////////////////////////////////////////////////////////////
            //Display
            imshow("Shadow_Removed", binaryImg);
            imshow("Blur_Resize", resize_blur_Img);
            imshow("MOG2", fgMaskMOG2);

            if (waitKey(5) >= 0)
                break;
        }
    }

Great example, it helped me a lot. Thanks

could u please tell me how to get that dataset

Hi, what dataset do you tell? mm2.avi? mm2.avi is a sample avi file on my local hard disk. Thank you.

thanks can u tell me please why u used these last values (300,32) ?
pMOG2 = new BackgroundSubtractorMOG2(300,32,true) refer to this opencv document..-> first parameter is Length of the history second parameter is Threshold on the squared Mahalanobis distance to decide whether it is well described by the background model (see Cthr??). This parameter does not affect the background update. A typical value could be 4 sigma, that is, varThreshold=4*4=16 In short, First is the average time for a background. Second is Threshold for separating the foreground For more understanding, we have to read the algorithm paper. Thank you so much very helpful stay blessed kindly share the document for algorithm i have to design a system which can robustly works against noises and illumination changes also can track multiple objects will u please help me kindly Hi htc onex, what help do you need? how to over come the occlusion problems in mog2 background subtraction can u give me an idea or refer document or give me an idea to extend this code please I think that your intersting problme is tracking topic. The background subtraction can be use in object tracking. but the method may not work well because there is not prediction and dynamic equation. Popular method for tracking is kalman, particle(condensation), optical flow, camshift. There tracking method is need initial region or object for start tracking. The background subtraction can be use for this initial region. thanks Coud u please tell me how did u get the background model is the program using linux? how if I want to apply it on Visual C using windows? The example source code is C, C++, you can test the code in the VS on window os. Thank you This comment has been removed by the author. I've tried it on VS C, but C code not using Mat function? could you help me converting that program to c code? thx before great help but I want to dry bounding rectangles on resultant image not on the original image. What modifications should I perform? You should change Mat parameter in rectangle function. 
(line 99). binaryImg, resize_blur_Img, fgMaskMOG2 is same size that is resized to small. Origin image is frame. If you want to draw in origin image, you should divide rectangle vlaue as resizing scale factor. Thank you. Hey bro, how to Get background model from BackgroundSubtractorMOG2 in c++ Help mee, please !! thank you so much !! I am using Opencv 3.0 and Visual Studio 2012 This Code Not working for me please help me out. sir i can't my code compile as compiled it as g++ -ggdb `pkg-config --cflags --libs opencv` bg_sub.cpp -o bg mant error occurs. I got error LNK1181: cannot open input file 'opencv_calib3d2410d.lib' Any help Hi everybody, I simulated the code but I got some error. firstly: invalid new-expression of abstract class type ‘cv::BackgroundSubtractorMOG2’ pMOG2 = new BackgroundSubtractorMOG2(300,32,true); //300,0.0); secondly: ‘class cv::BackgroundSubtractor’ has no member named ‘operator()’ pMOG2->operator()(resize_blur_Img, fgMaskMOG2, -1);//,-0.5); Any help? I am new in Computer Vision. What opencv version do you use sir? I'm getting the same error, I have opencv 3.2.0 what should I change? refer to this post how to apply tracking method to moving objects in this code? You can compare bounding box between t time frame and t+1 time frame. What compare? how coordinate and how much overlapped.. But this is very simple way, so easy to loose object when crossing and occlusion.. But it will be good practice. Obviously, tracking is very difficult problem. Thank you.
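The core idea behind the pipeline above — compare each incoming frame against a background model and threshold the difference into a binary foreground mask — can be sketched without OpenCV in a few lines of Python. This is illustrative only: MOG2 itself maintains a per-pixel Gaussian mixture model rather than a single static background frame, and the morphology/contour steps are omitted.

```python
# Pure-Python sketch of frame differencing: pixels whose absolute
# difference from the background exceeds a threshold become foreground (1).
def foreground_mask(background, frame, threshold=30):
    """background/frame: 2-D lists of grayscale values; returns a 0/1 mask."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 10],
              [10, 205, 12]]

print(foreground_mask(background, frame))
# -> [[0, 1, 0], [0, 1, 0]]  (the bright moving column is marked)
```

In the real pipeline, morphological closing then fills small holes in this mask and findContours extracts the connected blobs whose bounding rectangles get drawn on the frame.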
http://study.marearts.com/2014/04/opencv-study-background-subtraction-and.html?showComment=1413283146728
CC-MAIN-2019-51
refinedweb
1,078
60.31
Other Alias
    readlinkat

readlink() places the contents of the symbolic link pathname in the buffer buf, which has size bufsiz. readlink() does not append a null byte to buf. It will truncate the contents (to a length of bufsiz characters), in case the buffer is too small to hold all of the contents.

readlinkat()

On success, these calls return the number of bytes placed in buf.

NOTES
The following program allocates the buffer needed by readlink() dynamically from the information provided by lstat(), making sure there's no race condition between the calls.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        struct stat sb;
        char *linkname;
        ssize_t r;

        if (argc != 2) {
            fprintf(stderr, "Usage: %s <pathname>\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        if (lstat(argv[1], &sb) == -1) {
            perror("lstat");
            exit(EXIT_FAILURE);
        }

        linkname = malloc(sb.st_size + 1);
        if (linkname == NULL) {
            fprintf(stderr, "insufficient memory\n");
            exit(EXIT_FAILURE);
        }

        r = readlink(argv[1], linkname, sb.st_size + 1);
        if (r == -1) {
            perror("readlink");
            exit(EXIT_FAILURE);
        }

        if (r > sb.st_size) {
            fprintf(stderr, "symlink increased in size "
                    "between lstat() and readlink()\n");
            exit(EXIT_FAILURE);
        }

        linkname[r] = '\0';

        printf("'%s' points to '%s'\n", argv[1], linkname);

        free(linkname);
        exit(EXIT_SUCCESS);
    }

COLOPHON
This page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
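For comparison, Python's os.readlink() wraps the same system call but returns the link target as a string, so the buffer-sizing dance (and the lstat()/readlink() race it guards against) is handled by the runtime. A small sketch, assuming a POSIX system where symlinks can be created:

```python
import os
import tempfile

# Create a symlink in a scratch directory and read its target back.
d = tempfile.mkdtemp()
target = os.path.join(d, "target.txt")
link = os.path.join(d, "link")

open(target, "w").close()      # the file the link will point at
os.symlink(target, link)

print(os.readlink(link))       # prints the path stored in the symlink
```

Note that, like the raw system call, os.readlink() returns exactly the string stored in the link — it does not resolve relative targets or canonicalize the path (os.path.realpath() does that).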
http://manpages.org/readlink/2
.pyiGenerator The Ghidra .pyi Generator generates .pyi type stubs for the entire Ghidra API. Those stub files can later be used in PyCharm to enhance the development experience. You can either use the stubs released here, or follow the instructions below to generate them yourself. The release contains PEP 561 stub package, which can simply be installed with pip install ghidra-stubs*.whl into the environment in which the real ghidra module is available. Any conformant tool will then use the stub package for type analysis purposes. If you want to manually add the stub files to PyCharm, follow the instructions in Install, uninstall, and upgrade interpreter paths. Once installed, all you need to do is import the Ghidra modules as usual, and PyCharm will do the rest. import ghidra To get support for the Ghidra builtins, you need to import them as well. The type hints for those exist in the generated ghidra_builtins.pyi stub. Since it is not a real Python module, importing it at runtime will fail. But the .pyi gives PyCharm all the information it needs to help you. try: from ghidra.ghidra_builtins import * except: pass If you are using ghidra_bridge from a Python 3 environment where no real ghidra module exists you can use a snippet like the following: import typing if typing.TYPE_CHECKING: import ghidra from ghidra.ghidra_builtins import * else: b = ghidra_bridge.GhidraBridge(namespace=globals()) # actual code follows here typing.TYPE_CHECKING is a special value that is always False at runtime but True during any kind of type checking or completion. Once done, just code & enjoy. To properly extract all types from Ghidra, make sure to extract the API documentation. Help -> Ghidra API Help The script depends on both the attr and typing packages. # Create a virtualenv for Ghidra packages. # It is important to use Python2.7 for this venv! # If you want, you can skip this step and use your default Python installation. 
mkvirtualenv -p python2.7 ghidra

# Create Jython's site-packages directory.
jython_site_packages=~/.local/lib/jython2.7/site-packages
mkdir -p $jython_site_packages

# Create a PTH file to point Jython to Python's site-packages directories.
# Again, this has to be Python2.7.
# Outside a virtualenv, use
python2.7 -c "import site; print(site.getusersitepackages()); print(site.getsitepackages()[-1])" > $jython_site_packages/python.pth
# If using virtualenv, use the following instead
python2.7 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())" > $jython_site_packages/python.pth

# Use pip to install packages for Ghidra
pip install attrs typing

Generating the .pyi files

- Add the script's directory to the Script Directories in the Ghidra Script Manager
- Run generate_ghidra_pyi.py (will be located under IDE Helpers)
- Choose the directory to place the .pyi files in.

Alternatively, run the script headlessly:

$GHIDRA_ROOT/support/analyzeHeadless /tmp tmp -scriptPath $(pwd) -preScript generate_ghidra_pyi.py ./

generate_ghidra_pyi.py generates a setup.py inside the directory that was selected. This allows using pip install to install a PEP 561 stub package that is recognized by PyCharm and other tools as containing type information for the ghidra module.
https://openbase.com/python/ghidra-stubs
On Monday 19 May 2003 11:50, Wannheden, Knut wrote: > P. Nothing is impossible..., but it is difficult to have meaning-free URIs and to support (as in ignore) other URIs. I would like ant to ignore other namespaces so that it is a civilized member of the xml community. For example the following specifies which processor deals with which element tag: <project xmlns: <echo message="hello world"/> <h:html> <h:body>hello world</h:body> </h:html> </project> the above produces "hello world" as an ant script and "hello world" in an NS aware browser. > I > think it makes perfectly sense. Namespaces are to avoid name clashes. This is not the only reason for XML namespaces. The standard says that are to allow "elements and attributes that are defined for and used by multiple software modules". > Ant > doesn't either force you to use certain Java package names when > implementing tasks. > > Can't the <typedef/> task just add another antlib to the Project? ? this is what <typedef/> does > > > . I would agree with you here. Maybe another pattern could be allowed? I only have a problem with an arbitrary uri ;-) > IMO > an URI starting with "antlib:" should always mean that the following > denotes a package with that antlib. What would happen if I had a Java > package on the classpath called "arbitarystring"? The definitions are appended to the definitions defined in {converted package}/antlib.xml. This is the same behaviour as definitions going into the default namespace. Peter
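The namespace-dispatch behaviour Peter describes — each processor handles elements in its own namespace URI and politely ignores the rest — can be sketched with Python's standard XML tooling. This is an illustrative analogue of the idea, not Ant code:

```python
import xml.etree.ElementTree as ET

# A document mixing an Ant-style default namespace with XHTML elements,
# mirroring the <project>/<h:html> example in the message above.
DOC = """<project xmlns="antlib:org.apache.tools.ant"
                  xmlns:h="http://www.w3.org/1999/xhtml">
  <echo message="hello world"/>
  <h:html><h:body>hello world</h:body></h:html>
</project>"""

ANT_NS = "antlib:org.apache.tools.ant"

def dispatch(doc, ns):
    """Split top-level elements into those our processor handles and
    those in foreign namespaces, which are ignored."""
    handled, ignored = [], []
    for child in ET.fromstring(doc):
        uri, _, tag = child.tag[1:].partition("}")  # tags arrive as {uri}local
        (handled if uri == ns else ignored).append(tag)
    return handled, ignored

print(dispatch(DOC, ANT_NS))
# -> (['echo'], ['html'])
```

The parser treats the namespace value as an opaque string, which is why a "meaning-free" URI like `antlib:org.apache.tools.ant` works just as well as an http: one — the only thing that matters is that processors agree on it.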
http://mail-archives.apache.org/mod_mbox/ant-dev/200305.mbox/%3C200305191355.39003.peter.reilly@corvil.com%3E
ColdFusion Iterating Business Objects (IBOs) From The Ground-Up P. BaseObject.cfc. Access And Mutate. Number Of Records. IsLast(). Run Time Property Evaluation). Returning Empty String As Invalid. LoadQuery(). Too Dynamic / Too Naming-Convention Dependent. - It explicitly points out which methods are for getting and setting attributes. Not only does this allow you to have other "Get" and "Set" prefixed methods that are NOT used for getting and setting properties, I find explicit declarations a bit more comforting. They help me sleep a bit better a night. - It allows for differences in naming conventions between your database and your CFC methods. For example, I use the "_" in my database naming convention. A person's first name in the database would be "first_name". In order for this to be Get'ed as a function rather than a query column, I would have to have a CFC method named GetFirst_Name(). This violates my CFC naming convention which states that I cannot use "_" in CFC names or methods. What I want to do is have a CFC method named GetFirstName(). This however would NOT be picked up by the original IBO which relied only on naming conventions. The MapTo attribute allows me to have a GetFirstName() method and map it to the first_name property via mapto="first_name". Now, On To The Code. AbstractIBO.cfc - <cfcomponent - displayname="AbstractIBO" - output="false" - - <!--- Run the pseudo constructor to set up default data structures. ---> - <cfscript> - // Set up the instance object in the private memory scope. - VARIABLES.Instance = StructNew(); - // This is the index of the current iteration. - VARIABLES.Instance.IterationIndex = 1; - // = "*"; - VARIABLES.Instance.SetAttributesList = "*"; - // This is a structure of keys that can be "get"ed and "set"ed. These - // will be populated based on the attributes list above. 
- VARIABLES.Instance.GetAttributes = StructNew(); - VARIABLES.Instance.SetAttributes = StructNew(); - // This is a structure that will alias the getter and setter methods. - // We are doing this so that we don't have to care about any method - // mapping. - VARIABLES.Instance.GetAttributeMethods = StructNew(); - VARIABLES.Instance.SetAttributeMethods = StructNew(); - // This is the underlying data object. I am initializing to an empty qyery. - // I can't believe this actually works, but at least it puts a query object - // as this variable value. - VARIABLES.Instance.RecordSet = QueryNew( "" ); - // Set up the non-instance private data. - // These are the types of getter/setters that can be accessed/mutated. - // Right now, a value can either be accessed as a property or as a - // function of the underlying data structure. These values are meant to - // be constant and are therefore not part of the instance variable struct. - VARIABLES.KeyTypes = StructNew(); - // This defines a key as being accessed as a method. - VARIABLES.KeyTypes.METHOD = 1; - // This defines a key as being accessed as a property. - VARIABLES.KeyTypes.PROPERTY = 2; - // Set up the default public data. - THIS.RecordCount = 0; - THIS.CurrentRow = 0; - </cfscript> - <cffunction name="Init" access="public" returntype="AbstractIBO" output="false" - - <!--- Return This reference. ---> - <cfreturn THIS /> - </cffunction> - <cffunction name="First" access="public" returntype="void" output="false" - - <!--- Set the index. ---> - <cfset VARIABLES.Instance.IterationIndex = 1 /> - <!--- Set the public data. ---> - <cfset THIS.CurrentRow = 1 /> - <!--- Return out. ---> - <cfreturn /> - </cffunction> - <cffunction name="Get" access="public" returntype="any" output="false" - - <!--- Define arguments. ---> - <cfargument name="Key" type="string" required="true" /> - <!--- Define the local scope. ---> - <cfset var LOCAL = StructNew() /> - <!--- Check to see if the key is available. 
---> - <cfif StructKeyExists( VARIABLES.Instance.GetAttributes, ARGUMENTS.Key )> - <!--- This key is valid. Check to see how it is being accessed. ---> - <cfif (VARIABLES.Instance.GetAttributes[ ARGUMENTS.Key ] EQ VARIABLES.KeyTypes.PROPERTY)> - <!--- Return the property straight out of the record set. ---> - <cfreturn VARIABLES.Instance.RecordSet[ ARGUMENTS.Key ][ VARIABLES.Instance.IterationIndex ] /> - <cfelse> - <!--- - The key is being accessed as a method. Get a pointer to the method. - We need to get the interim method since ColdFusion cannot handle the - parsing of an array notation followed by a method call. - ---> - <cfset LOCAL.Method = VARIABLES.Instance.GetAttributeMethods[ ARGUMENTS.Key ] /> - <!--- Return the value of the get method. ---> - <cfreturn LOCAL.Method() /> - </cfif> - <cfelse> - <!--- The key is not available for getting. Throw an error. ---> - <cfthrow - message="You have attempted to access an invalid key." - type="IBO.InvalidGetKey" - detail="The key you are trying to get, which is currently #UCase( ARGUMENTS.Key )#, is not available in this IBO." - /> - </cfif> - </cffunction> - <cffunction name="GetAccessHints" access="public" returntype="struct" output="false" - - <!--- Define the local scope. ---> - <cfset var LOCAL = StructNew() /> - <!--- Create a structure to return both the getter and setter info. ---> - <cfset LOCAL.Hints = StructNew() /> - <!--- Set the keys so that the access methods make sense. ---> - <cfset LOCAL.Hints.Keys = Duplicate( VARIABLES.KeyTypes ) /> - <!--- Set the getters and setters. ---> - <cfset LOCAL.Hints.Get = Duplicate( VARIABLES.Instance.GetAttributes ) /> - <cfset LOCAL.Hints.Set = Duplicate( VARIABLES.Instance.SetAttributes ) /> - <!--- Return the hints. 
---> - <cfreturn LOCAL.Hints /> - </cffunction> - <cffunction name="GetPropertyList" access="public" returntype="string" output="false" - - <cfreturn VARIABLES.Instance.GetAttributesList /> - </cffunction> - <cffunction name="IsFirst" access="public" returntype="boolean" output="false" - - <cfreturn (VARIABLES.Instance.IterationIndex EQ 1) /> - </cffunction> - <cffunction name="IsLast" access="public" returntype="boolean" output="false" - - <cfreturn (VARIABLES.Instance.IterationIndex EQ VARIABLES.Instance.RecordSet.RecordCount) /> - </cffunction> - <cffunction name="IsRecord" access="public" returntype="boolean" output="false" - - <cfreturn (VARIABLES.Instance.IterationIndex LTE VARIABLES.Instance.RecordSet.RecordCount) /> - </cffunction> - <cffunction name="LoadQuery" access="public" returntype="any" output="false" - - <!--- Define arguments. ---> - <cfargument name="RecordSet" type="query" required="true" /> - <cfscript> - // Define the local scope. - var LOCAL = StructNew(); - // Duplicate the query when storing it in the IBO. I am not sure that - // I completely agree with this, but it seems that this is (in part) - // what the original code is doing. We are doing this so that even if - // the query is altered after it is sent it, it does not affect the - // underlying data of the IBO. - VARIABLES.Instance.RecordSet = Duplicate( ARGUMENTS.RecordSet ); - // Now that we have the data stored in our IBO, we can populate the - // structures of getters and setters. If the list is "*" then we - // are going to copy the record set's column list over into the list - // of getters and setters. Otherwise, we will keep it as is. - // Set local flags for select all. - LOCAL.IsGetAll = VARIABLES.Instance.GetAttributesList.Matches( "\*" ); - LOCAL.IsSetAll = VARIABLES.Instance.SetAttributesList.Matches( "\*" ); - // Check to see if we are getting all properties. - if (LOCAL.IsGetAll){ - // The Get attributes list will be the record set's column list. 
- VARIABLES.Instance.GetAttributesList = VARIABLES.Instance.RecordSet.ColumnList; - } - // Check to see if we are setting all properties. - if (LOCAL.IsSetAll){ - // The Set attributes list will be the record set's column list. - VARIABLES.Instance.SetAttributesList = VARIABLES.Instance.RecordSet.ColumnList; - } - // At this point, we have the get and set attributes list which have - // been populated either by the record set OR hard coded by the concrete - // extending IBO. Let's store these keys into the get and set attributes - // structures for faster lookup and editing. - // Loop over get property list. - for ( - LOCAL.PropertyIndex = 1 ; - LOCAL.PropertyIndex LTE ListLen( VARIABLES.Instance.GetAttributesList ) ; - LOCAL.PropertyIndex = (LOCAL.PropertyIndex + 1) - ){ - // Get the current property. - LOCAL.Property = ListGetAt( VARIABLES.Instance.GetAttributesList, LOCAL.PropertyIndex ); - // Set the property to a default value. - VARIABLES.Instance.GetAttributes[ LOCAL.Property ] = 0; - } - // Loop over set property list. - for ( - LOCAL.PropertyIndex = 1 ; - LOCAL.PropertyIndex LTE ListLen( VARIABLES.Instance.SetAttributesList ) ; - LOCAL.PropertyIndex = (LOCAL.PropertyIndex + 1) - ){ - // Get the current property. - LOCAL.Property = ListGetAt( VARIABLES.Instance.SetAttributesList, LOCAL.PropertyIndex ); - // Set the property to a default value. - VARIABLES.Instance.SetAttributes[ LOCAL.Property ] = 0; - } - // Now, let's collect the IBO methods that have been mapped to properties. - // This is done to allow for differences in naming conventions between the - // database (which results in the record set) and the CFC methods. For - // instance, the database might have "first_name", but for naming convention's - // sake, we can't have a method named "GetFirst_Name". To accommodate this, - // we are forcing the methods to use the "mapto" attribute. Methods without - // this attribute (for both GET and SET) will be ignored.
- // Loop over the methods to check for anything that begins with "get" or "set". - for (LOCAL.CFCProperty in VARIABLES){ - // Check to see if this is a user-defined function. We only care - // about things that can be called as functions. - if ( - IsCustomFunction( VARIABLES[ LOCAL.CFCProperty ] ) AND - StructKeyExists( GetMetaData( VARIABLES[ LOCAL.CFCProperty ] ), "MapTo" ) AND - ( - (Left( LOCAL.CFCProperty, 3 ) EQ "GET") OR - (Left( LOCAL.CFCProperty, 3 ) EQ "SET") - )){ - // Get the mapto attribute value. - LOCAL.MapTo = GetMetaData( VARIABLES[ LOCAL.CFCProperty ] ).MapTo; - // Check to see if this is a get method and that it is in the defined - // list of "get"able properties. - if ( - (Left( LOCAL.CFCProperty, 3 ) EQ "GET") AND - ( - LOCAL.IsGetAll OR - StructKeyExists( VARIABLES.Instance.GetAttributes, GetMetaData( VARIABLES[ LOCAL.CFCProperty ] ).MapTo ) - )){ - // Flag this property as being accessed as a method. - VARIABLES.Instance.GetAttributes[ LOCAL.MapTo ] = VARIABLES.KeyTypes.METHOD; - // Set the method alias. - VARIABLES.Instance.GetAttributeMethods[ LOCAL.MapTo ] = VARIABLES[ LOCAL.CFCProperty ]; - // Check to see if this is a set method and that it is in the defined - // list of "set"able properties. - } else if ( - (Left( LOCAL.CFCProperty, 3 ) EQ "SET") AND - ( - LOCAL.IsSetAll OR - StructKeyExists( VARIABLES.Instance.SetAttributes, GetMetaData( VARIABLES[ LOCAL.CFCProperty ] ).MapTo ) - )){ - // Flag this property as being accessed as a method. - VARIABLES.Instance.SetAttributes[ LOCAL.MapTo ] = VARIABLES.KeyTypes.METHOD; - // Set the method alias. - VARIABLES.Instance.SetAttributeMethods[ LOCAL.MapTo ] = VARIABLES[ LOCAL.CFCProperty ]; - } - } - } - // Convert the record set column list to a structure for faster access. - LOCAL.Columns = StructNew(); - // Loop over columns and add keys to column struct. 
- for ( - LOCAL.ColumnIndex = 1 ; - LOCAL.ColumnIndex LTE ListLen( VARIABLES.Instance.RecordSet.ColumnList ) ; - LOCAL.ColumnIndex = (LOCAL.ColumnIndex + 1) - ){ - // Add to struct. The value here is not important. - LOCAL.Columns[ ListGetAt( VARIABLES.Instance.RecordSet.ColumnList, LOCAL.ColumnIndex ) ] = 0; - } - // At this point, we should already have all the mapped methods accounted for. - // Now, we have to figure out which of the properties will be accessed as a - // property (and which ones are not valid in any access form). To get this, - // let's loop over the get and set properties and for each property that is - // not being accessed as a method, check to see if there is an equivalent - // record set column. - // Loop over get attributes. - for (LOCAL.Property in VARIABLES.Instance.GetAttributes){ - // Check to see if this attribute is zero. This would indicate a - // property that has not been validated yet (and is just based on - // the list of properties). - if (NOT VARIABLES.Instance.GetAttributes[ LOCAL.Property ]){ - // This property has not been accounted for. Check for a - // matching record set column. - if (StructKeyExists( LOCAL.Columns, LOCAL.Property )){ - // There is a matching record set column. Set this property as - // being accessed as a standard property. - VARIABLES.Instance.GetAttributes[ LOCAL.Property ] = VARIABLES.KeyTypes.PROPERTY; - } else { - // This property has not been accounted for by either the mapped - // methods or the record set columns. Delete this property from - // the get attributes. - StructDelete( VARIABLES.Instance.GetAttributes, LOCAL.Property ); - } - } - } - // Loop over set attributes. - for (LOCAL.Property in VARIABLES.Instance.SetAttributes){ - // Check to see if this attribute is zero. This would indicate a - // property that has not been validated yet (and is just based on - // the list of properties). 
- if (NOT VARIABLES.Instance.SetAttributes[ LOCAL.Property ]){ - // This property has not been accounted for. Check for a - // matching record set column. - if (StructKeyExists( LOCAL.Columns, LOCAL.Property )){ - // There is a matching record set column. Set this property as - // being accessed as a standard property. - VARIABLES.Instance.SetAttributes[ LOCAL.Property ] = VARIABLES.KeyTypes.PROPERTY; - } else { - // This property has not been accounted for by either the mapped - // methods or the record set columns. Delete this property from - // the Set attributes. - StructDelete( VARIABLES.Instance.SetAttributes, LOCAL.Property ); - } - } - } - // At this point, we have set up the Get and Set attributes based on the - // mapped methods and the record set column list. There might be some - // differences between the hard coded set/get lists and the available - // Set and Get attributes. Not sure how to handle that at the moment. I - // am going to leave it as-is for now. However, we could certainly set - // the get and let list based on the keys of the attribute structrures: - // - // For example: - // VARIABLES.Instance.GetAttributesList = StructKeyList( VARIABLES.Instance.GetAttributes ); - // VARIABLES.Instance.SetAttributesList = StructKeyList( VARIABLES.Instance.SetAttributes ); - // Set public data. - THIS.RecordCount = VARIABLES.Instance.RecordSet.RecordCount; - THIS.CurrentRow = 1; - // Now that we have populated the IBO's getter/setter definitions, return - // the This reference. We are doing this so that the LoadQuery() method can - // be chained with the IBO's constructor. - return( THIS ); - </cfscript> - </cffunction> - <cffunction name="Next" access="public" returntype="boolean" output="false" - - <cfscript> - // Increment the iteration index to point to the next record. At this - // point, we don't care about any sort of validation. That is going - // to be taken care of in the next step. 
- VARIABLES.Instance.IterationIndex = (VARIABLES.Instance.IterationIndex + 1); - // Point the public row to the iteration index. - THIS.CurrentRow = VARIABLES.Instance.IterationIndex; - // Check to see if the new index points to a valid row. If it does, return true - // otherwise, return false. - return( VARIABLES.Instance.IterationIndex LTE VARIABLES.Instance.RecordSet.RecordCount ); - </cfscript> - </cffunction> - <cffunction name="Set" access="public" returntype="void" output="false" - - <!--- Define arguments. ---> - <cfargument name="Key" type="string" required="true" /> - <cfargument name="Value" type="any" required="true" /> - <!--- Define the local scope. ---> - <cfset var LOCAL = StructNew() /> - <!--- Check to see if the key is available. ---> - <cfif StructKeyExists( VARIABLES.Instance.SetAttributes, ARGUMENTS.Key )> - <!--- This key is valid. Check to see how it is being accessed. ---> - <cfif (VARIABLES.Instance.SetAttributes[ ARGUMENTS.Key ] EQ VARIABLES.KeyTypes.PROPERTY)> - <!--- Set the property directly in the record set. ---> - <cfset VARIABLES.Instance.RecordSet[ ARGUMENTS.Key ][ VARIABLES.Instance.IterationIndex ] = ARGUMENTS.Value /> - <cfelse> - <!--- - The key is being accessed as a method. Get a pointer to the method. - We need to get the interim method since ColdFusion cannot handle the - parsing of an array notation followed by a method call. Furthermore, - we don't want to use CFInvoke as CFInvoke requires the naming of the - arguments. By creating a method pointer, we can use positional argument - mapping, not mappings by name. This makes for a more flexible interface. - ---> - <cfset LOCAL.Method = VARIABLES.Instance.SetAttributeMethods[ ARGUMENTS.Key ] /> - <!--- Call the set method. ---> - <cfset LOCAL.Method( ARGUMENTS.Value ) /> - </cfif> - <!--- - At this point we have either set the value as a property or by - method. Eitherway, return out. - ---> - <cfreturn /> - <cfelse> - <!--- The key is not available for setting. 
Throw an error. ---> - <cfthrow - message="You have attempted to access an invalid key." - type="IBO.InvalidSetKey" - detail="The key you are trying to set, which is currently #UCase( ARGUMENTS.Key )#, is not available in this IBO." - /> - </cfif> - </cffunction> - </cfcomponent>

GirlIBO.cfc

The GirlIBO.cfc ColdFusion component is a concrete IBO that extends the AbstractIBO.cfc. It provides some of its own Getter and Setter methods. These override the getting and setting of straight-up query columns.

- <cfcomponent - displayname="GirlIBO" - extends="AbstractIBO" - output="false" - - <!--- Run the pseudo constructor to set up default data structures. ---> - <cfscript> - // VARIABLES.Instance.GetAttributesList = "id,name,sexyness_factor"; - // VARIABLES.Instance.SetAttributesList = ""; - </cfscript> - <cffunction name="GetFullName" access="private" returntype="string" output="false" - mapto="full_name" - - <!--- Check to see what the sexyness factor is when we determine the girl's full name. ---> - <cfif (VARIABLES.Instance.RecordSet[ "sexyness_factor" ][ VARIABLES.Instance.IterationIndex ] GTE 9)> - <!--- This is a really hot girl. ---> - <cfreturn ( - "Crazy Hot " & - VARIABLES.Instance.RecordSet[ "name" ][ VARIABLES.Instance.IterationIndex ] - ) /> - <cfelseif (VARIABLES.Instance.RecordSet[ "sexyness_factor" ][ VARIABLES.Instance.IterationIndex ] GTE 8)> - <!--- This is a sexy girl. ---> - <cfreturn ( - "Sexy " & - VARIABLES.Instance.RecordSet[ "name" ][ VARIABLES.Instance.IterationIndex ] - ) /> - <cfelse> - <!--- This is really an average girl.
---> - <cfreturn VARIABLES.Instance.RecordSet[ "name" ][ VARIABLES.Instance.IterationIndex ] /> - </cfif> - </cffunction> - <cffunction name="GetSexynessFactor" access="private" returntype="numeric" output="false" - mapto="sexyness_factor" - - <cfreturn Fix( VARIABLES.Instance.RecordSet[ "sexyness_factor" ][ VARIABLES.Instance.IterationIndex ] ) /> - </cffunction> - <cffunction name="SetName" access="public" returntype="void" output="false" - mapto="name" - - <!--- Define arguments. ---> - <cfargument name="Value" type="string" required="true" /> - <!--- Set the value and convert to upper case. ---> - <cfset VARIABLES.Instance.RecordSet[ "name" ][ VARIABLES.Instance.IterationIndex ] = JavaCast( "string", ARGUMENTS.Value.ToUpperCase() ) /> - <!--- Return out. ---> - <cfreturn /> - </cffunction> - </cfcomponent> Testing The Code - <!--- Build a test query. ---> - <cfset qGirl = QueryNew( - "id, name, sexyness_factor", - "CF_SQL_INTEGER, CF_SQL_VARCHAR, CF_SQL_DECIMAL" - ) /> - <!--- Add rows to the test query. ---> - <cfset QueryAddRow( qGirl, 3 ) /> - <!--- Set the row data for the query. ---> - <cfset qGirl[ "id" ][ 1 ] = JavaCast( "int", 1 ) /> - <cfset qGirl[ "name" ][ 1 ] = JavaCast( "string", "Sarah" ) /> - <cfset qGirl[ "sexyness_factor" ][ 1 ] = JavaCast( "float", 10.0 ) /> - <cfset qGirl[ "id" ][ 2 ] = JavaCast( "int", 2 ) /> - <cfset qGirl[ "name" ][ 2 ] = JavaCast( "string", "Libby" ) /> - <cfset qGirl[ "sexyness_factor" ][ 2 ] = JavaCast( "float", 8.5 ) /> - <cfset qGirl[ "id" ][ 3 ] = JavaCast( "int", 3 ) /> - <cfset qGirl[ "name" ][ 3 ] = JavaCast( "string", "Alex" ) /> - <cfset qGirl[ "sexyness_factor" ][ 3 ] = JavaCast( "float", 7.5 ) /> - <!--- - Create the IBO and load in the query. Notice that I can - chain these two actions together, which the original - IBO could not. Just a neat little feature. - ---> - <cfset objIBO = CreateObject( - "component", - "GirlIBO" - ).Init().LoadQuery( qGirl ) /> - <!--- - Loop over the IBO. 
You might want to run a First() call - before iterating, but for this demo, that is not - required. - ---> - <cfloop condition="objIBO.IsRecord()"> - <!--- Set the name just to test the set as fn. ---> - <cfset objIBO.Set( - "name", - objIBO.Get( "name" ) - ) /> - <p> - #objIBO.Get( "full_name" )# :: - #objIBO.Get( "sexyness_factor" )# - </p> - <!--- Go to the next record. ---> - <cfset objIBO.Next() /> - </cfloop>

We can also wrap this iteration up in a ColdFusion custom tag (iboloop.cfm):

- <!--- Kill extra output. ---> - <cfsilent> - <!--- Param the tag attributes. ---> - <!--- This is the IBO object that we are using to iterate. ---> - <cfparam - name="ATTRIBUTES.IBO" - type="any" - /> - <!--- - Check to see which mode of the tag we are executing. This - tag is being used to loop back over itself using the LOOP - value of the CFExit tag. When that happens, the Start mode - of the tag only gets fired once. Every other iteration fires - the End mode of the tag. Therefore, most of our actions are - done in the End mode. - ---> - <cfswitch expression="#THISTAG.ExecutionMode#"> - <cfcase value="START"> - <!--- - Move the IBO to its first record. While this is not always - necessary, if this IBO has already been used for iteration, - then this is required. - ---> - <cfset ATTRIBUTES.IBO.First() /> - </cfcase> - <cfcase value="END"> - <!--- Move on to next row iteration. ---> - <cfset ATTRIBUTES.IBO.Next() /> - <!--- Check to see if we have a record to point to. ---> - <cfif ATTRIBUTES.IBO.IsRecord()> - <!--- - We are still at a valid record. Allow this tag to - execute again. Loop to next record. - ---> - <cfexit method="LOOP" /> - <cfelse> - <!--- - The Next() command we just performed put our IBO outside - the range of valid records. Do not let this tag execute - again. Exit out of the tag.
- ---> - <cfexit method="EXITTAG" /> - </cfif> - </cfcase> - </cfswitch> - </cfsilent> And then, once we have that ColdFusion custom tag down, we can use it to loop in a fashion similar to the CFLoop tag: - <cfmodule - template="./iboloop.cfm" - - <p> - #objIBO.Get( "full_name" )# :: - #objIBO.Get( "sexyness_factor" )# - </p> - </cfmodule>. Sorry for the bold text. I have to take that stuff out. It's bothering me. I think I will just allow people to use some basic HTML tags. Sounds like we are indeed saying the same things! With two threads, the first gets the object and starts iterating, the second comes in moments later, gets the SAME object and resets the iterator, thus messing it up for the first thread. Has anyone come across a way around this? I.e. a thread-safe way of using these objects. It's nice to have built an IBO from a potentially long query etc., but you would like to cache them (good for memory use too, to reuse objects). You could just cache the query object and/or use cached queries, but it's not as clean as just sticking an object in a cache to use at will. @Adrian, What I think you'll want to do is cache the query that populates it, not the CFC itself. Creating the single CFC shouldn't be a huge hit, but caching the data from page to page will be very beneficial. that individually you wouldn't want to cache. What we decided to do was build a helper object that captures the payload of the IBO and has package-level get and setPayload methods, and added similar public methods on the IBO abstract CFC. This at least gives us encapsulation. We could even add a type property to prevent payload objects being added to incorrect IBOs, e.g.
a payload object of type plants can't be added to an IBO of type animals, etc. Means building more objects that would've been nice to avoid, and is a tad messy, but hopefully it won't be too much of an overhead. @Adrian, I am not following you exactly? Which part of the data are you caching? more appropriate to let these child classes inherit from an abstract class because they share common method stubs (e.g., getStartDate(), getEndDate()) but have their own way of implementing these methods. Or I'm considering making this simple and just keeping my IBO as a non-abstract class and having it sub-classed by the Assignment class (with PermanentAssignment and TemporaryAssignment sub-classing the Assignment class), but I don't ever see the need in instantiating a generic Assignment, so why make it non-abstract? Any thoughts?
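The dispatch scheme at the heart of AbstractIBO.cfc is language-neutral: LoadQuery() resolves every key either to a raw record-set column or to a mapto-mapped method, and Get() then simply branches on that resolution. As a rough sketch only (C++ rather than CFML; the Ibo type and the load(), mapTo(), and get() names are invented for this illustration and are not part of the article's API), the same idea looks like:

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>
#include <utility>

// Mirrors the IBO's GetAttributes / GetAttributeMethods structures:
// every gettable key resolves to a callable, whether it reads a raw
// record-set column or runs a mapped method.
struct Ibo {
    std::map<std::string, std::string> row;  // current record's columns
    std::map<std::string, std::function<std::string()>> getters;

    // LoadQuery() analogue: every plain column becomes a property getter.
    void load(std::map<std::string, std::string> record) {
        row = std::move(record);
        for (const auto& kv : row) {
            const std::string key = kv.first;
            getters[key] = [this, key]() { return row.at(key); };
        }
    }

    // "mapto" analogue: an explicitly mapped method overrides (or adds
    // to) the plain column access.
    void mapTo(const std::string& key, std::function<std::string()> fn) {
        getters[key] = std::move(fn);
    }

    // Get() analogue: dispatch on the resolved key, or signal the
    // equivalent of IBO.InvalidGetKey.
    std::string get(const std::string& key) const {
        auto it = getters.find(key);
        if (it == getters.end())
            throw std::runtime_error("invalid get key: " + key);
        return it->second();
    }
};
```

The payoff is the same as in the CFML version: callers say get("full_name") without caring whether a column or a method answers.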
http://www.bennadel.com/blog/412-coldfusion-iterating-business-objects-ibos-from-the-ground-up.htm
Re: List 2003-11-01 14:25:50 GMT

Hi Oki,

I think you are confusing the matter a bit. There is no object "here" and "there" in Ozone, there is only "there", i.e. on the server. You should put proxies in your ArrayList, not actual ozoneObject instances. If you think about it this way then there is no need to "put an object back in the database". Every person you put in your PersonList needs to be created on the server first using createObject (or the factory generated by OPP available in the HEAD version). If you do this then every deleteObject will work fine.

Regards, Per

On Friday 31 October 2003 03.53, Oki DZ wrote:
> Hi,
>
> I have a class like the following:
> public class PersonListImpl extends OzoneObject implements PersonList,
> Serializable {
>
> final static long serialVersionUID = 1L;
> private ArrayList personList;
>
> public PersonListImpl() {
> personList = new ArrayList();
> }
> //...
> }
http://blog.gmane.org/gmane.comp.java.ozone.user/month=20031101
In Visual Basic .NET, the ArrayList class is used to create an array (or list) of objects. Unlike simple arrays, ArrayLists are designed to have no fixed size – contents can be added or removed while the program is running. Although arrays can be re-dimensioned in VB.NET, they are not built to be used this way. In situations where this is likely to occur often, performance may suffer and it is often preferable to use an ArrayList. To make full use of the information and examples presented here, a working knowledge of programming in Visual Basic.NET is recommended. Beginners can learn application programming with VB.NET and Microsoft Visual Studio 2013 at Udemy.com. It is also possible to work with ArrayList objects in Visual Basic for Applications (VBA), and a very short example of this is shown below. Readers who are new to the concepts of working with VBA in Excel, or unaware of the differences between VBA and VB.NET might care to take a useful course in writing Excel macros using Visual Basic. Declaring ArrayLists in VB.NET ArrayLists are part of the System.Collections namespace that is usually imported automatically when a project is created in Microsoft Visual Studio. If your project does not import this namespace, or if you wish to check, you can find the imported namespaces in the References page of the Project Designer. To open the References page in Microsoft Visual Studio: - In the Solution Explorer, right-click the project name and then click Properties. - In the Project Designer, click the References tab. System.Collections is part of the Visual Basic core library and will already be in the references. The list of imported namespaces is at the bottom of the References tab. The checkbox next to System.Collections should be checked so that the contents of the namespace are available for use by your program. 
In a Visual Basic code module, ArrayLists are declared using the Dim statement: Dim myList As New ArrayList Declaring ArrayLists in VBA In Visual Basic for Applications, ArrayList is not a built-in object type. However, VBA can create objects from the .NET framework and work with them directly. To create an ArrayList from a VBA function or macro: - In the Visual Basic Editor, in the desired module, declare a variable as an object type: Dim myList As Object - Set the variable to an instance of ArrayList using the CreateObject() function: Set myList = CreateObject("System.Collections.ArrayList") - Work with the ArrayList object as described in the following sections. Working with ArrayLists Adding items to an instance of an ArrayList is a relatively straightforward process. There are four methods for doing this: Add(), AddRange(), Insert(), and InsertRange(). Add(value As Object) accepts a single argument, which can be any object type. The item is then added to the end of the list. myList.Add("Add a string to the list.") You can use AddRange(c As ICollection) to append the contents of one list to another. AddRange() accepts a single argument, which must be an object that is derived from ICollection. The ArrayList class is derived from ICollection and so you can add one list to another using a call such as: Dim myList As New ArrayList Dim secondList As New ArrayList … myList.AddRange(secondList) Once the list contains some items, they can be accessed using the same array-addressing syntax that is used when working with simple arrays. For example, to refer to the first item in the list: myList(0) Insert(index As Integer, value As Object) and InsertRange(index As Integer, c As ICollection) are very similar to the add functions described above, except that they accept two arguments. The first argument is a number that specifies the index at which the second argument should be inserted into the list.
To remove items from an ArrayList, you can use either the method Remove(value As Object) or RemoveAt(index As Integer). Remove() accepts a single object and, if that object is found in the list, it is removed. The method RemoveAt(index As Integer) accepts only a single number – the item at the position specified by this number is deleted from the list. To clear a list entirely, use the method Clear(). This accepts no parameters and simply deletes everything that is currently in the list: myList.Clear() Note: In VBA, empty calls to subroutines like Clear() are made without parentheses. Searching through ArrayLists If you want to check whether an ArrayList already contains an item, you can use the method Contains(value As Object). This returns true if the object or value is found in the list, or false if it cannot be found. When checking for simple types, such as strings or numbers, the Contains() method will look for any item with a value that matches the one specified in the call. For example: If (myList.Contains("A string!")) Then System.Diagnostics.Debug.WriteLine("Found it!") End If Contains() makes decisions based on calls to the items' Equals() and CompareTo() methods. For complicated objects, it may be necessary to loop through the ArrayList and implement your own logic for finding items. The two most common loops used for looping through lists are for…next and for each…next. A standard Visual Basic for…next loop can be constructed using the ArrayList's Count property to return the number of items in the list. Items are then retrieved from the list in the body of the loop. With a for each…next loop, each item in the ArrayList is processed in turn and is assigned to the declared variable. Sorting ArrayLists Items in an ArrayList are stored in the order that they are added to the list, except when Insert() or InsertRange() are used to insert items into a specific position.
The order of items can be reversed with a call to: myList.Reverse() The ArrayList class also has a built-in method for sorting simple object types – appropriately named Sort() – using the QuickSort algorithm. The basic form of this method accepts no arguments and returns no values, it just sorts the list in alphabetical, ascending numerical, or chronological order depending on whether the items are based on strings, numbers, or dates. Dim myList As New ArrayList myList.Add("Elephant") myList.Add("Cat") myList.Add("Anteater") myList.Add("Deer") myList.Add("Bear") myList.Sort() For Each item In myList System.Diagnostics.Debug.WriteLine(item) Next An override for Sort() accepts objects that implement IComparer, to provide sorting based on arbitrary comparisons. Getting Help Programmers who are used to simple arrays might find the leap to using collection objects to be quite large. As object types, many of the object-oriented programming techniques used for other objects will also work on ArrayLists. For examples and explanations of object-oriented programming in Visual Basic .NET, see Learn VB.NET with Microsoft Visual Studio 2013.
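The IComparer override of Sort() is the ArrayList's hook for arbitrary orderings, and the same pattern exists in most languages. As a cross-language aside (C++ here, not VB.NET, with the sortByLength name invented for the example), passing a custom comparison to a sort looks like:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sort with a custom comparison, analogous to handing an IComparer to
// ArrayList.Sort(): order strings by length instead of alphabetically.
// stable_sort keeps the original order of equal-length strings.
std::vector<std::string> sortByLength(std::vector<std::string> items) {
    std::stable_sort(items.begin(), items.end(),
                     [](const std::string& a, const std::string& b) {
                         return a.size() < b.size();  // the "comparer"
                     });
    return items;
}
```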
https://blog.udemy.com/vb-net-arraylist/
The Q3ActionGroup class groups actions together. More... #include <Q3ActionGroup> This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information. Inherits Q3Action. The Q3ActionGroup class groups actions together. This property holds whether the action group does exclusive toggling. If exclusive is true only one toggle action in the action group can ever be active at any one time. If the user chooses another toggle action in the group, the one they chose becomes active and the one that was active becomes inactive. Access functions: See also Q3Action::toggleAction. This property holds whether the group's actions are displayed in a subwidget of the widgets the action group is added to. Exclusive action groups added to a toolbar display their actions in a combobox with the action's Q3Action::text and Q3Action::iconSet properties shown. Non-exclusive groups are represented by a tool button showing their Q3Action::iconSet and text() property. In a popup menu the member actions are displayed in a submenu. Changing usesDropDown only affects subsequent calls to addTo(). This property's default is false. Access functions: Constructs an action group called name, with parent parent. The action group is exclusive by default. Call setExclusive(false) to make the action group non-exclusive. Constructs an action group called name, with parent parent. If exclusive is true only one toggle action in the group will ever be active. See also exclusive. Destroys the object and frees allocated resources. This signal is emitted from groups when one of its actions gets activated. The argument is the action which was activated. See also setExclusive(), isOn(), and Q3Action::toggled(). Adds action action to this group. Normally an action is added to a group by creating it with the group as parent, so this function is not usually used. See also addTo().
Adds a separator to the group. Adds this action group to the widget w. Reimplemented from Q3Action. See also setExclusive(), setUsesDropDown(), and removeFrom(). This function is called from the addTo() function when it has created a widget (actionWidget) for the child action a in the container. This is an overloaded function. This function is called from the addTo() function when it has created a menu item for the child action at the index position index in the popup menu menu. Use add(action) instead. This signal is emitted from exclusive groups when toggle actions change state. The argument is the action whose state changed to "on". See also setExclusive(), isOn(), and Q3Action::toggled().
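Qt aside, the exclusive behaviour documented above is a simple invariant: activating one toggle deactivates every other member of the group. Below is a minimal stand-alone sketch in plain C++ (hypothetical code, not the Qt API) of the invariant Q3ActionGroup enforces in exclusive mode:

```cpp
#include <cstddef>
#include <vector>

// At most one member of the group is "on" at a time, mimicking
// Q3ActionGroup's exclusive toggling.
class ExclusiveGroup {
public:
    // add() registers a new member (initially off) and returns its index.
    std::size_t add() {
        on_.push_back(false);
        return on_.size() - 1;
    }

    // Turning one member on turns the previously active member off.
    void toggleOn(std::size_t index) {
        for (std::size_t i = 0; i < on_.size(); ++i)
            on_[i] = (i == index);
    }

    bool isOn(std::size_t index) const { return on_[index]; }

private:
    std::vector<bool> on_;
};
```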
http://doc.trolltech.com/4.5-snapshot/q3actiongroup.html#activated
NAME
stdarg, va_start, va_arg, va_end, va_copy - variable argument lists

SYNOPSIS
#include <stdarg.h>

void va_start(va_list ap, last);
type va_arg(va_list ap, type);
void va_end(va_list ap);
void va_copy(va_list dest, va_list src);

DESCRIPTION
A function may be called with a varying number of arguments of varying types. The include file <stdarg.h> declares a type va_list and defines macros for stepping through a list of arguments whose number and types are not known to the called function.

va_start()
The va_start() macro initializes ap for subsequent use by va_arg() and va_end(), and must be called first. The argument last is the name of the last argument before the variable argument list, that is, the last argument of which the calling function knows the type.

va_arg()
The va_arg() macro expands to an expression that has the type and value of the next argument in the call. Each invocation of va_arg() modifies ap so that the next invocation returns the next argument.

va_end()
Each invocation of va_start() must be matched by a corresponding invocation of va_end() in the same function.

va_copy()
The va_copy() macro copies the (previously initialized) variable argument list src to dest. An obvious implementation would have a va_list be a pointer to the stack frame of the variadic function. Some systems that do not supply va_copy() have __va_copy instead, since that was the name used in the draft proposal.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
The va_start(), va_arg(), and va_end() macros conform to C89. C99 defines the va_copy() macro.

BUGS
Unlike the historical varargs macros, the stdarg macros do not permit programmers to code a function with no fixed arguments.

EXAMPLES
The function foo takes a string of format characters and prints out the argument associated with each format character based on the type.

#include <stdio.h>
#include <stdarg.h>

void
foo(char *fmt, ...)   /* '...' is C syntax for a variadic function */
{
    va_list ap;
    int d;
    char c;
    char *s;

    va_start(ap, fmt);
    while (*fmt)
        switch (*fmt++) {
        case 's':              /* string */
            s = va_arg(ap, char *);
            printf("string %s\n", s);
            break;
        case 'd':              /* int */
            d = va_arg(ap, int);
            printf("int %d\n", d);
            break;
        case 'c':              /* char */
            /* need a cast here since va_arg only
               takes fully promoted types */
            c = (char) va_arg(ap, int);
            printf("char %c\n", c);
            break;
        }
    va_end(ap);
}

SEE ALSO
vprintf(3), vscanf(3), vsyslog(3)

COLOPHON
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
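va_copy() matters whenever a va_list must be traversed more than once, since va_arg() consumes the list as it goes. A minimal sketch of that point follows (written in C-compatible C++ via <cstdarg>; the function name sum_twice is made up for illustration and is not from the man page):

```cpp
#include <cstdarg>

// Sum 'count' ints twice: once from the original va_list and once from a
// va_copy'd duplicate. va_arg() consumes a list as it advances, so without
// va_copy() the second pass would read past the arguments.
int sum_twice(int count, ...) {
    va_list ap, ap2;
    va_start(ap, count);
    va_copy(ap2, ap);               // duplicate BEFORE consuming ap

    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);   // first pass drains ap...
    for (int i = 0; i < count; ++i)
        total += va_arg(ap2, int);  // ...but ap2 still starts at the beginning

    va_end(ap);
    va_end(ap2);                    // each va_copy() needs its own va_end()
    return total;
}
```

With this, sum_twice(3, 1, 2, 3) counts each argument once per pass, yielding 12.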
https://manpages.debian.org/unstable/manpages-dev/va_copy.3.en.html
On 7/19/06, Raphael Neider <rneider@...> wrote:
>.
>.

Regards, Xiaofan

On Tuesday, 18.07.2006 at 09:36 -0500, Gene Daunis wrote:
> Thanks Jim,
>
> I do have the extended instruction set disabled, doesn't help.

<code name="sfraccess.c">
#include <pic18fregs.h>

void foo( int b )
{
    PORTAbits.RA3 = 0;
    UEP15bits.EPCONDIS = 0;
}

void main( void )
{
}
</code>

Compiling this fragment using the current svn snapshot and

    sdcc -mpic16 -p18f4550 sfraccess.c

I obtain a .asm file with

    ; .line 8; sfraccess.c  PORTAbits.RA3 = 0;
    BCF _PORTAbits, 3
    ; .line 9; sfraccess.c  UEP15bits.EPCONDIS = 0;
    BCF _UEP15bits, 3

and a .hex file, which

    gpdasm -p18f4550 sfraccess.hex

shows as

    [...]
    0000ea: 9680 bcf 0x80, 0x3, 0
    0000ec: 967f bcf 0x7f, 0x3, 0
    [...]

which is just fine, isn't it? This might be a... change... in gpasm (it may be a bug, it may be a feature...). I use gpasm-051219 beta (some daily snapshot?), probably I should update to check more recent versions... wait a moment.... I conclude that gputils have received updated device descriptions which tell gpasm that the 4550 has SFRs at 0xf80--0xfff but not below. Consequently, 0xf7f (=UEP15bits) is assumed to be accessed using "BANKSEL 0xf; bcf 0x7f, 3, 1", only SDCC does not emit the BANKSEL... Still this is rather an error in (recent) gputils than in sdcc. Will you file a bug report with gputils or shall I do it?

>>?

Added include <pxxx.inc> to .asm file preamble in my local tree, will commit after the release (or should this be included?). It compiles/assembles, but does not solve your problem (unfortunately).

Regards,
Raphael Neider

Thanks Jim, I do have the extended instruction set disabled, doesn't help.

>?

Gene

-----Original Message-----
From: sdcc-user-bounces@... [mailto:sdcc-user-bounces@...] On Behalf Of Mark Rages
Sent: Monday, July 17, 2006 5:08 PM
To: sdcc-user@...
Subject: Re: [Sdcc-user] PIC18F4550 - SFR access

On 7/17/06, Jim Paris <jim@...> wrote:
> > If I access any SFR above F80h, the access goes to the GPR starting at 080h,
> > i.e. instead of selecting the Access Bank (a=0) the BSR is used to select
> > the GPR bank (a=1).
>
> Do you have the extended instruction set enabled? It changes the way
> the access bank works, and SDCC expects it to be disabled, IIRC.
>
> -jim
>

There is a switch for it (-y) but it doesn't work as far as I can tell.

Regards,
Mark
markrages@...
--
You think that it is a secret, but it never has been one.
- fortune cookie

_______________________________________________
Sdcc-user mailing list
Sdcc-user@...
https://sourceforge.net/p/sdcc/mailman/sdcc-user/?viewmonth=200607&viewday=18&style=flat
Jan 28

Meet the New Face of Crime … and the Law

+ Posted by Chris Whiteside // Lead Game Designer from bigBig Studios!
Tagged // psp, pursuit force
Filed: Title Spotlight

16 Comments

1 mpnjdevil | January 28th, 2008 at 12:20 pm
nice, looks good…

2 davivman | January 28th, 2008 at 12:20 pm
Any plans for letting people buy this game as a digital download in the playstation store for PC?

3 C-h-a-o-s | January 28th, 2008 at 12:25 pm
I am torn between this game and twisted metal. One of them is going to have to wait. I played the demo and it was great, although it was shorter than I expected it was fun.

4 AwRy108 | January 28th, 2008 at 12:26 pm
Looks good; and it’s nice to see that SONY has finally figured out that $30 is a much better price point for PSP games, not $40.

5 Gorvi | January 28th, 2008 at 12:32 pm
I always felt like I was one of the few that really liked the first game, despite the awful on foot segments (just run around cuffing everyone). I’ll pick this one up as soon as I have time, that’s for sure. The driving/shooting portions were some of the best fun to be had on the PSP early in it’s life.

6 StalkingSilence | January 28th, 2008 at 12:37 pm
I just bought a 4GB memory card. i would buy this on Playstation Store for PC or my PS3. Probably wouldn’t get it as UMD though. I guess that all depends on file size for the download. I am running out of physical space tho in my PSP slim hard case.

7 ftwrthtx | January 28th, 2008 at 12:59 pm
The game sounds awesome!

8 Federation | January 28th, 2008 at 1:52 pm
I got the Demo and I love it!

9 Crumpilstilskin | January 28th, 2008 at 3:49 pm
def gonna check this one out!

10 Kamahl | January 28th, 2008 at 4:16 pm
this will probably be available here in a few weeks, i really want to check it out

11 DankandSticky | January 28th, 2008 at 5:28 pm
doesnt look like it should have been a psp game, i think games like loco roco/patapon/Katamari and than even ratchet and daxter are all good but its just horrible for shooters like medal of honor and this is too much for it…..

12 Cesar | January 28th, 2008 at 5:54 pm
Final Fantasy 13 is coming on 2008, Thats the Game to have.

13 Cesar | January 28th, 2008 at 6:02 pm
just and idea, I was thinking of buying the Eye of judgement but there are not alot of car games out there maybe if pokemon or Yugio were there i get one, maybe Dungeos and dragons.

14 foolio_67 | January 28th, 2008 at 10:45 pm
Looks like a good game, I may pick it up.

15 mrtruffle | January 29th, 2008 at 12:51 am
cool!

16 thataide | January 30th, 2008 at 9:31 am
Hello Chris Whiteside, is Pursuit Force: Extreme Justice also coming to the Playstation 2 in America?
http://blog.us.playstation.com/2008/01/28/meet-the-new-face-of-crime-and-the-law/comment-page-1/
I've been testing out some code I wrote on a test program. Although GDB doesn't catch anything and it runs fine, it's supposed to generate output files. It did that at first, but after debugging various other aspects of the test program, it doesn't generate any files at all. Extensive debugging of that yielded nothing. The strange part is I started another test -- copy/pasted/compiled the fstream example from cplusplus.com and even *that* didn't generate files. I added a simple "cout << filestr.fail() << endl;" statement and the compiled example always outputs a 1. I'm running these tests with root privileges, plenty of memory, and 92 GB of free disk space. The fact that not even a simple C++ code snippet from a reference site generates files is eerie. This is the said snippet, with the failbit check I added and a test I/O operation:

Code:
#include <iostream>
#include <fstream>
using namespace std;

int main ()
{
  fstream filestr ("test.txt", fstream::in | fstream::out);
  cout << filestr.fail() << endl;
  filestr << " dsgesvsvds";
  filestr.close();
  return 0;
}
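A likely explanation, though not confirmed in the post itself: a std::fstream opened with in | out sets failbit when the file does not already exist, because neither flag implies creation. The earlier runs probably created test.txt in a different working directory, so later runs opened from a directory where it was absent. Adding trunc (or opening for output first) makes the open succeed. The helper names below are invented for illustration:

```cpp
#include <fstream>
#include <string>

// in|out alone never creates a file, so the open fails (failbit set)
// whenever 'path' does not exist yet -- exactly the "always outputs 1"
// symptom described above.
bool open_existing_rw(const std::string& path) {
    std::fstream f(path.c_str(), std::fstream::in | std::fstream::out);
    return !f.fail();
}

// Adding trunc makes the stream create (or empty) the file, so the open
// succeeds even on a fresh path -- and the write actually lands on disk.
bool create_rw(const std::string& path) {
    std::fstream f(path.c_str(),
                   std::fstream::in | std::fstream::out | std::fstream::trunc);
    if (f.fail())
        return false;
    f << " dsgesvsvds";   // same test write as in the snippet
    return true;
}
```

Under this reading, the snippet's constant `1` output is open_existing_rw() behavior on a missing test.txt; once the file exists (e.g. via create_rw(), `touch`, or a previous run), the original in|out open succeeds.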
https://cboard.cprogramming.com/cplusplus-programming/131706-std-fstream-fail-anomaly.html
IRC log of svg on 2011-11-04

Timestamps are in UTC.

00:19:30 [RRSAgent] RRSAgent has joined #svg
00:19:30 [RRSAgent] logging to
00:19:44 [ChrisL] ChrisL has joined #svg
00:20:11 [heycam] ChrisL, I forgot to make minutes, and it looks like the day is already "over" :/
00:20:20 [heycam] ChrisL, (RRSAgent disappeared earlier)
00:21:19 [ChrisL] rrsagent, make minutes
00:21:19 [RRSAgent] I have made the request to generate ChrisL
00:21:26 [heycam] oh wrong channel!
00:21:34 [ChrisL] heh
01:02:47 [myakura] myakura has joined #svg
01:30:51 [dino] dino has joined #svg
01:34:14 [myakura] myakura has joined #svg
01:55:25 [stakagi] stakagi has joined #svg
02:41:59 [si-wei] si-wei has joined #svg
02:43:04 [thorton] thorton has joined #svg
02:43:24 [si-wei_] si-wei_ has joined #svg
03:07:40 [myakura] myakura has joined #svg
03:28:18 [myakura] myakura has joined #svg
04:22:39 [plinss] plinss has joined #svg
15:53:44 [RRSAgent] RRSAgent has joined #svg
15:53:44 [RRSAgent] logging to
15:53:48 [Zakim] Zakim has joined #svg
15:53:50 [jun] jun has joined #svg
15:53:56 [heycam] RRSAgent, this meeting spans midnight
15:54:23 [plinss] plinss has joined #svg
15:54:26 [heycam] Meeting: Friday 4 November 2011 SVG F2F at TPAC2
15:54:29 [heycam] Chair: Cameron
15:54:34 [heycam] Agenda:
15:58:32 [stakagi] stakagi has joined #svg
15:58:56 [si-wei] si-wei has joined #svg
16:01:26 [cyril] cyril has joined #svg
16:02:19 [cyril] scribe: Cyril
16:02:24 [cyril] scribeNick: cyril
16:02:36 [cyril] Topic: SVG Japan updates
16:03:19 [thorton] thorton has joined #svg
16:03:41 [thorton] thorton has joined #svg
16:04:37 [Rossen] Rossen has joined #svg
16:05:35 [cyril] CM: First session is Updates from SVG Japan IG and then presentation for a mapping task force
16:05:47 [cyril] JF: I'd like to share some of the SVG related group in Japan
16:06:01 [cyril] ... the updates of the SVG JIS standardization activity
16:06:11 [cyril] ...
JIS has been working for over 3 years
16:06:12 [jdaggett_] jdaggett_ has joined #svg
16:06:15 [cyril] ... 3 committees
16:06:24 [howard] howard has joined #svg
16:06:39 [cyril] ... the first committee was held in 2009: translation of SVG T 1.2 in Japanese
16:06:49 [cyril] ... the 2010 meeting added features for mapping
16:07:00 [cyril] ... it is called the SVG Tiling module
16:07:29 [cyril] ... KDDI recently joined the W3C and submitted this spec for consideration by the SVG WG
16:07:49 [cyril] ... this year, we had the 3rd committee and its goal is to finalize the publication of the specs
16:08:12 [cyril] ... both spec should be published as official JIS standards in 2012
16:08:15 [si-wei] si-wei has joined #svg
16:08:23 [efidler] efidler has joined #svg
16:08:33 [cyril] ... remaining question: is there any room for the alignment for SVG Tiling and Layering module with SVG 2
16:08:41 [cyril] ... the anszer is no for the moment
16:09:02 [cyril] ... but we expect to have a chance to update the SVG Tiling and Layering spec when SVG 2.0 will be ready
16:09:08 [r12a] r12a has joined #svg
16:09:13 [cyril] s/anszer/answer/
16:09:29 [cyril] CM: is there anything specific about Tiny in SVG Tiling and Layer?
16:09:39 [cyril] JF: there is nothing specific, you can use SVG 1.1
16:09:54 [cyril] ... but the committee requested a scope for SVG Tiling and Layering
16:10:18 [cyril] ... and we chose SVG Tiny 1.2: officially it's a limitation, but technically not
16:10:46 [cyril] CM: JIS timeline is long, is there any concern about browsers not focusing on Tiny but on Full instead ?
16:10:53 [cyril] JF: yes, there are some concerns
16:11:22 [cyril] ... but when we decided for SVG T 1.2, the SVG WG was thinking of SVG T 1.2 as the core of future SVG specs
16:11:39 [cyril] ... we can update our standard when SVG 2 becomes available
16:11:57 [cyril] CM: JIS is 1.2 T + Mapping, that's it
16:11:59 [cyril] JF: yes
16:12:21 [cyril] CM: what implementations are you targeting ?
16:12:32 [cyril] JF: there are several implementations
16:12:38 [cyril] ... of tiling and layering features
16:13:12 [cyril] JF: ePub
16:13:23 [cyril] ... ePub 3.0 is being developed
16:13:30 [cyril] ... finished in may
16:13:39 [jun]
16:13:41 [cyril] ... published as the final specification from IDPF
16:14:13 [cyril] ... ePub 3.0 is based on HTML 5 and CSS technologies, with some support for vertical writing and asian languages
16:14:21 [cyril] ... SVG is also supported
16:14:31 [cyril] ... in the past you could only use SVG referenced from HTML
16:14:41 [TabAtkins_] TabAtkins_ has joined #svg
16:14:48 [cyril] ... in ePub 3 you can have only SVG
16:15:15 [cyril] ... the discussion of the next version has already started
16:15:22 [cyril] ... strong demand to get to high design publications like magazines
16:15:39 [cyril] ... IDPF held a workshop "Advanced Adaptive Layout Workshop"
16:15:42 [jun]
16:15:53 [jun]
16:16:30 [cyril] ... based on the discussions, IDPF decided to start a new activity Advanced Adaptive Layout WG
16:16:54 [jun]
16:17:11 [jun]
16:17:12 [cyril] ... starting from 2 Adobe proposals: CSS Regions & Exclusions,
16:17:13 [jen] jen has joined #svg
16:17:25 [jun]
16:17:38 [cyril] ... and CSS Page Layout
16:17:50 [cyril] s/PAge Layout/Page Templates/
16:18:00 [cyril] JD: these specs are new specs and touch layout
16:18:18 [cyril] ... the ePub book has a schedule that is going to be
16:18:32 [cyril] ... because the potential of divergence between CSS and ePub is high
16:19:00 [cyril] ... the proposal that Tab has put recently has more adoption
16:19:13 [cyril] ... but the layout part is still problematic
16:19:29 [cyril] ... the implementers don't have all the answers
16:19:45 [cyril] CM: the ePub guys want to embed off-the-shelf engines
16:19:57 [cyril] JD: but the plan also on repurposing the content
16:20:19 [cyril] JF: IDPF identified similar but different demands from the publishing industries
16:20:24 [cyril] ...
like fixed layout
16:20:33 [cyril] ... similar but different from adaptive layout
16:20:51 [cyril] ... there was another workshop last week in Taipei on this topic
16:20:57 [cyril] ... I attended this workshop
16:21:01 [jun]
16:21:16 [cyril] CM: I remember something about horizontal page
16:21:20 [ericm] ericm has joined #svg
16:21:41 [cyril] JF: the discussions are about using combinations of different technologies
16:21:52 [cyril] ... one proposal is based on the use of raster images
16:22:00 [cyril] ... the other proposal uses SVG
16:22:11 [cyril] CM: is it completely fixed layout
16:22:15 [cyril] ... with no change possible
16:22:17 [jun]
16:22:18 [cyril] JF: yes
16:22:24 [cyril] CL: like a comic book
16:23:00 [cyril] JF: it seems AAP (Association for American Publishers) and other Japanese publisher for mangas are in favor of this approach
16:23:13 [cyril] ... compared to adaptive layout
16:23:24 [cyril] ... their primary goal is to preserve the author intention
16:23:41 [cyril] JD: why not PDF then?
16:24:12 [jun]
16:24:13 [cyril] CM: one of the proposed mechanism is PDF
16:24:15 [cyril] ... did they decide ?
16:24:31 [cyril] JF: no, IDPF decided to have a new ad-hoc group
16:24:42 [cyril] ... rendition mapping data structure
16:25:02 [cyril] [see Taipei meeting notes for names of groups]
16:25:13 [heycam] Some metadata they seem to want:
16:25:17 [jun]
16:25:17 [jun]
16:25:23 [karl] karl has joined #svg
16:25:29 [cyril] s/a new ad-hoc group/2 new ad-hoc groups/
16:25:57 [cyril] JF: in summary, there are 2 activities for high quality
16:26:11 [cyril] ... related but based on different requirements
16:26:21 [cyril] CM: do they have requirements for SVG ?
16:26:25 [cyril] JF: not yet
16:26:55 [shepazu] shepazu has joined #svg
16:26:56 [cyril] ...
for dynamically showing flames with script, even on mobile devices
16:27:28 [cyril] CM: does ePub 3 target a particular edition of SVG
16:27:31 [cyril] CL: 1.1 SE
16:28:02 [cyril] JF: we are interested in keeping working in this area
16:28:18 [cyril] JF: Character Information Platform
16:28:40 [cyril] ... a ministry of the Japanese gov released font data
16:28:42 [jun]
16:28:49 [jun]
16:29:12 [cyril] JD: are you involved in that effort
16:29:41 [cyril] JF: yes, during the technical WG within the committee, I'm the chair of the technical WG
16:29:59 [r12a] i guess that METI stands for Ministry of Economy, Trade and Industry
16:30:00 [cyril] JD: is it the group registering the
16:30:14 [jdaggett_] Hanyo-Denshi
16:30:25 [jun]
16:31:17 [cyril] JF: this PDF contains a diagram
16:31:32 [cyril] JD: in Unicode you have a set of ideographic characters
16:31:47 [cyril] ... but in some cases, there are variants of the same characters
16:32:11 [cyril] ... but it's sometimes hard to say if glyphs are distinct or not
16:32:32 [cyril] ... Unicode code point for a base glyph and then ideographic variations of the character
16:32:45 [cyril] DS: is it for signatures, names of places
16:32:47 [cyril] JF: yes
16:33:17 [cyril] JD: names are not registered, only the characters
16:33:24 [cyril] DS: like a signatures
16:33:31 [cyril] s/signatures/signature/
16:33:38 [cyril] RI: they can be used for anything
16:34:01 [cyril] CM: without a register, does it mean that the IVS is not useful
16:34:48 [cyril] JD: the way Unicode defined it, they have a database (IVD) and interested parties can register these glyphs to have this selector
16:34:56 [cyril] CL: it's an ongoing registry
16:35:11 [cyril] JD: but if group A and group B are registering
16:35:19 [cyril] ... there is no requirement to see if the same glyph is being used
16:35:31 [cyril] ... so you could have 2 ways to encode the same glyph
16:35:44 [cyril] ... it's problematic at a different number of levels
16:35:55 [cyril] ...
font designers need to know
16:36:13 [cyril] ... they can't until the parties involved do the effort
16:36:16 [cyril] ... and that hasn't been done
16:36:32 [cyril] ... in June, the CSS WG sent a comment to Unicode, that it is not good for the Web
16:36:58 [cyril] ... because if someone does not have the right font, they wont see the variation
16:37:14 [cyril] ... from the perspective of people concerned about open standards, it's a mess
16:37:32 [cyril] ... in Japan it works but in the long term it's going to be a problem
16:37:46 [cyril] ... especially communicating outside of Japan
16:38:25 [cyril] JF: METI decided to define the Character Information Platform and created a committee to create a character set
16:38:42 [cyril] RI: is it defining glyphs and variation ?
16:38:46 [cyril] JF: yes
16:38:59 [cyril] ... the result is a widely available font
16:39:02 [jun]
16:39:20 [cyril] ... the size of the font is 30 MB in OTF
16:39:44 [cyril] ... the number of glyphs is over 58 K
16:39:56 [cyril] CL: 30 MB is zipped
16:40:02 [cyril] ... and 54 MB otherwise
16:40:11 [cyril] JF: the name of the font is IPA
16:40:35 [cyril] Information Technology Promotion Agency
16:40:46 [cyril] ... they provide the list of the characters defined
16:40:48 [jun]
16:41:15 [cyril] ... the table has an image over every glyph provided in SVG format
16:41:25 [cyril] you can click on each image to get the SVG version
16:41:31 [cyril] ... using the SVG font mechanism
16:42:24 [cyril] CM: one of the limitation of SVG, is that there wouldnt be ways of defining variations
16:42:30 [cyril] CL: there would using ligatures
16:42:44 [cyril] JD: there is a difference between ligatures and variations
16:42:49 [cyril] ... spacing breaks ligatures
16:42:57 [cyril] JF: it's a practical way
16:43:01 [cyril] JD: no it breaks
16:43:12 [cyril] ... OpenType has a mechanism
16:43:17 [cyril] ... that's practical
16:43:24 [cyril] ... it's not gsub but cmap
16:43:40 [cyril] ...
base character + selector = glyph
16:43:57 [cyril] ... there are several cases where ligatures are split (letter spacing)
16:44:19 [cyril] ... sometimes ligatures must be turned of
16:44:24 [cyril] s/of/off/
16:45:02 [cyril] RI: I don't understand the difference between handling lam-alif ligatures and variations
16:45:24 [cyril] JD: there is a distinction between required ligatures and other ligatures
16:45:45 [cyril] CL: it would be futile to add i18n features to SVG, it would be huge
16:46:14 [cyril] DS: and just information about required ligatures only
16:46:14 [cyril] CL: maybe
16:46:25 [cyril] DS: I agree that there is an existing mechanism
16:46:46 [cyril] ... but we could find another one
16:47:00 [cyril] CL: I was pleased to see that the publication provides the font and the mapping
16:47:21 [cyril] JD: but again the problem is that the publishing industry follows standards by Adobe
16:47:38 [cyril] ... and because of the way this has been registered, there are problems
16:48:19 [cyril] ... we made a comment to Unicode to not have a loose association
16:48:43 [cyril] JF: another interesting part is the creation of a new technical WG to perform demonstration experiments
16:48:48 [cyril] ... using that font
16:48:54 [cyril] ... I'm the chair of the WG
16:49:12 [cyril] ... with vendors like Microsoft, Mozilla, Google
16:49:34 [cyril] ... on the demonstration system, we are planning to use SVG fonts for non UCS code points
16:49:43 [cyril] ... and we plan to have a WOFF version of the font
16:50:13 [cyril] ... we will probably discuss how we can split the font so that the browsers download only the required glyphs
16:50:28 [cyril] RI: are you subsetting based on the document used ?
16:50:40 [cyril] JF: based on unicode-range
16:50:58 [cyril] RI: you might have then a document using a character that is not in the font
16:51:38 [cyril] JD: I'm interested in trying to improve the practicality of the subsetting
16:51:44 [cyril] ...
you have to put a long list
16:51:57 [cyril] ... but if you group with the most used first and then the least used
16:52:20 [cyril] ... is there a way to decide on names for the ranges
16:52:22 [Suresh] Suresh has joined #svg
16:52:52 [cyril] ... I've asked Google to try and analyse the number to come with on the Web in Japanese what are the rankings
16:53:50 [cyril] CL: the meaning of based character + IVS was not defined so these variations won't appear in the ranking
16:54:12 [cyril] JD: as long as the base character was defined you will get them
16:54:46 [cyril] CM: for this demonstration, practically, generating a WOFF font might be a good idea
16:55:11 [cyril] JF: I'd like suggestions from the SVG WG on SVG fonts, how to create WOFF fonts, on the use of SVG in OTF fonts
16:55:29 [cyril] CL: there are things that OTF does that SVG does not
16:55:42 [cyril] ... and there are things that SVG fonts can do but not OTF
16:55:54 [cyril] ... there is an effort to put SVG outlines in OTF
16:56:15 [cyril] ... that's how we get the best of both worlds
16:56:16 [cyril] ... including multi-colored, animated fonts
16:56:32 [cyril] ... WOFF brings compression, subsetting and license and metadata
16:57:24 [cyril] JD: the format is not important, but not all browsers support the type 14 of cmap
16:57:43 [cyril] ... not webkit, IE9 does, some version of firefox does
16:58:20 [cyril] EF: most of this stuff is handled in a platform specific platform not generically in Webkit
16:59:01 [cyril] JD: for the support in browsers you need to have wide availability of the font and it has to be small size for phones
16:59:17 [cyril] CM: if you are looking for format, there is not a single one
16:59:33 [cyril] JF: we already decided on OTF but we want to test other formats
16:59:45 [cyril] CL: what about Opera and Type 14 cmap ?
16:59:54 [cyril] ED: I'm not sure
17:00:12 [cyril] JD: when you subset, instead of have 1 font you have 10
17:00:46 [cyril] ...
the first one has the 2K most frequent characters
17:01:23 [cyril] ... in unicode-range, you declare that you don't have the characters outside the range
17:01:47 [cyril] RI: if you split a 54 MB font in 10, you still have large fonts
17:02:02 [ed] ACTION: ed to check the status of opera support for type 14 CMAP in opentype fonts
17:02:02 [trackbot] Created ACTION-3172 - Check the status of opera support for type 14 CMAP in opentype fonts [on Erik Dahlström - due 2011-11-11].
17:02:11 [cyril] JD: frequency is very important to manage the size
17:02:30 [cyril] JF: we want to study the feasibility of downloadable fonts in Japan
17:02:44 [cyril] JD: it would be useful to know the frequency of character data
17:03:01 [cyril] CL: in practice, you want to split the font into 100 of fonts
17:03:41 [cyril] Topic: Mapping Taskforce / Tiling and Layering
17:04:27 [cyril] ST: I'd like to share some information on the Mapping Taskforce
17:04:46 [cyril] ... Tiling and Layering is a functionality for Mapping
17:04:52 [stakagi]
17:05:00 [ChrisL] ChrisL has joined #svg
17:05:19 [cyril] ... I divided in 3 categories
17:05:28 [ChrisL] rrsagent, this meeting spans midnight
17:05:34 [ChrisL] rrsagent, here
17:05:34 [RRSAgent] See
17:05:39 [cyril] ... markup language, functionality and UI of the browsers and last API
17:06:10 [cyril] ... some topics were discussed in F2F last week
17:06:11 [ChrisL] rrsagent, make logs public
17:06:16 [cyril] ... I appended my comments to each item
17:06:41 [stakagi]
17:07:25 [stakagi]
17:08:13 [stakagi]
17:09:38 [cyril] CC: why use the <animation> element instead of <use> or <image>?
17:37:44 [RRSAgent] RRSAgent has joined #svg
17:37:44 [RRSAgent] logging to
17:38:46 [cyril] RESOLUTION: Richard must have a Happy Birthday !
17:39:15 [cyril] DS: one of the uses besides mapping is for High Res photos for medical imaging data
17:39:55 [cyril] ACTION: Doug to contact OpenStreetMap people to participate in the Mapping TF
17:39:55 [trackbot] Created ACTION-3173 - Contact OpenStreetMap people to participate in the Mapping TF [on Doug Schepers - due 2011-11-11].
17:40:26 [cyril] DS: it might make sense to have it as a community group also
17:40:30 [cyril] [break]
17:46:59 [r12a] r12a has joined #svg
17:47:50 [r12a] r12a has joined #svg
17:48:08 [si-wei] si-wei has joined #svg
17:50:42 [r12a] r12a has joined #svg
17:57:25 [jay] jay has joined #svg
17:58:16 [jay] Present+ Jongyoul_Park
18:07:40 [r12a] r12a has joined #svg
18:08:05 [jdaggett_] jdaggett_ has joined #svg
18:08:41 [ed] scribeNick: ed
18:08:46 [ed] topic: testing
18:09:13 [ed] CL: automated script type testing
18:09:36 [ed] ... new w3c testing reporting framework, being developed
18:09:44 [ed] ... gets automatic reporting back
18:10:03 [ed] CM: different to the test harness thing?
18:10:18 [ed] CL: yes
18:10:28 [ed] CM: someone should have a look at the existing frameworks to figure out how we can use them
18:10:44 [ed] CL: i have some experience with that
18:11:14 [jen] jen has joined #svg
18:11:14 [ed] ACTION: CL to investigate testing template needs for the new test system
18:11:15 [trackbot] Created ACTION-3174 - Investigate testing template needs for the new test system [on Chris Lilley - due 2011-11-11].
18:11:24 [ed] CL: already discussing how this should work for svg
18:11:45 [ed] CC: linking from tests to spec?
18:12:11 [ed] CL: yes, you have to edit some metadata to get that, but there's a script that inserts the tests into the spec
18:12:27 [ed] s/tests into/links to the tests into/
18:12:50 [ed] CM: when people create tests they should link to the spec
18:13:00 [ed] CL: yes, the other way around would be harder
18:13:43 [ed] ... when we create a setup it will generate a harness automatically
18:14:12 [ed] ...
it gives us pass/fail buttons for manual test reporting
18:14:20 [ed] ... you can also import results from a textfile
18:14:57 [ed] ... stats are provided, you can run per chapter, most needed tests (least tested)
18:15:12 [ed] CM: what's the current state of running reftests?
18:15:20 [ed] CL: not sure about that, need to investigate
18:15:30 [ed] CM: what's the scope of the browser testing group?
18:15:37 [ed] CL: infinite, don't know
18:15:54 [ed] CM: would be good for automation
18:16:03 [ed] ... would be good to look into
18:16:30 [ed] ... svg might need special API's, would be good if someone from here was in that group
18:16:44 [ed] ... TA you're in that group right?
18:16:46 [ed] TA: yes
18:17:18 [ed] CM: would be good to sort out testing now so that we can start writing tests while developing the new specs
18:17:50 [ed] CL: we will have a good start if we import the existing testsuite
18:18:11 [ed] CM: right, but it will still need to go through review again
18:18:40 [ed] CL: at 12pm we'll have a guy from nvidia to talk about 2d graphics features
18:19:14 [ed] ... alex danilo mentioned this new nvidia API at svg open
18:19:25 [ed] ... we'll hear more about how svg could utilise this
18:19:38 [ed] ... let's return to the svg2 requirements list
18:19:46 [ed] topic: svg2 requirements
18:20:11 [heycam]
18:20:12 [ed] CM: polar coordinates for paths
18:20:44 [ed] CC: before we dig in, we're still getting requests for features, how should we deal with them?
18:20:53 [ed] DS: maybe we should use bugzilla
18:21:14 [ed] CC: yes, but is there a cutoff date for new reqs?
18:21:36 [ed] DS: we should use bugzilla, one feature per request
18:21:48 [ed] ... avoid the long lists in one email
18:21:57 [ed] ... and it gives us trackability
18:22:27 [ed] CM: it's unlikely that reqs will come in before we're done going through the list we have atm
18:22:42 [ed] ...
unlikely that we can't handle any additional requests
18:22:59 [ed] CC: if we get one or two sure, but if we get a lot of them?
18:23:13 [ed] CM: after we've settled on the list of reqs, that's a good cutoff point
18:23:25 [ed] ... probably ok to keep gathering reqs for a while longer
18:23:46 [ed] CM: ok, back to the issues list
18:23:53 [heycam]
18:23:55 [ed] ... this is DOH's proposal
18:24:13 [ed] ... remember seeing a script impl of his proposal
18:24:37 [ed] ... not grounded in use-cases
18:24:46 [ed] ... not sure it's worth the complexity
18:24:57 [ed] ... it's reasonably simple to do in script
18:25:47 [ed] ... there's another proposal to be able to setup a polar coordinate system
18:26:21 [ed] CC: the whole issue with scripting, there are envs where scripting is not enabled
18:26:38 [ed] ... need to be sure we don't disregard it if it's an important use-case
18:26:44 [ed] ... fonts, etc
18:26:57 [ed] ... not sure if this is the case here
18:27:11 [ed] JY: what does it do?
18:27:41 [ed] CM: being able to easily create polygons, without having to figure out exact points and so on
18:27:54 [ed] ... i think this one is probably not so important
18:28:02 [ed] ... does everyone agree with that?
18:28:17 [ed] (silence in the room)
18:28:32 [ed] DS: such a small script, we could just add it, but it doesn't seem broadly useful
18:28:42 [ed] JY: yes
18:28:58 [ed] TA: it does more than just the polygons
18:29:20 [ed] DS: i had a competing proposal
18:29:34 [ed] ... basically a polygon thing, but it wasn't using polar coordinates
18:29:45 [ed] ... i think it could be useful
18:29:49 [Rossen] Rossen has joined #svg
18:30:37 [ed] DS: maybe broaden the scope to investigate improving polygon
18:30:52 [ed] CM: don't the path extensions already address this?
18:31:25 [ed] CC: the script is so small, but we still need to test, which is more work
18:31:52 [ed] ...
even if the implementation is small too
18:32:14 [ed] DS: the number of things you can do with it means it needs more testing
18:32:15 [ed] TA: if you need total coverage yes
18:32:27 [ed] ... you can machine generate tests, so it's not impossible
18:32:40 [ed] ... must be justified by the functionality though
18:33:11 [ed] CM: i'd be ok with a req for us to make it easier to draw and animate regular polygons
18:33:26 [ed] DS: another aspect is that there are visualizations that are polarbased
18:33:34 [ed] ... would be nice if we could do those easily
18:34:20 [ed] CL: if we introduce polar coordinates we'll have to worry about how that works with the existing transforms and so on
18:34:50 [ed] ... we also have the ref transform, so it can get complex
18:35:20 [ed] CM: stars are not regular polygons, but they are similar
18:35:27 [ed] CC: you can do them with the polar element
18:35:35 [shepazu]
18:36:13 [ed] CL: in 1996 i was given an action to draft polygon which had number of corners etc, but it was dropped
18:36:39 [ed] DS: (shows the starmaker script)
18:36:46 [ed] ... not a concrete proposal
18:37:17 [ed] CM: do we want to broaden the scope outside of regular polygons or not?
18:37:43 [ed] ... even for regular polygons, you might have a stopsign or something, but how often would it be useful?
18:38:15 [ed] DS: stars would be useful, need the centerpoint for easily translation and animation
18:38:43 [ed] CM: if the improved path commands would solve the usecase...
18:38:59 [ed] CC: right, it's not so important how it's solved at this stage
18:39:16 [ed] CM: might open the door for something too complex
18:40:29 [ed] RESOLUTION: make it simpler to construct regular polygons and stars
18:41:12 [cyril] RRSAgent, pointer please
18:41:12 [RRSAgent] See
18:41:15 [ed] CM: next, introduce elementbased path syntax
18:41:16 [heycam]
18:41:51 [ed] DS: have written a prototype, don't know where it is now
18:42:02 [ed] ...
pretty easy to collapse the syntax back to a path 18:42:11 [ed] ... the EXI group wants the elementbased syntax 18:42:32 [ed] ... another proposal was to... (doug draws on whiteboard) 18:43:30 [ed] <path d="M 0 0..., seg(#somepath) Z"/> 18:43:44 [ed] ... this would make it possible to do composite paths 18:43:55 [ed] ... not exactly what the EXI group wanted though 18:44:20 [ed] ... not clear EXI is the way ppl push content to the web, but we might want to make a module for it 18:44:40 [r12a] r12a has joined #svg 18:44:45 [ed] ... that might create a division since the content might not work in browsers 18:45:16 [ed] ... if we do this we need to define a normalization algorithm so that we can go from one to the other syntax 18:45:34 [ed] TA: i like it, because generating a path string is a bit annoying 18:45:45 [jun] 18:45:55 [ed] CC: js libraries help you with some of this, have path helpers 18:46:13 [ed] ... raphael, d3 etc 18:47:14 [ed] (DS gives an example of element syntax on whiteboard) 18:47:33 [ed] DS: each element has their own attributes 18:47:47 [ed] CC: the cubic element could be used for the gradient meshes 18:48:35 [ed] ... if we can compress svg content to deliver it to mobile phones and networks that's good, but we need to make sure the DOM doesn't explode 18:49:14 [ed] CM: it will be slow if the DOM is too big 18:49:20 [ed] TA: my use-case is to make a non-sucky path API 18:49:57 [ed] CM: it's not clear EXI is the solution 18:50:14 [ed] TA: it's for XML, not for html 18:50:25 [ed] ... unless it's extended 18:50:44 [ed] CC: they want a specific coding for elements and attributes for compression 18:51:01 [ed] CM: so they can compress based on the schema 18:51:34 [ed] TA: don't think svg is a big deal for EXI 18:51:43 [ed] JF: EXI can be applied to any xml content 18:51:53 [ed] ... we are discussing how to apply EXI to html content 18:52:03 [ed] DS: it would still have to be wellformed html? 
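The "pretty easy to collapse the syntax back to a path" step DS mentions can be sketched in a few lines of script. Everything here is hypothetical: the WG had no concrete element syntax at this point, so the child element names (`moveTo`, `lineTo`, `curveTo`, `close`) and their attributes are illustrative only, not a proposal.

```javascript
// Hypothetical element-based path syntax, e.g.
//   <path><moveTo x="0" y="0"/><lineTo x="10" y="0"/><close/></path>
// collapsed (normalized) back into an ordinary d="" string.
// The element names and attributes are illustrative, not a WG proposal.
const COMMANDS = {
  moveTo:  seg => `M ${seg.x} ${seg.y}`,
  lineTo:  seg => `L ${seg.x} ${seg.y}`,
  curveTo: seg => `C ${seg.x1} ${seg.y1} ${seg.x2} ${seg.y2} ${seg.x} ${seg.y}`,
  close:   () => 'Z',
};

// segments: an array of {type, ...attrs} objects standing in for child elements
function collapseToPathData(segments) {
  return segments.map(seg => COMMANDS[seg.type](seg)).join(' ');
}
```

Defining a strict normalization like this (in both directions) is exactly what DS asks for later in the discussion, so that element syntax and `d` syntax stay interchangeable.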
18:52:18 [ed] CC: but html is generally small, however svg maps can be huge 18:52:35 [ed] JF: maps contain a lot of path data 18:52:57 [ed] TA: so we could make an appendix covering this, or a module 18:53:02 [ed] DS: yes for transport 18:53:06 [dbaron] dbaron has joined #svg 18:53:48 [r12a] r12a has joined #svg 18:54:19 [ed] CL: a path with an attribute could be expanded out to a shadow dom, let's you fiddle with it 18:54:28 [ed] CC: could be done with a nice path API 18:54:56 [ed] DS: declarative animation of subpaths 18:55:37 [jun](v=vs.95 ).aspx 18:55:40 [ed] JF: MS silverlight provided two ways to describe the path information 18:56:13 [ed] DS: all children of a path could be discarded by the DOM and put into the attribute? 18:56:20 [ed] ... the DOM doesn't allow that 18:56:33 [ed] CM: would feel weird to me 18:57:17 [heycam] s/let's/lets/ 18:58:14 [ed] DS: there are modes in the html5 parser where it inserts tbody 18:58:31 [ed] ... there's precedent for it 18:58:42 [ed] CM: i'd be ok with having an appendix for EXI 18:59:12 [ed] ... but i don't see allowing EXI to compress svg is enough to require implementations to support this syntax 18:59:23 [ed] CC: maybe if it got more popular? 19:00:42 [ed] CM: would prefer to not resolve to require element syntax, but to have a document for EXI purposes how you could represent paths in element syntax 19:01:13 [ed] DS: yoking this to a better path api would be useful 19:01:46 [ed] CC: we'd have to make sure the element syntax is better for EXI, there are many ways we could choose the syntax 19:01:57 [ed] DS: so we should ask the EXI group about that 19:02:20 [ed] ... if we do this at all I'd like us to be strict about the normalization 19:02:35 [ed] ... so that implementations know what to do with this syntax 19:02:59 [ed] CM: i would expect browsers to just put it in the DOM and not do anything with it 19:03:13 [ed] DS: worst of both worlds 19:03:24 [cabanier] Is there a conference code for today's meeting?
19:04:33 [ed] CM: if it's a lot of overhead to transmit documents like this then we don't want ppl to use this 19:07:25 [heycam] Zakim, room for 3? 19:07:26 [ed] ACTION: JF to talk to the EXI WG about requirements for element based syntax for svg 19:07:27 [Zakim] ok, heycam; conference Team_(svg)19:07Z scheduled with code 26632 (CONF2) for 60 minutes until 2007Z 19:07:27 [trackbot] Created ACTION-3175 - Talk to the EXI WG about requirements for element based syntax for svg [on Jun Fujisawa - due 2011-11-11]. 19:08:24 [Zakim] Team_(svg)19:07Z has now started 19:13:03 [ed] topic: Hardware acceleration 19:14:44 [ed] (Neil Trevett from nvidia gives presentation about GPU acceleration of svg) 19:14:59 [ed] NT: nvidia will get more involved in the svg wg 19:15:34 [ed] ... we've created an extension to opengl to offload path rendering to the GPU 19:15:41 [ed] ... up to 100 times faster 19:15:55 [ed] ... without sacrificing quality, instead improving it 19:16:16 [ed] ... can save power, good for mobiles 19:16:18 [ed] ... mixes well with 2d and 3d 19:16:37 [ed] ... shipping today, all desktop gpus have this in their installed drivers 19:16:42 [ed] ... coming to mobile soon 19:16:48 [ed] ... it's a stencil then cover approach 19:17:42 [ed] ... once the stencil is generated it can be used for rendering 19:17:54 [ed] ... has full support for text 19:18:33 [ed] ... can avoid approximations when running on gpu, improves quality 19:18:48 [ed] ... more accurate 19:19:13 [ed] ... doesn't have to do subdivision or tessellation, stroking is exact, caps, dashing supported 19:19:51 [ed] ... antialiasing uses jitter pattern 19:20:29 [ed] ... avoids some artefacts, overlaps and holes 19:21:24 [ed] ... (compares performance software vs hardware) 19:22:08 [ed] ... hw is often 5 times faster, but it's never slower than software 19:22:20 [ed] CC: coat of arms example is slower no?
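The "stencil then cover" approach NT describes can be illustrated on the CPU: first accumulate a winding count per sample point against the path's edges (the "stencil" step), then fill wherever the count is nonzero (the "cover" step). This is only a toy sketch of the idea for a polygonal path, not NVIDIA's implementation, which does the equivalent work in the GPU's stencil buffer:

```javascript
// Toy CPU illustration of stencil-then-cover filling.
// Step 1 ("stencil"): accumulate a nonzero winding count for a sample point.
// Step 2 ("cover"): the sample is filled when the count is nonzero.
function cross(x1, y1, x2, y2, [px, py]) {
  // sign of the cross product: which side of edge (x1,y1)->(x2,y2) the point is on
  return (x2 - x1) * (py - y1) - (px - x1) * (y2 - y1);
}

function windingCount(point, polygon) {
  let winding = 0;
  for (let i = 0; i < polygon.length; i++) {
    const [x1, y1] = polygon[i];
    const [x2, y2] = polygon[(i + 1) % polygon.length];
    if (y1 <= point[1]) {
      // upward crossing to the left of the point increments the count
      if (y2 > point[1] && cross(x1, y1, x2, y2, point) > 0) winding++;
    } else if (y2 <= point[1] && cross(x1, y1, x2, y2, point) < 0) {
      // downward crossing decrements it
      winding--;
    }
  }
  return winding;
}

const inside = (point, polygon) => windingCount(point, polygon) !== 0;
```

Because the two steps are decoupled, the same "stencil" can be reused for several "cover" passes, which is part of why the GPU version avoids re-tessellating paths.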
19:22:29 [ed] NT: not slower, it's the same speed 19:25:56 [jun] jun has joined #svg 19:26:12 [ericm] ericm has joined #svg 19:26:15 [stakagi] stakagi has joined #svg 19:26:20 [thorton] thorton has joined #svg 19:26:33 [ed] ... we have started creating an experimental svg renderer 19:26:33 [ed] ... missing some parts 19:26:42 [ed] ... some new features, advanced stroking, sRGB correct rendering 19:26:44 [ed] ... 4x4 transforms 19:26:50 [ed] ... mixing text, 3d and paths, proper path perspective transforms 19:26:52 [ed] DS: what about filters? 19:26:56 [ed] CM: has to work with opengl anyway, so write shaders in GLSL 19:27:00 [ed] NT: yes, the css shaders proposal uses that so yes 19:27:57 [ed] DS: do you do performance testsuites? 19:28:14 [cyril] cyril has joined #svg 19:28:21 [ed] ... w3c haven't had any, but we might want to consider performance a test criteria 19:28:26 [efidler] efidler has joined #svg 19:28:34 [ed] NT: we have that discussion in the khronos group 19:28:54 [ed] ... but who are we to judge the use-cases, depends on other requirements, not the same for everyone 19:29:25 [ed] DS: sure, but just being able to test things that are meant to be performant 19:29:45 [ed] NT: but you'd have to be careful to not construct a benchmark 19:29:49 [stakagi] About projective transforms, Is Mercator Projection possible? 19:30:12 [ed] NT: not sure 19:31:21 [ed] CM: one of the things that come up when discussing features is whether things are easily hw acc 19:31:30 [ed] ... so what's possible to do in graphics libs 19:31:55 [shepazu] shepazu has joined #svg 19:31:59 [ed] ... would be good to know if design decisions we do would make things better or worse for hw acc 19:32:22 [ed] ... also whether it will take time before drivers support these things 19:33:26 [ed] NT: i'm pushing nvidia to join svg wg 19:33:50 [ed] ... to have better communications with the group 19:34:44 [ed] DS: is intel a member of khronos?
19:34:47 [ed] NT: yes 19:35:11 [ed] DS: we could consider making a joint deliverable between khronos and w3c 19:35:25 [ed] NT: the value of khronos is that all the hw vendors are there 19:36:26 [ed] EF: what do the other vendors think of the nvidia path extension proposal? 19:36:49 [ed] NT: it's a little bit soon for mobile 19:36:56 [ed] ... but it's desktop today 19:37:37 [ed] EF: for the vendors that have the capability do they support the proposal? 19:38:00 [ed] NT: we haven't officially proposed it yet, we're waiting for feedback 19:38:21 [ed] EM: motorola mobility is really interested in this 19:38:49 [ed] CM: can direct2d do similar things? 19:38:51 [cabanier] Reading your documentation, there is a novel way of doing anti-aliasing that doesn't require FSAA. Can you tell us how it's done? Is there a test application that demoes it? 19:38:53 [jdaggett_] jdaggett_ has joined #svg 19:38:56 [r12a] r12a has joined #svg 19:39:02 [ed] JY: i don't know the details, but for filter effects yes 19:39:34 [ed] NT: the test app uses AA in the gpu 19:39:44 [ed] ... 16 bits of stochastic AA 19:40:19 [ed] RC: the AA implementation looks interesting, seems to require less memory 19:40:48 [ed] CM: we discussed seams in svg rendering the other day, and the person asked for FSAA to be added 19:41:00 [ed] ... are there other good approaches we should consider? 19:41:12 [ed] NT: reached the end of my expertise, sorry 19:41:50 [ed] DS: having you help us with testing and creating tests 19:41:55 [ed] ...
would be good 19:42:03 [ed] NT: yes, we could help out with that 19:42:22 [ed] DS: mutually beneficial spiral yes, for hw vendors too 19:43:13 [ed] NT: it's a bit different from direct2d, but it's not a layer on top of gl, it's an extension, to access the gpu directly 19:43:13 [shepazu] s/testing and creating tests/testing and creating tests, and reusing our tests/ 19:43:36 [ed] CM: would be great to have mark join the group 19:44:20 [ed] DS: would be good to get some hw people on the wg 19:44:44 [ed] --- break for lunch --- 19:44:49 [ed] resumes at 2pm 19:49:54 [cyril_] cyril_ has joined #svg 19:55:49 [stakagi] stakagi has joined #svg 20:31:19 [TabAtkins_] TabAtkins_ has joined #svg 20:39:15 [stakagi] stakagi has joined #svg 20:42:54 [thorton] thorton has joined #svg 20:54:53 [efidler] efidler has joined #svg 20:55:14 [heycam] Zakim, status? 20:55:14 [Zakim] I don't understand your question, heycam. 20:55:18 [heycam] Zakim, code? 20:55:18 [Zakim] the conference code is 26632 (tel:+1.617.761.6200 sip:zakim@voip.w3.org), heycam 20:55:24 [heycam] Zakim, who is on the call? 20:55:24 [Zakim] On the phone I see no one 20:55:39 [Zakim] Team_(svg)19:07Z has ended 20:55:41 [Zakim] Attendees were 20:56:15 [heycam] Zakim, room for 3? 
20:56:16 [Zakim] ok, heycam; conference Team_(svg)20:56Z scheduled with code 26632 (CONF2) for 60 minutes until 2156Z 20:56:42 [Zakim] Team_(svg)20:56Z has now started 20:57:02 [shepazu] shepazu has joined #svg 20:57:13 [ericm] ericm has joined #svg 20:57:55 [Tavmjong] 20:59:27 [jun] jun has joined #svg 21:00:03 [TabAtkins_] TabAtkins_ has joined #svg 21:01:17 [vhardy] vhardy has joined #svg 21:01:48 [Rossen] Rossen has joined #svg 21:02:29 [r12a] r12a has joined #svg 21:02:35 [Bert] Bert has joined #svg 21:02:40 [r12a] r12a has left #svg 21:04:39 [jen] jen has joined #svg 21:04:43 [ChrisL] ChrisL has joined #svg 21:05:06 [ChrisL] scribenick: ChrisL 21:05:11 [ChrisL] topic: svg spec editing 21:05:23 [Tavmjong] 21:05:52 [ChrisL] tb: spent some time working on this document, wrote up my experiences 21:06:15 [ChrisL] ... goal: A clearly written SVG 2.0 specification that also happens to look good. 21:06:26 [ChrisL] ... css3 fonts spec used as a model 21:06:39 [ChrisL] ... changed stylesheet, used css3 values style 21:06:45 [ChrisL] ... added an annotation class 21:07:11 [ChrisL] ... preserves history of the reason for decisions 21:07:33 [ChrisL] cm: switched off by default and alternate stylesheet to show? 21:07:37 [ChrisL] tb: yes 21:07:49 [ChrisL] tb: added some svg-specific styling also 21:08:32 [ChrisL] ... publish.xml changed, replaced tables with divs which style easier. updated figure handling to allow captions 21:08:44 [ChrisL] cm: looks a lot more like the css fonts figures 21:09:17 [ChrisL] tb: current spec lacks a lot of figures 21:09:39 [ChrisL] tb: unified style, some graphics were tiny others huge, and the colours all over the place 21:09:58 [ChrisL] ... for svg graphics, updated to remove dtd and change titles to be useful 21:10:16 [ChrisL] ... attr lists are all crammed together, replaced by paragraphs 21:10:41 [ChrisL] ... rather than line breaks 21:11:24 [ChrisL] ...
much more readable 21:11:33 [ChrisL] cm: never liked the old styling anyway 21:12:19 [cyril] 21:12:24 [ChrisL] cm: vincent has also experimented with specs, see his css transforms spec as an example 21:12:41 [ChrisL] ... started from same stylesheet, made more changes 21:12:58 [ChrisL] ... we need to discuss together and settle on a consistent spec style 21:13:48 [ChrisL] cc: so you changed to a less obvious gradient, light purple to white rather than red to green 21:13:58 [ChrisL] ... previously you could see the exact stops 21:14:23 [ChrisL] ... not against making colours less jarring but you should still see the differences between the colours 21:14:56 [ChrisL] cm: do like the additional figures, they explain it well 21:15:22 [ChrisL] ed: better if h & v lines aligned with pixel grid so they are sharp 21:16:05 [jdaggett_] jdaggett_ has joined #svg 21:16:41 [ChrisL] cl: i think they are just grey lines, not intended to be black 21:17:18 [ChrisL] cm: what about including inline figures 21:17:42 [dbaron] dbaron has joined #svg 21:17:48 [ChrisL] tb: some will not render correctly in browsers, if they use new features 21:18:13 [ChrisL] cm: the circles example is like the nvidia demo earlier today, bunched up radial gradients 21:18:42 [ChrisL] cl: well known case that depends on sub-pixel precision 21:18:55 [ChrisL] tb: I added solid color 21:19:01 [ChrisL] cm: we resolved to add that? 21:19:13 [ChrisL] (several: yes) 21:19:24 [ChrisL] cc: its below where we got to in requirements list 21:20:02 [ChrisL] tb: is there a list? 21:20:17 [ChrisL] cm: added to the wiki page, it links to the minutes 21:20:41 [ChrisL] ... probably not what you want to link to from the spec annotation though 21:20:45 [plinss] plinss has joined #svg 21:20:48 [ChrisL] tb: what would be better 21:20:51 [ChrisL] cc: its nice 21:21:11 [ChrisL] cm: extra figures are great, colours can be discussed 21:21:15 [ChrisL] ...
general direction of spacing etc is good 21:21:45 [ChrisL] cm: may need some more radical restructuring. in terms of dom rather than how markup is rendered 21:21:55 [ChrisL] ... but that does not prevent the current improvements 21:22:12 [ChrisL] cc: dom interfaces remain in the chapter? 21:22:15 [ChrisL] cm: yes and should be more prominent in each section 21:22:31 [ChrisL] cm: so tav keep on experimenting, this is helpful 21:23:03 [ChrisL] ed: so we are going with the testing framework, might be nice to annotate things to keep them separate for the spec 21:23:17 [ChrisL] ... maintained by bugzilla and then imported in 21:23:46 [ChrisL] ed: concern on the annotations getting out of date 21:24:11 [ChrisL] cm: like the annotations that say what is new 21:24:56 [ChrisL] cm: see issue 4 in linear gradients 21:25:18 [ChrisL] "Could this be written in a less legalese way? " 21:26:13 [ChrisL] cc: lacuna value was used to express that in tiny1.2 21:26:27 [ChrisL] cl: yes and I see erik suggested adopting that in 2 - I agree 21:27:23 [ChrisL] cc: we need a ara upfront in the spec explaining lacuna, default, initial etc 21:27:37 [cyril] s/a ara/a paragraph/ 21:28:36 [ChrisL] cl: we need to be precise, don't want to be short and imprecise 21:28:48 [ChrisL] cm: great work tav 21:29:02 [ChrisL] cc: you have not put this in mercurial? 21:29:11 [ChrisL] tb: no, not yet and not sure how to 21:29:22 [ChrisL] cm: jwatt wrpte it up in a wiki page 21:29:32 [cyril] s/wrpte/wrote/ 21:29:38 [ChrisL] s/wrpte/wrote/ 21:29:57 [ChrisL] ds: talking with vincent about this restyling?
21:30:14 [ChrisL] cm: yes he is, we mentioned that earlier 21:30:15 [ChrisL] ds: should have a single style 21:30:25 [ChrisL] cl: yes the point was made earlier 21:30:44 [ChrisL] cc: intereting the two editing companies are providing styles (adobe and inkscape) 21:30:54 [ChrisL] s/ret/rest/ 21:31:01 [ChrisL] ds: like the annotations 21:31:13 [ChrisL] tb: by default the style for those is turned off 21:32:11 [ChrisL] ds: any spec freature we put in the spec should have ids so they can be linked to 21:32:25 [ChrisL] s/freature/feature/ 21:32:39 [ChrisL] cm: chris was saying earlier about generating links to tests 21:33:57 [ChrisL] cl: sync issue with maintaining info in multiple places 21:34:28 [ChrisL] ds: scheme css wg is using is not fine grained enough. spec section is good but specific assertions is much better 21:34:59 [ChrisL] cl: in woff the first link is to section and subsequent ones to specific testable assertions 21:35:11 [ChrisL] cc: so where are the rules for editing? 21:35:16 [ChrisL] ds: wiki page 21:35:28 [ChrisL] ... and we should work on this with other groups as well 21:35:59 [ChrisL] cl: we need to document the existing spec first 21:36:45 [ChrisL] ds: yes, but we have traction in several groups already, especially for apis and events - things we all want to mark up 21:37:58 [ChrisL] cl: the text "issue 6" etc is css text so its not searchable or copy/pastable 21:38:16 [ChrisL] ds: they should link to actual isues in tracker 21:38:40 [ChrisL] cm: and when you delete one the others would renumber 21:38:46 [ChrisL] s/isues/issues/ 21:39:47 [ChrisL] tb: some of those issues would not be in tracker 21:39:55 [ChrisL] cl: prefer they are all in tracker 21:40:18 [ChrisL] ds: for anything like that, it needs to link to a bug tracker 21:40:59 [ChrisL] cm: to capture some initial rules, could you start a wiki page that lists these?
21:41:30 [ChrisL] cl: and cameron please document or link to the xml build system docs 21:41:59 [heycam] ACTION: Tav to write up wiki page documenting spec writing rules, like annotations 21:41:59 [trackbot] Created ACTION-3176 - Write up wiki page documenting spec writing rules, like annotations [on Tavmjong Bah - due 2011-11-11]. 21:42:39 [ChrisL] cm: good to have a go at incorporating this into the repository and try to build it 21:43:00 [ChrisL] action; cameron to ensure current markup rules linked from tav's wiki page on editing 21:43:20 [ChrisL] action: cameron to ensure current markup rules linked from tav's wiki page on editing 21:43:21 [trackbot] Created ACTION-3177 - Ensure current markup rules linked from tav's wiki page on editing [on Cameron McCormack - due 2011-11-11]. 21:44:13 [ChrisL] cm: in 15 minutes we have web components work discussion, starts at 3pm. so before, we could look at one or two requirements issues 21:44:21 [ChrisL] topic: requirements once more 21:44:30 [cyril] 21:44:52 [ChrisL] topic: path arcto with 360 arcs 21:45:11 [cyril] zakim, draft minutes 21:45:11 [Zakim] I don't understand 'draft minutes', cyril 21:45:21 [cyril] RRSAgent, draft minutes 21:45:21 [RRSAgent] I have made the request to generate cyril 21:45:44 [ChrisL] ed: might fall out of the pattern elements we discussed previously 21:45:45 [cyril] RRSAgent, pointer 21:45:45 [RRSAgent] See 21:45:57 [ChrisL] cm: maybe turtle graphics help here 21:46:13 [ChrisL] ... good to accept this requirement 21:46:22 [ChrisL] ed: express as making it possible to make a complete circle on a path 21:46:34 [cyril] RRSAgent, make minutes public 21:46:34 [RRSAgent] I'm logging. I don't understand 'make minutes public', cyril.
Try /msg RRSAgent help 21:46:36 [ChrisL] cm: broadened to making arcs easier 21:48:14 [ChrisL] cl: (explains history of current arc command) 21:48:44 [ChrisL] cm: broadened to making arcs easier 21:49:46 [ChrisL] jf: can confirm hosting of SVG in Australia 21:50:37 [ChrisL] cm: turtle graphics will break the "command generates new current point" paradigm 21:50:45 [ChrisL] cl: don't like that 21:51:14 [ChrisL] resolution: make arcs in paths easier 21:51:28 [ChrisL] topic: polar element 21:51:31 [heycam] 21:51:54 [ChrisL] cm: this is the fancy flowers one, previous one was polar coords inside a path 21:52:12 [ChrisL] ... this is an element that makes stars, etc 21:52:30 [ChrisL] tb: inkscape has these sort of shapes 21:53:00 [ChrisL] ... good to include these 21:53:14 [ChrisL] ds: but it can't animate them live in the browser 21:53:27 [ChrisL] tb: also, native support would make our generated files smaller 21:54:17 [heycam] 21:54:27 [ChrisL] cm: so, we resolved not to include this one in svg2 .... 21:55:22 [ChrisL] ... because although it can make some nice artistic effects the added complexity is rather specific and not so useful in the general case 21:55:41 [ChrisL] resolved: wil not add a polar element in svg2 21:55:48 [ChrisL] s/wil/will/ 21:56:44 [ChrisL] Topic: Define <shapePath> element 21:56:47 [heycam] 21:57:49 [ChrisL] cl: this is positioning along a path, not warping along a path which we already rejected 21:58:13 [ChrisL] ds: textPath lets you place a list of shapes along a path, as long as they are glyphs 21:58:19 [ChrisL] ... this is the same except not glyphs. 21:58:31 [ChrisL] ...not thought much about spacing and sizing issues 21:58:48 [Tavmjong] Just got kicked off conference call... hour is up. 21:59:12 [ChrisL] ... this is a way of doing certain marker-like effects that markers can't do 21:59:29 [ChrisL] zakim, room for 3?
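The "small script" side of the polygon and star discussion — what DS's starmaker demo and the rejected polar element both boil down to — is a few lines of trigonometry over polar coordinates. A sketch only; the function name and signature are ours, not anything the WG proposed:

```javascript
// Generate a points="" list for a regular polygon, or a star when an
// inner radius is given (stars alternate outer/inner vertices), centred
// on (cx, cy). Illustrative helper, not a proposed API.
function regularPolygonPoints(cx, cy, r, sides, innerR) {
  const n = innerR == null ? sides : sides * 2; // a star needs twice the vertices
  const points = [];
  for (let i = 0; i < n; i++) {
    const radius = innerR != null && i % 2 === 1 ? innerR : r;
    const angle = (i / n) * 2 * Math.PI - Math.PI / 2; // start at the top
    points.push([cx + radius * Math.cos(angle), cy + radius * Math.sin(angle)]);
  }
  return points.map(([x, y]) => `${x.toFixed(2)},${y.toFixed(2)}`).join(' ');
}

// e.g. <polygon points="${regularPolygonPoints(50, 50, 40, 5, 16)}"/> for a star
```

Having the centre point as an explicit parameter is what makes translation and animation easy, which is the property DS asks for in the discussion above.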
21:59:31 [Zakim] ok, ChrisL; conference Team_(svg)21:59Z scheduled with code 26631 (CONF1) for 60 minutes until 2259Z 21:59:50 [cyril] 21:59:50 [ChrisL] (discussion of text along a path) 22:00:17 [ChrisL] cc: gpac has this, text and shapes mixed along a path 22:00:29 [Zakim] Team_(svg)20:56Z has ended 22:00:31 [Zakim] Attendees were 22:00:54 [ChrisL] ed: at opera we had a way to put graphics inside an anchor tag 22:00:57 [heycam] Zakim, who is here? 22:00:57 [Zakim] apparently Team_(svg)20:56Z has ended, heycam 22:01:12 [heycam] Zakim, who is on the call? 22:01:12 [Zakim] apparently Team_(svg)20:56Z has ended, heycam 22:01:21 [heycam] Zakim, this svg 22:01:21 [Zakim] I don't understand 'this svg', heycam 22:01:23 [heycam] Zakim, this is svg 22:01:23 [Zakim] ok, heycam; that matches Team_(svg)21:59Z 22:01:27 [heycam] Zakim, who is on the call? 22:01:27 [Zakim] On the phone I see no one 22:01:40 [Tavmjong] yes 22:01:43 [ChrisL] ds: if we have textPath and shapePath these could take references to other elements, rendered in order. could be text or shapes 22:02:13 [Tavmjong] 22:02:14 [ChrisL] cc: need an anchor point per shape 22:02:21 [ChrisL] cl: like baseline for glyphs 22:02:34 [ed] s/at opera we had a way to put graphics inside an anchor tag/IIRC opera at one point in time allowed shapes inside of anchor tags as part of a textPath, because the DTD allows pretty much anything inside <a>/ 22:03:14 [ChrisL] cc: width of the object is the advance for the next object 22:03:23 [ChrisL] ... boundingbox most likely 22:04:18 [ChrisL] rossen: its straightforward for some shapes, not clear for arbitrary shapes 22:04:36 [ChrisL] cc: anything like this in css? 22:04:50 [ChrisL] rossen: no, was thinking of implementing textPath in IE 22:05:50 [ChrisL] ds: (draws on board) path to align to, and child elements which are use or path, these have an x to align them, and an orient to say which way up 22:06:01 [ChrisL] ...
autorotate or not 22:06:11 [ChrisL] cc: like animateMotion 22:06:11 [ChrisL] ds: yes 22:07:33 [ChrisL] tb: Copies of the single yellow star are placed along a path. The star is deformed to follow the path. 22:07:46 [ChrisL] ds: can repeat these things 22:08:17 [ChrisL] ... repeat n times or repeat to follow whole path 22:08:28 [ChrisL] ... can do custom dash patterns like this 22:09:14 [ChrisL] cc: already possible to hack this, so a clear way forward would not have too much implementation cost. what are the use cases? 22:09:23 [ChrisL] cm: markers not at endpoints 22:09:35 [ChrisL] ds: railway tracks, custom patterns eg for mapping 22:10:18 [ChrisL] cl: electrical diagrams 22:11:11 [ChrisL] ds: we brainstormed this at svg f2f a couple years ago but we were hung up on other work 22:11:31 [ChrisL] ... markers are not clickable, want these to be clickable 22:12:11 [ChrisL] ... click would say how far along the path and what the original object was and the repeat count 22:12:58 [ChrisL] ds: putting text along this as well, like text on roads 22:13:30 [ChrisL] cl: repeat groups 22:13:37 [ChrisL] ds: scaling and non scaling 22:14:58 [ChrisL] cm: want to resolve onuc&r not on syntax 22:15:21 [ChrisL] cc: placing object on a path, mixing text and objects, and repeating.
these are three separate things 22:15:47 [ChrisL] cm: these are a bit like the deforming objects on a path 22:16:27 [heycam] Scribe: Cameron 22:16:30 [heycam] ScribeNick: heycam 22:16:37 [heycam] s/onuc/on uc/ 22:18:47 [heycam] RESOLUTION: We will allow objects to be positioned along a path 22:22:17 [heycam] cm: basically, this would be improved positioning of markers 22:25:58 [heycam] cm: being able to place a diamond every 10 units along a path, for example 22:26:18 [heycam] cc: similarities between markers and dashes 22:27:48 [heycam] -- 10 minute break -- 22:28:27 [efidler] efidler has joined #svg 22:29:28 [Zakim] Team_(svg)21:59Z has ended 22:29:29 [Zakim] Attendees were 22:36:43 [jen] jen has joined #svg 22:40:12 [heycam] Topic: Component Model 22:40:17 [heycam] DS: SVG has the 'use' element 22:40:23 [heycam] ... which has a concept of a shadow tree 22:40:29 [heycam] ... since it was an early idea, there were various problems with it 22:40:38 [plinss] plinss has joined #svg 22:40:40 [heycam] ... problems with the DOM interface, underlying architecture, performance 22:41:14 [heycam] ... what we'd like to do is to rip at those parts of SVG that have some concept of reusability, and replace them with the Component Model 22:42:42 [heycam] TA: there are some fundamental parts of use that can be represented by Component Model semantics 22:42:57 [heycam] ... 'use' points at a template, and is rendered as the same way as you transplanted that template whereever the 'use' is 22:43:15 [heycam] ... shadow dom works the same way 22:43:24 [heycam] ... more specifically, 'use' doesn't actually have children, the shadow tree isn't really in the dom 22:43:26 [dglazkov] dglazkov has joined #svg 22:43:50 [heycam] ... 'use' is supposed to be fast when spamming it in the dom 22:43:58 [dbaron] dbaron has joined #svg 22:44:11 [heycam] ... that's not the case in implementations 22:44:23 [heycam] ... 
but we want it to be fast, and we want to satisfy this case with shadow dom too 22:44:28 [heycam] ... we want to allow a "projected shadow" 22:44:34 [heycam] ... the template defines the "one and only" copy of the dom 22:44:39 [heycam] ... all the instances just pull a render tree from that 22:44:50 [heycam] ... they lay out as if it's there, but all dom/styling information comes from the one instance in the template 22:45:00 [heycam] DG: all browsers are optimized to create and throw away render boxes 22:45:03 [heycam] ... dom, not so much 22:45:15 [heycam] ... the idea is that projected trees have one dom but can be rendered in multiple places 22:45:19 [heycam] DS: can you change things? 22:45:25 [heycam] TA: if you change it, it changes for everything 22:45:31 [heycam] DG: with projected dom, there is no instance 22:45:46 [heycam] TA: in svg, you can't tweak the instance dom either 22:45:56 [heycam] ... there's an instance tree, but you can only style the instance via inheritance from the 'use' 22:46:01 [heycam] ... all selector matching is done on the template itlsef 22:46:12 [heycam] s/itlsef/itself/ 22:46:12 [heycam] ... this is the only complicated thing 22:46:13 [JanL] JanL has joined #svg 22:46:14 [heycam] ... the styling part we want to figure out 22:46:23 [heycam] ... what amount of styling per instance is required, how we can do this in a sane manner 22:46:52 [plinss] plinss has joined #svg 22:47:01 [plinss] plinss has joined #svg 22:47:12 [heycam] JY: I don't think our implementation is completely crazy as cloning everything, but not sure 22:47:51 [heycam] DS: [ draws an example ] 22:49:12 [heycam] [ people wanting to change little parts of a 'use'd tree, not just with simple inheritance overriding ] 22:49:18 [kaz] kaz has joined #svg 22:49:27 [heycam] TA: if we go with the simple, cheap, projected tree, where everyone gets their own render tree 22:49:34 [heycam] ... there's no styling allowed from the 'use' instance 22:49:37 [heycam] ...
that's less than ideal 22:49:47 [heycam] ... how can we do this in a saner manner? 22:49:59 [heycam] ... doing inheritance from the 'use' is rather bad, since that works over the dom tree, not the render tree 22:50:18 [heycam] ... "pretending" you have a dom there, even if there isn't 22:50:18 [heycam] ... the situation for markers can be a bit saner 22:50:20 [plinss__] plinss__ has joined #svg 22:50:43 [heycam] ... you can specify currentFill and currentStroke on dom nodes in the template, but they resolve differently depending on the instance 22:52:02 [heycam] ... if that's not insane, i want to see whether we can apply this to component model 22:52:24 [heycam] CM: people haven't implemented currentFill/currentStroke property values yet 22:52:32 [heycam] CC: one problem I found with 'use', you have to pass the whole inherited properties in 22:52:44 [heycam] ... I would rather have an explicit list of properties that you pass through 22:52:54 [heycam] TA: don't allow the full set of properties 22:53:15 [heycam] CC: or you use something where you can by default put all properties to initial value 22:53:55 [heycam] TA: are used values resolved still with dom tree information? or is it a render tree thing? 22:53:59 [heycam] DG: in webkit, it's when we're doing layout 22:54:32 [heycam] ... the key problem with the projected dom, you'll have multiple render trees, one dom, no way for them to resolve differently 22:54:39 [heycam] TA: used values have to resolve differently 22:54:52 [heycam] ... if you said width:50% in a template, and project it out, you wouldn't be able to resolve that until you laid out the instance 22:55:13 [heycam] ... if that works, we could specify a syntax for "used value" variables 22:55:17 [heycam] ... and let that pass data into the instances 22:56:30 [heycam] CM: won't resolving 50% later mean you get wildly different boxes?
22:56:31 [heycam] TA: not necessarily, used values are handled after layout 22:56:48 [heycam] ED: what about a 'use' of a 'use'? 22:56:54 [heycam] TA: that just falls out of the shadow dom model 22:57:12 [heycam] ED: the other tricky thing is using an external document 22:57:13 [plinss] plinss has joined #svg 22:57:15 [heycam] TA: didn't know you could do that 22:57:25 [heycam] s/using an/using elements from an/ 22:59:37 [heycam] DG: every rendering of the shadow tree, for a "normal" shadow tree, is backed by real dom nodes 22:59:44 [heycam] ... so that's cloning a dom subtree 22:59:59 [heycam] ... projected trees operate in a way where you have a template that has a picture of the dom, but the rendering is projected to different places in the tree 23:00:21 [heycam] DS: could there be the idea of a projected tree with decoration? 23:00:40 [plinss__] plinss__ has joined #svg 23:00:55 [heycam] TA: if we define a form of variables that only resolve at "used value" time, that will work 23:01:29 [plinss__] plinss__ has joined #svg 23:01:30 [heycam] DS: I'd like to be able to get at the computed values in the projection 23:02:13 [plinss__] plinss__ has joined #svg 23:02:15 [heycam] DS: another aspect of all of this is, a 'use' instance ... is this a rendering tree? 23:02:25 [heycam] ... if my reference thing is so big, and the instance is bigger... 23:02:30 [heycam] TA: this is not rendering into a bitmap 23:02:34 [heycam] ... so it's before rasterisation 23:03:22 [heycam] DS: I understand what you're trying to avoid, but if you had some little decorations on this, which were the "diff" of the dom, ... 23:03:29 [heycam] TA: we should be able to do this use case, different colours 23:04:18 [heycam] DS: i'd be worried about authors accidentally incurring performance penalty 23:04:28 [heycam] TA: we shouldn't do the conversion from projected to real shadow implicitly 23:05:22 [heycam] DG: do we really want a projected tree? 
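The projected-tree idea TA and DG are circling — one template DOM shared by many render trees, with only "used values" resolving per instance — can be sketched as plain data. This is a toy model of the discussion only: none of these names (`resolveUsedValues`, the `currentFill` keyword as modelled here) were specified by any group at this point.

```javascript
// Toy model of a "projected" shadow tree: one shared template object,
// while each use-like instance carries only per-instance data (an
// inherited fill, and a viewport width against which percentages
// resolve at used-value time). Entirely illustrative.
const template = { shape: 'rect', width: '50%', fill: 'currentFill' };

function resolveUsedValues(tmpl, instance) {
  // percentages stay unresolved in the template; they become used
  // values only once an instance supplies its viewport
  const width = tmpl.width.endsWith('%')
    ? (parseFloat(tmpl.width) / 100) * instance.viewportWidth
    : parseFloat(tmpl.width);
  // currentFill-style variables resolve against the instance, not the template
  const fill = tmpl.fill === 'currentFill' ? instance.fill : tmpl.fill;
  return { shape: tmpl.shape, width, fill };
}

// Two instances of the same template resolve differently,
// even though there is only one template "DOM":
const a = resolveUsedValues(template, { viewportWidth: 200, fill: 'red' });
const b = resolveUsedValues(template, { viewportWidth: 400, fill: 'blue' });
```

The point of the sketch is the split JS describes as hard to implement: selector matching and the DOM live with the template, while layout-time resolution happens once per instance.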
23:05:37 [heycam] TA: I think we do want to be able to spam a lot of a single template without a big performance impact 23:06:02 [heycam] JS: i'm not a layout expert, but I know that we have a pointer from all frames to their content node 23:06:18 [heycam] ... which we use a lot 23:06:18 [heycam] ... that breaks down in this case 23:06:31 [heycam] ... you can't point to the content node and say that's the thing it's resolving style against 23:06:59 [heycam] ... I agree we need this, it will be complicated to implement 23:07:26 [noriya] noriya has joined #SVG 23:07:30 [heycam] ... I think you'd want to copy it for now, to get the behaviour, but then do enough decently sized changes to allow not copying 23:07:48 [heycam] DG: webkit has the same problem 23:08:55 [heycam] TA: the SVG layout model is much simpler, the only thing you need to know from the outside world is what the coordinate system is 23:09:00 [heycam] CM: for %age resolution? 23:09:23 [heycam] TA: yes, and general sizing of user coordinates 23:09:23 [heycam] ... html is a lot more complex 23:09:31 [heycam] ... we could make these projection trees replaced elements, so they have a definite width/height 23:09:45 [heycam] ... they participate in outer pages layout, as opaque boxes, then figure out the size of the bounds, lay out the internals 23:09:50 [JanL] when does the next discussion on svg/html5 <video> start? 23:09:58 [heycam] very soon :) 23:10:11 [JanL] thanx 23:10:11 [JanL] just checking 23:10:54 [heycam] JS: the hardest thing is the crazy svg thing, inheritance from two situations 23:12:01 [dglazkov] we win! 23:12:28 [cyril] scribe: 23:12:31 [cyril] scribe: Cyril 23:12:36 [cyril] scribeNick: Cyril 23:13:10 [cyril] Topic: SVG/HTML 5 video tag harmonization 23:14:00 [cyril] JL: I'm with Ericsson, coming from the Web & TV IG 23:14:12 [cyril] ... on how the tv industry can influence W3C 23:14:31 [cyril] ... the question came up of how the SVG and HTML 5 video tag will merge 23:14:43 [cyril] ... 
I want to put the discussion on the floor 23:15:23 [a12u] a12u has joined #svg 23:15:37 [cyril] ... the observation from the IG is that there would be a lot of synergies if you could use the HTML 5 video tag in SVG 23:16:00 [cyril] GP: or if we had a functionally equivalent tag in SVG 23:16:16 [cyril] JL: there's a lot of control in the HTML 5 video tag 23:16:20 [cyril] ... tracks, etc. 23:16:29 [cyril] ... being able to reuse that in SVG would be interesting 23:16:44 [aizu] aizu has joined #svg 23:17:03 [cyril] CC: is there a document/analysis of the difference ? 23:17:24 [cyril] JL: I come from the OpenIPTV forum and we identified several gaps, but HTML 5 has evolved since then 23:17:38 [cyril] ... all the bugs are almost fixed 23:17:51 [cyril] GP: at this stage, there was no analysis made 23:18:01 [cyril] ... but is it the goal to merge in the long term ? 23:18:25 [cyril] CS: my understanding is that the video tags in SVG and in HTML don't use the same code base 23:19:03 [cyril] ... is there a fundamental reason why they wouldn't use the same code 23:19:13 [cyril] ED: SVG does not use the CSS Box model 23:19:31 [cyril] ... the underlying code is already shared 23:20:48 [ed] s/underlying code is already shared/underlying code (for loading and decoding video, putting it somewhere) is already mostly shared/ 23:21:02 [cyril] CC: in general the goal of the SVG WG is to align with HTML as much as possible 23:21:12 [cyril] ... having the same model, even if the syntax differs 23:21:32 [cyril] JL: what is the next step ? 23:21:38 [cyril] ... do we need proposals 23:21:50 [cyril] DS: the SVG video element only exists in SVG Tiny 1.2 23:22:14 [cyril] ... SVG 2.0 will not necessarily import all features from SVG Tiny 1.2 23:22:26 [cyril] ... there is no consensus in the group that all features of SVG T 1.2 are relevant in the HTML 5 world 23:22:51 [cyril] ... I believe it would be ideal if we had a video element that would be a super set of the HTML 5 video 23:22:53 [ed] s/CSS Box model/CSS Box model for laying out svg elements/ 23:23:14 [cyril] ... SVG Tiny 1.2 comes from SMIL and they are not in HTML 5 video 23:23:24 [cyril] ... the question is are those desirable ? 23:23:43 [cyril] ... but I heard concerns that the animations and the video properties are not in the same stack 23:23:59 [cyril] JL: often the video layer is done in hardware 23:24:25 [cyril] ... the question is should there be restrictions on the way it can be manipulated 23:24:45 [cyril] EF: speaking as an implementer, some of the things specified in HTML 5 are not implementable 23:24:58 [cyril] ... like a CSS Shader on a video on a mobile 23:25:03 [cyril] ... like multiple videos 23:26:18 [cyril] DS: I don't think it's profitable to us to define a profile for mobiles when only hints are sufficient 23:26:31 [cyril] ... because the standards pace and the silicon pace are not the same 23:26:40 [cyril] ... but point taken 23:27:11 [cyril] EF: simple cases of manipulation on desktop work but complex cases might not work 23:28:01 [cyril] ED: I'm not sure it's possible to do the same 23:28:21 [cyril] ... for instance, the transformBehavior attribute is not in HTML 5 23:28:43 [cyril] ... I don't know if it's easy to have the HTML 5 video in SVG 23:29:11 [cyril] DS: the fundamental question is the SMIL thing 23:29:45 [cyril] CC: this is not a problem 23:29:56 [cyril] ... people are just scared of SMIL 23:30:15 [cyril] ... but the SMIL timing module when considering a single time container is very simple 23:30:39 [cyril] GP: some of the visual parts are not implementable on STB 23:30:47 [jcdufourd] jcdufourd has joined #svg 23:30:49 [cyril] ... but people would like multiple video tracks 23:30:56 [cyril] ... but people would like multiple subtitle tracks 23:31:58 [cyril] [JanL showing the comparison table from OIPF] 23:34:34 [cyril] JL: we are mostly interested in video control use cases 23:35:11 [JanL] 23:35:25 [cyril] CS: there are 2 issues: what are the different subsets/intersection of features; should that be directly addressed to reach a common understanding 23:35:41 [JanL] annex L, page 360 23:36:02 [cyril] ... if we agree on the second question, we can then work on the other question 23:36:12 [cyril] GP: is there a market for SVG video 23:36:26 [JanL] this includes a table comparing SVG Tiny video element with OIPF embedded video objects defined in the spec 23:37:00 [cyril] DS: we could have an SVG element behaving the same as the HTML 5 video element 23:37:18 [cyril] TA: the video element of HTML 5 in the SVG namespace 23:37:41 [cyril] JL: this sounds like a good idea 23:38:00 [cyril] DS: we talked about HTML and SVG more tightly coupled in terms of parsing 23:38:50 [cyril] ... when parsing HTML + SVG, when the video element is encountered, what would be the namespace of the video element 23:40:23 [cyril] ... Is there any objection to having a video element in SVG namespace with the same characteristics as the video element ? 23:40:32 [cyril] ... would it be identical or a super set ? 23:40:51 [cyril] ... if we extend it, I would like to have them also applicable in HTML 23:40:57 [noriya] noriya has joined #SVG 23:41:35 [cyril] TA: we should be working toward a world where SVG and HTML are the same language 23:42:29 [cyril] RESOLUTION: SVG 2 will have a video element in SVG namespace with the same characteristics as the HTML 5 video element 23:42:55 [cyril] RESOLUTION: SVG 2 will have an audio element in SVG namespace with the same characteristics as the HTML 5 audio element 23:43:26 [cyril] ... this includes the track, audio api, blah blah blah 23:43:56 [ed] s/track/track, source/ 23:44:47 [stakagi] controls property?
23:45:12 [cyril] ED: the UI controls might be different 23:45:18 [cyril] DS: I would expect them to be the same 23:45:51 [cyril] ... you can always make your own SVG controls if you wish 23:46:30 [cyril] EF: what would be the difference in using a foreignObject element ? 23:46:56 [cyril] DS: you incur some performance problems with fO video that you would not have with native video 23:47:11 [cyril] JY: we don't implement fO in IE 23:47:20 [cyril] ED: the problems are at least in Opera 23:47:46 [cyril] JL: how soon would this introduction take place 23:47:53 [cyril] DS: 6 months maybe 23:48:34 [cyril] ACTION: Doug to add HTML video/audio elements to SVG 2 23:48:34 [trackbot] Created ACTION-3178 - Add HTML video/audio elements to SVG 2 [on Doug Schepers - due 2011-11-11]. 23:49:34 [cyril] DS: we could have an audio/video module that extends SVG 1.1 23:49:47 [cyril] JL: that would be better in the timing perspective 23:49:59 [cyril] ... that would make a difference 23:50:12 [cyril] DS: we could have a spec in LC by Q1 2012 23:51:27 [cyril] RESOLUTION: We will have a module to SVG 1.1 to add audio/video elements with parity to HTML 5, given resources 23:54:18 [cyril] RRSAgent, make minutes 23:54:18 [RRSAgent] I have made the request to generate cyril 23:54:33 [shepazu] trackbot, end telcon 23:54:33 [trackbot] Zakim, list attendees 23:54:33 [Zakim] sorry, trackbot, I don't know what conference this is 23:54:34 [trackbot] RRSAgent, please draft minutes 23:54:34 [RRSAgent] I have made the request to generate trackbot 23:54:35 [trackbot] RRSAgent, bye 23:54:57 [Bert] rrsagent, make logs public 23:55:30 [Bert] rrsagent, make minutes 23:55:30 [RRSAgent] I have made the request to generate Bert
http://www.w3.org/2011/11/04-svg-irc
Mac Stringstream returns wrong output

Hi, we are two students in my class who have had a slightly strange experience with stringstream on our Qt installation (C++).

Common traits of setup:
- macOS Mojave v. 10.14
- Qt Creator 4.9.1
- Kit: Desktop Qt 5.12.2 clang 64bit (default)

Our Code C++: main.cpp

#include <iostream>
#include <sstream>
#include <vector>
#include <utility>

std::vector<std::pair<double, int> > readPolynomial(std::string& p);

int main()
{
    std::string pstr = "22x^3+2x^2-1x^0";
    std::vector<std::pair<double, int> > poly = readPolynomial(pstr);
    for (unsigned int i = 0; i < poly.size(); ++i) {
        if (i > 0) {
            if (poly[i].first > 0) {
                std::cout << "+";
            }
        }
        std::cout << poly[i].first << "x^" << poly[i].second;
    }
    std::cout << std::endl;
    return 0;
}

std::vector<std::pair<double, int> > readPolynomial(std::string& p)
{
    std::vector<std::pair<double, int> > poly(3);
    std::stringstream stringstream(p);
    for (size_t i{0}; i < 3; i++) {
        stringstream >> poly[i].first;
        stringstream.ignore(2);
        stringstream >> poly[i].second;
    }
    return poly;
}

Output: Terminal
0x^00x^00x^0

Any suggestions? PS. Please be detailed, we are first year and new to Qt, C++, and terminal commands.

Update 1: I'm not the best at using the debugging tool, but at the following link you can find some pictures of the debugging window. link debugging Images

- Kent-Dorfman last edited by Kent-Dorfman This post is deleted!
- Kent-Dorfman last edited by The only real problem I see is that you are not consuming the "+" sign between the terms. On a broader note, parsing expressions in the way you are doing it is quite limited. What about possible white-space in the input string?

@Kent-Dorfman There is only this single input, so we will not encounter any whitespace. It is a simple class exercise, and our teacher hates Macs so no help there. And the code works fine on a Windows machine running Qt with MinGW. All 6 people in our class who are using macOS experience this problem.
that 22 becomes 0, and so does 3, +2, 2, -2, and 0. So it must be something with our compiler or environment, I guess.

Run Linux/Windows in a virtual machine in macOS. You get to keep your Mac and your sanity. This is how I have developed code for a Linux server machine running in Windows.

Are you using a release or debug build? Does the result change if you switch mode? Try single-stepping your program in the debugger and see what happens to the string and how it is parsed

@Konstantin-Tokarev I was using a debug build, which always resulted in the output: 0x^00x^00x^0 - .... I know how to insert a breakpoint and run debug mode, but I don't know how to single-step, or what to look for in the debug window... Images of debugging can be found in the dropbox folder... link debugging Images

@fcarney I don't want to run a virtual machine. In that case I might as well have bought a Windows machine and programmed on it. I have Qt installed on my Mac and it should be possible to make it work somehow

@Bugi said in Mac Stringstream returns wrong output: I would suggest changing this: for (size_t i{0}; i < 3; i++) { to the more traditional for (size_t i = 0; i < 3; i++) { It's possible that your compiler is having an issue with that kind of initialization

@mranger90 I tried it, but that didn't make any difference. But as you can see in the debug pictures from the above answer, it iterates well through (I will add the debugging image in the question.)

Your images don't show the contents of "poly" so it's hard to tell if it's being set correctly. Try breaking it into pieces like:

std::vector<std::pair<double, int> > poly;
std::stringstream stringstream(p);
for (size_t i{0}; i < 3; i++) {
    std::pair<double, int> tpair;
    stringstream >> tpair.first;
    stringstream.ignore(2);
    stringstream >> tpair.second;
    poly.push_back(tpair);
}
return poly;

And looking at "tpair" to see if stringstream is setting it correctly. What C++ standard is the compiler set to? Can you force it to say -std=c11?
Be curious to know what version of the standard it is set to for compiling now. Is the clang setup gcc based or something else? You said it worked in mingw, which is basically gcc. Is it possible to get a gcc compiler for macOS?

Another option to consider: completely uninstall your Qt installation and reinstall. I had issues with a Qt installation where header files were corrupted and all sorts of weird problems. Some things would work, and others would not. I did a drive check in Windows and found corrupted sectors. This caused the corruption of the files. This was in Windows 7 however.

The code as presented in the original post works in gcc, like @Bugi said about it working in mingw. I tried the following to see if it made any difference:

QMAKE_CXXFLAGS += -std=c++98
QMAKE_CXXFLAGS += -std=c++11
QMAKE_CXXFLAGS += -std=c++14
QMAKE_CXXFLAGS += -std=c++17

I tested each separately, but they did not change the output at all. I get this on output: 22x^3+2x^2-1x^0

System I tested on: Ubuntu 18.04 Linux, gcc/g++ 7.4.0, 64 bit compile. Hope this helps you narrow it down.

I just installed clang 7 on Linux and compiled the program like this: clang++7 main.cpp This produced "a.out". I ran this and it produced: 22x^3+2x^2-1x^0 I don't know if clang is different on Linux than on macOS though. I don't know what version you have either. If you can tell me the version I can see if that works.

This means that you are printing uninitialized variables somehow.

but don't know how to single stepping. or what to look for in the debug window...

You need to look at the value of poly[i].second before and after the stringstream >> poly[i].second; line. Also look up the value of p at this moment, and look at the internal data fields of stringstream. If still unclear, put a breakpoint on that line, and press F11 to get as deep into the implementation of stringstream as needed

I am running Linux so it only took a couple of minutes to install clangs 3.9, 4.0, 5.0, 6.0, and 7.0.
I then compiled and ran your code on all of those versions and it produced what it was supposed to produce (22x^3+2x^2-1x^0). So unless you have a really old version of clang my guess is something else is wrong. I have no clue what could be wrong, but my first inclination is to reinstall Qt completely. In my opinion there is nothing wrong with the code itself.

Edit: Then again, 6 other people are having the same problem. So no clue on what could be the issue. Sorry.

Edit 2: @Bugi what does your pro file look like?

@mranger90 I don't know if this is what you meant by initializing it in a different way... (by creating a double and an int to pass the values into instead.) But it made no difference.

std::vector<std::pair<double, int> > readPolynomial(std::string& p)
{
    std::vector<std::pair<double, int> > poly;
    std::stringstream ss(p);
    double coeff;
    int power;
    for (size_t i = 0; i < 3; i++) {
        ss >> coeff;
        ss.ignore(2);
        ss >> power;
        poly.push_back(std::make_pair(coeff, power));
    }
    return poly;
}

Images of this code in debug mode, see folder debug_2. If you meant something else with your suggestion can you elaborate a little? The image of the first debug run with the original code is also in the folder (link above) as debug_1, updated with visible content of "poly". And your code suggestion with "std::pair<double, int > tpair" is also in the folder (link above) as debug_3

Ok, this is a head scratcher. As indicated by @fcarney I've run your code, as is, on a couple of systems (Ubuntu 18.04 with gcc 7.4) and Windows 10 with msvc 2017. And they both produce the expected output. So the issue seems to be compiler or environment related. And the fact that debug/release builds produce different output seems to indicate that somewhere, something is not being initialized, but in your simple code that would be easy to spot.
The only other issue that pops out on visual inspection is using the variable named "stringstream", which I suppose could cause confusion with the type "stringstream", but one of your tests changes the name to "ss" so that is not the issue. Try parsing the first parameter as an int instead of a double, or even as a float instead of a double, to see if that makes a difference.

- kshegunov Qt Champions 2017 last edited by

Be sure the stream's not corrupted:

std::stringstream ss(p);
ss.exceptions(std::stringstream::failbit | std::stringstream::badbit);
try {
    for (size_t i = 0; i < 3; i++) {
        ss >> coeff;
        ss.ignore(2);
        ss >> power;
        poly.push_back(std::make_pair(coeff, power));
    }
} catch (const std::exception & e) {
    std::cerr << e.what() << std::endl;
    throw;
}

You'd need the appropriate headers too:

#include <exception>
#include <iostream>

I don't think there are many opportunities to change the compiler on macOS, since the installation settings come with Xcode.

Actually I tried "this" before I started this thread, and had to format my entire pc and reinstall everything.

I tried to type " $ gcc --version " to answer your question, and don't know whether this is useful...
MY-MacBook-Air:~ bugi$.6.0
Thread model: posix
InstalledDir: Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

I found this .pro file in the project folder

TEMPLATE = app
CONFIG += console c++11
CONFIG -= app_bundle
CONFIG -= qt
SOURCES += \
    main.cpp
HEADERS +=

I tried reinstalling the latest Qt Open Source version. Under the installation setup there is a step where you are installing components, where I have 2 options at this step.
1. is to install as Qt 5.12.3 -> macOS: This installation resulted in nothing working at all! Couldn't even compile with the auto-detected kit, or correct it to something that worked. So I uninstalled it again (both times with the MaintenanceTool, which removed all files from the system).

I then tried again, this time with the other option under installing components.
2. which is to install as Qt 5.12.2 -> macOS: I was then able to compile my projects again but it made no difference. It is still "0x^00x^00x^0" as output.

I also tried a 3rd, 4th, and a 5th time. The 3rd time with both options 1 and 2 simultaneously; this resulted in nothing compiling, again with the auto-kit or any setting (qmake returns 2). The 4th and 5th times were both by uninstalling both Qt and Xcode. And again it was only possible to make the Qt installation with component Qt 5.12.2 -> macOS able to compile any of my projects, still with the wrong output "0x^00x^00x^0".

Okay, maybe we found something here... but since I still have very little code experience I don't know if it is something or not... or for that matter what it means.

Output:
ios_base::clear: unspecified iostream_category error
libc++abi.dylib: terminating with uncaught exception of type std::__1::ios_base::failure: ios_base::clear: unspecified iostream_category error
Press <RETURN> to close this window...

It went directly to line 49: (throw) after line 39: (ss >> coeff;) See folder Debug_4 for code and images.

@Bugi said in Mac Stringstream returns wrong output: I don't think there are many opportunities to change the compiler on macOS.

I think Python is the same way on macOS. Can you add an independent compiler? One that you can run from the command line? Not one that changes the entire system compiler. My Linux system has a standard gcc compiler installed, but I can install clang and it won't mess up everything else. That would at least give you a way to do your school work. I still think the VM option with Linux on it may be your best option. I know you don't want to, but it will give you a lot more options. It will also give you experience in Linux if you don't already have that.

@fcarney Agree, and I have begun to take baby steps with Linux (Ubuntu).

I was a dedicated Windows user some years ago (before Windows 8) and have continued working on Windows machines afterwards. Regarding my school work, I have found a Windows machine for the moment. BUT. I am still determined to make it work on macOS. I'm really tired of hearing people say you can't be an engineer with a Mac, and I'm really happy with my Mac :( And although I have a Windows machine it doesn't help the others in my class with the same problem (Mac users)

@Bugi said in Mac Stringstream returns wrong output: I'm really tired of hearing people say you can't be an engineer with a mac.

The guy I share an office with is an electrical engineer (30+ years), he is a good programmer, uses a Mac, but runs most of his analysis software and programming tools on a VM running Windows 10. I run Linux and use a VM to run Windows 10 for doing Windows development in Qt. Most of my time is in Linux though. So, there is nothing wrong with a Mac. However, no matter what you run, you will probably need a VM to run something else. Or your boss will make a decision that forces you to run a VM. It's just how it goes.

- kshegunov Qt Champions 2017 last edited by

@Bugi said in Mac Stringstream returns wrong output: Okay maybe we found something here ... but since I still have very little code experience I don't know if it is something or not... Or for that matter what it means

It means one of two things:
- You're reading past the end of the stream.
- You're reading formatted input (the >> operators) which is not formatted according to the expectation; i.e. there's something wrong while reading from the stream.

- Kent-Dorfman last edited by just for grins... please replace std::stringstream with std::istringstream and retest. Since you are only using the stream for input, you should use the istream specialization intended for that purpose.
- JKSH Moderators last edited by JKSH

@Bugi said in Mac Stringstream returns wrong output:
ios_base::clear: unspecified iostream_category error
libc++abi.dylib: terminating with uncaught exception of type std::__1::ios_base::failure: ios_base::clear: unspecified iostream_category error

Someone found a bug in clang's stringstream exception detection before ( ); it's quite possible that your compiler has a buggy implementation of stringstream. This would explain why everyone on this forum was unable to reproduce your issue, yet all 6 macOS users in your class have the same issue. Unless you have a way to try a different compiler on your macOS, you'll have a hard time proving or disproving that the bug is in your compiler. Does this exercise require you to use stringstream? If not, try a different way of parsing the string.

- JKSH Moderators last edited by JKSH

Anyway, in situations like this, it is useful to create minimal test cases to investigate the issue.

int main() {
    std::string pstr = "22x^3+2x^2-1x^0";
    std::stringstream ss(pstr);
    double coeff = -42.0;
    ss >> coeff;
    std::cout << coeff << std::endl;
}

What does this print for you? If it's the wrong value, what happens if you replace pstr with "22"?

@JKSH said in Mac Stringstream returns wrong output: Unless you have a way to try a different compiler on your macOS, you'll have a hard time proving or disproving that the bug is in your compiler.

However, it would be wise to update Xcode to the latest available version before investigating further

I have some exams for the next 2 weeks, so the problem is paused :( I will be back, and thanks for the help until now

I've run into the same problem in 0 A.D. (see here). Edit: this is in fact a difference between libstd and libc and it has been reported before:

- Konstantin Tokarev last edited by Konstantin Tokarev

@wraitii Thanks for the information! You've probably meant libstdc++ and libc++, because libc is kinda a different thing
https://forum.qt.io/topic/103340/mac-stringstream-returns-wrong-output
Introduction to Pandas DataFrame.count()

Pandas DataFrame.count() is a method used to count the number of non-NA cells for each column or row. It also works with non-numeric data. Presenting results in a straightforward way is a great skill to have when working with datasets. Python is a great language for doing data analysis, mainly because of its excellent ecosystem of data-driven Python packages. Pandas is one of those packages and makes importing and analyzing data much simpler. The pandas DataFrame is a two-dimensional data structure: the data is arranged in rows and columns in a tabular manner, both the row and column axes are labeled, it can contain columns of different data types, and the size of the DataFrame can be changed (it is mutable).

Syntax and Parameters

DataFrame.count(axis=0, level=None, numeric_only=False)

Where,
- axis determines the direction of counting: axis=0 counts down the rows, returning the number of non-NA values in each column, while axis=1 counts across the columns, returning the number of non-NA values in each row.
- level applies when the axis is a MultiIndex (hierarchical): counting is done along the given level, collapsing the result into a smaller DataFrame rather than a Series.
- numeric_only restricts the count to columns holding numeric values such as integers, floating-point numbers, and Boolean values. It defaults to False.

How does the dataframe.count() function work in Pandas?

Now we see various examples of how dataframe.count() works in Pandas.
Ordinal data is where the variables have natural, ordered categories, and the distances between the categories are not known. For example, if there were two people in a room and we said one was short and the other was tall, we know there is a difference, and we know the direction of the difference. However, we do not know the size of the difference, i.e. how many inches taller one person is than the other, and that is the kind of distinction to keep in mind when counting values in Pandas.

Examples to Implement Pandas DataFrame.count()

Below are the examples mentioned:

Example #1

Using dataframe.count() with axis = 0 to count down the rows, giving one count per column.

Code:

import pandas as pd
df = pd.DataFrame({"X":[-3, 7, 11, None, 2, 4],
                   "Y":[-2, None, 5, 8, None, 4],
                   "Z":["Vetts", "Suchu", "Pri", "Mickey", "Minnie", "Span"]})
print(df.count(axis = 0))

Output:

Explanation: In the above program, we first import the pandas library and assign it as pd. Then, we define the dataframe and organize it into rows and columns however we want. After defining the dataframe, we use the df.count() function to count the number of values present, ignoring all the null or NaN values. axis = 0 tells count() to work down the rows, so for each column it returns the number of non-NA values in that column. Hence, this command gives one count per column, as shown in the above snapshot, and returns to the caller.

Example #2

Using dataframe.count() with axis = 1 to count across the columns, giving one count per row.

Code:

import pandas as pd
df = pd.DataFrame({"X":[-3, 7, 11, None, 2, 4],
                   "Y":[-2, None, 5, 8, None, 4],
                   "Z":["Vetts", "Suchu", "Pri", "Mickey", "Minnie", "Span"]})
print(df.count(axis = 1))

Output:

Explanation: In the above program, we write a similar type of code to count per row instead. Here also we first import the pandas library and then create a dataframe with the respective rows and columns. Once the dataframe is defined and created, we call count() with the axis set to 1, so that counting is done across the columns: for each row, count() returns the number of non-NA values in that row, ignoring the cells that are None. Hence, it finally displays one count per row, as shown in the above snapshot.

Conclusion

Finally, I would like to conclude by saying that Pandas dataframe.count() is a function that helps us to analyze the values in a Python dataframe by counting the non-NA values along either the rows or the columns and giving us a specific output. A count plot is a grouped bar chart that allows us to show several bar charts on the same plot based on the categories the data is broken into; in that situation we would use the two ordinal-scale columns we just made. The level parameter supports multiple indexing, collapsing a hierarchical axis into counts per level and returning to the caller without error.

Recommended Articles

This is a guide to Pandas DataFrame.count(). Here we discuss an introduction to Pandas DataFrame.count(), with syntax and parameters and examples to implement. You can also go through our other related articles to learn more –
https://www.educba.com/pandas-dataframe-count/?source=leftnav
Name: Jeroen Ruigrok van der Werven Member since: 2000-03-30 18:18:23 Last Login: 2009-05-20 12:43:17 Notes: Former FreeBSD and DragonFly BSD committer. Nowadays working on Python, Trac, Babel, Genshi, and many other projects.

FireMath – MathML editing made easier

I discovered FireMath today, an addon for Firefox that makes editing MathML much, much easier. Give it a spin. I just wish more browsers than Firefox supported MathML out of the box.

Syndicated 2009-06-29 12:54:06 from In Nomine - The Lotus Land

‘1-2 Mbps’. Now you can set ‘Wireless Network Mode’ under ‘Basic Settings’ to ‘Mixed’ instead of just ‘B-only’.

Syndicated 2009-06-08 20:12:54 from In Nomine - The Lotus Land

Syndicated 2009-04-13 11:40:28 from In Nomine - The Lotus Land

docutils: ImportError: No module named roman

For some reason setup.py can fail with docutils complaining it cannot find the roman module. One thing that works is just removing docutils from your site-packages and reinstalling it.

Syndicated 2009-04-07 08:22:14 from In Nomine - The Lotus Land

: return json.dumps(data)

Which would be used with the $.ajax() call in a way like the following:

$.ajax({
  type: "POST",
  url: "",
  data: "parameter=value",
  dataType: "json",
  error: function(XMLHttpRequest, textStatus, errorThrown){},
  success: function(data, msg){}
});

: $.getJSON("", function(data){} );
: $.getJSON("?", function(data){} );
: return request.args.get('jsoncallback') + '(' + json.dumps(data) + ')'

: def jsonwrapper(self, request, data):
      callback = request.args.get('jsoncallback')
      if callback:
          return callback + '(' + json.dumps(data) + ')'
      else:
          return json.dumps(data)

Syndicated 2009-04-03 13:07:15 from In Nomine - The Lotus Land
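The code fragments in the last post above sketch a JSONP pattern: the server wraps its JSON payload in whatever callback name the jQuery client appended as the jsoncallback parameter. A minimal, framework-independent sketch of that wrapping step (the function name here is hypothetical, not part of the original post):

```python
import json

def jsonp_wrap(data, callback=None):
    """Serialize data as JSON; wrap it in callback(...) when a JSONP callback was supplied."""
    body = json.dumps(data)
    return "%s(%s)" % (callback, body) if callback else body

# plain JSON for a same-origin $.getJSON call
print(jsonp_wrap({"parameter": "value"}))
# JSONP for a cross-domain call made with ?jsoncallback=...
print(jsonp_wrap({"parameter": "value"}, "jsonp123"))
```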
http://www.advogato.org/person/asmodai/
crawl-002
refinedweb
280
50.84
The previous post explained what is meant by period three implies chaos. This post is a follow-on that looks at Sarkovsky’s theorem, which is mostly a generalization of that theorem, but not entirely [1].

First of all, Mr. Sarkovsky is variously known as Sharkovsky, Sharkovskii, etc. As with many Slavic names, his name can be anglicized multiple ways. You might use the regular expression Sh?arkovsk(ii|y) in a search.

The theorem in the previous post, by Li and Yorke, says that if a continuous function from a closed interval to itself has a point with period three, it has points with all positive periods. This was published in 1975. Unbeknownst to Li and Yorke, and everyone else in the West at the time, Sarkovsky had published a more general result in 1964 in a Ukrainian journal. He demonstrated a total order on the positive integers so that the existence of a point with a given period implies the existence of points with all periods further down the sequence. The sequence starts with 3, and every other positive integer is in the sequence somewhere, so period 3 implies the rest.

Sarkovsky showed that period 3 implies period 5, period 5 implies period 7, period 7 implies period 9, etc. If a continuous map of an interval to itself has a point of odd period n > 1, it has points with periods given by all odd numbers larger than n. That is, Sarkovsky’s order starts out 3 > 5 > 7 > … The sequence continues … 2×3 > 2×5 > 2×7 > … then … 2²×3 > 2²×5 > 2²×7 > … then … 2³×3 > 2³×5 > 2³×7 > … and so on for all powers of 2 times odd numbers greater than 1. The sequence ends with the powers of 2 in reverse order … 2³ > 2² > 2 > 1.

Here’s Python code to determine whether period m implies period n, assuming m and n are not equal.
from sympy import factorint

# Return whether m comes before n in Sarkovsky order
def before(m, n):
    assert m != n

    if m == 1 or n == 1:
        return m > n

    m_factors = factorint(m)
    n_factors = factorint(n)

    m_odd = 2 not in m_factors
    n_odd = 2 not in n_factors

    m_power_of_2 = len(m_factors) == 1 and not m_odd
    n_power_of_2 = len(n_factors) == 1 and not n_odd

    if m_odd:
        return m < n if n_odd else True

    if m_power_of_2:
        return m > n if n_power_of_2 else False

    # m is even and not a power of 2
    if n_odd:
        return False
    if n_power_of_2:
        return True
    if m_factors[2] < n_factors[2]:
        return True
    if m_factors[2] > n_factors[2]:
        return False
    return m < n

Next post: Can you swim “upstream” in Sarkovsky’s order?

[1] There are two parts to the paper of Li and Yorke. First, that period three implies all other periods. This is a very special case of Sarkovsky’s theorem. But Li and Yorke also proved that period three implies an uncountable number of non-periodic points, which is not part of Sarkovsky’s paper.

2 thoughts on “Sarkovsky’s theorem”

duplication: “showed demonstrated” (I hope you don’t mind my copy-edit comments. I’ve got one of those brains that snags on stuff like that, and I’ve figured I might as well “contribute” them to your articles that I enjoy reading so much. :-) )

Thanks! I appreciate people pointing out errors because I want to eliminate unnecessary distractions.
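For readers without sympy installed, the same ordering can be expressed as a sort key built from nothing but the 2-adic valuation. The names `v2` and `sarkovsky_key` are mine; this is a sketch equivalent in effect to the pairwise `before` comparison above:

```python
def v2(n):
    """2-adic valuation: the largest k such that 2**k divides n."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def sarkovsky_key(n):
    """Sort key realizing Sarkovsky's order on positive integers.

    Odd numbers > 1 come first (ascending), then 2*odd, 4*odd, ...,
    and finally the pure powers of 2 (including 1) in descending order.
    """
    k = v2(n)
    odd = n >> k
    if odd == 1:
        # Pure powers of 2 sort after everything else, largest first.
        return (float("inf"), -k)
    return (k, odd)

# Sorting by this key lists periods from strongest to weakest:
# [3, 5, 7, 6, 10, 12, 8, 4, 2, 1]
print(sorted([1, 2, 3, 4, 5, 6, 7, 8, 10, 12], key=sarkovsky_key))
```

A key function makes it trivial to sort whole lists by the order, whereas the `before` predicate compares one pair at a time.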
https://www.johndcook.com/blog/2021/04/10/sarkovskys-theorem/
CC-MAIN-2021-31
refinedweb
562
59.33
Create Radio Buttons in SWT Create Radio Buttons in SWT This section illustrates how to create a radio button. In SWT, the style RADIO defined in the Button class allows you to create a radio button. We SWT Radio Buttons in SWT In SWT, the style RADIO defined in the Button... SWT The given example will show you how to create scroll bar in Java... in Java. Create ToolTip Text in SWT Radio Buttons - Java Beginners Radio Buttons Hello Sir, How to create the code for the password... the radio buttons to display the same page in jsp. I need only how to make the question and answer page using the radio buttons. Please help me to solve radio buttons radio buttons write a program to create an applet button which has a list of radio buttons with titles of various colours.set the background colour... has a list of radio buttons with titles of various colors and a button Buttons Buttons I have created a web page with radio button group with two radio buttons for accepting the home appliances categories,Kitchen appliances... radio button is selected.Which event listener do I need to implement for this task Radio Buttons in Jsp - JSP-Servlet Radio Buttons in Jsp Hi, i have a page in which there are lot of radio buttons see the above picture... depending on the value in the String radio button has to be checked. How to do SWT in Eclipse - Java Beginners SWT in Eclipse hi.. how to call a function in SWT when the shell is about to close... ?? thanks in advance Tab sequence problem with Radio buttons - JSP-Servlet Tab sequence problem with Radio buttons Hi, I have membership type in application as 1 year(radio button) 2 year(radio button) 4 year(radio button) courier delivery courier(radio button) currently tab sequence going SWT Solaris - Swing AWT :// Thanks...SWT Solaris Hi, When I am using SWT in my application it works... in SWT which will give the exact behaviour in all platforms.
Thanks Selecting a Radio Button component in Java Selecting a Radio Button component in Java  ... shows five radio buttons with labeled by "First", "Second"...-different radio buttons. Following are the screen shots for the result Radio Buttons in DB Very Urgent - JSP-Servlet Radio Buttons in DB Very Urgent Respected Sir/Madam, I am... in the database.Here I need Radio Buttons added dynamically for each Row. When I click the corresponding Radio Button and Submit,The Emp ID and Emp Name must Dojo Radio Button radio buttons in dojo. For creating radio button you need "...(dijit.form.CheckBox) for RadioButtons to work. [Radio buttons are used when there is a list....] Radio Buttons are the same as html but dojo provides more controls and styling prog. using radio buttons for simple calculator prog. using radio buttons for simple calculator import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; class Calculator extends JFrame { private final Font BIGGER_FONT = new Font Tutorial, Java Tutorials Tutorials Here we are providing many tutorials on Java related... SWT Tutorials... NIO Java NIO Tutorials Radio buttons in html Radio buttons in html Here is an example of radio button in html.In this example we have display two radio button Male and Female. The user select only one.... Example radioButton.html <html> <head><title>Radio button ; C Tutorials | Java Tutorials | PHP Tutorials | Linux... Tutorials | Dojo Tutorials | Java Script Tutorial | CVS Tutorial... Tutorials | Ruby-Rails Tutorials | SWT Tutorial | Wicket Tutorial  Radio Buttons in HTML Radio Buttons in HTML  ... The Tutorial illustrates an example from Radio Buttons in HTML.In this Tutorial, the code explain to create a Radio Buttons. The code enables a user to select one Java Swing Tutorials Java Swing Tutorials Java Swing tutorials - Here you will find many Java Swing... and you can use it in your program. 
Java Swing tutorials first gives you brief WRITE A CODE IN STRUTS ACTION CLASS FOR CHECK BOXES AND RADIO BUTTONS - Struts for check boxes and radio buttons and for submit buttons. i have a jsp page which contains check boxes,radio buttons.and when i click submit button the related...WRITE A CODE IN STRUTS ACTION CLASS FOR CHECK BOXES AND RADIO BUTTONS  inserting data from radio buttons to database - JSP-Servlet inserting data from radio buttons to database hi, my problem... of radio buttons. the feedback.jsp should look like same as follows: Please... as per his choice. when user completes all the selection of radio buttons and he Display Label and radio buttons runtime with respect to DB values Display Label and radio buttons runtime with respect to DB values Requirement: I am getting alertCondition,Y,W values from DB the the cooresponding fields like the following: JLbel- alertCondition-"alert1","Alert2" JradioButton radio nuttpn radio nuttpn i have created a html page containing 20 multiple choice questions when i clicl the options which are radio buttons. I have tp retroebe it in a jsp pahe ansabe it in mysql database...further i have to match JSP Radio Button MySQL insert - JSP-Servlet JSP Radio Button MySQL insert Hi, I have an HTML form which has a couple of radio buttons for example (gender: male/female) and some check boxes... however I wanted to ask you if there are tutorials or perhaps you can help SWT login form - Swing AWT (); } } For more information on SWT visit to : Thanks...SWT login form Hi, I want code to create a login form in SWT. My Radio Button In Java Radio Button In Java Introduction In this section, you will learn how to create Radio Button on the frame. The java AWT , top-level window, are represent by the CheckBoxGroup Canvas won't draw on composite (SWT) Canvas won't draw on composite (SWT) I can't get a canvas to draw on a composite with SWT. I've made the composite on a shell and given it a layout... new to java. 
The relevant code is below. public static void main(String[] args SWT TextEditor SWT TextEditor In this section, we are going to show you how to create a TextEditor using SWT in Java. In SWT, the classes ToolBar and ToolItem are allowed to create Dojo Radio Button ; In this section, you will learn how to create radio buttons in dojo...(dijit.form.CheckBox) for RadioButtons to work. [Radio buttons are used when there is a list....] Radio Buttons are the same as html but dojo provides more controls and styling Use Group Class in SWT group, radio buttons are defined. Here is the code of GroupExample.java... Use Group Class in SWT In this section, you will learn how to use Group class. In SWT need help for writting code in struts action class for check boxes and radio buttons - Struts people.iwould like to write code in struts action class for check boxes and radio buttons and for submit buttons. i have a jsp page which contains check boxes,radio...need help for writting code in struts action class for check boxes and radio java tutorials java tutorials Hi, Much appreciated response. i am looking for the links of java tutorials which describes both core and advanced java concepts... java in detail with relevant explanations and examples systematically ,it would Tutorials dependent radio button dependent radio button Hi. I have 4 radio buttons say all,chocolate,cookie,icecream. If I select all the other 3 should not able to be selected... of the Radio Button is to select only one option at a time. So you don't need Multiple buttons in struts using java script Multiple buttons in struts using java script Multiple buttons in struts using java script SWT "Enter" key event - Swing AWT :// Thanks...SWT "Enter" key event Can any one post me the sample code to get the enter key event? My requirement is , I want some SWT button action How to use radio button in jsp page How to use radio button in jsp page This is detailed java code how to use radio... to selected radio buttons. 
First page in this example lets the user enter its name Radio button Validation Radio button Validation Hi.. How to validate radio button in java?if the radio button is not selected an error message should be given... Please...()==false)){ JOptionPane.showMessageDialog(null,"Please select radio button..  AWT Tutorials AWT Tutorials How can i create multiple labels using AWT???? Java Applet Example multiple labels 1)AppletExample.java: import javax.swing.*; import java.applet.*; import java.awt.*; import Radio Tag <html:radio>: Radio Tag <html:radio>: html:radio Tag - This Tag renders an HTML <input> element of type radio... property The corresponding bean property for this radio tag Jigloo SWT/Swing GUI Builder Jigloo SWT/Swing GUI Builder  ... for the Eclipse Java IDE and WebSphere Studio, which allows you to build and manage both Swing and SWT GUI classes. Jigloo creates and manages code for all tutorials - Java Beginners tutorials may i get tutorials for imaging or image processing in java Hi friend, Please explain problem in details what you want with image in java. Thanks Create Tabs in Java using SWT Create Tabs in Java using SWT This Tab Example in Java, will teach you how to create tabs using SWT in Java. After going through the example you will be able to create...); java.awt.Frame frame = SWT_AWT.new_Frame(composite); JApplet applet==new JApplet Java - JDK Tutorials Java - JDK Tutorials This is the list of JDK tutorials which... should learn the Java beginners tutorial before learning these tutorials. View the Java video tutorials, which will help you in learning Java quickly. We Tomahawk selectOneRadio tag to create radio buttons on the page. It renders html input tag with type... of radio buttons. 
If this is set to "spread" value then it doesn't display html but provides feature to display radio buttons Roseindia Tutorials computing platforms and programming languages like Java Tutorials, JSP Tutorials...://roseindia.net/, where you can find large number of tutorials Java Tutorials... of your use. To view tutorials on various topics, visit: Java Tutorial how to display a table and buttons in swings - Java Beginners how to display a table and buttons in swings Hi frends, Actually... different buttons below this displayed table using swings.....please can any... the table and iam not getting buttons below the table......... Thanking you Radio Button in Java Radio Button in Java Radio Button is a circular button on web page that can be selected...;java RadioButtonTest On execution of code ,the program create you a Radio Radio Button in HTML of radio buttons. It is necessary that the name remains the same within a group... to remember while using Radio Button: All Radio Buttons within a group must share the same name Value of the Radio Buttons within a group must be different Java programming for beginners video tutorials to find the Java programming for beginners video tutorials. Let's know the url of the video tutorials. Thanks Hi, Check the tutorials Java Programming... for Java beginners. All the tutorials contains free examples. Thanks Is it possible in SWT ? Is it possible in SWT ? I want drop down like google search (ie, when we type one letter then the word start with that are displayed). when the drop... do this in SWT ? Thanks DBUnit Tutorials Java applications. With the help of DbUnit you can repopulate your database with sample data and perform unit testing of the Java application. This helps Appending Strings - Java Tutorials .style1 { text-align: center; } Appending Strings In java, the two main ways for appending string at the end are : First method is using += operator. Second method is using append() method. 
In this section, we Thread Deadlocks - Java Tutorials Thread Deadlock Detection in Java Thread deadlock relates to the multitasking. When two threads have circular dependency on a synchronized, deadlock is possible. In other words, a situation where a thread is waiting for an object j2me tutorials - Java Beginners Java App - Add, Delete, Reorder elements of the buttons Java App - Add, Delete, Reorder elements of the buttons Hello, I'm developing a Java application. I created this interface with MockupScreens... of these elements ... Let us put these buttons in a JPanel called btnsUnit SWT Create Scroll Bar in Java using SWT Create Scroll Bar in Java using SWT This section is all about creating scroll bar in Java SWT The given example will show you how to create scroll bar in Java using Create Multiple Buttons Create Multiple Buttons using Java Swing  ... buttons labeled with the letters from A to Z respectively. To display them, we have created an array of letters. Using this array, we have labeled the buttons Radio button validation using jsp - JSP-Servlet a value for radio Buttons) then it will return to same jsp page with the given...Radio button validation using jsp I had one jsp Page and servlet. I did my validations in servlet for my jsp page which contains the radio Tutorials on Java Tutorials on Java Tutorials on Java topics help programmers to learn.... These tutorials are part of online Java course that is provided by RoseIndia which help... Tutorials Java Util Examples List Threading in Java Overview of Networking Get radio button value after submiting page Get radio button value after submiting page Radio buttons are dynamically generated.After selecting radio button & submitting the page...; } You have to keep the name attribute same for all radio buttons Create a JRadioButton Component in Java button in java swing. Radio Button is like check box. Differences between check... 
to another where Radio Buttons are the different-different button like check box... with the help of this program. This example provides two radio buttons same JSP Tutorials - Page2 JSP Tutorials page 2 JSP Examples Hello World JSP Page... This section shows you how to import a java package or the class in your jsp.... In the Java Server Pages Technology, multiple actions are accessed by using Drop down and radio button value on edit action mr.,mrs.,miss for payment type there are to radio buttons as by cash &...Drop down and radio button value on edit action HI, I have... the value from dropdown and radio button.. But the problem goes with edit action Exception in Java - Java Tutorials Flex Tutorials Flex Tutorials  ... behavior in buttons through its rollOverEffect property. The first button... will study for..in loop which is similar to for each loop of C#, Java and other Java Video Tutorials for beginners Java Video Tutorials for beginners are being provided at Roseindia online for free. These video tutorials are prepared by some of the best minds of Java... contains advance Java tutorials that are designed to help Java professionals Dojo Button, Dojo Button onclick, Dojo Buttons Drag and Drop Example in SWT Drag and Drop Example in SWT Drag and Drop in Java - This section is going to illustrates you how to create a program to drag and drop the tree item in Java . In SWT JAVA - XML JAVA hi.. i want to talk to any SWT expert in JAVA... how can i do it? Hi friend, For read more information,Examples and Tutorials on SWT visit to : Thanks
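The HTML grouping rules quoted above (radio buttons within a group must share the same name, and their values must differ) look like this in practice; the group name "appliance" is invented for illustration:

```html
<form>
  <!-- Same name="appliance" on all three, so only one can be checked at a time -->
  <label><input type="radio" name="appliance" value="kitchen" checked> Kitchen</label>
  <label><input type="radio" name="appliance" value="home"> Home</label>
  <label><input type="radio" name="appliance" value="garden"> Garden</label>
</form>
```

On submission only the checked button's value is sent under the shared name, which is why the values within a group must be distinct.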
http://www.roseindia.net/tutorialhelp/comment/98818
CC-MAIN-2014-41
refinedweb
2,549
55.95
Str. ;roseindia" extends="struts-default" namespace="/">...;%@taglib uri="/struts-tags" prefix="s" %> <html>...;Struts2.2.1_Optiontransferselect_Example1</h2><hr> <h4>City selected by client......</h4> Optiontransferselect Tag (Form Tag) Example Optiontransferselect Tag (Form Tag) Example In this section, we are going to describe the Optiontransferselect tag. The Optiontransferselect tag is a UI tag that creates an option transfer.2.1 optiontransferselect tag example. Struts2.2.1 optiontransferselect tag example. Part1 Part Problem - Struts Struts2 Validation Problem Hi, How to validate field that should not accept multiple spaces in Struts2? Regards, Sandeep Hi... in the browser having the example of handling the error in struts 2. nested selected tag ihave display selected item nested selected tag ihave display selected item i have two combo boxes combo1 combo1 in first combo box i taken as follows select name="combo1..., ok it can be done by creating array my problem is when i display data like Maintaining States of Selected CheckBoxes in Different Pages using dispaly table in struts2 Maintaining States of Selected CheckBoxes in Different Pages using dispaly table in struts2 Hi, I am working in a Struts 2 project where in the jsp... and maintain the states of the selected checkboxes.When I submit the Validate <select> tag Items Validate select tag Items Hi, How to validate addition of two numbers of two different "select" tag items in JavaScript..? Thanks in advance2 Sir when i have run my struts 2 web application,every time i get error " request resources is not available",,,what is this,,,plz help me validate select tag items in javascript validate select tag items in javascript Hi, How to validate addition of two numbers from two different <select> tag items in JavaScript..? 
Thanks in advance struts2 - Struts struts2 how to pre populate the fields using struts2 from the database Struts 2 radio button value problem Struts 2 radio button value problem When I use s:radio tag in struts 2, I'm not able to get the selected value using document.getElementById... gives first radio option's value as the value for any option selected. Ex validation problem - Struts struts validation problem i used client side validation in struts 2 .but message will display only on the top of the component.i want to display... parameter for the message tag or any other alternative solution? Hi struts2 - Struts struts2 hello, am trying to create a struts 2 application that allows you to upload and download files from your server, it has been challenging for me, can some one help Hi Friend, Please visit the following Struts2...problem in JSP..unable to get the values for menuTitle!!! Struts2...problem in JSP..unable to get the values for menuTitle!!! **Hello everyone... i'm trying to make a dynamic menu from database in struts2...n m fairly new in this framework.. there is some problem in this project...my Struts2 Struts2 Apache Struts: A brief Introduction Apache Struts is an open...; Why Struts 2 The new version Struts 2.0 is a combination of the Sturts action is not only thread-safe but thread-dependent. Struts2 tag libraries provide...Struts Why struts rather than other frame works? Struts is used into web based enterprise applications. Struts2 cab be used with Spring iterator display problem - Struts iterator tag can i use id atribute or valuethat i not understand Hi friend, Code to help in solving the problem : Iterator Tag Example! Iterator Tag Example dwr with struts2 - Struts dwr with struts2 CAn u help me how to use dwr with struts2 Internationalisation problem - Struts Internationalisation problem In struts application how can i get... 
the selected language words in english characters then that english characters will automatically replace to the selected language characters. Could you Struts2 Actions is usually generated by a Struts Tag. Struts 2 Redirect Action In this section, you will get familiar with struts 2 Redirect action... Struts2 Actions Struts2 Actions Tag: Struts Tag: bean:struts Tag... configuration objects. This tag retrieve the value of the specified Struts... of the Struts<bean:struts> tag. Here you will learn to use the Struts Html< <html:option > with "selected" attribute - Struts with "selected" attribute how to display formbean object with "selected" attribute in . please help me Application | Struts 2 | Struts1 vs Struts2 | Introduction... Manager on Tomcat 5 | Developing Struts PlugIn | Struts Nested Tag...) | Date Tag (Data Tag) | Include Tag (Data Tag) in Struts 2 | Param Tag (Data blank application - Struts Struts2 blank application Hi I am new to struts2 and i am trying to run the strutsblank application by following the following Link; As per the instructions given Struts2 Internationalization Struts2 Internationalization Hi How to use i18n functionality for indian languages in struts2 ? I am able to use french and english but none... the following links: struts - Struts of a struts application Hi friend, Code to help in solving the problem : In the below code having two submit button's its values... super.execute(); } } For more information on struts2 visit to : http Struts - Struts Struts Hello I like to make a registration form in struts inwhich... be displayed on single dynamic page according to selected by student. pls send... source code to solve the problem. Mention the technology you have used Struts Tag Lib - Struts Struts Tag Lib Hi i am a beginner to struts. i dont have... Defines a tag library and prefix for the custom tags used in the JSP page. JSP Syntax Examples in Struts : Description The taglib java - Struts using tag. 
At the same time i am getting a value which based on that i have to mark selected one of the option of object. So can u plz send me any code... code to solve the problem. Thanks Doubleselect Tag (Form Tag) Example Doubleselect Tag (Form Tag) Example In this section, we are going to describe the doubleselect tag. The doubleselect tag is a UI tag that renders two HTML select elements with second shoping items billng source code should be sorted by the most selected items order, I want to add two or more items...shoping items billng source code Hi, I am doing a project on invoice...", here in this page i need to get the items from the data base with respect struts - Struts struts hi.. i have a problem regarding the webpage in the webpage... is taking different action.. as per my problem if we click on first two submit buttons it is going to action in the form tag.. but if we click on third submit
http://www.roseindia.net/tutorialhelp/comment/61307
CC-MAIN-2014-52
refinedweb
1,083
55.74