hi, i have got the free Borland compiler and done what the tutorial said to set it up, but when i copy and paste the code to test it - meaning i save it and run it as it says in the tutorial - it comes up saying:

Code:
#include <iostream>

int main()
{
    std::cout << "I work!" << std::endl;
}

Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2003 Borland
test.cpp:
Error E2209 test.cpp 1: Unable to open include file 'iostream'
Error E2090 test.cpp 5: Qualifier 'std' is not a class or namespace name in function main()
Error E2379 test.cpp 5: Statement missing ; in function main()
*** 3 errors in compile ***

what's happening here? please help, thanks
paulley
https://cboard.cprogramming.com/windows-programming/74300-newbie-need-some-help.html
Edited by Narue: Requested by author

from __future__ import print_function
import webbrowser

OUTPUT, INPUT = 'output.html', 'talk.txt'

def print_and_save(*args, **kwargs):
    """Echo args to output.html with tags and to the console without them"""
    if 'tag' in kwargs:
        tag = kwargs['tag']
        del kwargs['tag']
        print(tag + ' '.join(args) + tag[0] + '/' + tag[1:] + '</br>',
              file=open(OUTPUT, 'a'), **kwargs)
    else:
        print(' '.join(args) + '</br>', file=open(OUTPUT, 'a'), **kwargs)
    print(' '.join(args), **kwargs)

def talk(arg):
    print_and_save(arg)

def bark(arg):
    print_and_save(arg, tag='<b>')

def snarl(arg):
    print_and_save(arg, tag='<i>')

def main():
    print('<html><body>', file=open(OUTPUT, 'w'))
    exec(open(INPUT).read())
    print('\n</body> </html>', file=open(OUTPUT, 'a'))

main()
webbrowser.open(OUTPUT)
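The one non-obvious trick in print_and_save() is how it derives the closing HTML tag from the opening one. Isolated below (the helper name `closing` is mine, for illustration only):

```python
# Demonstrates the closing-tag trick used in print_and_save() above:
# '<b>' becomes '</b>' by re-inserting '/' after the first character.
def closing(tag):
    return tag[0] + '/' + tag[1:]

print(closing('<b>'))   # </b>
print(closing('<i>'))   # </i>
```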
https://www.daniweb.com/programming/software-development/code/358930/output-to-screen-and-html-file-execute-text-file
Roslyn Primer – Part I: Anatomy of a Compiler

Anthony D.

So, you've heard that VB (and C#) are open source now and you want to dive in and contribute. If you haven't spent your life building compilers, you probably don't know where to start. No worries, I'll walk you through it. This post is the first of a series of blog posts focused on the Roslyn codebase. They're intended as a primer for prototyping language features proposed on the VB Language Design repo, and for contributing compiler and IDE features and bug fixes on the Roslyn repo, both on GitHub. Despite the topic, these posts are written from the perspective of someone who's never taken a course in compilers (I haven't).

Phases of compilation

At a high level, here's what happens:

- Scanning (also called lexing)
- Parsing
- Semantic Analysis (also called Binding)
- Lowering
- Emit

Some phases overlap and infringe on others a bit, but that's basically what the compiler is doing.

Compiling is a lot like reading

By analogy, when you read this blog post you look at a series of characters. You decide that some runs of letters form words, some characters are punctuation, and some are whitespace. That's what the scanner does. Then you decide that some punctuation groups things into a parenthetical, or a quotation, or terminates a sentence. Some dots are decimal points in numbers or abbreviations or initialisms. That's what the parser does. Then you import your massive vocabulary of what words mean and look at all the words and decide what those words refer to and, in combination, what the sentences mean. Occasionally, you find a word with multiple meanings (overloaded terms) and you look at some amount of context to decide which of the multiple meanings is intended (like overload resolution). All of that assignment of meaning is semantic analysis.
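The scanner's half of that analogy can be sketched in a few lines of Python (a toy illustration only; Roslyn's real scanner is far more involved, and the token names here are made up for the sketch):

```python
import re

# A toy scanner: break 'Call Console.WriteLine("Hello, World!")' into tokens.
TOKEN_SPEC = [
    ('STRING',  r'"[^"]*"'),
    ('IDENT',   r'[A-Za-z_][A-Za-z0-9_]*'),
    ('DOT',     r'\.'),
    ('LPAREN',  r'\('),
    ('RPAREN',  r'\)'),
    ('SKIP',    r'\s+'),
]
MASTER = re.compile('|'.join('(?P<%s>%s)' % pair for pair in TOKEN_SPEC))

KEYWORDS = {'Call'}

def scan(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind, value = m.lastgroup, m.group()
        if kind == 'SKIP':
            continue                      # whitespace just separates tokens
        if kind == 'IDENT' and value in KEYWORDS:
            kind = 'KEYWORD'
        tokens.append((kind, value))
    return tokens

print(scan('Call Console.WriteLine("Hello, World!")'))
# [('KEYWORD', 'Call'), ('IDENT', 'Console'), ('DOT', '.'),
#  ('IDENT', 'WriteLine'), ('LPAREN', '('),
#  ('STRING', '"Hello, World!"'), ('RPAREN', ')')]
```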
Lowering and emit don't really have natural language equivalents, other than perhaps translating from one language to another (think of it like translating an article from modern English to simplified English to another very primitive language).

But you're way smarter than a compiler

Of course, you don't do all of this one phase at a time. You don't read a sentence in three passes because you can usually pick out words and sentences and their meaning all at once. But the compiler isn't as smart as a human, so it does these things in phases to keep the problems simple. Every now and then, I get a bug report where someone says, "the compiler decided I meant that but obviously I meant this other thing because that doesn't make any sense". The compiler doesn't know something doesn't make sense until phase 3. And once it knows that, it can't go back to phase 1 or 2 to correct itself (unlike you and me).

"Compiling" HelloWorld

Let's go back to programming languages and look at what the compiler does to compile a simple program. The simple program just consists of the statement:

Call Console.WriteLine("Hello, World!")

Scanning

The Scanner runs over all the text in the files and breaks everything down into tokens:

- Keyword – Call
- Identifier – Console
- Dot
- Identifier – WriteLine
- Left Parenthesis
- String – "Hello, World!"
- Right Parenthesis

These tokens are just like words and punctuation in natural languages. Whitespace isn't usually important since it just separates tokens. But in VB, some whitespace, like newlines, is significant and interpreted as an "EndOfStatement" token.

Parsing

The Parser then looks at the list of tokens and sees how those tokens go together:

- Parse a statement.
- Look at the first token. Found a Call keyword. That starts a Call statement. Parse a Call statement.
- A Call statement starts with the Call keyword and then an expression. Parse an expression.
- Look at the next token. Found an identifier "Console". That's a name expression.
- This might be part of a bigger expression. Look for things that could go after an identifier to make an even bigger expression.
- Found a dot. An identifier followed by a dot is the beginning of a member access expression. Look for a name. Found another identifier "WriteLine". This is a member access that says "Console.WriteLine".
- Still could be part of a bigger expression (maybe there are more dots after this?). Look for another continuing token.
- Found a left parenthesis. You can't just have a left parenthesis after an expression – this must be an invocation expression.
- An invocation looks like an expression followed by an argument list. An argument list is a list of expressions (it's more complicated than this but ignore that) separated by commas. Parse expressions and commas until you hit a right parenthesis.
- Found a string literal expression. The argument list has one argument.

The parse produces a tree that looks like this:

- CallStatement
  - CallKeyword
  - InvocationExpression
    - MemberAccessExpression
      - IdentifierName
        - IdentifierToken
      - DotToken
      - IdentifierName
        - IdentifierToken
    - ArgumentList
      - OpenParenthesisToken
      - SimpleArgument
        - StringLiteralExpression
          - StringToken
      - CloseParenthesisToken
- EndOfFileToken

That's a source file!

Semantic Analysis

To be clear, the compiler still has no idea (unlike you and me) that Console.WriteLine is a shared method on the Console class in the System namespace and that it has an overload that takes one string parameter and returns nothing. After all, anyone could make a class called Console. Maybe there isn't a method called WriteLine. Maybe WriteLine is a type. That's a dumb name for a type but the compiler doesn't know that. If it is a type, then the program doesn't make any sense. Piecing all of that together is semantic analysis.
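The parsing walkthrough above can be sketched as a tiny recursive-descent routine over the scanner's token list (an illustration only; the names and tree shape are simplified, not Roslyn's actual API):

```python
# A toy recursive-descent parse of a Call statement's tokens, producing a
# nested structure shaped like the tree in the post.
def parse_call(tokens):
    toks = list(tokens)

    def expect(kind):
        k, v = toks.pop(0)
        assert k == kind, 'expected %s, got %s' % (kind, k)
        return v

    expect('KEYWORD')                      # Call
    target = expect('IDENT')               # Console
    expect('DOT')
    member = expect('IDENT')               # WriteLine
    expect('LPAREN')
    arg = expect('STRING')                 # "Hello, World!"
    expect('RPAREN')
    return ('CallStatement',
            ('InvocationExpression',
             ('MemberAccessExpression', target, member),
             ('ArgumentList', ('StringLiteralExpression', arg))))

tokens = [('KEYWORD', 'Call'), ('IDENT', 'Console'), ('DOT', '.'),
          ('IDENT', 'WriteLine'), ('LPAREN', '('),
          ('STRING', '"Hello, World!"'), ('RPAREN', ')')]
print(parse_call(tokens))
```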
The Binder looks at the references provided to the compiler: the namespaces, types, type members in those references, the project-level imports, and the imports in your source file. And then it starts figuring out what's what.

- What does Console mean?
- Is there something called Console in scope?
  - Checking the containing block: No.
  - Checking the containing method: No.
  - Checking the containing type: No.
  - Checking the containing type's base types: No.
  - Checking the containing type's containing type or namespace: No.
  - Checking the containing namespace's containing namespaces: No.
  - Are there import statements? No.
  - Are there project-level imports? Yes.
  - Check each imported namespace one by one.
  - Found one and only one? Yes.
- Console is a type. This must be a shared member.
- Look for a shared member named WriteLine in the [mscorlib]System.Console type.
  - Found 19 of them. They're all methods.
- Bind all the argument expressions.
  - One argument is a string literal. The string literal has content "Hello, World!" and type [mscorlib]System.String.
- Based on the number and types of the arguments, the binder checks how many of the 19 methods could take one string argument. In VB, the answer is 14. But there are rules that decide which ones are better, and it turns out that the one that actually takes a string is better than the one that takes object, or the one that takes a string but passes an empty ParamArray argument list, or performing an implicit narrowing conversion to any of the numeric types, Boolean, or the intrinsic conversion from string to Char or Char array.

The compiler has determined that the program is an invocation of the shared void [mscorlib]System.Console::WriteLine(string) method, passing the string literal "Hello, World!".

Lowering

What lowering does is take high-level language constructs that only exist in VB and translate them to lower-level constructs that the CLR/JIT compiler understands.
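As a toy illustration of what lowering means, here is a While loop rewritten into labels and conditional branches. The instruction names are borrowed from IL's br / br.false; the tiny interpreter is invented for this sketch and is not part of any compiler:

```python
# Run a "lowered" program: a flat list of label/branch/add instructions.
def run(instructions):
    env, pc = {'i': 0}, 0
    labels = {ins[1]: n for n, ins in enumerate(instructions)
              if ins[0] == 'label'}
    while pc < len(instructions):
        op = instructions[pc]
        if op[0] == 'add':            # i = i + constant
            env[op[1]] += op[2]
        elif op[0] == 'br':           # unconditional goto
            pc = labels[op[1]]; continue
        elif op[0] == 'br.false':     # goto target when (var < limit) is false
            if not (env[op[1]] < op[2]):
                pc = labels[op[3]]; continue
        pc += 1
    return env

# "While i < 3 : i += 1 : End While" lowered into branch form:
program = [
    ('label', 'loop_top'),
    ('br.false', 'i', 3, 'loop_end'),   # exit when i < 3 is false
    ('add', 'i', 1),
    ('br', 'loop_top'),
    ('label', 'loop_end'),
]
print(run(program))   # {'i': 3}
```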
Here are some examples of things that don't exist at the Intermediate Language (IL) level:

- Loops: IL only has goto (called "br" for branching) and conditional goto (br.true for branch when true and br.false for branch when false).
- Variable scope: All variables are "in scope" for the entire method.
- Using blocks: IL only has try/catch/finally, so the compiler lowers a Using block into a try/catch/finally block that initializes a variable and disposes of it in the finally block.
- Lambda expressions: The compiler first translates lambdas into ordinary methods. If they capture any local variables, the compiler has to translate those variables into fields of an object behind the scenes.
- Iterator methods: The compiler translates an Iterator method with Yield statements inside into a giant state machine, which is essentially just a giant Select Case that says, "last time you called me I was at step 1, so skip to step 2 this time".

Even though IL has a much simpler set of instructions than a higher-level language like VB, everything you can write in a VB program is ultimately composed of simple instructions, in the same way that the greatest works of English literature still use just 26 letters. All of the simplicity, safety, and expressiveness of a higher-level language is what makes VB so powerful.

This example of a simple call to a Shared method isn't very complex. IL already understands method calls and string literals, so there isn't really any lowering to be done.

Emit

Emit is simple. Once the compiler digests your program into simple operations the CLR understands, it writes out these operations (usually to disk) into a binary file in a well-specified format.

Wrapping up

In this post, we looked at what a compiler does abstractly and how that process compares to how a human being might read a page of text. In the next post, we'll dive into how the Visual Basic compiler specifically is organized.
https://devblogs.microsoft.com/vbteam/roslyn-primer-part-i-anatomy-of-a-compiler/
A JavaScript mangler/compressor toolkit for ES6+.

Command line arguments that take options (like --parse, --compress, --mangle and --format) can take in a comma-separated list of default option overrides. For instance:

terser input.js --compress ecma=2015,computed_props=false

If no input file is specified, Terser will read from STDIN. If you wish to pass your options before the input files, separate the two with a double dash to prevent input files being used as option arguments:

terser --compress --mangle -- input.js

--mangle-props [options] Mangle property names (by default this avoids standard JS names and DOM API props). Sub-options:
- `debug` Add debug prefix and suffix.
- `keep_quoted` Only mangle unquoted properties; quoted properties are automatically reserved. `strict` disables quoted properties being automatically reserved.
- `regex` Only mangle matched property names.
- `reserved` List of names that should not be mangled.

-f, --format [options] Specify format options.
--comments [filter] Determines which comments to keep, e.g. those containing "@license", or starting with "!". You can optionally pass one of the following arguments to this flag:
- "all" to keep all comments
- `false` to omit comments in the output
--ecma <version> Specify ECMAScript release: 5, 2015, 2016, etc.
-e, --enclose [arg[:value]] Embed output in a big function with configurable arguments and values.
--ie8 Support non-standard Internet Explorer 8. Equivalent to setting `ie8: true` in `minify()` for `compress`, `mangle` and `format` options.
--safari10 Support non-standard Safari 10/11. By default `terser` will not work around Safari 10/11 bugs.
-o, --output <file> Output file path (default STDOUT).
--source-map [options] Enable source map generation; `url` gives the name and/or location of the output source map.

Terser has an option to take an input source map.
Assuming you have a mapping from CoffeeScript → compiled JS, Terser can produce a map from the original CoffeeScript to the minified JS.

Mangle all properties (very unsafe):

$ terser example.js -c passes=2 -m --mangle-props
var x={o:3,t:1,i:function(){return this.t+this.o},s:2};console.log(x.i());

Mangle all properties except for reserved properties (still very unsafe):

$ terser example.js -c passes=2 -m --mangle-props reserved=[foo_,bar_]
var x={o:3,foo_:1,t:function(){return this.foo_+this.o},bar_:2};console.log(x.t());

Mangle all properties matching a regex (not as unsafe but still unsafe):

$ terser example.js -c passes=2 -m --mangle-props regex=/_$/
var x={o:3,t:1,calc:function(){return this.t+this.o},i:2};console.log(x.calc());

Combining mangle properties options:

$ terser example.js -c passes=2 -m --mangle-props regex=/_$/,reserved=[bar_]
var x={o:3,t:1,calc:function(){return this.t+this.o},bar_:2};console.log(x.calc());

In order for this to be of any use, we avoid mangling standard JS names and DOM API properties by default (pass --mangle-props builtins to override).

$ terser stuff.js --mangle-props debug -c -m
var o={_$foo$_:1,_$bar$_:3};o._$foo$_+=o._$bar$_,console.log(o._$foo$_);

Use Terser in your application like this:

const { minify } = require("terser");

Or,

import { minify } from "terser";

Browser loading is also supported:

<script src=""></script>
<script src=""></script>

There is a single async high level function, async minify(code, options), which will perform all minification phases in a configurable manner. By default minify() will enable compress and mangle.
Example:

var code = "function add(first, second) { return first + second; }";
var result = await minify(code, { sourceMap: true });
console.log(result.code); // minified output: function add(n,d){return n+d}
console.log(result.map);  // source map

var result = await minify(code, options);
console.log(result.code); // console.log(3+7);

The nameCache option:

var options = {
    mangle: {
        toplevel: true,
    },
    nameCache: {}
};
var result1 = await minify({
    "file1.js": "function add(first, second) { return first + second; }"
}, options);
var result2 = await minify({ "file2.js": ... }, options);

fs.writeFileSync("part1.js", await minify({
    "file1.js": fs.readFileSync("file1.js", "utf8"),
    "file2.js": fs.readFileSync("file2.js", "utf8")
}, options).code, "utf8");
fs.writeFileSync("part2.js", await ...);

var options = {
    ...,
    format: {
        preamble: "/* minified */"
    }
};
var result = await minify(code, options);
console.log(result.code);
// /* minified */
// alert(10);

An error example:

try {
    const result = await minify({ "foo.js": "if (0) else console.log(1);" });
    // Do something with result
} catch (error) {
    const { message, filename, line, col, pos } = error;
    // Do something with error
}

Minify options

- ecma (default undefined) - pass 5, 2015, 2016, etc. to override compress and format's ecma options.
- enclose (default false) - pass true, or a string in the format of "args[:values]", where args and values are comma-separated argument names and values, respectively, to embed the output in a big function with the configurable arguments and values.
- module (default false) — Use when minifying an ES6 module. "use strict" is implied and names can be mangled on the top scope. If compress or mangle is enabled then the toplevel option will be enabled.
- format or output (default null) — pass an object if you wish to specify additional format options.
Minify options structure

{
    parse: {
        // parse options
    },
    compress: {
        // compress options
    },
    mangle: {
        // mangle options
        properties: {
            // mangle property options
        }
    },
    format: {
        // format options (can also use `output` for backwards compatibility)
    },
    sourceMap: {
        // source map options
    },
    ecma: 5,            // specify one of: 5, 2015, 2016, etc.
    enclose: false,     // or specify true, or "args:values"
    keep_classnames: false,
    keep_fnames: false,
    ie8: false,
    module: false,
    nameCache: null,    // or specify a name cache object
    safari10: false,
    toplevel: false
}

Source map options

To generate a source map:

var result = await minify({"file1.js": "var a = function() {};"}, {
    sourceMap: {
        filename: "out.js",
        url: "out.js.map"
    }
});
console.log(result.code); // minified output
console.log(result.map);  // source map

Setting the sourceMap.root property:

var result = await minify({"file1.js": "var a = function() {};"}, {
    sourceMap: {
        root: "",
        url: "out.js.map"
    }
});

If you're compressing compiled JavaScript and have a source map for it, you can use sourceMap.content:

var result = await minify({"compiled.js": "compiled code"}, {
    sourceMap: {
        content: "content from compiled.js.map",
        url: "minified.js.map"
    }
});

Parse options

- html5_comments (default true)
- shebang (default true) -- support #!command as the first line
- spidermonkey (default false) -- accept a SpiderMonkey (Mozilla) AST

Compress options

- defaults (default: true) -- Pass false to disable most default enabled compress transforms. Useful when you only want to enable a few compress options while disabling the rest.
- arrows (default: true) -- Class and object literal methods will also be converted to arrow expressions if the resultant code is shorter: m(){return x} becomes m:()=>x. To do this to regular ES5 functions which don't use this or arguments, see unsafe_arrows.
- ecma (default: 5) -- Pass 2015 or greater to enable compress options that will transform ES5 code into smaller ES6+ equivalent forms.
- module (default false) -- Pass true when compressing an ES6 module. Strict mode is implied, and the toplevel option as well.
- pure_funcs (default: null) -- Pass an array of names and Terser will assume that those functions do not produce side effects. DANGER: will not check if the name is redefined in scope. An example case here, for instance var q = Math.floor(a/b). If variable q is not used elsewhere, Terser will drop it, but will still keep the Math.floor(a/b), not knowing what it does.
- reduce_vars (default: true) -- Improve optimization on variables assigned with and used as constant values.
- reduce_funcs (default: true) -- Inline single-use functions when possible. Depends on reduce_vars being enabled. Disabling this option sometimes improves performance of the output code.
- side_effects (default: true) -- Remove expressions which have no side effects and whose results aren't used. Requires ecma 2015 or greater for some transforms.
- unsafe_symbols (default: false) -- removes keys from native Symbol declarations, e.g. Symbol("kDog") becomes Symbol().

Mangle options

- keep_fnames -- pass a regex to keep function names matching that regex. Useful for code relying on Function.prototype.name. See also: the keep_fnames compress option.
- module (default false) -- Pass true when mangling an ES6 module, where the toplevel scope is not the global scope. Implies toplevel.
- nth_identifier -- see also the nth_identifier format option.

Examples:

// test.js
var globalVar;
function funcName(firstLongName, anotherLongName) {
    var myVariable = firstLongName + anotherLongName;
}

var code = fs.readFileSync("test.js", "utf8");

await minify(code).code;
// 'function funcName(a,n){}var globalVar;'

await minify(code, { mangle: { reserved: ['firstLongName'] } }).code;
// 'function funcName(firstLongName,a){}var globalVar;'

await minify(code, { mangle: { toplevel: true } }).code;
// ...

Mangle properties options

- keep_quoted -- How quoting properties ({"prop": ...} and obj["prop"]) controls what gets mangled:
  - "strict" (recommended) -- obj.prop is mangled.
  - false -- obj["prop"] is mangled.
  - true -- obj.prop is mangled unless there is obj["prop"] elsewhere in the code.

Format options

These options control the format of Terser's output code. Previously known as "output options".
- ascii_only (default false) -- escape Unicode characters in strings and regexps (affects directives with non-ascii characters becoming invalid)
- beautify (default false) -- (DEPRECATED) whether to beautify the output. When using the legacy -b CLI flag, this is set to true by default.
- braces (default false) -- always insert braces in if, for, do, while or with statements, even if their body is a single statement.
- comments (default "some") -- by default it keeps JSDoc-style comments that contain "@license", "@copyright", "@preserve" or start with !; pass true or "all" to preserve all comments, false to omit comments in the output, a regular expression string (e.g. /^!/) or a function.
- ecma (default 5) -- set desired EcmaScript standard version for output. Set ecma to 2015 or greater to emit shorthand object properties - i.e.: {a} instead of {a: a}. The ecma option will only change the output in direct control of the beautifier. Non-compatible features in your input will still be output as is. For example: an ecma setting of 5 will not convert modern code to ES5.
- indent_level (default 4)
- indent_start (default 0) -- prefix all lines by that many spaces
- inline_script (default true) -- escape HTML comments and the slash in occurrences of </script> in strings
- keep_numbers (default false) -- keep number literals as they were in the original code (disables optimizations like converting 1000000 into 1e6).
- preserve_annotations (default false) -- Preserve Terser annotations in the output.
- spidermonkey (default false) -- produce a SpiderMonkey (Mozilla) AST

For example, comments starting with "!" and JSDoc-style comments that contain "@preserve", "@license" or "@copyright" are kept:

function f() {
    /** @preserve Foo Bar */
    function g() {
        // this function is never called
    }
    return something();
}

Conditional compilation removes dead branches such as:

if (DEBUG) {
    console.log("debug stuff");
}

You can specify nested constants in the form of --define env.DEBUG=false.
var result = await minify(fs.readFileSync("input.js", "utf8"), {
    compress: {
        dead_code: true,
        global_defs: {
            DEBUG: false
        }
    }
});

To replace an identifier with an arbitrary non-constant expression it is necessary to prefix the global_defs key with "@" to instruct Terser to parse the value as an expression:

await minify("alert('hello');", {
    compress: {
        global_defs: {
            "@alert": "console.log"
        }
    }
}).code;
// returns: 'console.log("hello");'

Otherwise it would be replaced as a string literal:

await minify("alert('hello');", {
    compress: {
        global_defs: {
            "alert": "console.log"
        }
    }
}).code;
// returns: '"console.log"("hello");'

Annotations

Annotations in Terser are a way to tell it to treat a certain function call differently. The following annotations are available:

- /*@__INLINE__*/ - forces a function to be inlined somewhere.
- /*@__NOINLINE__*/ - Makes sure the called function is not inlined into the call site.
- /*@__PURE__*/ - Marks a function call as pure. That means, it can safely be dropped.

You can use either a @ sign at the start, or a #. Here are some examples on how to use them:

/*@__INLINE__*/ function_always_inlined_here()
/*#__NOINLINE__*/ function_cant_be_inlined_into_here()
const x = /*#__PURE__*/i_am_dropped_if_x_is_not_used()

ESTree / SpiderMonkey AST

Terser has its own abstract syntax tree format; for practical reasons we can't easily change to using the SpiderMonkey AST internally. However, spidermonkey is available in minify as parse and format options to accept and/or produce a SpiderMonkey AST:

await minify(code, { compress: false, mangle: true });

Among the assumptions the compressor makes:
- document.all is not == null
- Assigning properties to a class doesn't have side effects and does not throw.
Obtaining the source code given to Terser

Because users often don't control the call to await minify() or its arguments, Terser provides a TERSER_DEBUG_DIR environment variable to make terser output some debug logs. If you're using a bundler or a project that includes a bundler and are not sure what went wrong with your code, pass that variable like so:

$ TERSER_DEBUG_DIR=/path/to/logs command-that-uses-terser
$ ls /path/to/logs
terser-debug-123456.log

If you're not sure how to set an environment variable on your shell (the above example works in bash), you can try using cross-env:

> npx cross-env TERSER_DEBUG_DIR=/path/to/logs command-that-uses-terser

Patrons

Note: you can support this project on Patreon. These are the second-tier patrons. Great thanks for your support!
https://www.npmjs.com/package/terser
#include <sys/ddi.h>
#include <sys/id32.h>

uint32_t id32_alloc(void *ptr, int flag);
void id32_free(uint32_t token);
void *id32_lookup(uint32_t token);

Interface Level
Solaris architecture specific (Solaris DDI).

Parameters
ptr
    any valid 32- or 64-bit pointer
flag
    determines whether caller can sleep for memory (see kmem_alloc(9F) for a description)

Description
These routines were originally developed so that device drivers could manage 64-bit pointers on devices that save space only for 32-bit pointers.

Many device drivers need to pass a 32-bit value to the hardware when attempting I/O. Later, when that I/O completes, the only way the driver has to identify the request that generated that I/O is via a "token". When the I/O is initiated, the driver passes this token to the hardware. When the I/O completes the hardware passes back this 32-bit token.

Before Solaris supported 64-bit pointers, device drivers just passed a raw 32-bit pointer to the hardware. When pointers grew to be 64 bits this was no longer possible. The id32_*() routines were created to help drivers translate between 64-bit pointers and a 32-bit token.

Given a 32- or 64-bit pointer, the routine id32_alloc() allocates a 32-bit token, returning 0 if KM_NOSLEEP was specified and memory could not be allocated. The allocated token is passed back to id32_lookup() to obtain the original 32- or 64-bit pointer. The routine id32_free() is used to free an allocated token. Once id32_free() is called, the supplied token is no longer valid.

Note that these routines have some degree of error checking. This is done so that an invalid token passed to id32_lookup() will not be accepted as valid. When id32_lookup() detects an invalid token it returns NULL. Calling routines should check for this return value so that they do not try to dereference a NULL pointer.

Context
These functions can be called from user or interrupt context. The routine id32_alloc() should not be called from interrupt context when the KM_SLEEP flag is passed in.
All other routines can be called from interrupt or kernel context.

See Also
Writing Device Drivers for Oracle Solaris 11.2
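The pattern these routines implement (handing hardware a small token instead of a full-width pointer, then mapping it back on completion) can be modeled in a few lines of Python. This is a conceptual sketch only; the real routines are kernel C, and the class and counter here are invented for illustration:

```python
import itertools

# Conceptual model of id32_alloc / id32_lookup / id32_free: a table maps
# small 32-bit tokens to full-width pointers.
class Id32Table(object):
    def __init__(self):
        self._next = itertools.count(1)   # 0 is reserved to mean "failure"
        self._table = {}

    def id32_alloc(self, ptr):
        token = next(self._next) & 0xFFFFFFFF
        self._table[token] = ptr
        return token

    def id32_lookup(self, token):
        return self._table.get(token)     # None models the NULL return

    def id32_free(self, token):
        self._table.pop(token, None)      # token is no longer valid afterwards

ids = Id32Table()
tok = ids.id32_alloc(0x7FFFDEADBEEF)      # pretend 64-bit pointer
assert ids.id32_lookup(tok) == 0x7FFFDEADBEEF
ids.id32_free(tok)
assert ids.id32_lookup(tok) is None       # invalid token -> NULL
```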
http://docs.oracle.com/cd/E36784_01/html/E36886/id32-lookup-9f.html
Content Count: 53 · Days Won: 1

Reputation Activity

- Nepoxx reacted to codevinsky in generator-phaser-official: Yeoman Generator for Phaser projects
I'll work on learning typescript this week, and then incorporating it into the generator.

- Nepoxx got a reaction from Dread Knight in [Ask] Error Get phaser.Map
If you don't plan on using the map to debug, you can prevent this behavior by removing the following comment from the minified source: //@ sourceMappingURL=phaser.map Actually, you should remove that line from production code as it will prevent a totally unnecessary HTTP GET call (not that it is expensive, however).

- Nepoxx got a reaction from Dread Knight in The Phaser 3 Wishlist Thread :)
I'll give another +1 for basic networking. I'm not quite sure what this engine provides, but it sure sounds sweet: Source

- Nepoxx reacted to Arcanorum in Have one "body" attract all other "bodies". Basically have the earth with gravity.
What I mean by don't use SO like a Q&A site is that people shouldn't just dump their own specific, situational problems there expecting someone to come along and work it out for them. The Q&A aspect of SO is meant to serve as a way to grow a repository of useful knowledge, not just to be someone's own personal helpdesk.

- Nepoxx reacted to pixelpathos in Have one "body" attract all other "bodies". Basically have the earth with gravity.
Hi WhatsDonIsDon, Without knowing the specifics of your game and Phaser, it sounds like the implementation of gravity is correct (a force applied to each body/asteroid in the direction of Earth, perhaps inversely proportional to the square of the distance between asteroid and Earth). Perhaps all you need to do is to ensure that the asteroid has some velocity perpendicular to Earth. This should allow the asteroid to orbit in some fashion (I assume this is what you are aiming for).
For example, if your Earth is in the centre of the screen, and you have an asteroid directly above at the top of the screen: if the asteroid has some initial sideways motion, you should get some kind of orbit. I've implemented gravity in my game, Luminarium, which you might find a useful example (see this thread, or direct link in my signature). Have a look at scene 3 (locked levels are playable in the demo), which introduces an "orbit" command, analogous to your Earth. When I first implemented the orbit, I found that the Lumins (aliens) would be "sling-shotted" by the orbit command, i.e. they would be pulled closer towards it, and accelerate, but then would escape out the other side of the orbit. To try and guarantee continuous orbit, I implemented slight damping (reduction of the aliens' velocity over time). Also, to reduce the number of calculations required, especially if you're going to create many asteroids, I limited the radius over which the orbit/gravity operates, as indicated by the circle drawn around my orbit. Let me know if I can provide any more details!

- Nepoxx reacted to xerver in Infinite game
Sure it's possible, you just need to manage what tilemaps are loaded and that memory yourself. There are no automatic features to do what you want, you just need to track the user and load/unload tilemaps as needed. Think of it like a tilemap where each tile is a tilemap, just manage it yourself.

- Nepoxx reacted to gnumaru in Infinite game
I found this article which seems interesting: The author does an analysis on approaches to implementing voxel engines. Even though it is about 3D worlds, you could just "cut out one of the dimensions" and think 2D =)

- Nepoxx reacted to rich in Phaser 2.1.0 "Cairhien" is Released
2.1.0 was supposed to be a Phaser release that just updated p2 physics. However it quickly grew into something quite a bit more!
We took the opportunity to introduce a bunch of new features (ScaleMode RESIZE being a pretty significant one) and hammer the github issues list into submission, until it was a mere shadow of its former self, now composed mostly of feature requests. As you should be able to tell from the version number this is an API-breaking release. Not significantly, as most changes are confined to P2, but definitely in a few other areas too. As usual the change log details everything, and although we appreciate there is a lot to read through, everything you need to know can be found there. The next release will be 2.1.1 in approximately 1 month, so if you find any problems, and are sure they are actual problems, please report them on github and we'll get them fixed! To everyone who contributed towards this release: thank you, you're the best. Also, one of my favourite new examples (draw on it). Cheers, Rich

- Nepoxx reacted to ASubtitledDeath in IE9 framework errors on load
Hey Guys, Thanks for the replies. I did a few more experiments. These are the variations of the framework I have tried:

Phaser.min.js - Error: Object doesn't support property or method 'defineProperty', line 3, character 2900
phaser-no-libs.js - no errors, but I need physics and Pixi
phaser-arcade-physics.js - Error: Object doesn't support property or method 'defineProperty', line 603, character 1

This is the offending code:

Object.defineProperty(PIXI.DisplayObject.prototype, 'interactive', {
    get: function() {
        return this._interactive;
    },
    set: function(value) {
        this._interactive = value;
        // TODO more to be done here..
        // need to sort out a re-crawl!
        if (this.stage) this.stage.dirty = true;
    }
});

HOWEVER!! I added this user agent compatibility tag into the head of the doc and it now works! <meta http- I also retrofitted it to the full framework and that fixed it as well.

- Nepoxx got a reaction from lewster32 in Multiplayer RPG game. It's possible?
You'll most likely have to write your own server code.
Non-real-time is trivial to do, however. You can communicate from Phaser using simple AJAX to a NodeJS server. That should be more than enough for your needs.

- Nepoxx reacted to rich in Slice Engine. ?

- Nepoxx got a reaction from lewster32 in Howto videoanimationcreator.php in phazer ?

You're asking for a lot; however, here's a small list of good resources for Phaser:

- Nepoxx reacted to rich in 1995 Sega Game Remake, works on mobile too.

Right, because no-one else has ever made a game in Phaser that used the mouse. If it's meant to be fully keyboard-controlled, then don't respond to mouse events at all on desktop. Then at least people won't click around and get confused as to why some buttons respond fine and others don't - that's bloody confusing, no matter how you spin it.

- Nepoxx reacted to gnumaru in Alternatives to organize code base into several files (AMD, Browserify and alikes)

Nepoxx Indeed, the C compiler preprocessor would do with the files exactly what I do not want to do. I do not want to bundle every .js file into one single big file; that's what the C preprocessor does. But when I made comparisons with C includes, I was talking about execution behavior: the javascript execution behavior compared to the behavior of compiled C code that uses includes. For example, if you execute the following lines in your browser:

/* ********** */
eval("var a = 123;");
alert(a);
var b = 987;
eval("alert(b);");
/* ********** */

The first alert call will alert '123' and the second alert call will alert '987'. But if you 'use strict', the "var a" declaration and assignment won't be visible outside the eval, and the first alert will throw a "ReferenceError: a is not defined"; and if you omit the var from variable a's declaration, it will throw a "ReferenceError: assignment to undeclared variable a" (because when you 'use strict' you can only declare globals explicitly by appending them to the window object).
But the second alert will behave identically with or without 'use strict', because when you eval some string, its code runs using the context where the eval call is made. This behavior of eval (although achieved at execution time) is the same as that of a C include statement (although achieved at compile time). If you create two C source files named a.c and b.c:

/* ********** */
//code for a.c
int main(){
    int x = 0;
    #include "b.c"
    i = i+1;
}
/* ********** */

/* ********** */
//code for b.c
x = x+1;
int i = 0;
/* ********** */

then compile them:

$ gcc a.c

It will compile successfully because the code of b.c was copied "as is" into the place where #include "b.c" is called. Thus not only does the code in b.c get access to the code defined before the include statement in a.c, but the code defined after the include also has access to the code defined in b.c. That's exactly the behavior of eval without "use strict", and "half" the behavior of eval with "use strict". About eval being bad, I'm not so sure yet. I know most of the planet repeats Douglas Crockford's mantra "eval is evil" all day long, but it seems eval is more "something that is usually very badly used by most" than "something that is necessarily bad wherever used". I have seen no in-depth arguments yet about the performance of eval, and personally I guess that it "must be slower but not so perceivably slower". About security, it surely opens doors to malicious code, but the exact functionality I seek cannot be achieved otherwise, at least not until ecmascript 6 gets out of the drafts and becomes standard. About the debugging issue, I think that's the worst part, but as already said, there is no other way to achieve what I seek without it.

SebastianNette When I said javascript couldn't include other javascript files, it was because "javascript alone" doesn't have includes.
The default, de facto way of including javascript files is of course through script tags (it has been the default way since the beginning of the language). But the script tag is a functionality that is part of the html markup language, not of the javascript programming language. Javascript itself, in its language definition standards, does not (yet) have a standards-defined way to include/require other javascript files. I was already aware of the Function constructor. I really don't know the innards of the javascript engines, but I bet that internally there is no difference between evalling a string and passing a string to a function constructor (jshint says that “The Function constructor is a form of eval”). I did run your tests on jsperf.com, and eval gave me a performance only 1.2% slower than the function constructor (on firefox 31). On chrome 36, it gave me a difference of 1.45%; both are not so bad. I'm sure that one big js file bundled through browserify can be much more easily chewed by the javascript engines out there. The question could be: how much slower does code recently acquired through an xmlhttprequest run in comparison to code that was bundled from the beginning? And does this slowdown happen only after the first execution? What if I cache the code? Will it run faster afterwards, or will it always run slower? I don't know the answers; I never studied compiler, interpreter or virtual machine architectures. At least my results in the jsperf test you gave me were good to me =) Anyway, I changed the eval to “new Function” because I noticed that I wasn't caching the retrieved codes AT ALL. Now I've switched to a slightly better design.

Everyone, I have now implemented a limited commonjs-style module loading in executejs (without a build step). It does not handle circular dependencies yet, and it expects only full paths (not relative paths). What bothers me about browserify is that it compels you to a build step.
RequireJS does not have one: you can use your modules as separate files or bundle them together, you decide. But that's not true with browserify, and I prefer the commonjs require style over the amd style. I searched for a browser module loader that supports commonjs, but every one of them seems to need a build step. The only one I found was this: And it seems to be too big and complicated for something that should not be so complex...

- Nepoxx reacted to lewster32 in Phaser autocomplete in editor

Visual Studio's Intellisense works very well for me. Use phaser.js and put this at the top of your own JavaScript file(s) to enable Intellisense:

/// <reference path="phaser.js" />

The path is relative to the file you're working on, so if you keep all your JS files in the same folder this will work as is.

- Nepoxx got a reaction from callidus in video tutorial on phaser

Thanks for the link! (They have a playlist specifically for Phaser:)

- Nepoxx reacted to lewster32 in Blocks collapse

It seems to me that if all the boxes are the same size, are spawned on an x-axis grid and aren't meant to intersect, then it's likely to be a grid-based game. I could be totally wrong in my assumptions of course! If there's a need to actually have a stack of physics objects like that, then we really need more info about what it is Tubilok is trying to achieve.
def ListNum(x):
    list1 = []
    for i in (x):
        if x[i] < x[i + 1]:
            list1.append[i]
        else:
            break
    return(list1)

ListNum([1,2,3,4,5,6,2,3,4])
list1
[1, 2, 3, 4, 5]

I assume that you never want to add the last item in x to list1 because there isn't an item after it to compare it with. Your code doesn't work properly because for i in (x): iterates over the items in x, not their indices. But even if it did iterate over the indices, your code could crash with an IndexError because it could attempt to compare the last item in the list with the item after it, which doesn't exist. Here are several ways to do this.

from itertools import takewhile

def list_nums0(x):
    list1 = []
    for i in range(len(x) - 1):
        if x[i] < x[i + 1]:
            list1.append(x[i])
        else:
            break
    return list1

def list_nums1(x):
    list1 = []
    for u, v in zip(x, x[1:]):
        if u < v:
            list1.append(u)
        else:
            break
    return list1

def list_nums2(x):
    list1 = []
    # enumerate over x[:-1] so that x[i] (the next item) is always a valid index
    for i, u in enumerate(x[:-1], 1):
        if u < x[i]:
            list1.append(u)
        else:
            break
    return list1

def list_nums3(x):
    return [t[0] for t in takewhile((lambda a: a[0] < a[1]), zip(x, x[1:]))]

list_nums = list_nums3
print(list_nums([1,2,3,4,5,6,2,3,4]))

output

[1, 2, 3, 4, 5]

list_nums0 simply iterates over the indices of x. list_nums1 uses zip to iterate in parallel over x and x[1:]. This puts the current and next items into u and v. list_nums2 uses enumerate to get the current item in u and the index of the next item in i. list_nums3 uses takewhile to iterate over the tuples yielded by zip until we get a pair of items that doesn't satisfy the test. It performs the whole operation in a list comprehension, which is slightly more efficient than using .append in a traditional for loop.
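The takewhile version can be sanity-checked quickly with a standalone sketch; `rising_prefix` below is just an illustrative name for the same idea:

```python
from itertools import takewhile

def rising_prefix(xs):
    # Keep each item while it is strictly smaller than its successor;
    # zip(xs, xs[1:]) pairs every item with the one after it.
    return [a for a, b in takewhile(lambda p: p[0] < p[1], zip(xs, xs[1:]))]

print(rising_prefix([1, 2, 3, 4, 5, 6, 2, 3, 4]))  # -> [1, 2, 3, 4, 5]
print(rising_prefix([5, 4, 3]))                    # -> []
print(rising_prefix([7]))                          # -> [] (no successor to compare)
```

Note that a single-item list yields an empty result, consistent with never including the last item.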
The code presented in this article is deprecated and is no longer maintained. It is recommended that a new version of this library be used instead. The new library is not at all compatible with code presented in this article. It is a major rewrite of the whole thing and should be much friendlier and easier to use. It is available here. There are many breaking changes in this version of the code. This version is not backward compatible with previous versions. The article text has been updated to reflect the changes and the code snippets provided here will work only with the latest version of Light. This article is about a small and simple ORM library. There are many good ORM solutions out there, so why did I decide to write another one? Well, the main reason is simple: I like to know exactly which code runs in my applications and what is going on in it. Moreover, if I get an exception I'd like to be able to pinpoint the location in the code where it could have originated without turning on the debugger. Other obvious reasons include me wanting to know how to write one of these and not having to code simple CRUD ADO.NET commands for every domain object. The purpose of this library is to allow client code (user) to run basic database commands for domain objects. The assumption is that an object would represent a record in the database table. I think it is safe to say that most of us who write object-oriented code that deals with the database have these objects in some shape or form. So the goal was to create a small library that would allow me to reuse those objects and not constrain me to any inheritance or interface implementations. Also, I wanted to remain in control: I definitely did not want something to be generating the tables or classes for me. By the same token, I wanted to stay away from XML files for mapping information because this adds another place to maintain the code. I understand that it adds flexibility, but in my case it is not required. 
One of the things I wanted to accomplish was to leave the user in control of the database connection. The connection is the only resource that the user has to provide for this library to work. This ORM library (Light) allows users to run simple INSERT, UPDATE, DELETE and SELECT statements against a provided database connection. It does not even attempt to manage foreign keys or operate on multiple related objects at the same time. Instead, Light provides the so-called triggers (see the section about triggers below) that allow you to achieve similar results. So, the scope of the library is: single table/view maps to a single object type. Light uses attributes and reflection to figure out which statements it needs to execute to get the job done. There are two very straightforward attributes that are used to describe a table that an object maps to:

- TableAttribute - This attribute can be used on a class, interface or struct. It defines the name of the table and the schema to which objects of this type map. It also lets you specify the name of a database sequence that provides auto-generated numbers for this table (of course, the target database has to support sequences).

- ColumnAttribute - This attribute can be used on a property or a field. It defines the column name, its database data type, size (optional for non-string types) and other settings, such as precision and scale for decimal numbers.

There are two more attributes that aid with inheritance and interface implementation:

- TableRefAttribute - This attribute can be used on a class, interface or struct. It is useful if you need to delegate table definition to another type.

- MappingAttribute - This attribute can be used on a class, interface or struct. It extends the ColumnAttribute (therefore inheriting all its properties) and adds a property for a member name. This attribute should be used to map inherited members to columns. More on this later, in the code example.
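C# attributes have no direct equivalent in every language, but the underlying idea — declarative column metadata read back via reflection to build SQL — can be sketched in Python with dataclass field metadata. Everything below (the `column` helper and `insert_sql` function) is a hypothetical illustration of the concept, not part of Light:

```python
import dataclasses

def column(name, db_type):
    # Attach column metadata to a dataclass field, similar in spirit
    # to Light's ColumnAttribute.
    return dataclasses.field(default=None, metadata={"col": name, "type": db_type})

@dataclasses.dataclass
class Person:  # maps to dbo.person
    id: int = column("id", "int")
    name: str = column("name", "varchar(30)")

def insert_sql(cls, table):
    # Read the metadata back via reflection and emit a parameterized INSERT.
    cols = [f.metadata["col"] for f in dataclasses.fields(cls)]
    params = ", ".join("@" + c for c in cols)
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({params})"

print(insert_sql(Person, "dbo.person"))
# INSERT INTO dbo.person (id, name) VALUES (@id, @name)
```

This is roughly what a Dao implementation does internally: inspect the decorated type once, then generate the statement text from the collected metadata.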
There is another attribute that helps with such things as object validation and management of related objects:

- TriggerAttribute - This attribute can only be used on methods with a certain signature. In short, it marks a method as a trigger. These trigger methods are executed either before or after one of the four CRUD operations. More on this later, in the code example.

The most useful class of the Light library is the Dao class. Dao here stands for Data Access Object. Instances of this class provide methods to perform inserts, updates, deletes and selects of given objects, assuming that the objects have been properly decorated with attributes. If a given object is not properly decorated or is null, an exception will be thrown. A word about exceptions is in order. There are a couple of exceptions that can be thrown by Light. The most important one is System.Data.Common.DbException, which is thrown if there was a database error while executing a database statement. If your underlying database is SQL Server, then it is safe to cast the caught DbException exception to SqlException. The other exceptions are: DeclarationException, which is thrown if a class is not properly decorated with attributes; TriggerException, which is thrown if a trigger method threw an exception; and LightException, which is used for general errors and to wrap any other exceptions that may occur. Please note that both DeclarationException and TriggerException are subclasses of LightException, so the catch statement catch(LightException e) will catch all three exception types. If you want to specifically handle a DeclarationException or a TriggerException, their catch statements must come before the catch statement that catches the LightException. Here is an example:

try
{
    T t = new T();
    dao.Insert<T>(t);
}
catch(DbException e)
{
    SqlException sqle = (SqlException) e;
}
catch(DeclarationException e)
{
    ...
}
catch(TriggerException e)
{
    ...
}
catch(LightException e)
{
    if(e.InnerException != null)
    {
        // then the following is always true
        bool truth = e.Message.Equals(e.InnerException.Message);
    }
}

You cannot create an instance of a Dao class directly, using its constructor, because Dao is an abstract class. Instead, you should create instances of Dao subclasses targeted for your database. So far, without any modifications, Light can work with SQL Server (SqlServerDao) and SQLite .NET provider (SQLiteDao) databases. If you need to target another database engine, or would like to override the default implementations for SQL Server or SQLite, all you have to do is create a class that extends the Dao class and implements all its abstract methods. All operations (except select) are performed within an implicit transaction unless an explicit one already exists and was started by the same Dao instance. In that case, the existing transaction is used. The user must either commit or roll back an explicit transaction. If the Dispose method is called on the Dao object while it is in the middle of a transaction, the transaction will be rolled back. An explicit transaction is one started by the user by calling the Dao.Begin method. Implicit transactions are handled by Dao objects internally and are automatically committed upon successful execution of a command, or rolled back if an exception was thrown during command execution. Note that for all of this to work, the Dao object must be associated with an open database connection. This can be done via the Dao.Connection property. SqlServerDao and SQLiteDao also provide constructors that accept a connection as a parameter. Remember that it is your responsibility to manage the database connections used by Light. This means that you are responsible for opening and closing all database connections. A connection must be open before calling any methods of the Dao object. The Dao object will NEVER call Open or Close methods on any connection, not even if an exception occurs.
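The implicit/explicit transaction rules described above can be illustrated with Python's sqlite3 module (Light supports SQLite, although this is only a sketch of the idea, not Light's actual implementation): an implicit transaction commits on success and rolls back on failure, while an explicitly started transaction is left to the caller to finish.

```python
import sqlite3

cn = sqlite3.connect(":memory:")
cn.isolation_level = None  # manage transactions manually, as a Dao would
cn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")

def insert(name):
    # Implicit transaction: begin, execute, commit; roll back on any error.
    cn.execute("BEGIN")
    try:
        cn.execute("INSERT INTO person (name) VALUES (?)", (name,))
        cn.execute("COMMIT")
    except sqlite3.Error:
        cn.execute("ROLLBACK")
        raise

insert("Jane")

cn.execute("BEGIN")                          # explicit transaction
cn.execute("INSERT INTO person (name) VALUES ('John')")
cn.execute("ROLLBACK")                       # the caller decides: roll back

print([r[0] for r in cn.execute("SELECT name FROM person")])  # ['Jane']
```

Note that, matching Light's contract, nothing here ever opens or closes the connection inside the insert helper; that stays the caller's job.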
Here is some sample code to demonstrate the concept. Let's assume that we will be connecting to an SQL Server database that has the following table defined:

create table dbo.person
(
    id int not null identity(1,1) primary key,
    name varchar(30),
    dob datetime
)
go

Now let's write some code. Note that this code has not been tested to compile; please use the demo project as a working sample:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections;
using System.Collections.Generic;
using Light; // Light library namespace - this is all you need to use it.

//
// Defines a mapping of this interface type to the dbo.person table.
//
[Table("person", "dbo")]
public interface IPerson
{
    [Column("id", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    int Id { get; set; }

    [Column("name", DbType.AnsiString, 30)]
    string Name { get; set; }

    [Column("dob", DbType.DateTime)]
    DateTime Dob { get; set; }
}

//
// Says that when operating on type Mother the table definition from
// type IPerson should be used.
//
[TableRef(typeof(IPerson))]
public class Mother : IPerson
{
    private int id;
    private string name;
    private DateTime dob;

    public Mother() {}

    public Mother(int id, string name, DateTime dob)
    {
        this.id = id;
        this.name = name;
        this.dob = dob;
    }

    public int Id { get { return id; } set { id = value; } }
    public string Name { get { return name; } set { name = value; } }
    public DateTime Dob { get { return dob; } set { dob = value; } }
}

//
// Notice that this class is identical to Mother but does not
// implement the IPerson interface, so it has to define its
// own mapping.
//
[Table("person", "dbo")]
public class Father
{
    private int id;
    private string name;
    private DateTime dob;

    public Father() {}

    public Father(int id, string name, DateTime dob)
    {
        this.id = id;
        this.name = name;
        this.dob = dob;
    }

    [Column("id", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    public int Id { get { return id; } set { id = value; } }

    [Column("name", DbType.AnsiString, 30)]
    public string Name { get { return name; } set { name = value; } }

    [Column("dob", DbType.DateTime)]
    public DateTime Dob { get { return dob; } set { dob = value; } }
}

//
// Same thing but using a struct.
//
[Table("person", "dbo")]
public struct Son
{
    [Column("id", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    public int Id;

    [Column("name", DbType.AnsiString, 30)]
    public string Name;

    [Column("dob", DbType.DateTime)]
    public DateTime Dob;
}

//
// Delegating with a struct.
//
[TableRef(typeof(IPerson))]
public struct Daughter : IPerson
{
    private int id;
    private string name;
    private DateTime dob;

    public int Id { get { return id; } set { id = value; } }
    public string Name { get { return name; } set { name = value; } }
    public DateTime Dob { get { return dob; } set { dob = value; } }
}

//
// Main.
//
public class Program
{
    public static void Main(string[] args)
    {
        string s = "Server=.;Database=test;Uid=sa;Pwd=";

        // We use a SqlConnection, but any IDbConnection should do the trick
        // as long as you are using the correct Dao implementation to
        // generate SQL statements.
        SqlConnection cn = new SqlConnection(s);

        // Here is the Data Access Object.
        Dao dao = new SqlServerDao(cn);

        // This would also work:
        // Dao dao = new SqlServerDao();
        // dao.Connection = cn;

        try
        {
            // The connection must be opened before using the Dao object.
            cn.Open();

            Mother mother = new Mother(0, "Jane", DateTime.Today);
            int x = dao.Insert(mother);
            Console.WriteLine("Records affected: " + x.ToString());
            Console.WriteLine("Mother ID: " + mother.Id.ToString());

            Father father = new Father(0, "John", DateTime.Today);
            x = dao.Insert(father);
            Console.WriteLine("Father ID: " + father.Id.ToString());

            // We can also force father to be treated as
            // another type by the Dao.
            // This is not limited to Insert, but the object and type
            // MUST be compatible.
            dao.Insert<IPerson>(father);

            // This will also work.
            dao.Insert(typeof(IPerson), father);

            // We now have 3 fathers. Let's get rid of the last one.
            // The 'father' variable has the last Father inserted because
            // its Id was set to the last inserted identity.
            x = dao.Delete(father);

            // Now we have 2 fathers. Let's get them from the database.
            IList<Father> fathers = dao.Select<Father>();
            Console.WriteLine(fathers.Count);

            // NOTICE: Dao.Select and Dao.Find methods instantiate objects
            // internally so you cannot use an interface type
            // as the type of objects to return. In other words,
            // the runtime must be able to create an instance of the given type
            // using reflection (Activator.CreateInstance method).
            // The safest approach you can take is to make
            // sure that every entity type has a default constructor
            // (it could be private).

            Son son = new Son();
            son.Name = "Jimmy";
            son.Dob = DateTime.Today;
            dao.Insert(son);

            // Daughter is a struct, so it cannot be null.
            // If a record with the given id is not found and the type is a struct,
            // then an empty struct of the given type is returned.
            // This, obviously, only works for the generic version
            // of the Find method. The other version returns an object,
            // so null will be returned.

            // The following is usually not a good idea,
            // but they are compatible by table definitions.
            Daughter daughter = dao.Find<Daughter>(son.Id);
            Console.WriteLine(daughter.Name); // should print "Jimmy"

            daughter.Name = "Mary";
            dao.Update(daughter);

            // Refresh the son.
            // Generics not used, so the return type is object,
            // could be null if not found.
            object obj = dao.Find(typeof(Son), son.Id);
            if(obj != null)
            {
                son = (Son) obj;
                Console.WriteLine(son.Name); // should print "Mary"
            }
        }
        catch(LightException e)
        {
            Console.WriteLine(e.Message);
        }
        catch(Exception e)
        {
            Console.WriteLine(e.Message);
        }
        finally
        {
            dao.Dispose();
            try { cn.Close(); } catch {}
        }
    }
}

Delegating table definition to another type was fairly easy, in my opinion. You simply apply TableRefAttribute to a type. This feature was geared towards being able to use patterns similar to the strategy pattern. You can define an interface or an abstract class with all required data elements. You can also have implementing classes delegate their table definition to this interface or abstract class, but have their business logic in methods differ. Here is some code that shows the use of MappingAttribute, which should help with inheritance. Assume that we are using the same connection and that the same dbo.person table exists in the database.

using System;
using System.Data;
using Light;

public abstract class AbstractPerson
{
    protected int personId;

    public int PersonId { get { return personId; } set { personId = value; } }

    public abstract string Name { get; set; }
    public abstract DateTime Dob { get; set; }
    public abstract void Work();
}

//
// Maps the inherited property "PersonId".
//
[Table("person", "dbo")]
[Mapping("PersonId", "id", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
public class Father : AbstractPerson
{
    private string name;
    private DateTime dob;

    public Father() {}

    [Column("name", DbType.AnsiString, 30)]
    public override string Name { get { return name; } set { name = value; } }

    [Column("dob", DbType.DateTime)]
    public override DateTime Dob { get { return dob; } set { dob = value; } }

    public override void Work()
    {
        // whatever he does at work...
    }
}

//
// Maps the inherited protected field "personId".
//
[Table("person", "dbo")]
[Mapping("personId", "id", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
public class Mother : AbstractPerson
{
    [Column("name", DbType.AnsiString, 30)]
    private string name;

    [Column("dob", DbType.DateTime)]
    private DateTime dob;

    public Mother() {}

    public override string Name { get { return name; } set { name = value; } }
    public override DateTime Dob { get { return dob; } set { dob = value; } }

    public override void Work()
    {
        // whatever she does at work...
    }
}

MappingAttribute allows you to map an inherited member to a column. It doesn't really have to be an inherited member; you can also use variables and properties defined in the same type. However, I like to see meta information along with the actual information, that is, attributes applied to class members. This makes it easier to change the attribute if you are changing class members, for example, the data type. Notice that Father uses the inherited property, while Mother uses the inherited field. Also, notice the case of the member name parameter in MappingAttribute. Father starts the string PersonId with a capital letter, which hints Light to search through properties first. If a property with such a name is not found, the fields will be searched. If a field with such a name is not found, an exception will be thrown. Similarly, Mother has personId starting with a lower case letter, so fields will be searched first. I guess the order in which members are searched does not give you a lot and is not a huge performance gain, but I always wanted to implement something that could "take a hint" and actually use it.

Light provides a way to query the database. This comes in handy if you don't want Light to load all objects of any given type and then filter them yourself. I don't think you ever want to do that. The Light.Query object allows you to specify a custom WHERE clause so that the operation is performed only on a subset of records.
This object can be used with the Dao.Select and Dao.Delete methods. When used with the Dao.Delete method, the WHERE clause of the Light.Query object will be used to limit the records that will be deleted. The concept is identical to using a WHERE clause in an SQL DELETE statement. Using the Light.Query object with the Dao.Select method allows you to specify the records that will be returned as objects of a given type. In addition, the Dao.Select method takes into account the ORDER BY clause (the Dao.Delete method ignores it), which can also be specified in the Light.Query object. Again, the concept is identical to using WHERE and ORDER BY in SQL SELECT statements. The Light.Query object is a very simple object and it does not parse the WHERE and ORDER BY statements you give it. This means two things. First, you must use the real names of table columns as they are defined in the database. You cannot use the name of a property of a class to query the database. Second, you must specify a valid SQL statement for both the WHERE and ORDER BY clauses. If you will be using a plain (not parameterized) WHERE clause, then it is also your responsibility to protect yourself from SQL injection attacks. I don't think this is a problem when using parameterized statements. Parameterized SQL statements are a recommended way of querying the database. They allow the database to cache the execution plan for later reuse. This means that the database does not have to parse your SQL statements each time they are executed, which definitely helps performance. The Light.Query object allows you to create a parameterized WHERE clause. To achieve this, you simply use a parameter syntax as you would when writing a stored procedure and then set the values of those parameters by name or order. The following example should make this clear (code not tested).

//
// We will use the Son struct defined previously in the article.
// Assume we have a number of records in the dbo.person
// table to which Son maps.
//
using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using Light;

IDbConnection cn = new SqlConnection(connectionString);
Dao dao = new SqlServerDao(cn);
cn.Open();

// This will return all Sons born in the last year
// sorted from youngest to oldest.
// We will use the chaining abilities of the Query and Parameter objects.
IList<Son> bornLastYear = dao.Select<Son>(
    new Query("dob BETWEEN @a AND @b", "dob DESC")
        .Add(
            new Parameter()
                .SetName("@a")
                .SetDbType(DbType.DateTime)
                .SetValue(DateTime.Today.AddYears(-1)),
            new Parameter()
                .SetName("@b")
                .SetDbType(DbType.DateTime)
                .SetValue(DateTime.Today)
        )
);

// This will return all Sons named John - non-parameterized version.
IList<Son> johnsNoParam = dao.Select<Son>(new Query("name='John'"));

// This will do the same thing, but using parameters.
IList<Son> johnsParam = dao.Select<Son>(
    new Query("name=@name", "dob ASC").Add(
        new Parameter("@name", DbType.AnsiString, 30, "John")
    ));

// This will return all Sons whose name starts with letter J.
Query query = new Query("name like @name").Add(
    new Parameter("@name", DbType.AnsiString, 30, "J%")
);
IList<Son> startsWithJ = dao.Select<Son>(query);

//
// We can use the same, previously defined, queries to delete records.
//

// This will delete all Sons whose name starts with letter J.
int affectedRecords = dao.Delete<Son>(query);

dao.Dispose();
cn.Close();
cn.Dispose();

The creation of Query and Parameter objects (in the first query) may look a bit awkward. Both Query and Parameter classes follow the builder pattern that allows for such code. Classes that implement the builder pattern contain methods that, after performing required actions, return a reference to the object on which the method was called. This allows you to chain method calls on the same object.
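The method-chaining style used by Query and Parameter is easy to reproduce in any language: each setter simply returns the object itself. Here is a minimal Python sketch of the same pattern (the names are illustrative, not Light's actual API):

```python
class Parameter:
    # Each setter returns self, which is what enables chaining.
    def set_name(self, name):
        self.name = name
        return self

    def set_value(self, value):
        self.value = value
        return self

class Query:
    def __init__(self, where):
        self.where = where
        self.params = []

    def add(self, *params):
        self.params.extend(params)
        return self

# Build a parameterized query in a single chained expression.
q = Query("name = @name").add(Parameter().set_name("@name").set_value("John"))
print(q.where, q.params[0].name, q.params[0].value)  # name = @name @name John
```

The trade-off of the builder style is purely ergonomic: the same object could be configured with plain property assignments, as the next paragraph notes for Query and Parameter.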
Query and Parameter classes also have regular properties that you can set in the well-known manner. Both approaches work equally well. I just thought it would be easier to use these classes with such methods, and the code would be more compact. You can omit the name of the table in TableAttribute and the name of a column in ColumnAttribute. Light will provide default names for tables and columns based on the class and field names to which the attributes are applied. The rules for figuring out the default name are very simple. In fact, there are no rules: the name of the class or field is used as-is if a name is not provided in the attribute. It is best to see an example:

[Table] // same as [Table("Person")]
// [Table(Schema="dbo")] if you need to specify a schema.
public class Person
{
    [Column(DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    // same as [Column("personId", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    private int personId;

    private string myName;

    [Column(DbType.AnsiString, 30)] // same as [Column("Name", DbType.AnsiString, 30)]
    public string Name { get { return myName; } set { myName = value; } }
}

The concept of triggers comes from the database. A database trigger is a piece of code that is executed when a certain action occurs on the table on which the trigger is defined. Light uses triggers in a similar fashion. Triggers are methods marked with the Light.TriggerAttribute; they have a void return type and take a single parameter of type Light.TriggerContext. TriggerAttribute allows you to specify when the method is going to be called by the Dao object. The same method can be marked to be called for more than one action. To do this, simply use the bitwise OR operator on the Light.Actions passed to TriggerAttribute. Trigger methods can be called before and/or after the insert, update and delete operations.
However, they can only be called after the select operation (denoted by Actions.AfterConstruct) because, before the select operation, there are simply no objects to call triggers on: they are being created in the Dao.Select or Dao.Find methods. So, the point here is that triggers are only called on existing objects. Hence another caveat: when calling Dao.Delete and passing it a Query object, no triggers will be called on objects representing the records to be deleted, simply because there are no objects for Light to work with. Internally, Light will not instantiate an instance just so it can call its triggers. If such behavior is required, you should first Dao.Select the objects that are to be deleted and then pass them to the Dao.Delete method. Here is some code demonstrating the use of triggers. The code has not been tested to compile or run. Assume we have the following tables in our SQL Server database:

create table parent
(
    parentid int not null identity(1,1) primary key,
    name varchar(20)
)
go

create table child
(
    childid int not null identity(1,1) primary key,
    parentid int not null foreign key references parent (parentid),
    name varchar(20)
)
go

Here are C# classes defining this fake parent/child relationship:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using Light;

//
// Here is our parent.
//
[Table("parent")]
public class Parent
{
    [Column("parentid", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    private int id = 0;

    [Column("name", DbType.AnsiString, 20)]
    private string name;

    private IList<Child> children = new List<Child>();

    public Parent() {}

    // No setter as it will be assigned by the database.
    public int ParentId
    {
        get { return id; }
    }

    public string ParentName
    {
        get { return name; }
        set { name = value; }
    }

    public int ChildCount
    {
        get { return children.Count; }
    }

    public Child GetChild(int index)
    {
        if(index >= 0 && index < children.Count)
            return children[index];
        return null;
    }

    public void AddChild(Child child)
    {
        child.Parent = this;
        children.Add(child);
    }

    public void RemoveChild(Child child)
    {
        if(children.Contains(child))
        {
            child.Parent = null;
            children.Remove(child);
        }
    }

    //
    // Triggers
    //

    [Trigger(Actions.BeforeInsert | Actions.BeforeUpdate)]
    private void BeforeInsUpd(TriggerContext context)
    {
        // We can do validation here!!!
        // Let's say that the name cannot be empty.
        if(string.IsNullOrEmpty(name))
        {
            // This will cause Dao to throw an exception
            // and will abort the current transaction.
            context.Fail("Parent's name cannot be empty.");
        }
    }

    [Trigger(Actions.AfterInsert | Actions.AfterUpdate)]
    private void AfterInsUpd(TriggerContext context)
    {
        // Let's save all the children. The database is ready
        // for it because now this parent's id is in there
        // and referential integrity will not break.
        Dao dao = context.Dao;
        if(context.TriggeringAction == Actions.AfterUpdate)
        {
            // There may have been children already saved
            // so we need to delete them first.
            dao.Delete<Child>(new Query("parentid=@id").Add(
                new Parameter().SetName("@id").SetDbType(DbType.Int32)
                    .SetValue(this.id)
            ));
        }
        // And now we can insert the children.
        dao.Insert<Child>(children);
    }

    [Trigger(Actions.AfterConstruct)]
    private void AfterConstruct(TriggerContext context)
    {
        // Let's load all the children.
        Dao dao = context.Dao;
        children = dao.Select<Child>(new Query("parentid=@id").Add(
            new Parameter().SetName("@id").SetDbType(DbType.Int32)
                .SetValue(this.id)
        ));
        foreach(Child child in children)
            child.Parent = this;
    }
}

[Table("child")]
public class Child
{
    [Column("childid", DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    private int id = 0;

    [Column("name", DbType.AnsiString, 20)]
    private string name;

    private Parent parent;

    public Child() {}

    public int ChildId
    {
        get { return id; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public Parent Parent
    {
        get { return parent; }
        set { parent = value; }
    }

    [Column("parentid", DbType.Int32)]
    private int ParentId
    {
        get
        {
            if(parent != null)
                return parent.ParentId;
            return 0;
        }
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        SqlConnection cn = new SqlConnection("Server=.; Database=test; Uid=sa; Pwd=");
        Dao dao = new SqlServerDao(cn);
        cn.Open();

        // Set up parent/child relationships.
        Parent jack = new Parent();
        jack.ParentName = "Parent Jack";

        Child bob = new Child();
        bob.Name = "Child Bob";

        Child mary = new Child();
        mary.Name = "Child Mary";

        jack.AddChild(bob);
        jack.AddChild(mary);

        // When we save the parent, its children will also be saved.
        dao.Insert<Parent>(jack);

        // This id was assigned by the database.
        int jacksId = jack.ParentId;

        // Let's now pull Jack from the database.
        Parent jack2 = dao.Find<Parent>(jacksId);

        // All Jack's children should be loaded by now.
        Console.WriteLine("Jack's children are:");
        for(int i = 0; i < jack2.ChildCount; i++)
            Console.WriteLine(jack2.GetChild(i).Name);

        dao.Dispose();
        cn.Close();
    }
}

Be careful not to create triggers that load objects in circles. For example, suppose we added a trigger to the Child class that loaded its parent object on AfterConstruct. This trigger would load the parent, which would start loading children, which in turn would start loading the parent again, and so on and so forth, until you run out of memory and your program crashes.
So, in a one-to-many relationship, or in cases where one object fully depends on another, triggers are very helpful. However, they will rarely be able to handle many-to-many relationships unless your code is disciplined enough to only access related objects from one side all the time. Of course, triggers don't solve all the issues of related objects, but in some cases they might help. Light allows you to call stored procedures to select objects. This is useful for calling procedures that perform searches based on multiple tables. Alternatively, you can create a view to deal with this, but in most cases it is easier to deal with a stored procedure. However, an even better use for it is to bypass an intermediate table in a many-to-many relationship defined in the database. An example should make this clear.

create table users
(
    userid int identity(1,1) primary key,
    username varchar(30)
)
go

create table roles
(
    roleid int identity(1,1) primary key,
    rolename varchar(30)
)
go

-- intermediate table: defines many-to-many relationship
create table userrole
(
    userid int foreign key references users(userid),
    roleid int foreign key references roles(roleid),
    constraint pk_userrole primary key(userid, roleid)
)
go

create procedure getroles(@userid int)
as
begin
    select roles.*
    from roles
    join userrole on roles.roleid = userrole.roleid
    where userrole.userid = @userid
end
go

using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using Light;

[Table("roles")]
public class Role
{
    [Column(DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    private int roleid;

    [Column(DbType.AnsiString, 30)]
    private string rolename;

    public int Id
    {
        get { return roleid; }
        set { roleid = value; }
    }

    public string Name
    {
        get { return rolename; }
        set { rolename = value; }
    }
}

[Table("users")]
public class User
{
    [Column(DbType.Int32, PrimaryKey=true, AutoIncrement=true)]
    private int userid;

    [Column(DbType.AnsiString, 30)]
    private string username;

    private IList<Role> roles =
new List<Role>(); public int Id { get { return userid; } set { userid = value; } } public string Name { get { return username; } set { username = value; } } public IList<Role> Roles { get { return roles; } } [Trigger(Actions.AfterConstruct)] private void T1(TriggerContext ctx) { // Notice that we are not using the UserRole objects // here to pull the list of Role objects. Dao dao = ctx.Dao; roles = dao.Call<Role>( new Procedure("getroles").Add( new Parameter("@userid", DbType.Int32, this.userid) ) ); } } [Table] public class UserRole { [Column(DbType.Int32, PrimaryKey=true)] private int userid; [Column(DbType.Int32, PrimaryKey=true)] private int roleid; public int UserId { get { return userid; } set { userid = value; } } public int RoleId { get { return roleid; } set { roleid = value; } } } public class Program { public static void Main(string[] args) { SqlConnection cn = new SqlConnection("Server=.; Database=test; Uid=sa; Pwd="); cn.Open(); Dao dao = new SqlServerDao(cn); // add new user User user1 = new User(); user1.Name = "john"; dao.Insert(user1); // add some roles for(int i = 0; i < 3; i++) { // create role Role role = new Role(); role.Name = "role " + (i+1).ToString(); dao.Insert(role); // associate with user1 UserRole userrole = new UserRole(); userrole.UserId = user1.Id; userrole.RoleId = role.Id; dao.Insert(userrole); } // let's select the only user from the database // it should have all roles in its Roles property User user2 = dao.Find<User>(user1.Id); Console.WriteLine("Roles of " + user2.Name + ":"); foreach(Role role in user2.Roles) { Console.WriteLine(role.Name); } dao.Dispose(); cn.Close(); } } Light is a wrapper around ADO.NET, so it is slower than ADO.NET by definition. On top of that, Light uses reflection to generate table models and create objects to be returned from the Dao.Select and Dao.Find methods. That is also slower than the creation of objects using the new operator. However, Light does attempt to compensate for these slowdowns. 
Light generates only parameterized SQL statements. Every command that runs is prepared in the database (IDbCommand.Prepare is called before a command is executed). This forces the database to generate an execution plan for the command and cache it. Later calls to the same type of command (INSERT, SELECT, etc.) with the same type of object should be able to reuse the previously created execution plan from the database, unless the database has removed it from its cache. Light also has a caching mechanism for generated table models, so it doesn't have to use reflection to search through the type of any given object every time. By default, it stores up to 50 table models, but this number is configurable (see Dao.CacheSize). Light uses the Least Recently Used algorithm to choose table models to be evicted from the cache when it becomes full. The demo project provided is not really a demo project. It is just a bunch of NUnit tests that I ran against an SQL Server 2005 database. So, if you want to run the demo project, you will need to reference (or re-reference) the NUnit DLL that is on your system. Also, you will need to compile the source code and reference it from the demo project. No binaries are provided in the downloads, only source code. You don't need Visual Studio to use these projects; you can use the freely available SharpDevelop IDE (which was used to develop Light) or the good old command line. Also included is an extension project by Jordan Marr. His code adds support for concurrency and introduces a useful business framework structure. It keeps track of object properties that were changed and only updates objects whose properties actually changed. This reduces the load on the database. The business framework also allows you to add validation rules to your objects. The code is fully commented, so you may find some more useful information there. I hope this was, is or will be useful to somebody in some way...
Many thanks to Jordan Marr for his contribution, feedback, ideas and the extension project. Update: mapping a private property to a column now works as expected.
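To make the Least-Recently-Used eviction described in the performance notes concrete, here is a minimal LRU cache sketch. It is written in C++ purely for illustration — Light itself is C#, and none of these names come from Light's actual implementation:

```cpp
#include <cassert>
#include <list>
#include <string>
#include <unordered_map>

// Minimal LRU cache keyed by type name, illustrating the eviction
// policy described above. Illustrative only; not Light's code.
class TableModelCache {
public:
    explicit TableModelCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns the cached model, or builds one via buildModel() on a miss.
    std::string get(const std::string& typeName) {
        auto it = index_.find(typeName);
        if (it != index_.end()) {
            // Cache hit: move the entry to the front (most recently used).
            order_.splice(order_.begin(), order_, it->second);
            return it->second->second;
        }
        // Cache miss: evict the least recently used entry if we're full.
        if (order_.size() == capacity_) {
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(typeName, buildModel(typeName));
        index_[typeName] = order_.begin();
        return order_.front().second;
    }

    bool contains(const std::string& typeName) const {
        return index_.count(typeName) != 0;
    }

private:
    // Stand-in for the expensive reflection step.
    static std::string buildModel(const std::string& typeName) {
        return "model-for-" + typeName;
    }

    std::size_t capacity_;
    // Front = most recently used, back = least recently used.
    std::list<std::pair<std::string, std::string>> order_;
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index_;
};
```

A lookup either refreshes an entry's position at the front of the list or, on a miss, builds the model and evicts the entry at the back.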
http://www.codeproject.com/KB/database/LightORMLibrary.aspx
Document .NET Libraries with XML Comments Any development environment spawns standards. In the case of .NET, one of these standards is the way that developers expect class library documentation to look. Ideally, your class library documentation should follow the format and conventions that the .NET Framework SDK established for .NET namespaces. Fortunately, this isn't all that hard to do, at least if you're working in C# (VB .NET will gain the same capabilities in Visual Studio 2005). A combination of the XML comments feature built into the language and the open source NDoc project makes it easy to generate high-quality SDK-style help. I'll show you the basics in this article. Adding XML comments The key to the whole process is the open source tool NDoc. NDoc is actually the end of a process that starts with Appendix B of the C# language specification: "C# provides a mechanism for programmers to document their code using a special comment syntax that contains XML text. In source code files, comments having a certain form can be used to direct a tool to produce XML from those comments and the source code elements, which they precede." These comments are identified by being set off with three slashes instead of the two that normally introduce a C# comment. The XML Documentation section of the C# Programmer's Reference documents the XML tags that are used by these special comments.
Here are the XML documentation tags that Visual Studio .NET can use for XML documentation:
- <c> - Embedded code font in other text
- <code> - Multiple lines of source code
- <example> - An example of using a member
- <exception> - Specifies an exception that can be thrown by the current member
- <include> - Includes an external documentation file
- <list> - The heading row of a list
- <para> - A paragraph of text
- <param> - A method parameter
- <paramref> - A reference to a parameter in other text
- <permission> - The security permission required by a member
- <remarks> - Supplemental information
- <returns> - Return value of a method
- <see> - An embedded cross-reference to another member
- <seealso> - A reference in a See Also list
- <summary> - A summary of the object
- <value> - The value of a property
This list isn't fixed by the C# specification; different tools are free to make use of other tags (and in fact NDoc understands a few tags that are not included on the standard list). As an example, here's some sample XML documentation embedded directly within a C# library:

using System;

namespace XMLDoc
{
    /// <summary>
    /// This class contains static utility functions for use throughout the company.
    /// </summary>
    /// <remarks>
    /// <para>As we develop general-purpose utility functions, we will add them to this class.</para>
    /// <para>The current utility functions include:</para>
    /// <list type="bullet">
    /// <item><see cref="FixQuotes" /> to fix up SQL statements.</item>
    /// <item><see cref="ReplaceNull" /> to convert nulls to specified replacement values.</item>
    /// </list>
    /// </remarks>
    public class Utility
    {
        /// <summary>
        /// Static constructor
        /// </summary>
        static Utility()
        {
            // Nothing to do here
        }

        /// <summary>
        /// Fix up a SQL statement by escaping single quotes
        /// </summary>
        /// <param name="SqlStatement" type="string">
        /// <para>
        /// A SQL statement, possibly with unescaped single quotes.
        /// </para>
        /// </param>
        /// <returns>
        /// Returns the SQL statement with any single quotes doubled.
        /// </returns>
        public static string FixQuotes(string SqlStatement)
        {
            return SqlStatement.Replace("'", "''");
        }

        // ... more utility functions ...
    }
}

You get the idea, I'm sure. The XML comments contain all of the information that you'd like to see in a help file, and the tags provide all the information that's needed for the right tool to structure that file. From Comments to Documentation Embedded in your code, these comments don't do a lot of good for your customers. But Visual Studio .NET can collect these comments into an external XML file. To enable this collection, right-click on the project in Solution Explorer and select Properties. Then select the Build page and enter a name for the XML documentation file. By default, this file will be placed in the same folder as the compiled application. After you've built the XML comments file for your application, NDoc can do its work. Figure 1 shows the NDoc user interface. You can select one or more assemblies to document, and tell NDoc where to find the XML comments file for each assembly. NDoc will combine the information in the XML comments file with information determined by examining the assembly itself, and build MSDN-style documentation for you. Figure 2 shows a part of the resulting help file. This little example shows both the ease of use and simplicity of NDoc, but there's more power lurking there if you need it. NDoc has implemented a concept of pluggable documenters, so you can use the same XML comments to get out LaTeX markup, HTML Help, and other formats in addition to the MSDN standard. And, of course, it's open-source so you could even add more functionality if you need it. With these simple tools at your disposal, there's no reason to put out a .NET class library for consumption by other developers without it having professional-looking documentation.
These days, the successful developers are the ones who are willing to do more than just write code. Coming up with documentation is often an essential part of the job as well.
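The pipeline the article describes — a tool scanning source files for specially marked comments ("///" in C#) and collecting their contents into a documentation stream — can be modeled in a few lines. This sketch is purely illustrative; the real pipeline is the C# compiler's XML-documentation output followed by NDoc, and none of these names come from either tool:

```cpp
#include <sstream>
#include <string>

// Collect the bodies of all "///" comment lines from a source string,
// in order, one per line. A toy model of doc-comment extraction.
std::string extractDocComments(const std::string& source) {
    std::istringstream in(source);
    std::ostringstream doc;
    std::string line;
    while (std::getline(in, line)) {
        // Find the first non-whitespace character on the line.
        std::size_t pos = line.find_first_not_of(" \t");
        if (pos != std::string::npos && line.compare(pos, 3, "///") == 0) {
            // Keep what follows the marker, trimming one leading space.
            std::string body = line.substr(pos + 3);
            if (!body.empty() && body[0] == ' ')
                body.erase(0, 1);
            doc << body << '\n';
        }
    }
    return doc.str();
}
```

Everything that is not a doc comment is ignored, which is why the markers must be unambiguous — the same reason C# uses three slashes rather than two.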
https://www.developer.com/net/article.php/3377051/Document-NET-Libraries-with-XML-Comments.htm
I don't get an error, but an empty list. Hello, I have to remove stop words from text files. First I load the text with the stop words and put them in an ArrayList; after that I take the file I want to remove the stop words from and put every word in an... Hello, I want to say thanks for the help in the past. I have to make tabs, and I am searching for a project I have to make in Android: a crawler for tweets or posts from Twitter or Facebook, and I... Hello, I have homework to make a chat client for Facebook, Gmail, MSN, Yahoo and others. I can't connect to chat and I can't understand why. If I run it without my username and password it runs, but if I... Hello, I have to make a spinner which has 3 choices and it is not showing. public void onClick(View v) { String[] srv = {"facebook", "msn", "gmail"}; service = ... Here is the code. I can't make the object shape because I have an abstract class. I don't know how to make it work; it is the method init I have made. public class Main extends JPanel { static... Everything is working, but the problem is I can't take and draw the circle and the square from the ArrayList. Hello, I am new to the forum and to Java coding also. I have to make an ArrayList which takes circles and squares. The user gives the point x, y and colour for the shape, and an additional cycle gets the...
http://www.javaprogrammingforums.com/search.php?s=8a203cb10184b5ba07b3ab3941c1343d&searchid=1929513
#include <POSIX_Asynch_IO.h>

Inheritance diagram for ACE_POSIX_Asynch_Read_Dgram_Result:

[protected] Constructor is protected since creation is limited to the ACE_Asynch_Read_Dgram factory.
[protected, virtual] Destructor.
[virtual] The number of bytes which were requested at the start of the asynchronous read. Implements ACE_Asynch_Read_Dgram_Result_Impl.
[virtual] Proactor will call this method when the read completes. Implements ACE_Asynch_Result_Impl.
The flags used in the read.
I/O handle used for reading.
Message block which contains the read data.
The address of where the packet came from.
[friend] Factory classes will have special permissions.
[friend] Proactor class has special permission.
Bytes requested when the asynchronous read was initiated.
Message block for reading the data into.
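The pattern this reference page documents — a factory-created result object that records how many bytes were requested, and which the proactor invokes when the operation completes — can be modeled compactly. This is an illustrative sketch only; it is not the actual ACE class, and none of these signatures come from ACE:

```cpp
#include <cstddef>
#include <functional>

// Toy model of an asynchronous-read completion result: the factory
// records the requested byte count, and the "proactor" calls
// complete() when the operation finishes, which fires the handler.
class AsynchReadResult {
public:
    using Handler = std::function<void(const AsynchReadResult&)>;

    AsynchReadResult(std::size_t bytesToRead, Handler handler)
        : bytes_to_read_(bytesToRead), bytes_transferred_(0),
          handler_(std::move(handler)) {}

    // Bytes requested when the asynchronous read was initiated.
    std::size_t bytes_to_read() const { return bytes_to_read_; }

    // Bytes actually delivered by the completed operation.
    std::size_t bytes_transferred() const { return bytes_transferred_; }

    // Called by the proactor stand-in when the read completes.
    void complete(std::size_t bytesTransferred) {
        bytes_transferred_ = bytesTransferred;
        handler_(*this);
    }

private:
    std::size_t bytes_to_read_;
    std::size_t bytes_transferred_;
    Handler handler_;
};
```

Restricting construction to a factory (as ACE does with friend classes) keeps the bookkeeping consistent: only the initiating side knows the requested byte count.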
http://www.theaceorb.com/1.4a/doxygen/ace/classACE__POSIX__Asynch__Read__Dgram__Result.html
Sadly, there is a ton of badly written code out in the wild. Hardware-related code seems to suffer more in this regard. I imagine many developers in this segment are unwilling to invest time in quality, or are just inexperienced. Even if you are dedicated to reliable and high-quality code, you will probably run into the situation where you have to use a library with really low standards. Strategies There are a number of strategies to deal with this code: - Rewrite the library. - Refactor the code. - Wrap the library. - Wrap the interface. Rewrite the Library This is actually the best you can do, and it is also the most time-consuming approach. Yet, writing the code from scratch will give you several benefits: - You will perfectly understand how the original library and the accessed hardware work. Actually, you have to understand them, otherwise you are unable to rewrite the code. - The new code will be modern, fast, reliable and in your coding style. - If you open-source the new code, you will give people an alternative and a better example of how to implement something the right way. - You can also selectively remove unwanted/bloated parts from the original code, which can reduce the overall binary size of the final project. - It will also give you the option to implement proper error handling in the new code. If you have the time and motivation to rewrite the code, do it! Refactor the Code Changing the code without changing the functionality is called code refactoring. This is a good strategy, a compromise between rewriting and wrapping. Usually you will just go, line by line, through the original code, modernise it and clean it up. - First you replace macros (constants, header guards) with modern C++ equivalents. - Next you reformat the whole code and make it readable. - You add missing API documentation. - Now you rename methods and variables in the code in a proper, descriptive style.
- You split large methods into smaller ones and make everything modular and nice. - etc. At the end, you will have a library which is modern, readable and therefore easier to handle and use. Wrap the Library If you cannot bear how something looks, just wrap it nicely. This is a very effective strategy for code. Because the compiler will resolve the wrapper code, there is no disadvantage to this approach. Often there are macro definitions or other declarations (functions, operators, etc.) in the global namespace which create trouble in your code. Your wrapper solves all these issues and provides a better API to your code. The illustration above shows the basic principle. Only the wrapper code includes the API of the bad code and has to deal with its problems. The rest of your application will access the wrapper, and therefore use proper code. In the hypothetical example above, you can see a simple wrapper for a single function. The header LevelMeter.h is clean, proper C++ code. In the implementation of your wrapper, you deal with the dark side and all the quirks of the badly written library. This will not prevent linker problems if there are variables with the same name. Because you properly place your code in namespaces and avoid global variables, you are on the safe side. 😉 Wrap the Interface As a quick fix it is often helpful just to wrap the interface, i.e. the include file. This is a good solution for complex headers causing problems. Instead of including the original header, you just include the header wrapper. Your wrapper will clean up (undef) macros after including the bad header file. Wrapping the interface is only helpful for problems with macro definitions. Strange operators, function or class definitions will still influence your code. Conclusion Despair is no strategy. 😉 Take one of the actions above to solve your problems. Never let badly written code influence your own sense of quality and style while writing code. Be a role model!
Always show what well-written code should look like. If you have questions, miss some information or just have any feedback, feel free to add a comment below. Have fun!
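To make the wrapper strategies concrete, here is a single-file sketch combining the library-wrapper and interface-wrapper ideas. The "bad" header is simulated inline, and every name here (METER_MAX, meter_read_raw, lr::LevelMeter) is invented for illustration:

```cpp
// --- bad_meter.h (simulated) ---------------------------------------
// A hypothetical badly written C library header: global names and
// macros polluting everything that includes it.
#define METER_MAX 100
#define min(a, b) ((a) < (b) ? (a) : (b))   // classic macro land-mine

inline int meter_read_raw() { return 250; } // pretend hardware access

// --- LevelMeter.h / LevelMeter.cpp (the wrapper) -------------------
// Only this wrapper ever includes the bad header. The rest of the
// application sees a clean, namespaced C++ interface and is never
// exposed to the macros above.
namespace lr {

class LevelMeter {
public:
    // Clamp the raw hardware reading into the documented range.
    static int level() {
        int raw = meter_read_raw();
        return min(raw, METER_MAX);
    }
};

} // namespace lr

// Clean up the macro pollution so it cannot leak into other code
// (the "wrap the interface" part of the strategy).
#undef METER_MAX
#undef min
```

Client code calls `lr::LevelMeter::level()` and never sees `min` or `METER_MAX`; the `#undef` lines at the end of the wrapper are exactly the cleanup step described above.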
https://luckyresistor.me/2018/08/31/how-to-deal-with-badly-written-code/?shared=email&msg=fail
Table of Contents Unit Tests This page refers to VuFind 2.x and later; for notes on VuFind 1.x unit tests, see this page. 1.) PHPUnit can be installed using PEAR (see details on the project's homepage). Once installed, you will have a phpunit command line tool that you can use to run tests. 2.) Create a fresh copy of VuFind (i.e. git clone the repository to a fresh directory) 3.) Create a phing.sh script in the root of your fresh VuFind install to automatically pass important parameters to Phing (see build.xml for other parameters that may be set here with the -D parameter): #!/bin/sh phing -Dmysqlrootpass=mypasswd $* Running tests after setup Follow these steps to run tests. Keep in mind that testing will create a test Solr index listening on port 8080. (the phpunit command will run tests and generate report data for use by continuous integration; phpunitfast will run tests more quickly by skipping reports) 3.) ./phing.sh shutdown This command will turn off the VuFind test instance created by step 1. The Faster Version If you don't want to run integration tests using a running VuFind, you can simply in order to make it work when running tests as a different user - mysqlrootpass - the MySQL root password (needed for building VuFind's database) - vufindurl - the URL where VuFind will be accessed (defaults to if omitted) Here's an example script from an Ubuntu environment: #!= $* Prior to running your tests, you should download the Selenium server .jar file from here and then run it with "java -jar [downloaded file]". If you want to use a browser other than Firefox with Selenium, use the -Dselenium_browser=[name] setting to switch. For example, you can use -Dselenium_browser=chrome (though this requires you to have ChromeDriver installed on your system).
When running tests in a windowed environment, it's a good idea to open a couple of Terminal windows – that way you can run the Selenium server in one window (to watch it receive commands) while running your tests themselves in another (to see failures, etc.). Note: Prior to VuFind 3.1, the VuFind team used Zombie.js as their primary Mink driver; however, due to limitations with Zombie.js, Selenium is now the only supported driver for testing. library (module/VuFind/src/VuFind). Tests should be placed in a directory that corresponds with the component being tested. For example, the unit tests for the \VuFind\Date\Converter class are found in “module/VuFind/tests/unit-tests/Date/ConverterTest.php”. Support Classes The VuFindTest namespace (code found in module/VuFind/src/VuFindTest) contains several classes which may be useful in writing new tests – a fake record driver in VuFindTest\RecordDriver and some base test classes containing convenience methods in VuFindTest\Unit. When working on integration tests, be aware that the full framework configuration is not loaded. Instead, VuFindTest\Unit\TestCase sets up a fake service manager to provide key services. If you add a dependency on a new service, it will not be made available automatically – it will need to be added to the base test class. This is not ideal and the integration test strategy may be improved in a future version of VuFind. Mink Browser Automation If you want to write browser automation tests, you should extend \VuFindTest\Unit\MinkTestCase. This class will automatically set up the Mink/Zombie.js.
https://vufind.org/wiki/development:testing:unit_tests
Generates NED code from a NED object tree. More... #include <ned1generator.h> Generates NED code from a NED object tree. Assumes object tree has already passed all validation stages (DTD, syntax, semantic). Constructor. Destructor. Generates NED code and returns it as a string. Generates NED code. Takes an output stream where the generated NED code will be written, the object tree and the base indentation. Invoke generateNedItem() on all children. Invoke generateNedItem() on all children of the given tagcode. Invoke generateNedItem() on children of the given tagcodes (NED_NULL-terminated array). Dispatch to various doXXX() methods according to node type. Sets the indent size in the generated NED code. Default is 4 spaces.
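The dispatch-and-recurse structure described above — a generator that walks an object tree, routes each node to a type-specific doXXX() method, and carries the base indentation down through the children — can be sketched generically. This toy generator is purely illustrative; it is not the real NED1Generator, and all names are invented:

```cpp
#include <sstream>
#include <string>
#include <vector>

// A minimal node tree and a generator that dispatches per node type.
struct Node {
    enum Type { MODULE, PARAM } type;
    std::string name;
    std::vector<Node> children;
};

class Generator {
public:
    std::string generate(const Node& root) {
        std::ostringstream out;
        generateItem(out, root, 0);
        return out.str();
    }

private:
    // Dispatch according to node type (the generateNedItem() idea).
    void generateItem(std::ostringstream& out, const Node& n, int indent) {
        switch (n.type) {
            case Node::MODULE: doModule(out, n, indent); break;
            case Node::PARAM:  doParam(out, n, indent);  break;
        }
    }

    // Invoke generateItem() on all children (the "invoke on all
    // children" methods from the reference above).
    void generateChildren(std::ostringstream& out, const Node& n, int indent) {
        for (const Node& c : n.children)
            generateItem(out, c, indent);
    }

    void doModule(std::ostringstream& out, const Node& n, int indent) {
        out << pad(indent) << "module " << n.name << "\n";
        generateChildren(out, n, indent + 1);
        out << pad(indent) << "endmodule\n";
    }

    void doParam(std::ostringstream& out, const Node& n, int indent) {
        out << pad(indent) << n.name << ";\n";
    }

    static std::string pad(int indent) {
        return std::string(indent * 4, ' '); // indent size; 4 by default
    }
};
```

The indent size lives in one place (`pad`), mirroring the configurable indentation the real generator exposes.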
https://omnetpp.org/doc/omnetpp/nedxml-api/classNED1Generator.html
#include <stdlib.h> #include <grass/gis.h> Go to the source code of this file. Definition at line 40 of file put_window.c. References G__write_Cell_head3(), and G_fopen_new(). Referenced by G__make_location(), G__make_mapset(), G_put_window(), main(), and make_location(). write the database region Writes the database region file (WIND) in the user's current mapset from region. Returns 1 if the region is written ok. Returns -1 if not (no diagnostic message is printed). Warning. Since this routine actually changes the database region, it should only be called by modules which the user knows will change the region. It is probably fair to say that under GRASS 3.0 only the g.region, and d.zoom modules should call this routine. Definition at line 32 of file put_window.c. References G__put_window(), and getenv(). Referenced by make_mapset().
http://grass.osgeo.org/programming6/put__window_8c.html
This article describes a way to write add-ins such that a single binary can be hosted across multiple versions of DevStudio, Visual Studio, and Office. It uses C++ and ATL, but the principles should carry over to other languages and other frameworks. Maintaining a separate add-in binary for each host irritated me enough that I worked out a scheme for writing add-ins in such a way that the same DLL can be loaded into DevStudio 6, Visual Studio 2003, 2005, and 2008, and even Office 2003, and hosted as an add-in for each. Since I suspect I'm not the only one to go through this, I decided to write up what I'd done. The core idea is very simple; the bulk of the effort involved working out a lot of implementation details. For the rest of this article, I'll be referring to the sample add-in I wrote to illustrate my approach, SampleCAI (see download). Before digging into the details, however, let me lay out my general scheme. I'm going to build an In-Process COM Server (that is, a *.dll) that exports a single component that implements all the interfaces required by our target hosts. This component will present itself to each host as an add-in in terms they recognize. For example, when DevStudio instantiates our component, it will ask for the interface IDSAddIn. As long as we implement IDSAddIn, DevStudio will treat us as a DevStudio-compliant add-in. We could implement a hundred other interfaces, present a UI, service HTTP requests, whatever -- DevStudio will neither know nor care. Likewise, when Visual Studio 2003, 2005, 2008 or Office instantiate our component, they'll ask our component for the IDTEExtensibility2 and IDTECommandTarget interfaces. Again, as long as we implement these two interfaces, the host application will treat it like an add-in, no matter what else we can do. Put simply, if you whip up a COM component that implements these three interfaces, all our hosts will happily load that component as an add-in that conforms to their respective models.
Put simply, if you whip up a COM component that implements these three interfaces, all our hosts will happily load that component as an add-in that conforms to their respective models: As an aside, there are name clashes between DevStudio 6 & Visual Studio interfaces (e.g. ITextDocument). I dealt with that by #include'ing the DevStudio 6 header files (out of ' objmodel') and left those identifiers in the global namespace. I then #import'ed only export a single component due to the way DevStudio discovers new add-ins. When you point the DevStudio IDE at a DLL & ask it to load it as an add-in, DevStudio appears to call LoadLibrary, then RegisterClassObjects on that DLL. It seems to spy on the Registry so as fails, DevStudio will notify the user and refuse to load the DLL as an add-in. Consequently, if we implemented, say, two COM objects, one for DevStudio & one for Visual Studio, DevStudio would find that its QI fails for the second component, decide the DLL wasn't a valid add-in,); we have no IDL for it. Now, let's m_pDSAddIn; /// Reference on our aggregated instance of CDTEAddIn CComPtrm_pDSAddIn; /// Reference on our aggregated instance of CDTEAddIn CComPtr m_pDTEAddIn; };m_pDTEAddIn; }; As you can see, CAddIn is a plain-jane ATL class implementing a COM component. The first point of interest is the interface map. As explained above, class CAddIn exports IDSAddIn. However,;, we'll delegate m_pDSAddIn for it. m_pDTEAddIn will hold a reference on a CoDTEAddIn. This is another aggregate analagous to CoDSAddin, but supporting the interfaces needed for Visual Studio & we do if it gets a QI for IDispatch? This is one of the many reasons I don't like duals, but that's another article! In this case, it turns out that the IDE expects us to return IDTEExtensibilty2. Fortunately, IDSAddIn is custom, so there's no conflict there. The next point of interest are the public methods Configure and SayHello. The sample can only do two things: configure itself and say hello. 
This functionality resides in CAddIn. The idea is to concentrate the add-in's real functionality in one place, and have the host-specific aggregates delegate to it. I've done a few non-standard things with respect to hosting within DevStudio. These aren't, strictly speaking, necessary in terms of loading the AddIn into multiple hosts, they just make the add-in a little nicer. When you point the DevStudio IDE at a DLL and ask it to load it as an add-in, by the time the add-in is loaded the vfFirstTime parameter to OnConnect won't be set to VARIANT_TRUE. This is a problem because that's typically how we figure out that our add-in is being loaded for the first time, and hence when to carry out one-time setup tasks, like creating toolbars. Toolbar creation is complicated because: 1) we can no longer rely on vfFirstTime to detect the first load, and 2) calling AddCommandBarButton when vfFirstTime is false will fail. I've solved problem #1 by just writing down a boolean in the Registry. I've solved problem #2 by posting a message to a hidden message window (it turns out that AddCommandBarButton will succeed if it's called later, once loading has finished). Using the standard AddIn APIs, your new toolbar will be named Toolbar<n>, where <n> is the number of un-named toolbars (which is irritating and ugly). The Sample AddIn instead hooks the toolbar creation and changes the window name to something a little nicer. Note: the new name must be no longer than the intended one (usually eight characters). With that, let's look at CDSAddIn, the C++ class that incarnates the DevStudio AddIn:

class ATL_NO_VTABLE CDSAddIn :
    public CComObjectRootEx<CComSingleThreadModel>,
    public CComCoClass<CDSAddIn>,
    public IDSAddIn
{
    ...
};

In IDSAddIn::OnConnection, CDSAddIn registers our commands, and sinks any DevStudio events it's interested in. In IDSAddIn::OnDisconnection, it severs that connection. Notes: CoDSAddIn is not directly creatable; it's not even registered, and in any event creation will fail without an aggregator. CDSAddIn maintains a (non-owning) back-pointer to its parent CAddIn, so it can forward command invocations to it. As described above, our add-in will provide implementations of IDTEExtensibility2 and IDTECommandTarget through another COM aggregate, CoDTEAddIn.
This coclass is incarnated by the C++ class CDTEAddIn; here are its interesting members:

    CComPtr<EnvDTE::_DTE> m_pApp;
    /// Reference to our host's Application object
    CComPtr<Excel::_Application> m_pExcel;
    /// Which host are we loaded into
    Host m_nHost;
    /// Non-owning reference to our parent CAddIn instance
    CAddIn *m_pParent;
    ...
}; // End CDTEAddIn.

The first thing that should jump out at you is that the class knows the host into which it's been loaded. While Visual Studio *and* Office 2003 use this add-in programming model, hosting applications themselves offer different interfaces to *us*. We need to take this into account when requesting services from our host. In IDTEExtensibility2::OnConnection, we stash the AddIn object representing us:

    m_pAddIn = com_cast<EnvDTE::AddIn>(pAddInInst);
    ...

After validating our parameters, the first real work we do is wrapped up in the call to GuessHostType; here is where we figure out what sort of environment we've been loaded into:

    // Is our host Visual Studio 2005 (or 2008)?
    EnvDTE80::DTE2 *pDTE2Raw = NULL;
    if (SUCCEEDED(pApp->QueryInterface(EnvDTE80::IID_DTE2,
                                       (void**)&pDTE2Raw)))
    {
        pDTE2Raw->Release();
        return Host_VS2005;
    }
    // Ok-- maybe it's Visual Studio 2003...
    ...

Note that we make no distinction between Visual Studio 2005 and 2008. It turns out that Visual Studio 2008 implements interface EnvDTE80::IID_DTE2, and implements it in a manner close enough to Visual Studio 2005 that, for our purposes, we don't need to distinguish between them. The goal is to fill in m_nHost with a member of the Host enumeration so that the rest of the logic "knows" how to behave; the next thing OnConnection does, for instance, depends on the host we've detected. Notes:

- CoDTEAddIn is *not* directly creatable; it's not even registered, and in any event creation will fail without an aggregator.
- Like CDSAddIn, CDTEAddIn maintains a (non-owning) back-pointer to its parent CAddIn, for the same reasons.

These are the broad strokes; as I mentioned at the start, most of the work was in the details. I've attached a fully functional sample add-in that will load into DevStudio, Visual Studio 2003, Visual Studio 2005, Visual Studio 2008, and Excel 2003. It's a Visual Studio 2005 solution that contains the add-in itself, as well as its associated satellite DLL.
To install it, just build either the Debug or Release configuration; there's a post-build step that will automatically register the DLL appropriately. There's certainly more work that could be done; see Appendix A. Enjoy-- questions, feedback, and suggestions are welcome. The primary COM component, CoAddIn, implements different hosting models in terms of aggregated components. Today, both those components are instantiated in FinalConstruct; it would be nice to move to some sort of cached scheme to avoid instantiating an instance of, say, CoDTEAddIn when we're loaded into DevStudio... Visual Studio 2003 and 2005 allow their add-ins to add pages to the dialogs they display in response to Tools | Options. You can tell Visual Studio about your page, or pages, by adding some additional Registry entries (take a look at vs2003.rgs or vs2005.rgs in the sample, or see here). I had thought it would be nice to add a new property page to DevStudio 6 and Excel 2003, but I wasn't able to figure out how. My scheme was to install a CBT hook, and catch the creation of the Tools | Options Property Sheet. There, I'd post a message back to a private, message-only window that would create *my* Property Page and send a PSM_ADDPAGE message to the Property Sheet, along with my new page. For whatever reason, I got that to work in a little test app, but not in either Dev Studio 6 or in Excel 2003. In both cases, the Tools | Options Property Sheet does not have a Windows Class of "Dialog" (like this one does), so perhaps these apps have some kind of non-standard implementation. If anyone has any thoughts on this, or more success than I did, I'd love to hear about it. It would also be cool to have the sheets that Visual Studio displays set their selection to our page when Configure is invoked. Currently, they open to the last page viewed. 
I've got some thoughts in terms of again installing a CBT hook to catch the sheet's creation & sending a message to its child tree control, but I haven't done anything on it. Jeff Paquette tells me he's used this successfully in his VisEmacs add-in, however. I don't have a copy of Visual Studio 2002, so I couldn't test that. I implemented support for Excel 2003, but that's it. It would be cool to build out support for the whole suite. A project template for generating code for a common add-in would be nice. Getting the commands added and the command bars set up was probably the most irritating part of writing this sample. In wading through this mess, I relied heavily on the article "HOWTO: Adding buttons, commandbars and toolbars to Visual Studio .NET from an add-in", by Carlos J. Quintero [2]. Carlos describes two distinct flavors of Visual Studio Command Bar: permanent & temporary.

Permanent Command Bars:
- Are created when the OnConnection method receives the value ext_ConnectMode.ext_cm_UISetup (which happens only once after an add-in is installed on a machine)
- Are created with the DTE.Commands.AddCommandBar() function
- Are removed with the DTE.Commands.RemoveCommandBar() function

Temporary Command Bars:
- Are created with the DTE.CommandBars.Add() or CommandBar.Controls.Add() functions (depending on the type of commandbar: Toolbar or CommandBarPopup)
- Are removed with the CommandBar.Delete() method

According to Carlos, the fact that Permanent Command Bars remain even when the user unloads the AddIn "will be confusing for many users" & consequently, "most add-ins don't use this approach". He lays out the following approach:

If ext_cm_AfterStartup or ext_cm_Startup
    Check for the command's existence through Commands::Item
    If not there, create it via Commands::AddNamedCommand for both 2003 & 2005
    Create a new (temporary) command bar by calling pTempCmdBar = ICommandBars::Add() (both 2003 & 2005!)
    Add a button: pTempCmdBar->AddControl
    pTempCmdBar->Visible = true;
Then call pTempCmdBar->Delete() in OnDisconnect

Note that he just ignores ext_cm_UISetup entirely.
Now, to me, temporary toolbars seem fine, except for two problems: the IDE won't respect the user's decision to turn the toolbar off (it gets re-created every session), and the commands stick around after you un-register the AddIn; that's not so bad... the IDE detects this and removes the commands the next time they're invoked. Permanent toolbars respect the user's decision to turn them off, but if you un-register the AddIn and delete the commands, the damn thing is *still* there and you can't delete it! I still haven't settled on the "right" solution. The sample AddIn can use a few different approaches, depending on the setting of the #define SAMPLECAI_COMMAND_BAR_STYLE. It can take one of three values:

- SAMPLECAI_COMMAND_BAR_TEMPORARY: Uses temporary command bars
- SAMPLECAI_COMMAND_BAR_PERMANENT: Uses permanent command bars
- SAMPLECAI_COMMAND_BAR_SEMIPERMANENT: Uses permanent command bars, but deletes the Command Bar programmatically on un-install

During the course of developing your AddIn, you'll likely have times where you want to just re-set all the UI changes your AddIn's made to Visual Studio. For Visual Studio 2005, you can run devenv /resetaddin <AddInNamespace.Connect>. Unfortunately, Visual Studio 2003 only offers devenv.exe /setup, which will re-set *all* your UI customizations (including your keybindings!). Since that's a bit draconian, I whipped up this little VBS script:

Dim objDTE
Dim objCommand
Dim objTb

On Error Resume Next

Set objDTE = CreateObject("VisualStudio.DTE.7.1")
If objDTE Is Nothing Then MsgBox "Couldn't find VS 2003"

Set objDTE = CreateObject("VisualStudio.DTE.8.0")
If objDTE Is Nothing Then MsgBox "Couldn't find VS 2005"
http://www.codeproject.com/KB/macros/samplecai.aspx
Program Arcade Games With Python And Pygame
Chapter 13: Introduction to Sprites

Now this specialized hardware support is no longer needed, but we still use the term "sprite."

13.1 Basic Sprites and Collisions

Let's step through an example program that uses sprites. This example shows how to create a screen of black blocks, and collect them using a red block controlled by the mouse as shown in Figure 13.1. The program keeps "score" on how many blocks have been collected. The code for this example may be found at:
ProgramArcadeGames.com/python_examples/f.php?file=sprite_collect_blocks.py

The first few lines of our program start off like other games we've done:

import pygame
import random

# Define some colors
BLACK = ( 0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)

The pygame library is imported for sprite support on line 1. The random library is imported for the random placement of blocks on line 2. The definition of colors is standard in lines 5-7; there is nothing new in this example yet.

class Block(pygame.sprite.Sprite):
    """ This class represents the ball.
    It derives from the "Sprite" class in Pygame. """

Line 9 starts the definition of the Block class. Note that on line 9 this class is a child class of the Sprite class. The pygame.sprite. specifies the library and package, which will be discussed later in Chapter 14. All the default functionality of the Sprite class will now be a part of the Block class.

    def __init__(self, color, width, height):
        """ Constructor. Pass in the color of the block,
        and its x and y position. """

        # Call the parent class (Sprite) constructor
        super().__init__()

The constructor for the Block class on line 15 takes in a parameter for self just like any other constructor. It also takes in parameters that define the object's color, height, and width. It is important to call the parent class constructor in Sprite to allow sprites to initialize. This is done on line 20.

        # Create an image of the block, and fill it with a color.
        # This could also be an image loaded from the disk.
        self.image = pygame.Surface([width, height])
        self.image.fill(color)

Lines 24 and 25 create the image that will eventually appear on the screen. Line 24 creates a blank image. Line 25 fills it with black. If the program needs something other than a black square, these are the lines of code to modify. For example, look at the code below:

    def __init__(self, color, width, height):
        """ Ellipse Constructor. Pass in the color of the ellipse,
        and its size """

        # Call the parent class (Sprite) constructor
        super().__init__()

        # Set the background color and set it to be transparent
        self.image = pygame.Surface([width, height])
        self.image.fill(WHITE)
        self.image.set_colorkey(WHITE)

        # Draw the ellipse
        pygame.draw.ellipse(self.image, color, [0, 0, width, height])

If the code above was substituted, then everything would be in the form of ellipses. Line 29 draws the ellipse and line 26 makes white a transparent color so the background shows up. This is the same concept used in Chapter 11 for making the white background of an image transparent.

    def __init__(self):
        """ Graphic Sprite Constructor. """

        # Call the parent class (Sprite) constructor
        super().__init__()

        # Load the image
        self.image = pygame.image.load("player.png").convert()

        # Set our transparent color
        self.image.set_colorkey(WHITE)

If instead a bit-mapped graphic is desired, substituting the lines of code above will load a graphic (line 22) and set white to the transparent background color (line 25). In this case, the dimensions of the sprite will automatically be set to the graphic dimensions, and it would no longer be necessary to pass them in. See how line 15 no longer has those parameters. There is one more important line that we need in our constructor, no matter what kind of sprite we have:

        # Fetch the rectangle object that has the dimensions of the
        # image.
        # Update the position of this object by setting the values
        # of rect.x and rect.y
        self.rect = self.image.get_rect()

The attribute rect is a variable that is an instance of the Rect class that Pygame provides. The rectangle represents the dimensions of the sprite. This rectangle class has attributes for x and y that may be set. Pygame will draw the sprite where the x and y attributes are. So to move this sprite, a programmer needs to set mySpriteRef.rect.x and mySpriteRef.rect.y where mySpriteRef is the variable that points to the sprite. We are done with the Block class. Time to move on to the initialization code.

# Initialize Pygame
pygame.init()

# Set the height and width of the screen
screen_width = 700
screen_height = 400
screen = pygame.display.set_mode([screen_width, screen_height])

The code above initializes Pygame and creates a window for the game. There is nothing new here from other Pygame programs.

# This is a list of blocks the player can collide with, managed by
# a class called 'Group.'
block_list = pygame.sprite.Group()

# This is a list of every sprite: all blocks and the player block as well.
all_sprites_list = pygame.sprite.Group()

A major advantage of working with sprites is the ability to work with them in groups. We can draw and move all the sprites with one command if they are in a group. We can also check for sprite collisions against an entire group. The above code creates two lists. The variable all_sprites_list will contain every sprite in the game. This list will be used to draw all the sprites. The variable block_list holds each object that the player can collide with. In this example it will include every object in the game but the player. We don't want the player in this list because when we check for the player colliding with objects in the block_list, Pygame will go ahead and always return the player as colliding if it is part of that group.

for i in range(50):
    # This represents a block
    block = Block(BLACK, 20, 15)

    # Set a random location for the block
    block.rect.x = random.randrange(screen_width)
    block.rect.y = random.randrange(screen_height)

    # Add the block to the lists of objects
    block_list.add(block)
    all_sprites_list.add(block)

The loop starting on line 49 adds 50 black sprite blocks to the screen. Line 51 creates a new block, sets the color, the width, and the height. Lines 54 and 55 set the coordinates for where this object will appear. Line 58 adds the block to the list of blocks the player can collide with. Line 59 adds it to the list of all blocks.
This should be very similar to the code you wrote back in Lab 13.

# Create a RED player block
player = Block(RED, 20, 15)
all_sprites_list.add(player)

Lines 61-63 set up the player for our game. Line 62 creates a red block that will eventually function as the player. This block is added to the all_sprites_list in line 63 so it can be drawn, but not the block_list.

# Loop until the user clicks the close button.
done = False

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

score = 0

# -------- Main Program Loop -----------
while not done:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True

The code above is a standard program loop first introduced back in Chapter 5. Line 71 initializes our score variable to 0.

    # Get the current mouse position. This returns the position
    # as a list of two numbers.
    pos = pygame.mouse.get_pos()

    # Move the player block to the mouse location
    player.rect.x = pos[0]
    player.rect.y = pos[1]

Line 84 fetches the mouse position similar to other Pygame programs discussed before. The important new part is contained in lines 89-90 where the rectangle containing the sprite is moved to a new location. Remember this rect was created back on line 31 and this code won't work without that line.

    # See if the player block has collided with anything.
    blocks_hit_list = pygame.sprite.spritecollide(player, block_list, True)

This line of code takes the sprite referenced by player and checks it against all sprites in block_list. The code returns a list of sprites that overlap. If there are no overlapping sprites, it returns an empty list. The boolean True will remove the colliding sprites from the list. If it is set to False the sprites will not be removed.

    # Check the list of collisions.
    for block in blocks_hit_list:
        score += 1
        print(score)

This loops for each sprite in the collision list created back in line 93. If there are sprites in that list, increase the score for each collision. Then print the score to the screen. Note that the print on line 98 will not print the score to the main window with the sprites, but the console window instead. Figuring out how to make the score display on the main window is part of Lab 14.

    # Draw all the sprites
    all_sprites_list.draw(screen)

The Group class that all_sprites_list is a member of has a method called draw. This method loops through each sprite in the list and calls that sprite's draw method.
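The group and collision machinery is easier to reason about with a tiny pure-Python model. Nothing below is real pygame — SimpleGroup, FakeSprite, and overlap are our own stand-in names — but the loops mirror what Group.draw and spritecollide do with each sprite's rect.

```python
# Hypothetical stand-ins for pygame's Group machinery (names are ours).

def overlap(a, b):
    """Axis-aligned rectangle overlap; each rect is (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

class FakeSprite:
    def __init__(self, name, rect):
        self.name = name
        self.rect = rect

    def draw(self, screen):
        screen.append(self.name)      # "drawing" = recording our name

class SimpleGroup:
    def __init__(self):
        self.sprites = []

    def add(self, sprite):
        self.sprites.append(sprite)

    def draw(self, screen):
        # One call fans out to every sprite in the group.
        for sprite in self.sprites:
            sprite.draw(screen)

    def spritecollide(self, player, remove):
        # Collect every sprite whose rect overlaps the player's rect,
        # optionally removing the hits (like pygame's True flag).
        hits = [s for s in self.sprites if overlap(player.rect, s.rect)]
        if remove:
            self.sprites = [s for s in self.sprites if s not in hits]
        return hits

player = FakeSprite("player", (10, 10, 20, 15))
blocks = SimpleGroup()
blocks.add(FakeSprite("near", (25, 15, 20, 15)))   # overlaps the player
blocks.add(FakeSprite("far", (200, 300, 20, 15)))  # does not overlap

hits = blocks.spritecollide(player, remove=True)
print([s.name for s in hits])         # ['near']
print(len(blocks.sprites))            # 1 -- the hit block was removed

screen = []                           # stand-in for the display surface
blocks.draw(screen)
print(screen)                         # ['far']
```

In the real code above, pygame.sprite.spritecollide(player, block_list, True) performs the same kind of overlap test using each sprite's rect attribute.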
This means that with only one line of code, a program can cause every sprite in the all_sprites_list to draw.

    # Limit to 60 frames per second
    clock.tick(60)

    # Go ahead and update the screen with what we've drawn.
    pygame.display.flip()

pygame.quit()

Lines 103-109 flip the screen, and call the quit method when the main loop is done.

13.2 Moving Sprites

In the example so far, only the player sprite moves. How could a program cause all the sprites to move? This can be done easily; just two steps are required. The first step is to add a new method to the Block class. This new method is called update. The update function will be called automatically when update is called for the entire list. Put this in the sprite:

    def update(self):
        """ Called each frame. """

        # Move block down one pixel
        self.rect.y += 1

Put this in the main program loop:

    # Call the update() method for all blocks in the block_list
    block_list.update()

The code isn't perfect because the blocks fall off the screen and do not reappear. This code will improve the update function so that the blocks will reappear up top.

    def update(self):
        # Move the block down one pixel
        self.rect.y += 1

        if self.rect.y > screen_height:
            self.rect.y = random.randrange(-100, -10)
            self.rect.x = random.randrange(0, screen_width)

If the program should reset blocks that are collected to the top of the screen, the sprite can be changed with the following code:

    def reset_pos(self):
        """ Reset position to the top of the screen, at a random x location.
        Called by update() or the main program loop if there is a collision. """
        self.rect.y = random.randrange(-300, -20)
        self.rect.x = random.randrange(0, screen_width)

    def update(self):
        """ Called each frame. """

        # Move block down one pixel
        self.rect.y += 1

        # If block is too far down, reset to top of screen.
        if self.rect.y > 410:
            self.reset_pos()

Rather than destroying the blocks when the collision occurs, the program may instead call the reset_pos function and the block will move to the top of the screen ready to be collected.

    # See if the player block has collided with anything.
    blocks_hit_list = pygame.sprite.spritecollide(player, block_list, False)

    # Check the list of collisions.
    for block in blocks_hit_list:
        score += 1
        print(score)

        # Reset block to the top of the screen to fall again.
        block.reset_pos()

The full code for this example is here:
ProgramArcadeGames.com/python_examples/f.php?file=moving_sprites.py

If you'd rather see code for sprites that bounce, look here:
ProgramArcadeGames.com/python_examples/f.php?file=moving_sprites_bounce.py

If you want them to move in circles:
ProgramArcadeGames.com/python_examples/f.php?file=sprite_circle_movement.py

13.3 The Game Class

Back in Chapter 9 we introduced functions. At the end of the chapter we talked about an option to use a main function. As programs get large this technique helps us avoid problems that can come from having a lot of code to sort through. Our programs aren't quite that large yet. However I know some people like to organize things properly from the start. For those people in that camp, here's another optional technique to organize your code. (If you aren't in that camp, you can skip this section and circle back later when your programs get too large.) Watch the video to get an idea of how the program works.
ProgramArcadeGames.com/python_examples/f.php?file=game_class_example.py

13.4 Other Examples

Here are several other examples of what you can do with sprites. A few of these also include a linked video that explains how the code works.

13.4.1 Shooting things

Interested in a shoot-em-up game? Something like the classic Space Invaders?
This example shows how to create sprites to represent bullets:
ProgramArcadeGames.com/python_examples/f.php?file=bullets.py

13.4.2 Walls

Are you looking for more of an adventure game? You don't want your player to wander all over the place? This shows how to add walls that prevent player movement:
ProgramArcadeGames.com/python_examples/f.php?file=move_with_walls_example.py

Wait? One room isn't enough of an adventure? You want your player to move from screen to screen? We can do that! Look through this example where the player may run through a multi-room maze:
ProgramArcadeGames.com/python_examples/f.php?file=maze_runner.py

13.4.3 Platforms

Interested in creating a platformer, like Donkey Kong? We need to use the same idea as our example with walls, but add some gravity:
ProgramArcadeGames.com/python_examples/f.php?file=platform_jumper.py

Good platformers can move side to side. This is a side scrolling platformer:
ProgramArcadeGames.com/python_examples/f.php?file=platform_scroller.py

Even cooler platform games have platforms that move! See how that is done with this example:
ProgramArcadeGames.com/python_examples/f.php?file=platform_moving.py

13.4.4 Snake/Centipede

I occasionally get students that want to make a "snake" or "centipede" type of game. You have a multi-segment snake that you can control. This requires each segment to be held in a list. While it requires learning two new commands, the concept behind how to do this game isn't difficult. Control a snake or centipede going around the screen:
ProgramArcadeGames.com/python_examples/f.php?file=snake.py

13.4.5 Using Sprite Sheets

This is an extensive example that uses "sprite sheets" to provide the graphics behind a platformer game. It supports multiple levels and moving platforms as well. The game is broken into multiple files.
ProgramArcadeGames.com/python_examples/en/sprite_sheets

13.4.6 Multiple Choice Quiz

13.4.7 Short Answer Worksheet

There is no worksheet for this chapter.

13.4.8
http://programarcadegames.com/index.php?chapter=introduction_to_sprites&amp;lang=ru
This section about testlib is temporary; one day it will be merged into the global documentation section when that appears. If you are developing a programming contest problem and you are doing it in C++, then testlib.h is the right choice for writing all auxiliary programs. This library is the standard solution in the professional community of problemsetters in Russia and several other countries. Many contests are prepared using testlib.h: All-Russian school olympiads, ACM-ICPC regional contests, all Codeforces rounds and many others. Recently testlib.h was moved onto GitHub; it is now available at the following link:

The testlib.h library is contained in a single header file. In order to include it, just put testlib.h in the same directory as the program you are writing (checker, generator, validator or interactor) and add the following line to the beginning of your program: #include "testlib.h". Here are the cases when testlib.h is really useful:

- In writing generators. These are the programs that create tests for your problem, since it is not always possible to type the whole content of a test by hand (at least because of its possibly large size);
- In writing validators. These are programs that read the whole test and verify that it is correct and that it satisfies the constraints of the problem. Validators should be maximally strict with respect to spaces, endlines, leading zeroes, etc.;
- In writing interactors. These are programs that are used in interactive problems; if your problem isn't interactive, then just never mind;
- In writing checkers. If your problem allows several possible answers for a test, then you should write a special program that checks the participant's answer against the jury's answer and the input data.

testlib.h is fully compatible with the Polygon problem preparation system. The first versions of testlib.h appeared in 2005 as a result of porting testlib.pas to C++.
Since then testlib.h has evolved, and its features and performance have been improved. The latest versions of testlib.h are compatible with different versions of the Visual Studio compilers and with GCC g++ (in editions for many platforms), and it is also compatible with C++11.
http://codeforces.com/testlib
CC-MAIN-2017-47
refinedweb
363
57.47
See also: IRC log, day one minutes

RDF Use Cases

Ben: we are continuing work with RDFa syntax, primer and UC doc ... compiling rdfa test cases ... keep in mind that html wg is still under review
<Steven> Review ended last friday
Ben: we expect that the rdfa syntax can be adapted to every html syntax ... adapting test cases etc
Ben: Overall schedule: push primer further with help of WG, and use case ... syntax and html module within 6-8 weeks
Ivan: if it is W3C Recommendation track, end of february would be for last call?
Ben: no. That was not the idea for now ... last call means technical issues are addressed ... we still expect reactions and comments on various aspects
Guus: two issues ... pushing things further with swd wg ... and html aspect ... propose to bundle ... can this wg publish a rec on a module for html?
Ralph: our charter allows for that ... the question is still open
Steven: dropping html wg: existing one or proposed one? ... if existing one, no problem with publishing a rec
Ivan: there are precedents of modules published by other activities
Mark: it's not a modification of html ... it's a module which uses XHTML 1.1 M12N techniques
Ben: bundle idea seems a good idea to me
TimBL: important issue is deployment strategy ... pushing RDF into attributes. Current browsers do nothing with attributes
Ben: yes
TimBL: other issues are HTML tidy and validation
Ben: content management tools need publishing and validation, yes ... I don't want to discuss the syntax ... I want to discuss rec or not
Guus: my feeling: discussion on UC is different if we go for rec track
Ralph: short term question is readiness to publish new version of doc ... how much is needed depending on our choosing note or rec
Guus: would the UC doc content need to be different? ... postpone the discussion, ack that doc might end in rec track
<benadida> use case doc:
Ben: Use case doc ... Guus review: should we mention RDFa? ...
we modeled document after grddl
Guus: strictly speaking it is not a UC doc if you mention RDFa ... to avoid too much technology-driven document ... and overlap between primer and UC
Ben: OK
<mhausenblas> does this also affect the code snippets?
Ralph: it would be artificial
Guus: Parnas shows it was possible to rationalize after design
TimBL: it makes sense to explain the kind of things you want to do
Ralph: there are still things undecided ... e.g. how much rdf/xml we want in rdf/html ... a use case can explain the boundaries we want to have
Guus: UCs are useful for scoping ... explaining to the outside public our decision
Ben: there was already some consultation outside
TimBL: did you find a case for publishing full RDF? ... or just a chunk?
Ben: there were cases (like bibtex) that caused us to rethink
TimBL: are there things we cannot do?
Ivan: problem of expressing lists/containers
<timbl> Tim: Is there a well-defined list about what can't be expressed?
Ivan: reification, but less important
<timbl> Tim: reification is negatively important IMHO
Ben: we have also datatypes ... action to take that list of exclusions to the wiki
<RalphS> ACTION: Ben start a list of RDF/XML features that are not supported by RDFa [recorded in]
Mark: RDF community might want us to resolve their problems according to current best practices, e.g. for reification ... not clear which way we should go to the broader community ... I think Guus' point on UC is relevant ... not having sample markup makes sense ... seems wrong that UC doc looks like primer
Ben: what is the WG opinion on removing rdfa code from doc?
WG: approves
<TomB> +1 on removing RDFa snippets from UC doc
Michael: link with microformats?
Guus: could be a good test
Ben: the requirements we have now are likely to go further than microformats
Ralph: it's ok to mention rdfa sometimes, to build a brand
Guus: removing the code is what is really needed
<mhausenblas> instead of removing RDFa code, why not ADD microformat code :)
Ivan: From the very start the goal was that rdf/xml could be fully embedded in html ... in some cases it proved to be too complicated ... like for reification ... UCs do not include reification now ... we could revisit that goal
Ben: we could annotate triples like provenance of license info
<Zakim> FabienG, you wanted to talk about GRDDL use cases doc
Fabien: there is no code in grddl use cases ... first part: problem we propose to address, no mention of grddl ... second part: how grddl could solve the problem
Alistair: RDFa primer is best doc to go to for info
<Zakim> aliman, you wanted to say selling RDFa in use cases potentially wrong
<FabienG> GRDDL use case scenario doc for info :
Guus: I would prefer if RDFa design goals were not trying to address complete RDF ... it should be easy to understand, simple document ... if we spend one year on addressing everything we might propose something scary for the community
Mark: this was a flexible design goal, no criterion for success ... originally there were requests like "this document is about that with 80% certainty"
<mhausenblas> +1 to Mark
Mark: there could be other ways than reification ... I argue against too much simplicity
Guus: there are conflicting requirements for any technology
<mhausenblas> we could also do a CFA ( as)
<timbl> Mark seems to argue for information about triples, which to me suggests graph literals .. a way of putting a wrapper around some rdf/a.
Guus: we have to solve this req of simplicity while supporting all the use cases ... there is no problem with having such conflicting reqs
<Zakim> MarkB_, you wanted to explain my outstanding action item on reification v. n-ary relationships.
<Zakim> timbl, you wanted to discuss collections
TimBL: drawing the line between what's in and out ... annotating triples is important ... RDF bag and sequences are difficult to work with ... some applications (creative commons) use it ... also valid for sequences
Ben: ol in html embodies collection, we try to see that
<timbl> ol a collection, ul a class maybe
<RalphS> PROPOSE: RDFa is not required to support every feature of RDF/XML
Ralph: we should apply the same process as for SKOS UCs yesterday ... we have to wait for a use case before deciding whether reification is in or not
<Zakim> RalphS, you wanted to propose a resolution re: RDF/XML completeness
<mhausenblas> Michael: make sense to make a critical factor analysis on wiki?
Guus: it could be a good idea, it would take more time ... in Rule wg, use cases were much more difficult to analyse
<Zakim> kjetilk, you wanted to ask about v2
Kjetilk: we have a good idea of what people want to do now ... and what people might want to do later ... could we introduce a version 1 and a version 2?
Ben: in theory, yes, in practice we would have to be really careful
<RalphS> PROPOSE: RDFa is not required to support every feature of RDF/XML
WG agrees
RESOLVED by consensus: RDFa is not required to support every feature of RDF/XML

Use case 1: basic structured blogging

Ivan: for many people this might not be relevant: many bloggers do not use html
Ben: this use case means that people can write plug-ins to do that ... UC1 should be clearer about the tool support
Mark: there might be a use case with markup by hand

UC2: publishing an event

Ben: UC2: publishing an event ... could go to a specific tool (creative commons) to get a machine-readable chunk and copy-paste in html page ... UC1 could be for tool support for RDFa, UC2 more wizard-like
Ivan: one big feature of RDFa is mixing vocabularies ... if we look at this UC, microformats could do that ... event information should mix different vocabularies ...
just some words to be added to the text
TimBL: put the RDF to include in the HTML in the example
Mark: other benefit is the use of existing taxonomies ... problem with microformats is that you have to reinvent taxonomies ... should we include two different use cases?
Ivan: additional benefit: author can add his own namespace
<RalphS> Ivan: mix events, bibtex, geolocation

UC3: content management metadata

Ben: use case 3: content management metadata ... various decisions about content ... the structured data may not be rendered
<mhausenblas> +1 to Guus suggestion
Ivan: this code seems to be xhtml2, perhaps not the wisest thing to do ... also technical issue: if final design is only to add attributes or to change content model ... raises more problems if we want to combine with text
Ben: this is a fair comment, to take into account

Use case 4: creative commons use case

Ben: self contained chunk added in html
<Zakim> Guus, you wanted to suggest that we explicitly discuss at the end of the document why MF are not sufficient for handling the use cases
Guus: section in the document where it is said that MF are not enough to solve the problems
Ralph: I think we do that by presenting use cases ... if MF meet challenges then MF are the solution
Mark: for the simple use cases we could show that MF and RDFa can solve the problem
<mhausenblas> +1
Mark: and for complex ones, that only RDFa is OK

UC5: clipboard

Ben: comment by alistair this was not a distinct UC ... I think it is important, I can demo it
Guus: there is no problem with overlapping UCs
Alistair: copy-pasting in html has nothing to do with RDFa ... the point is that if I copy html with rdfa statements, I want them to be included when pasting
Ben: there should be a way to associate some statements with a certain region of the interface ...
that should be copy-pasted and brought somewhere else TimBL: the need is to copy-paste html with all the rdfa about this piece <RalphS> TimBL: there's a sense of locality to the RDFa and HTML markup Ben: should emphasize the need for localize relevant rdfa statements for copy-paste <timbl> depends on the ability to localize the data to a part of the doc UC6: semantic wiki Ben: rdfa as input when editing wiki and having it in result Ivan: is that really rdfa? there would be a different syntax <FabienG> <Zakim> FabienG, you wanted to talk about GRDDL equivalent use case : Fabien: good idea to link with the GRDDL wiki UC ... lot of semantic wiki get rid of wikiML and just copy-paste ... wysiwyg interfaces are preferred Michael: two issues <mhausenblas> <FabienG> example of WYSIWYG interface using XHTML and RDFa for a wiki: Michael: requirement link to multimedia semantics WG ... using a wiki syntax related to rdfa Ben: let's not focus on rdfa as input ... but you could paste rdfa <Zakim> kjetilk, you wanted to ask about bbcode in foras <mhausenblas> can you provide a pointer, please? <FabienG> BBCode : UC6: structured publishing by scientists Ben: motivated by existing chemist blog ... UC is more advanced user agent, getting local RDF Ivan: so emphasis is on adding some sexy UI to visualize the RDF info ... what I like is reference to other community ... you should put some more reference to science commons ... The difference here with MF is that vocabularies are huge Mark: I agree with that ... perhaps the structured blogging UC should be different <Zakim> RalphS, you wanted to note reference to browser enhancement Ralph: this last UC mentions application-specific extensions Guus: brainstorm with suggestions of applications TimBL: 3 UCS ... 1: have RDF recording a collection of authors ... bibtex can be used as example, but point should be made that order should be kept ... 2: UC with unordered list: list of references for a WG ... owl:oneOf ... 
3: UC: collect a foaf file manually done <Zakim> FabienG, you wanted to list the GRDDL use cases in case one could be inspiring. Fabien: 3 use cases from GRDDL <RalphS> [[ <RalphS> # Use case #1 - Scheduling : Jane is trying to coordinate a meeting. <RalphS> # Use case #2 - Health Care: Querying an XML-based clinical data using an standard ontology <RalphS> # Use case #3 - Aggregating data: Stephan wants a synthetic review before buying a guitar. <RalphS> # Use case #4 - Querying sites and digital libraries: DC4Plus Corp. wants to automate the publication of its electronic documents. <RalphS> # Use case #5 - Wikis and e-learning: The Technical University of Marcilly decided to use wikis to foster knowledge exchanges between lecturers and students. <RalphS> # Use case #6 - Voltaire wants to facilitate the extraction of transport semantics from an online form used to edit blog entries. <RalphS> # Use case #7 - XML schema specifying a transformation: the OAI would like to be able to specify document licenses in their XML schema. <RalphS> ]] <RalphS> -- Fabien: relation between grddl and rdfa ... one use case is a counter-example ... case where it is explained that sometimes it can fail <RalphS> Fabien: GRDDL UC editor's draft contains a new use case 8 counter-example Jon: metadata registry which express the vocabularies ... we want to embed the RDF in HTML for rendering Ben: could be interesting to have a SKOS-specific UC Ralph: maybe an online dictionary can include some SKOS <FabienG> Counter-example in GRDDL use cases current draft: Guus: UCs having different vocabularies <RalphS> Ralph: dictionary or our HTML wordnet files might include SKOS markup Guus: food domain ... product catalog ... this is kind of UC which is not emphasized currently Stephen: UC with retailers, venders, with multiple vocabulary. HTML view of last financial transactions in RDFa, interpretable by browsers ... trip organizer to help with decisions ... 
news stories, journals: grab all the key ideas about the stories you care about Ben: comments on primer are editorial ... we can skip it <FabienG> Counter-example in GRDDL use cases current draft: GRDDL use case and RDFa use cases Ben: GRDDL agent have an RDFa parser ... other option hGRDDL e.g.: transform microformat into RDFa ... this would preserve the locality in a new HTML doc Ivan: third option use the GRDDL mechanism to extract RDFa Alistair: GRDDL agent: does it have to parse it (RDFa parser) or does it use a GRDDL transform? Guus: we must write down the relationship in the GRDDL doc and in the RDFa doc <scribe> ACTION: Ben to write down the relation between GRDDL and RDFa [recorded in] Ben: RDFa would be a recommendation for an XHTML module Fabien: one problem is that RDFa is presented as a new syntax for RDf whereas GRDDL is presented as a way to extract RDF/XML from other XML syntaxes ... a GRDDL transformation from RDFa to RDF/XML doesn't make a lot of sense to me if RDFa is adopted as an alternate RDF syntax ... an agent is either a GRDDL agent or an RDFa agent Ben: this should be an GRDDL working group decision <Zakim> timbl, you wanted to talk about the ladder of authority Tim: explains ladder of authority ... we must decide if its part of HTML or if we use the GRDDL way REC discussion Guus: back to REC discussion Ivan: if we produce an XHTML modul REC, we would need a new DTD ... module as a REC would not solve the validation problem ... the XHTML WG owns these DTDs Ben: the validation would be separate Tim: yes but other validators would complain <Steven> anyone can create a driver Ivan: the driver in the XHTML 1.1, a change has to be made and it is something this WG can't do Steven: not such a big problem to make driver ... a document that wants to be validated has to reference the modified DTD Ben: it would be good if we produce a validator as part of this WG output. Mark: we are not modifying XHTML 1.1 and we can't. ... 
XHTML Modularization 1.1 is a different thing. <Steven> XHTML1.1+RDFa <timbl> unless we change the XHTML 1.1 DTD <Steven> unless we create XHTML 1.2 Guus: issues wrt REC: resources for test cases, resources for team contact. ... two RECs may be too much work for this WG. ... do we have sufficient people to set up test suite? <ivan> elias torres Ben: Elias Torres from IBM would be of great help for test suite ... we have material for the tests we have to assemble them Guus: set up a repository? Ben: not too worry about that several people can help (Ben, Mark, Elias, etc.) <RalphS> formal WG participants [Member-only link] Guus: I am concerned about not having enough resources to make significant progress. ... is the schedule realistic. Ben: agressive but we must do it if we want to have this done. Ivan: what are the alternatives? Guus: not going for Rec would be one alternative. ... then reconcider when we finish up Ivan: resource shortage is the plague of the whole SW activity. ... Steven could help if he could spend some of his time on this issue. Ralph: a lot of the work is editorial Tim: since there is no more resources should we reconcider if we want to go with this? Guus: we have to check internal depedencies, etc. Tim: we must identify what a new WG resource would be doing precisely. What exactly should be done? Ben: I prefer to take the risk to fail than to cancel it now. Ivan: we have to have relatively stable publications on a regular basis for RDFa because there is a lot of controversy around it. Ben: even if we don't reach a Rec we could stabilize a version as a Note. Guus: that would be my proposal "go for REC track"ACTION: Guus to flag the issue of RDFa REC track on the coordination group [recorded in] Tim: I am concerned about the fact that RDFa attributes semantics to an HTML doc. 
<RalphS> I am on queue to respond to that concern <MarkB_> I couldn't hear properly, but it sounded like there was a proposal for a discussion about whether an HTML document should 'flag up', whether it contains RDFa or not. I wrote a long email to the list about this, in response to Ivan, but no-one has commented on it. I would therefore appreciate it if no *final* decisions were taken on this issue at this meeting, since I won't be able to participate in the discussion. Guus: break out sessions for this afternoon ... SKOS integration of issues and requirement list ... RDFa discussion on use cases, GRDDL relation, etc. Tom: About Voc Management Note: ... we don't have an editor ...20050705 is old. Refer to the wiki version now. Motivation is to describe what's involved in publishing an RDF voc good to step back now and distinguish soft rec and hard rec ... and cite the cookbook ..principles of best practice may need to include very basic, pragmatic things such as "remember to pay your domain registration fees", etc. List and describe basic advice, low-hanging fruit. Today: brainstorm on the list of 5 points to see if it is a good starting point. 1. Name Terms using URI References Tom: DanC said we should identify terms with URIs ; this is a hard REC for publishing an RDF Voc: Ralph: Dan's mail was more a terminology point than an architecture point. Tom: naming convention could be moved from the cookbook Ralph: I wouldn't want to move it but there should be cross references. ... good place to mention the domain registration problem. 2. Provide readable documentation Tom: the whole documentation question could be grouped in one point with pointers to the cookbook ...not giving too much details on what web pages one has to create to publish a voc Ralph: considered best practice to have both human readable and machine readable doc ... we should show examples of what we think are best pratices. 
Alistair: web page issue is "what a doc web page should look like" and show examples. Tom: examples of different granularity in documenting <TomB> Dublin Core documentation: and Alan: a way to make this point would be to pose a little query pb and let people discover what can be done and what can't be. 3. Articulate Maintenance Policies <RalphS> Ralph: seeAlso -> "URIs for W3C Namespaces" - W3C Namespace usage policies Tom: there should be example of different types of voc and the maintenance policies that they have. Ralph: section 3 in describes how namespace UI may change over time Guus: what about the versioning? Tom: we can show an example of URI used to identify snapshots of voc ... Dan Brickley is interested in using Web CVS to expose different versions of voc Ralph: it would be good if we did propose ways to identify versions Guus: I have pb to see difference between point 3 and point 4 4. Identify Versions <JonP> Cookbook-related suggestion postponed from yesterday: Tom: describing how versioning is done in large voc repositories and in SKOS is also relevant Alan: hard part of versioning is to identify the policy people are using Alan: term-level versioning vs. vocabulary-level versioning are choices people make ...would be a good thing to identify the possible policies ... ponters to implementation would also be interesting <Elisa> Another source for metadata and examples regarding versioning policies is the BioPortal (from the National Center for Biomedical Ontology), at <Zakim> aliman, you wanted to ask about KWeb rdf versioning Alistair: on KnowledgeWeb there are pointers to RDF versioning tools Guus: that would be a possibility to have these people involved <RalphS> Knowledge Web Network of Excellence project <alanr> starts a thread about versioning. 
ACTION: Guus to contact persons working on versioning in KnowledgeWeb [recorded in] <alanr> Tim: when you introduce a new namespace you can use OWL to publish the relationship between the old version and the new one. <RalphS> yes, candidate Best Practice: use OWL [and some other vocabulary] to describe the relationship between any changes you make in your vocabulary to the previous version Tim: I mean using sameAs, equivalent*, etc. <alanr> Tim: this is a real added value of RDF <timbl> The TAG has been trying to deal with XML versioning and there is much less one can do in general. Ralph: enumerating policies and examples of them would be a good added value already <RalphS> Jon's proposed COOKBOOK-I3.1 issue Jon: taking a very concrete example is teaching people how to cook the cake and not how to read the cookbook <RalphS> Jon: I believe this versioning discussion subsumes COOKBOOK-I3.1 Guus: I'd like to see a practical of example e.g. in the medical domain 5. Publish a Formal Schema Tom: give good practice of how voc are being declared would be enough without going into too much details <kjetilk> <alanr> versioning perhaps Kjetilk: don't we need a voc to describe our mainteance policy? <RalphS> we should say that it is best practice to publish an RDF/OWL document at the namespace URI. This may be obvious to us but evidently it's not obvious to everyone. Point back to Recipes document for "... and here's how" Guus: it is a good idea in principle. But we are not doing new work here we just identify existing practices. Alistair: what are the plans for the semantic interop note. Guus: subject for last session. ... looking for an editor: Elisa ? Kjetik ? etc. ? <TomB> I can contribute descriptions of how things are done with Dublin Core. 
Elisa: Daniel could also contribute with examples ACTION: Elisa to give first overview of what the status of the doc is and add comments and coordinate work on doc [recorded in] guus: revisit all issues we discussed, identify candidate requirements, bring us to position of having first complete list of requirements, useful? ...finish quickly, can look at remaining use cases Antoine: jon updated requirements list yesterd, based on that I created a bullet list of the issues <Antoine> <aliman> candidate requirements sandbox <aliman> antoine: wanted to collect stuff from yesterday, from old issues etc. perhaps more issues than what we need guus: propose to split requirements into candidate and accepted ... have the notion of relations between string values ... in requirements, have notion of acronym, representation of realtionships between labels associated with concepts Annotations on lexical labels Guus:... making statements about lexical labels? alan, boil down to ability to represent statements about lexical labels? alan: yes. and the choice between boosting labels to individuals and using Alistair's pattern (n-ary relations) ... real example from obi, this term is used by community x, this term was proposed by x, needs to be reviewd, was reviewd on x, this term was in use 200-500 b.c. guus: statements that relate a lexical label to a resource, and to various data values, typed data values like timestamps. alan: yes, the resource might be the container, depends on representation choice. other way is if lexical item is an individual, properties hang off it. guus: one way to handle this is to reformulate requirement 3, or add new requirement. currently req 3 is acronym example. boils down to same thing, make statements about things ... jon: talking about metadata? guus: from representation perspective same problem, from use perspective it is different. alan: synonymy is relationship between terms, like acronym example. could be considered same issue. 
guus: prefer to have separate requirements for now. name? alan: annotations on lexical items, how to represent? guus: requirement should be, the ability to represent annotations on lexical items. antoine: want to control the actions, e.g. for this issue alan has two actions, one is to write down the general documentation requirements and how to represent in SKOS, then another item to write up preferred label modelling issue aliman: what is preferred label modelling issue? antoine: for me it was this issue of lexical values and annotations aliman: let's get rid of "prefLabel", misleading antoine: i'll change MappingToCombination guus: next issue ... MappingToCombination antoine: I added this one, but no reference in the minutes for action guus: conjuecture, have separate req on compositionality, req 8th in list reflects the issue ... issues are things where we have to propose a resolution on how to this, lead to test cases, if you can't find any req to which an issue refers there is something wrong. based on issues, are there any missing requirements? guus: specialisation of relationships, we have a req for this ... local specialization of SKOS vocabulary - so this is covered. ... relationships between labels, we have this one covered by the req mentioned before (number R3) ... next set of issues more tricky, because there should be use cases if we admit as requirement ... (now looking at issue SKOS-I-Rules) ... need to think of motivating use case where need rule antoine: manuscripts use cases or any use case where propagate indexing up hierarchy levels aliman: SWED use case uses this rule alan: can do in OWL 1.1 role inclusion aliman: we're not waiting for owl 1.1 alan: yes but good to be aware guus: same thing as checking consistency? aliman: no, more about inferring new information guus: SWRL document, first example has a rule like, relationships between artists and styles, can derive the relationship. 
<alanr> antoine: rule [from SWRL] more complicated than indexing example but similar guus: have anumber of use cases, what is formulation of requirement? ... not talking about part of skos specification. what could be part of skos specification is bt and nt are inverse of each other, rt is symmetric, bt nt transitive? ... originally sean mentioned this issue because ruiles in skos ... doesn't currently give rise to requirement, might if we resolve this. ... 2.1.2. SKOS-I-ConceptSchemesContainment . .. ... the semantics of containment withing particular vocabulary./ontology is not clear. steve: is this also strictly contains/is contained by, or is alan calculus, all different modalities of containment, connection, proximity, applies for temporality and spatiality ... alan: technical issue, can hook up a concept to a concept scheme via a property, but can't do the same for a triple antoine: motivation is that a given concept e.g. france, might be narrower different things in different concept schemes steve: problem is e.g. with gazetteers, often strict containment e.g. DC area overlaps with other areas ... so next step beyond strict containment that talks about whether things are next to each other. antoine: more about reification, statements made in the context of specific concept scheme aliman: concrete use case? alan: case came up yesterday, equivalence from one point of view guus: hesitant on this issue, goes beyond level of RDF OWL, e.g. look at RDF/OWL ontologies, containment is implied by containment in files, i.e. informally, but if RDF OWL didn't give any semantics to that, why do it in SKOS? alan: because if needed in this domain then yes. ... reify, don't use RDF reification, promote relationships to individuals, can describe any properties of the relationship ... e.g. 
look at a mapping [draws on white board] how do point a statement to the containing scheme scribe: something close to that RDF reification guus: suggest we post a candidate req, have an explicit representation of the containment of concepts or relations ... any element of a concept scheme (could include concepts, relationships) ... rdf/owl soolution is implicit ... have to make this explicit alan: requirement would be, relationships need to be explicitliy associated with scheme, also concepts guus: also specify for concepts, for both there is a requirement ... good to know if it's part of original vocab, or if someone added it alan: understanding was, asymettric equivalence, can't do without ??? guus: almost all tools have way to ask [SPAARQL] can ask the database question and the logical question, which is two different things ... e.g. can ask direct subclass of, does it eist, or has it been inferred? alan: direct vs. indirect different from told vs. inverred antoine: really close i think to asserted vs. inferred guus: could be that this is solved at query level and not at representation level alan: may have inferred intervening class, therefore problems are not quite same antoine: at dutch library which has broader links which are redundant, ... e.g. asserted closure steve: get from forward chaining ... guus: candidate requirement for the moment, can always disregard the ability to explicitly represent the containment of any individual which is an instance of a SKOS class (e.g. skos:Concept) or statement that uses SKOS property as predicate (e.g. skos:broader) within a concept scheme the ability to explicitly represent the containment of any individual which is an instance of a SKOS class (e.g. skos:Concept) or statement that uses SKOS property as predicate (e.g. skos:broader) within a concept scheme guus: understand by now then happy ... issues from previous SKOS issues list ... collections-5 ... 
fix expression of disjointness between concepts and collections ... doesn't generate representationrequirement ... do we have policy on using SKOS namespace for something not in SKOS ... issue in OWL, parsers should flag but otherwise continue as normally, triples using bad URIs get no semantics <TomB> skos:ConceptScheme as a set of concepts, or specifically skos:Concepts? guus: allow people to use extensions that later on tomb: also ability to extend SKOS, e.g. other types of concept ... other class in other namespaces then are we covered? aliman: always subclass skos:Concept tomb: what can a concept scheme contain? limited by class skos:Concept? should that be stated somewhere? guus: raise general issue on how to represent SKOS semantics, not at all trivial, good feeling of what semantics should be, but how to represent is another thing <scribe> ACTION: alistair to raise a new issue about USE X + Y and USE X OR Y [recorded in] guus: metaphor aliman just gave between descriptor and non-descriptor is excellent to help explain what these things mean, really from ... just want to have same term on a card. alan: explain what indexing meant, explain descriptors and non-descriptors ... guus showed in demo groupings of terms that were not terms ... in final session talk about scheduling The RDFa breakout session The wiki is uptodate but is being transferred to trackerACTION: Ben to update issues list with the @CLASS overload problem [recorded in] did not get to the GRDDL issues did not discuss planning Guus: we should decide REC or NOTE in april ... before the summer, we should have last call in the case of REC ... we could ask for CR by october ... 
that would get us to REC by the end of the charter ...there should be one WD before the last call benadida: we should have the WD just before the REC decision Guus: we need to pay close attention to the outside world RalphS: we don't know the status of a XHTML 2.0 WG ACTION: Ben to update RDFa schedule on wiki to aim for last call on June 1 [recorded in] The SKOS breakout session aliman: we went through a sandbox list, also included some new requirements: SKOS Requirements List Sandbox aliman: we had a long and philosophical discussion of the wording of point 22 Guus: what would be a reasonable schedule for coming up with a first WD? ... it doesn't need to be complete ... just useful for review ... by march would be realistic Antoine: yes, it sounds doable aliman: we just have a primer and a formal spec and a reference overview document aliman: the problem with the guide is that it does two things, like give an introduction to SKOS as well as defining some of the semantics <alanr> Alistair: I used Z for the formal specification language for my thesis ivan: I sweated a lot over Z aliman: we have a few high-profile users of thesauri that are involved ACTION: aliman to update the schedule for SKOS documents [recorded in]
http://www.w3.org/2007/01/23-swd-minutes.html
Hire events jobs Good Afternoon Designers, I'm looking to get an event poster designed for an upcoming Movie Screening Red Carpet reception and Awards event. Must be professional but eye catchy and creative using some of the uploaded images to do soft blending. The required content is below. KA ZARR Entertainment Proudly presents... A triple threat event for the DMV area. Join us for the final public sc!! .. listed ...collecting payments and I will Send you the required templates you can get it modified with PHP frame work. .. PROJECT BUDGET IS $30-$60 Looking to hire someone who can create a compelling, and stellar sponsorship sales deck in power point. I have two samples of what we are looking for. We would need for you to do some research and include some stats as it pertains to Latino Americans spending habits and growth in the United States. A lot of the content from .. diverse as the Haymarket Hide the Sidebar in the Plugin The Events Calendar when a user is logged in and wants to add an event.. [login to view URL] I would like to hire a professional content writer for our business. The content would mostly be on London attractions, places events. So, anyone living in London is preferable. Please quote your price for 4 * 500 words article/ blog. [login to view URL] but better than this as they are my competitors Hello I that makes the whole page refresh. I only wa... We'd like to hire a writer to summarize events in our industry. The articles would require little research, but we'd like them turned around within a day. If it works out, this would be an every-week gig. I need an app like playerzon it is the app in which PUBG tournaments will be conducted ... ...Memberpress subscription site with two types of subscribers, personal memberships and business memberships. I need a developer to help me charge paid members in order to post events on our site (with RSVP and notification capabilities) as well as charge a monthly fee for directory listings.
Prices will be different depending on which membership the user We are looking to hire a trainer who has experience training others about active shooter events. Must have experience teaching this course and with a group of 20+ people. We need someone for 1 day of training for approximately 3-4 hours on a Saturday preferably. ... ...opportunities in the area broadly or discuss specific events. Here is a link to our blog for examples on appropriate writing tone and approximate length: [login to view URL] . This website is a great reference for information about Sun Valley, plus it features a complete list of upcoming events: [login to view URL] . Including a photo I already have Facebook pixel install. Some plugin conflict over the weekend and now not all events are working. Need this fixed today. Firm Name:Knight Bite This is the sticky note that will be stuck on the burger boxes of the customers on Republic Day(ie 26th januar...customer. I have attached the logo and also some previous event Sticky notes. Be a Part Of our Republic Day Celebration! Pros:If we are impressed we might hire you for future events as well . All the best ! I need a logo designed. implemen... .. other creative ...seminar company which trains affiliate marketers on how to scale their business (regardless of what niche / offers they promote). We have both online training course and live events - our main training is a 3 day workshop held in Dubai. We are looking for someone who's very skilled with Click Funnels and this person would create a funnel from scratch ...showcasing the versatility of our extensive inventory with the tagline "A HIRE FOR EVERY OCCASION" DESIGN TREATMENT: We want you to use same type design format we'd used for the holiday campaign. Specifically using decor items to form the shape of symbols commonly associated with events. See references I've attached from previous holiday campaign. 
...want to setup Eventbrite API to show events on my website Budget is $30 Setup Eventbrite API to show events on this website - [login to view URL] TASK It is a very straight forward project. I need this website to be Customized and formatted well to show Pictures and Contents on CHRISTIAN EVENTS from [login to view URL] & [login to view URL] INNOVATION PHASE The goal is Implement 120 kpis using Firebase Analytics First we neet to test your knowledge with a demo : All code : public class MainActivity extends AppCompatActivity { private static final String USER_PROPERTY = "comida_favorita"; private FirebaseAnalytics mFirebaseAnalytics; String myString = "Crea un mensaje"; private Button btnT... ...involves the synchronization of Events and data created through Google Form, Stored in Excel spreadsheet in Google drive with Google Calendar. The Excel form and worksheet will be provided. As well as google account data. Important notes: - Current events in Google Calendar should be updated in the Excel Worksheet - All added events should be presented in google .../ delete events in Goolge calendar, but I'd like to make the sync two way. So I could also make the watch chanel and receive the "ping from google" when something changed in the calendar, but before I could do an incremental list I have to do a FULL SYNC, and that is the point where I stucked. So I'd like to ask a help in listing events of Google ..... ...BlueMotion Group Based in Banbury Oxfordshire. Its been going since 31st August 2017 The company supplies Security & Hospitality staff to the events industry we also have our own mobile bar set up which we hire out We have our very own in house Hospitality Training Academy, training our own staff and outside companies We do not sub-contract any off our
https://www.tr.freelancer.com/job-search/hire-events/
Sync

#include <sync.h>

Summary: Functions

sync_file_info
    struct sync_file_info * sync_file_info(int32_t fd)
    Retrieve detailed information about a sync file and its fences. The returned
    sync_file_info must be freed by calling sync_file_info_free().
    Available since API level 26.

sync_file_info_free
    void sync_file_info_free(struct sync_file_info *info)
    Free a struct sync_file_info structure.
    Available since API level 26.

sync_get_fence_info
    struct sync_fence_info * sync_get_fence_info(const struct sync_file_info *info)
    Get the array of fence infos from the sync file's info. The returned array is
    owned by the parent sync file info, and has info->num_fences entries.
    Available since API level 26.

sync_merge
    int32_t sync_merge(const char *name, int32_t fd1, int32_t fd2)
    Merge two sync files. This produces a new sync file with the given name which
    has the union of the two original sync files' fences; redundant fences may be
    removed. If one of the input sync files is signaled or invalid, then this
    function may behave like dup(): the new file descriptor refers to the
    valid/unsignaled sync file with its original name, rather than a new sync
    file. The original fences remain valid, and the caller is responsible for
    closing them.
    Available since API level 26.
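A minimal usage sketch of these functions might look like the following. This is illustrative only: it assumes `fd1` and `fd2` are valid sync file descriptors obtained elsewhere (e.g. from a graphics driver), it relies on the `status` field of `struct sync_fence_info`, and it is not runnable outside an Android (API 26+) environment:

```c
#include <sync.h>      /* header per the reference above */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* fd1 and fd2 are assumed to be valid sync file descriptors. */
void inspect_merged(int32_t fd1, int32_t fd2) {
    /* Merge the two sync files into one fence with the union of their fences. */
    int32_t merged = sync_merge("merged-fence", fd1, fd2);
    if (merged < 0)
        return;

    struct sync_file_info *info = sync_file_info(merged);
    if (info != NULL) {
        /* Walk the per-fence info array, which is owned by 'info'. */
        struct sync_fence_info *fences = sync_get_fence_info(info);
        for (uint32_t i = 0; i < info->num_fences; i++)
            printf("fence %u: status %d\n", i, fences[i].status);
        sync_file_info_free(info);  /* caller must free the info struct */
    }
    close(merged);  /* fd1 and fd2 remain valid; close them separately */
}
```

Note the ownership rules from the reference: the fence array is freed along with `info`, and merging does not consume the input descriptors.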
https://developer.android.com/ndk/reference/group/sync?authuser=0&hl=ja
Woohoo! This concerns a recent code-length thread on the C board. The original challenge is here: KAMIL

I finally managed to squeeze it into the 256 bytes. This would be 240 bytes if you removed the comments and whitespace (230 if you removed cin.get(); too). (Although the filename probably should come from the command line?) I get 16 points!!! Anyone care to score more?

Code:
#include<iostream>
#include<fstream>
#include<string>
using namespace std;
int main(){
    string s;
    string t="TDLF";
    ifstream f("test.txt");
    while(f>>s){
        int c=1;
        for(size_t i=0;i<s.size();i++)
            c*=t.find(s[i])==t.npos?1:2;
        /*man I'm PROUD of the t.npos thing, hopefully legal for all std::string implementations?*/
        cout<<c<<endl;
    }
    cin.get(); //could remove that for extra 10 bytes
}

#Edit: found ways to get rid of a few more bytes.

The test file I used and the output:

FILIPEK FATLEAD NORMAL WOW DELFI WOOHOO

output:
4 16 2 1 8 1
Competition in Currency: The Potential for Private Money
by Thomas L. Hogan
No. 698, May 23, 2012

Executive Summary

Privately issued money can benefit consumers in many ways, particularly in the areas of value stability and product variety. Decentralized currency production can benefit consumers by reducing inflation and increasing economic stability. Unlike a central bank, competing private banks must attract customers by providing innovative products, restricting the quantity of notes issued, and limiting the riskiness of their investing activities.

Although the Federal Reserve currently has a de facto monopoly on the provision of currency in the United States, this was not always the case. Throughout most of U.S. history, private banks issued their own banknotes as currency. This practice continues today in a few countries and could be reinstituted in the United States with minimal changes to the banking system.

This paper examines two ways in which banks could potentially issue private money. First, U.S. banks could issue private notes redeemable for U.S. Federal Reserve notes. Considering that banks issuing private notes in Hong Kong, Scotland, and Northern Ireland earn hundreds of millions of dollars annually, it appears that U.S. banks may be missing an opportunity to earn billions of dollars in annual profits. Second, recent turmoil in the financial sector has increased demand for a stable alternative currency. Banks may be able to capture significant portions of the domestic and international currency markets with a private, commodity-based currency. Legislation clarifying the rights of private banks to issue currency could help clear the path toward a return to private money.

Thomas L. Hogan is assistant professor of economics at West Texas A&M University.
Introduction

In the United States and most countries around the world, money is produced and managed by the government's central bank. This paper discusses two potential opportunities for creating private money: private banknotes and commodity-based currency.

For most of U.S. history, competing private banks issued paper currency redeemable for coins of gold and silver. A few countries today maintain similar semiprivate systems in which private banks issue their own banknotes redeemable for government currency. Allowing such a system in the United States would benefit all consumers by improving price stability and allowing banks to compete for customers.

Alternatively, banks might issue money whose value is based on a commodity such as gold or silver. In international trade, the U.S. dollar is often used in transactions because of its relatively stable long-term value. The dollar is also the dominant form of currency used in many less-developed countries, yet economists generally agree that historically the international gold standard provided a more stable system of international trade than the current system of national fiat currencies.1 If a commodity-based currency could replace the dollar in these foreign markets, it could improve economic stability and help facilitate international trade.

The competitive issue of private banknotes could improve price stability in the U.S. economy.2 Private banks have the incentives and information necessary to provide the optimal quantity of money to encourage economic growth. In the past, the competitive production of currency helped the U.S. economy achieve high levels of gross domestic product (GDP) growth and low levels of price inflation. Creating a semiprivate monetary system, in which private banknotes were redeemable for notes from the Federal Reserve, would reduce the government's involvement in monetary policy.
The Federal Reserve would have less influence on the supply of currency but would maintain substantial power over the overall money supply through its standard means of open-market operations, reserve ratios, and targeting of the federal funds rate. Allowing competition in currency would bring the United States one step closer to obtaining the full advantages of a decentralized system.

Private banknotes could easily be introduced in the United States. In Hong Kong, Scotland, and Northern Ireland, private banks issue banknotes redeemable for the national currency. Despite the availability of central banknotes in these locales, consumers transact almost exclusively in private currency.3 Private banks consequently earn hundreds of millions of dollars annually in the private currency market. For each unit of private currency withdrawn by a customer, the bank retains one unit of government currency, which can be invested or loaned out to other customers. The bank earns revenue on these investments and loans for as long as its private notes remain in circulation. If U.S. banks were able to capture even a small percentage of the domestic market for banknotes, they likely could earn billions of dollars in annual profits.

The idea that American consumers might enjoy using private banknotes is far from implausible. Imagine if dollar bills carried pictures of local sports teams, or if Wells Fargo produced its own dollar bills embossed with images of stage coaches and the Old West. Perhaps customers would debate the relative merits of the LeBron James banknote versus the Michael Jordan, or admire a special note commemorating Independence Day or dedicated to American veterans. Consumers in Hong Kong, Scotland, and Northern Ireland already enjoy these experiences, with private notes depicting heroes, sports figures, and famous historical events.
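The revenue mechanics described above amount to earning a return on the "float" of notes left in circulation: for each dollar of private notes outstanding, the issuer holds a dollar of government currency it can invest until the note is redeemed. A minimal sketch, with purely hypothetical figures (the text gives no specific numbers here):

```python
# Float revenue on private notes: for every dollar of private notes
# in circulation, the issuer holds a dollar of government currency
# it can invest or lend until the note is redeemed.
def annual_float_revenue(notes_outstanding, asset_yield):
    return notes_outstanding * asset_yield

# Hypothetical: $10 billion of notes circulating, invested at 4 percent,
# earns roughly $400 million per year.
revenue = annual_float_revenue(10e9, 0.04)
```

The profit scales directly with how many notes stay in circulation, which is why capturing even a small share of the currency market could be worth billions.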
Once banks establish reciprocal exchange agreements, private banknotes in the United States would trade at equal value with Federal Reserve notes as they passed from person to person throughout the economy. Banks might even pay customers to use their notes. For example, an ATM might give customers a cash bonus for withdrawing "B. of A. Bucks" issued by Bank of America rather than regular U.S. dollars, or Citibank might pay interest on its private notes akin to earning points on a credit card.

Private note issue appears to be legal in the United States today. The regulations prohibiting the issuance of private notes were repealed almost two decades ago, yet no banks have chosen to enter this potential market. This may be because there is still some question as to whether private currency would be accepted by the Federal Reserve, or whether its producers would be subject to legal action. Private issuers might be prosecuted under gray areas of the law, or Congress might choose to resurrect the prohibitions on private notes. A few small-scale local currencies have been issued with the Fed's blessing. These local currencies provide examples of how a new private currency might be introduced, but since none compete directly with Federal Reserve notes, they provide little clarification of the legal status of a large-scale private currency, and do little to inform a prediction of the Fed's reaction thereto.

One might ask: if a system of privately issued currency is profitable, why is it not already in place? To answer this question, we estimate the potential profit from the issue of private banknotes in the United States, and find that capturing even a small percentage of this market could lead to billions of dollars in annual profit. We then consider alternative explanations for the lack of private notes.
Banks might be deterred from entering the market for private banknotes if they fear a shrinking demand for paper currency or a lack of demand for private notes on the part of skeptical American consumers. The most plausible explanation for the lack of participants in this would-be-profitable market, however, appears to be the uncertain legal status of private note issue, and the rigorous federal prosecution of currency-related crimes.

Another way that banks could enter the market for private money would be to create a new currency whose value is based on a commodity or basket of commodities. A commodity-based currency would likely be most valuable for international transactions. International traders rely on the stability of exchange rates among nations, which in turn rely on a vast array of variables that are affected by each country's economic performance. To avoid the risk of currency volatility, a large percentage of trade is conducted in currencies with stable values, such as the U.S. dollar and the Swiss franc. A new currency based on a commodity such as gold, whose value is well established, might reduce currency risk in international transactions and capture some portion of this large potential market. The value of such a currency would rely not on the economies of the countries but on the value of gold itself, which has remained remarkably stable over time.

Poor performance by the Federal Reserve has motivated some policymakers to call for a return to a gold standard. Such a transition would be difficult domestically since it would alter the entire U.S. money supply. An easier option might be to simply end the Federal Reserve's monopoly on paper currency by opening the market to competition. If a currency redeemable for gold were introduced as an alternative to the U.S. dollar, it might become popular domestically and/or abroad. A commodity-based currency would have a slower adoption rate than one based on U.S.
dollars (since the domestic market is currently based on dollars), but the long-term advantage of value stability could provide greater macroeconomic benefits in terms of lower inflation and increased economic stability.

There are substantial benefits to allowing competition in currency. In the next section, we outline the benefits of private note issue and the costs of government note issue. We then discuss the history of private note issue in the United States; current practices in Hong Kong, Scotland, and Northern Ireland; and the use of local currencies. Finally, we evaluate the potential profitability of private banknotes and commodity-based currency as well as the barriers to their introduction.

Benefits and Costs

Benefits of Private Note Issue

Privatizing the supply of banknotes would have both individual and systemic benefits. Unlike a monopoly currency provider, a system of competing banks must match the quantity of banknotes issued to the quantity demanded by the public, and each bank faces incentives not to issue too many or too few notes. Such systems have historically led to lower inflation and more stable economic conditions than central banking. These benefits would be only partially captured in a semiprivate system, where private banknotes were redeemable for central bank currency. Any potential harm from the Fed's monetary policy, however, would at least be diminished. In addition, private-note-issuing banks might attempt to satisfy their customers by competing on other margins such as security and aesthetics. These individual benefits would complement the systemic improvements in price stability.

Private banks create money through fractional reserve lending.
Banks that issue their own banknotes have an incentive to expand their note issue and reduce their reserves in order to improve profit margins. For perhaps less obvious reasons, however, banks also have incentives to increase reserves and restrict their note issues. Each bank must hold some capital on reserve to pay out to depositors, or in the case of a note-issuing bank, to the redeemers of banknotes. If a bank holds too few reserves, it runs the risk of defaulting on these obligations. To most profitably manage reserves and note issue, a bank regularly assesses its optimal levels of reserves based on the variability in redemptions of notes and deposits.

In a system of decentralized note issue, competing banks each issue banknotes in the quantities they perceive as demanded by their customers. The law of large numbers indicates that the money supply will be more accurately set through decentralized provision than by a single monetary authority. Allowing multiple issuers of banknotes means that, on average, the market demand for banknotes will match the supply. The Federal Reserve, on the other hand, has no such luxury. Having a monopoly supplier of banknotes means the country must rely on a single set of experts to determine the proper supply of notes. Any underprovision or overprovision of notes will adversely affect the price level and, therefore, the entire economy.

In setting its reserve ratio, each private bank faces a tradeoff between its expected return and its risk of default. Like any firm, a bank must finance its operations through a combination of debt and equity. Increasing the ratio of debt to equity amplifies the return to equity, making the firm more profitable but also more risky. A bank can earn more revenue by loaning out a greater portion of its funds, but it must keep enough money on reserve to satisfy any redemptions demanded by its depositors.
For example, suppose the bank offers to pay 4 percent on deposits, can earn 12 percent on its loans, and customers have deposited $100 in the bank. If the bank uses a 10 percent reserve ratio, then $10 of its funds will be kept on reserve while the other $90 is lent out to customers at an interest rate of 12 percent. If no customers redeem their deposits, then the bank earns $10.80 from its loans and must pay $4 on its deposits, resulting in a net gain of $6.80. Had the bank kept less money on reserve and made more loans, its profits would have been even higher. Because customers can redeem their deposits at any time, however, the bank faces the risk that many customers will choose to redeem within a short time period, and the bank will be drained of its reserves. Since the bank has only $10 on reserve, its managers must hope that less than $10 will be withdrawn. If more than $10 is withdrawn, the bank will be unable to pay its obligations and will go into default, or possibly even bankruptcy.4 If the bank were to hold less than $10 in reserves, then its risk of default would be even greater. Thus, bank managers must choose a level of reserves that balances the marginal benefit of higher revenues against the marginal cost of risk of default.

Any bank that issues private notes faces an additional financing decision, since its liabilities are drawn from a combination of both deposits and banknotes. Managers must weigh the marginal benefits and costs of additional banknotes versus additional deposits. Hence, managers choose a target level of reserves for their notes and deposits and must replenish their reserves whenever notes or deposits are redeemed. To maintain their chosen reserve ratio, any note-issuing bank must be careful not to issue more notes than demanded by the market. This is particularly important because note redemptions disproportionately decrease the holdings of reserves relative to notes outstanding.
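The deposit arithmetic in the example above is easy to check directly. A short sketch using the same illustrative figures (4 percent on deposits, 12 percent on loans, $100 of deposits, a 10 percent reserve ratio):

```python
# The illustrative figures from the example above.
deposits = 100.0
reserve_ratio = 0.10
deposit_rate = 0.04   # paid to depositors
loan_rate = 0.12      # earned on loans

reserves = deposits * reserve_ratio        # $10 held against redemptions
loans = deposits - reserves                # $90 lent out
interest_earned = loans * loan_rate        # $10.80
interest_paid = deposits * deposit_rate    # $4.00
net_gain = interest_earned - interest_paid # $6.80

# A lower reserve ratio would raise net_gain, but also raise the
# chance that redemptions exceed the $10 of reserves (default risk).
```

The last comment is the whole tradeoff: every dollar moved from reserves to loans adds 12 cents of expected revenue while thinning the buffer against redemptions.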
For example, suppose that Greene Bank issues banknotes redeemable for gold, and its managers have determined that the optimal reserve ratio is 1/10. This means that if the bank holds $10 worth of gold on reserve, it will issue $100 in banknotes. Now suppose a customer redeems $5 worth of banknotes for gold. The bank now has $95 in notes outstanding, but only $5 worth of gold on reserve. This indicates a reserve ratio of 1/19, which is far below the optimal rate set by the bank's managers. Thus, note redemptions have a much larger effect on the bank's reserves than on its notes outstanding. In order to return to its ideal reserve ratio, the bank must either buy $5 worth of gold or recall $45 of its outstanding notes. This effect is what economists call the "law of adverse clearings."5

Adverse clearings discourage any individual bank from issuing too many notes relative to other banks in the economy. A bank may seek to increase its profitability by reducing its reserve ratio and issuing more banknotes. As the above example shows, however, clearings are "adverse" since, percentagewise, they affect the bank's reserves much more than they affect its notes outstanding. When the bank issues more banknotes without increasing its reserves, the bank's notes will increase as a portion of the money supply, so more notes will be returned to the bank and redeemed for gold. The bank will soon be drained of its gold reserves and will be unable to support the amount of banknotes it has issued. As described above, the bank will be forced either to acquire more gold reserves or to recall some of its notes outstanding. In this way, adverse clearings prevent the overissue of banknotes. They also act as an indicator of the bank's level of risk. Adverse clearings provide constant feedback to bank managers regarding their optimal quantity of notes outstanding.

There is much historical evidence that adverse clearings prevent banks from overextending their note issues. Lawrence H.
White’s Free Banking in Britain details the emergence of a system of private banknotes in Scotland, where multiple private banks each issued their own notes. White explains how the banks developed a clearinghouse for banknotes which prevented banks from over-issuing notes. Through the law of adverse clearings, each bank acted as a check on the others.6 The Suffolk Banking System used a similar note-clearing mechanism in New England during the early 19th century.7 It is an oft-cited example of a stable free banking system. Other regional clearinghouses developed after the Civil War.8 Perhaps in response to adverse clearings, early American banks tended to underissue rather than overissue banknotes. There is a lengthy discussion in the economic literature on why early U.S. banks issued fewer notes than economists would have predicted.9 Banking in the 19th century was once thought to be chaotic and economically unstable. Recent evidence has shown, however, that the economy was in fact equally, if not more, stable before the establishment of a central bank. Later sections will discuss this period of U.S. history in more detail, along with the current systems of private-note issue in Hong Kong, Scotland, and Northern Ireland. When many individual banks issue currency, each has some indication of the percentage its notes compose of the total currency supply, and whether the demand for currency is growing or shrinking. In con- The Fed has neither the incentive nor the information to optimize the size of the money supply. 5 Even with the collective knowledge of the world’s foremost monetary experts, the Fed has a systematic disadvantage relative to a system of many decentralized issuers. trast, the Fed has neither the incentive nor the information to optimize the size of the money supply. Without adverse clearings, the Fed has no simple means of assessing the optimality of its level of note issue and faces no market discipline on the question. 
Although the Fed does monitor some measures of currency turnover, its homogeneous bills are generally re-issued by banks without being individually tracked, and the Fed has no incentive to initiate such a note-tracking scheme. As discussed in the next section, the Fed faces different incentives than a private bank and sometimes prioritizes other activities above the proper management of the money supply. In addition, the law of large numbers favors a system of many note issuers over the Fed’s monopoly. When many banks each attempt to issue the “right” quantity of banknotes, some will create too many and some too few, but they will tend to get it right on average. The Fed, in contrast, manages only one money supply and must do its best to match supply to demand. Any underor overprovision of notes will adversely affect the price level, and therefore, the entire economy. Even with the collective knowledge of the world’s foremost monetary experts, the Fed has a systematic disadvantage relative to a system of many decentralized issuers. Private notes tend to have more stable values than those produced by a central bank, even in a semiprivate system where the private banknotes are only redeemable for a central bank’s notes. This is because private banks must respond to demand; they cannot inject more notes into the money supply than customers are willing to accept. When the central bank increases the money supply, private banks cannot increase their note issue proportionately unless there is customer demand for these additional notes. 
If no such demand exists, private notes will lose part of their value since the money supply as a whole has been devalued by the central bank's excessive issue, but the resulting inflation is tempered at least in part by the private banks' inability to add still more superfluous notes to the currency supply.10 Similarly, if the central bank were to provide less money than demanded in the economy, private banks could issue more notes to compensate, thereby preventing deflation and furthering stability.

There are two main components of the money supply: cash and deposits. The Fed can affect the supply of money in the economy by influencing either of these. Although the Fed does decide how many dollars to print, it is more likely to affect deposit holdings by influencing the market interest rate. The amount of money that an individual chooses to deposit in a bank and the amount he chooses to take out in loans depends on the interest rate he can earn on his deposits or must pay on his loans. A lower interest rate means that fewer people will deposit their money in the bank, and more individuals and businesses will take out loans. These effects cause more money to circulate in the economy and, therefore, more economic activity. In contrast, a higher interest rate will cause individuals to deposit more money into banks and fewer people to take out loans, thus reducing the amount of money in the economy and slowing economic activity.

The degree to which the Fed's actions affect the money supply is determined by the "money multiplier," a formula which calculates how much the total money supply will be affected by a change in cash or bank reserves. The size of the multiplier depends mostly on the reserve ratio chosen by private banks and on the amount of cash held by consumers relative to deposits. The influences of cash and reserves counteract one another.
As the quantity of currency in the economy increases above the quantity demanded by consumers, banks tend to increase their reserves, so each new influx of cash has less of an effect on the money supply. Alternatively, a lack of cash in the economy causes banks to reduce their reserves, so changes in cash affect the money supply more. These feedback effects help align the quantity of money supplied by banks with the quantity demanded by consumers.11
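The text describes the money multiplier only verbally. A standard textbook formulation, supplied here as an assumption rather than the paper's own equation, is m = (1 + c)/(c + r), where c is the public's currency-to-deposit ratio and r is the banks' reserve-to-deposit ratio:

```python
# Simple textbook money multiplier (an assumption, not the paper's
# own formula): c = public's currency/deposit ratio, r = banks'
# reserve/deposit ratio.
def money_multiplier(c, r):
    return (1.0 + c) / (c + r)

base = money_multiplier(c=0.2, r=0.1)  # about 4

# Raising either ratio shrinks the multiplier, which is the
# counteracting feedback the text describes: more cash-holding by
# the public or higher bank reserves dampens each cash injection.
assert money_multiplier(c=0.3, r=0.1) < base
assert money_multiplier(c=0.2, r=0.2) < base
```

The direction of both effects matches the prose: when banks raise reserves or consumers hold more cash, a given injection of base money translates into a smaller change in the total money supply.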
Third, the Fed’s policy of maintaining artificially low interest rates increases the profitability of lending. Throughout the first decade of the 21st century, these policies encouraged irresponsible mortgage lending and greatly contributed to the recent financial crisis. It is likely that risky investing by private banks would have been curtailed in the absence of these policies. While most consumers care only about the value of their money in exchange, some customers might prefer private notes for aesthetic reasons, such as the colors or pictures they carry. In Hong Kong, Scotland, and Northern Ireland, customers have the option of using notes issued by the central bank, but they overwhelmingly prefer those issued by private banks.13 Since these banks must compete for customers, private notes feature a variety of pictures and designs. Notes tend to be decorated in national themes, but not those of politicians or government. Hong Kong notes have animals such as lions, horses, turtles, and dragons. Notes from Northern Ireland and Scotland often portray notable sportsmen or historical figures, such as famous Scots Robert the Bruce, Lord Kelvin, and Alexander Graham Bell. Irish notes have featured inventor Harry Ferguson and Irish footballer George Best. Notes in all countries also show famous buildings, castles, and landscapes, or commemorate historical events, such as the launching of the U.S. Space Shuttle and the defeat of the Spanish Armada, which are portrayed on banknotes in Northern Ireland. Private notes come in a variety of colors and sizes to improve aesthetics, security features, and ease of use. When customers can choose between multiple banks, competition encourages innovation. Competition does not, however, guarantee that private notes will necessarily be more colorful or have better security than government banknotes. 
Unlike the government, private banks attempt to balance the marginal benefits of additional features against the marginal increases in the cost of production. Central banks, on the other hand, are less responsive to customer preference since their spending levels are based on government budgets, and they often operate as monopolies rather than facing competition. Allowing a variety of firms to produce banknotes would not necessarily guarantee better banknotes, but it would give customers more variety in the money they use and accept, and customers with different tastes would be more likely to get the types of money they like best. Costs of Public Note Issue In contrast to private banks, central banks do not face the competitive pressures Allowing a variety of firms to produce banknotes would not necessarily guarantee better banknotes, but it would give customers more variety in the money they use and accept. 7 Central banks are often more focused on influencing economic activity than protecting the value of their banknotes. of the market. Having no customers to satisfy, they have no reason to offer better products or protect the value of their banknotes. Having no equity holders to satisfy, they need not worry about the riskiness of their note issue or investing activities. In fact, it is considered an advantage for the central banker to be “independent” and not beholden to any individual or political party.14 Without profit-maximizing incentives such as adverse clearings to help manage reserves, central banks lack an explicit mechanism for determining the optimum quantity of money and can only make adjustments with what Milton Friedman called “long and variable lags.”15 Some have likened the Fed’s adjustment process, dictated by past performance rather than forward-looking indicators, to driving a car by looking in the rear-view mirror.16 Central banks are often more focused on influencing economic activity than protecting the value of their banknotes. 
Central banks regularly devalue their own currency by inflating the money supply. Inflation is defined as an increase in the average level of prices in the economy. This occurs when the supply of money in the economy grows faster than the supply of goods. As more money enters the economy, the proportion of money to goods increases, so the prices of goods tend to rise. An increase in prices is equivalent to a fall in the value of a dollar, since it takes more dollars than it did before to buy any particular good. In this way, an increase in the supply of money in the economy causes the value of the currency to fall. Inflation is often considered to be a hidden tax that benefits the government at the expense of money holders.17 Deadweight economic losses are created as individuals change their behavior to avoid the effects of inflation.18 Inflation also has a distributional effect of benefitting debtors at the expense of creditors.19 Even moderate amounts of inflation can severely damage the economy by distorting relative prices, redirecting resources away from their optimal uses, and causing businesses to make unprofitable investments.20 In the United States, the Federal Reserve most often influences the money supply through its open-market operations. The Fed buys U.S. Treasury bonds, which raises the market price and lowers the market interest rate.21 U.S. Treasuries are considered the one and only risk-free security, since it is considered unlikely that the U.S. government will ever default on its obligations.22 Because of its size and liquidity, the Treasury market is considered the foundation of the entire financial markets sector. Consequently a reduction in the yield on Treasuries is passed on to other financial markets. These lower rates reduce the rate at which banks are willing to lend, which in turn makes it more profitable for firms to borrow, creating an increase in economic activity. 
For consumers, lower interest rates mean lower earnings on any money they might save, so they are less likely to save and more likely to spend, which also increases economic activity. When the Fed increases the money supply, the corresponding increase in economic activity is due, at least in part, to confusion on the part of consumers. This is true for two reasons. First, businesses and consumers are not fully aware of the Fed's monetary policy. When interest rates fall, managers do not consider whether the change was caused by the Federal Reserve. All they know is that the low interest rate makes potential investment projects more profitable. Similarly, a consumer who is deciding how much of his wages to put into his savings account only considers the return on his account and not the Fed's role in influencing the interest rate. Second, even if these parties were completely aware of the Fed's activities, they would be unable to perceive exactly how they would be affected since prices do not change uniformly throughout the economy. It would be overly simplistic to assume that if the money supply increases by 10 percent then all prices will rise by 10 percent. In reality, all prices in the economy will change by different amounts, so no individual or firm can know exactly how much they will be affected by an increase in the money supply.

Figure 1: Indexed Values of Major Currencies, 1970–2009. Source: Author's calculations based on World Bank GDP deflators (NY.GDP.DEFL.KD.ZG).

When the Fed injects money into the economy with the purchase of Treasury bonds, banks and financial institutions who participate in the Treasury markets are the first to be affected by the change. These institutions pass the effects on to businesses in the form of loans, which the businesses then use to purchase equipment and raw materials, pay wages and operating expenses, et cetera.
The bank may also change the rate it pays on savings accounts, which will cause consumers to save less and spend more on other goods. With all of these forces influencing prices throughout the economy, it is impossible to know how any one business or individual will ultimately be affected by the Fed's interest rate manipulation. Only later, after the Fed ends its monetary expansion, do consumers discover how much the value of the dollar has fallen. This "money illusion" is widely discussed in the economic literature on monetary policy.23 Not only does the Fed fail to protect the value of the dollar, but its monetary policies intentionally deceive dollar holders in order to influence spending and economic activity. Figure 1 shows the declining value of several major currencies since 1970. Each line represents the value of a particular currency as a percentage of its purchasing power in 1970.24 Since that time, the value of the U.S. dollar has fallen by 78.9 percent. The British pound and the currencies of the European Monetary Union have fared even worse, falling 94.1 percent and 87.1 percent, respectively, during the same period. The Japanese yen and Swiss franc both experienced large initial declines, but they have recently slowed their devaluations, falling by only 55.3 percent and 66.5 percent, respectively. The problem of inflation is even more pronounced in less-developed nations since cash payments are more common and central banks are more likely to inflate the national currency.
As described in the 2010 paper, "Economic Development and the Welfare Costs of Inflation," by Edgar Ghossoub and Robert Reed, "gains from eliminating inflation would be the most significant in the developing world."25 One common justification for allowing central banks to inflate the money supply is that they are able to use this tool to smooth the business cycle and increase economic stability. Historical evidence, however, shows that just the opposite is true. A recent paper, "Has the Fed Been a Failure?" by George Selgin, William Lastrapes, and Lawrence H. White, shows that the creation of the Federal Reserve has substantially increased inflation in the U.S. economy without improving economic stability or preventing bank runs.26 Their study draws together evidence from several sources that have made similar points to show that the Fed has failed to achieve its stated goals of furnishing and maintaining the currency and improving economic stability.27 Another paper by Selgin, "Central Banks as Sources of Instability," compares the performance of the U.S. economy before and after the creation of the Fed, as well as to the less-regulated Canadian banking system. The study finds that the Fed has led to worse, not better, economic performance.28 Many economists argue that a little inflation is not such a bad thing. The Federal Reserve, for example, tends to have an inflationary bias, of which most economists approve.
In his 2002 speech, "Deflation: Making Sure 'It' Doesn't Happen Here," Fed governor and future chairman Ben Bernanke advised that "The Fed should try to preserve a buffer zone for the inflation rate, that is, during normal times it should not try to push inflation down all the way to zero."29 Central bankers generally prefer to allow a small amount of inflation rather than risk even a small possibility of deflation, which is often regarded as a greater danger to the economy. The inflationary bias exhibited by central bankers does not exist in a free banking system for several reasons. First, deflation is not always dangerous. Although it is true that when deflation is caused by a shortage of money it can push an economy into recession, deflation also occurs when real increases in productivity make the goods we buy less expensive to produce. For example, the episodes of mild productivity deflation in the 19th-century United States represented improvements in the economy. By contrast, the Fed-induced monetary deflation of the 1930s led to (or at least prolonged) the Great Depression. Second, problems with the money supply are more likely under a central bank than under a system of competing private banks. When a central bank controls the entire money supply, it alone is responsible for any potential harm, so bankers err on the side of cautionary inflation despite the long-term harm to consumers. By contrast, when money is produced by many private banks, each bank attempts to maximize its profits by providing the amount of currency demanded in the market. If one bank produces too much, another might produce too little, and there is no systematic tendency to oversupply or undersupply. Third, although a small amount of inflation is not harmful in any single year, it can have grave effects when compounded over time. For example, the Fed's target inflation rate of 2 percent per year implies that over 50 years the value of the dollar will fall to 36.4 percent of its current purchasing power.
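The compounding arithmetic behind this 50-year figure is easy to verify. In the sketch below, 36.4 percent corresponds to the share of purchasing power the dollar retains; a second convention (deflating by the price level) gives a slightly different answer:

```python
# Two common ways to compound a 2 percent annual inflation rate over
# 50 years. Both show the dollar retaining roughly a third of its value.

years, rate = 50, 0.02

# (a) value eroded by 2 percent per year: (1 - r)^n
remaining_simple = (1 - rate) ** years

# (b) value deflated by the cumulative price level: 1 / (1 + r)^n
remaining_deflated = 1 / (1 + rate) ** years

print(f"(1 - r)^n:     {remaining_simple:.1%}")    # 36.4%
print(f"1 / (1 + r)^n: {remaining_deflated:.1%}")  # 37.2%
```

Either way, "only" 2 percent inflation compounds into a loss of nearly two-thirds of the dollar's purchasing power over 50 years.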
As demonstrated in Figure 1, the historical decline in the value of the U.S. dollar has actually been much worse. Central banks are unconcerned with the riskiness of their investing activities. Private banks must balance the marginal increases in the return on their investments against the marginal risk of potential default, but central banks face no such tradeoff. As agents of government, they face no credible threat of default. Thus, central banks can often invest in any asset they choose without reprisal. This problem was once thought not to apply to the Federal Reserve, which invested only in U.S. Treasuries. However, the Fed has changed its policies dramatically in recent years. Through its quantitative easing programs and bank bailouts, the Fed intentionally made very risky investments with taxpayer dollars, which included loans to AIG and Bear Stearns and the purchase of over $1.25 trillion in mortgage-backed securities.30 Private banks cannot make such ill-advised gambles because they are subject to the discipline of the market. For private banks, risky investing is a path to bankruptcy unless the bank is rescued by government regulation or bailout. To satisfy shareholders, private banks must invest in profitable projects rather than subsidizing other firms for the general good. To satisfy customers, private banks must protect the value of the notes they issue. In doing so, they not only protect their own interests but also provide a stable money supply, thereby benefitting the entire public.31

Note Issue in Practice

U.S. History

For most of U.S. history, private banknotes comprised a significant portion of the money supply. In early American history, private, state-chartered banks issued their own banknotes redeemable for gold or silver dollars produced by the U.S. Mint.32 After the Civil War, note-issuing banks required a national charter, and their activities became increasingly regulated.
This period of relatively free banking came to an end in 1913 with the establishment of the Federal Reserve, which maintains a de facto monopoly on the issue of paper currency. Since then, the Fed has continually expanded its powers while devastating the value of the dollar. The prime era of free American banking was the pre–Civil War period from 1783 to 1861, when banks issued paper banknotes redeemable for official U.S. coins. The Coinage Act of 1792 provided that the U.S. Mint would produce several denominations of gold and silver coins, including the silver dollar coin (made from 371.25 grains of pure silver) and the gold eagle coin (worth 10 dollars, with a weight of 247.50 grains of pure gold). Private banknotes redeemable for these coins were soon being exchanged at equal value throughout the nation. The number of state-chartered banks and the quantity of notes they supplied grew strongly during this period.

Figure 2: Growth in Banks and Banknotes, 1800–1860. Source: Warren E. Weber, "Early State Banks in the United States: How Many Were There and Where Did They Exist?" Federal Reserve Bank of Minneapolis Quarterly Review 30, no. 1 (2006): 28–40; and John E. Gurley and E. S. Shaw, "The Growth of Debt and Money in the United States, 1800–1950: A Suggested Interpretation," Review of Economics and Statistics 39, no. 3 (1957): 250–62.

Figure 3: Indexes of Prices and Production, 1790–1860. Source: Louis Johnston and Samuel H. Williamson, "What Was the U.S. GDP Then? Annual Observations in Table and Graphical Format 1790 to the Present," MeasuringWorth, 2011; and Lawrence H. Officer, "The Annual Consumer Price Index for the United States, 1774–2011," MeasuringWorth, 2012.

Figure 2 shows the growth in banks and banknotes from 1800 to 1860.33 The solid line represents the number of banks (listed on the right y axis), while the dotted line represents the value of notes outstanding (listed on the left y axis) in millions of dollars. By 1860 there were more than 1,600 private corporations issuing banknotes and an estimated 8,370 varieties of notes "in form, color, size, and manner of security."34 Figure 3 shows indexes of GDP and the consumer price index (CPI) from 1790 to 1860.35 Economists and historians once considered this period an era of untrustworthy "wildcat" banks, fraught with inflation and economic instability. Recent research, however, has shown these notions are more popular myth than historical fact.36 Comparing figures 1 and 3, we can see that the price level during the era of free banking was quite stable compared with recent times.

Banknotes in this era were not issued by state banks alone. The First Bank of the United States was granted a 20-year charter that began in 1791 and lasted until 1811. The charter for the Second Bank of the United States began in 1816 and expired in 1836. As Richard Timberlake described in Monetary Policy in the United States, "[t]he Banks of the United States were not created as central banks, nor dared they consider themselves as such."37 They had neither a monopoly on the production of currency nor the responsibility of regulating the banking system. These banks simply acted as agents of the federal government in matters of finance, particularly the sale of Treasury bonds. The driving force for the establishment of the First Bank of the United States was to raise funds to finance the newly formed national government; the Second Bank was established to finance the War of 1812.
After the expiration of the Second Bank, the Treasury occasionally issued government banknotes and small-denomination bills that were sometimes used as currency, but these accounted for only a small fraction of the money in the economy. By 1860, government currency accounted for less than 10 percent of circulating U.S. currency and less than 4 percent of the total money supply.38 The Suffolk banking system is a prime example of effective free and private banking in the United States. From 1825 to 1858, the Suffolk Bank administered a clearinghouse for banknotes that was used by banks throughout New England. This system allowed the notes of many competing banks to be exchanged at equal value and prevented any bank from overextending its note supply.39 At its peak, the Suffolk system included over 300 banks and was clearing $30 million worth of banknotes per month.40 The Suffolk system was slightly impaired by regulations on interstate banking, however. The nationally chartered Banks of the United States had the advantage of being allowed to open branch banks nationwide, whereas the state-chartered banks of the Suffolk system did not. With the onset of the American Civil War, the issuance of private banknotes ground to a halt. To raise funds for the war, Congress arranged in 1861 to take out loans from many Northern banks, especially in New York, Boston, and Philadelphia. The Treasury then demanded that these loans be paid in gold specie, and "[m]uch against their will the banks complied."41 This left the banks with insufficient reserves to satisfy redemptions for their banknotes. Most private banks were forced to suspend redemption of their banknotes, and in December of 1861, the U.S. government followed suit by suspending redemption of government-issued currency. The First Legal Tender Act of 1862 allowed the U.S. government to issue large quantities of nonredeemable paper currency, commonly known as the "greenback" for its singular hue.
The greenback was effectively a fiat money whose value depended upon the number of notes in circulation rather than the value of any redeemable asset. The Treasury proved incapable of controlling the supply of greenbacks, issuing such a quantity that their value was halved in less than four years.42 The government eventually resumed the redemption of greenbacks for gold or silver with the Specie Resumption Act of 1879. During the Civil War, the government also strengthened its control of the banking industry with the National Banking Acts of 1863 and 1864. These acts imposed strict capital requirements and a substantial semi-annual tax of 5 percent on the note issuance of all state banks. This tax made the issuance of private notes prohibitively costly and caused many state banks to seek national charters. Figure 4 shows the increase in banknotes issued by national banks and those issued by the U.S. government from 1860 to 1868. The rise in national banknotes is mirrored by a decline in state banknotes over the period.43 The note supply from state banks fell from $207 million to $3 million in these eight years, while the note issue by national banks and the U.S. government rose from $0 to $294 million and from $21 million to $394 million, respectively. In his essay, "Debate on the National Bank Act of 1863," John Million shows that, like the First and Second Banks of the United States, "[t]he immediate purpose of the National Banking Act was to assist in providing funds for war purposes."44 Banks were forced to invest more than 100 percent of the value of their outstanding notes in U.S. bonds, which were effectively loans to the Treasury department.45 Banks were also required to pay a semi-annual tax of 0.5 percent on their notes outstanding (a much lower rate than the 5 percent tax on state banks). Additionally, a national banking system was a tool to standardize American currency and limit the independence of the states.
Million notes that, "[f]rom the political rather than from the economic side argument was often brought forward that uniformity would prove a safe bond of union between the states."46

Figure 4: Value of State, National, and Government Banknotes, 1860–1868. Source: Richard H. Timberlake, Monetary Policy in the United States: An Intellectual and Institutional History (Chicago: University of Chicago Press, 1978), Table 7.1, p. 90.

Competition in U.S. currency production ended completely with the passage of the Federal Reserve Act of 1913, which created a new monetary authority with a monopoly on the provision of banknotes. In the century since, the Fed's inflationary policies have slowly eroded the value of the dollar. Federal Reserve notes redeemable for gold were first printed in 1914, although previously issued notes and certificates redeemable for gold and silver continued to circulate. In 1933, President Roosevelt issued Executive Order 6102, requiring that all gold coin and bullion in the United States be confiscated by the federal government.47 Thereafter, Federal Reserve notes were altered to be redeemable only for "lawful money." In 1963, their value was again altered, and Federal Reserve notes were relabeled as non-redeemable "legal tender." The dollar's final tentative link to gold was broken in 1971, when President Richard Nixon withdrew the United States from the Bretton Woods pseudo-gold system of international exchange rates. Through this process, the dollar was reduced to a pure fiat currency. The Fed now uses money as a policy instrument and is no longer primarily concerned with maintaining a stable value for the dollar.
Since ending the practice of note redemption, the Fed's inflationary policies have caused the value of the dollar to fall by almost 80 percent of its 1970 purchasing power, as shown in Figure 1. Over the past century, the Federal Reserve has consistently expanded the scale and scope of its authority. Since the recent financial crisis, "[t]he Fed has invested more than $2 trillion in a range of unprecedented programs," and gained "sweeping new authority to regulate any company whose failure could endanger the U.S. economy and markets."48 Yet despite the Fed's growing power, changes in financial regulation may have inadvertently paved the way toward the revival of private note issue. According to the article, "Note Issue by Private Banks: A Step toward Free Banking in the United States?" by Kurt Schuler, recent financial reforms have actually repealed the prohibitions on private banknotes. As Schuler describes, "Nobody seems to have noticed that state-chartered banks have been effectively free to issue notes since 1976, and national (federally chartered) banks have been free to issue notes since 1994."49 Specifically, the Tax Reform Act of 1976 repealed the 5 percent semi-annual tax on notes issued by state-chartered banks (Public Law 94-455, §1904(a)(18)). The Community Development Banking and Financial Institutions Act of 1994 repealed the restrictions on note issue by nationally chartered banks (Public Law 103-325 §602(e)-(h)), although the semi-annual note tax of 0.5 percent (1 percent annually) (12 U.S.C. §541) remains. Thus, it appears that U.S. banks have the right to issue private banknotes redeemable for currency from the central bank, thereby creating a semiprivate currency system.

Scotland and Northern Ireland

Scotland and Northern Ireland employ a semiprivate system in which private banks issue banknotes redeemable for notes from the central bank, the Bank of England.
Although Bank of England notes are legal tender throughout the United Kingdom, consumers in Scotland and Northern Ireland overwhelmingly prefer notes issued by local private banks. In Scotland, private banks supply an estimated 95 percent of notes in circulation.50 Because banks in both countries fall under the regulatory authority of the Bank of England, we consider them a single market for the purposes of this paper. Private banks have issued banknotes in Scotland for centuries. The Bank of Scotland was founded in 1696 and had a monopoly on note issue until the Royal Bank of Scotland was formed in 1727. As more banks entered the market, competition wrought increases and improvements in branches, products, and services. Peel's Act of 1844, however, ended the era of Scottish free banking by prohibiting new banks from entering the market. Of the 19 note-issuing banks at that time, only three survive today: Bank of Scotland, Royal Bank of Scotland, and Clydesdale Bank. The Scottish system was replicated in Ireland, where banks began issuing private banknotes in 1929. Of the eight original note-issuing banks, only four continue the practice today: Bank of Ireland, First Trust Bank, Northern Banks, and Ulster Bank. The seven note-issuing banks in Scotland and Northern Ireland have increased their note issue continuously over the last decade, as shown in Figure 5. The total quantity of private notes outstanding in Scotland and Northern Ireland has been increasing since 2000 at an average annual rate of 3.9 percent. The British Treasury estimated that in 2005, Scottish and Northern Irish banks earned £80 million (approximately $145 million) from the issue of private banknotes.51 Notes from banks in Scotland and Northern Ireland provide many of the previously discussed benefits of private banknotes. The major advantage to consumers is product variety among banknotes, which they clearly prefer to Bank of England notes.
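Average annual growth figures like the 3.9 percent rate cited above are typically computed as a compound annual growth rate (CAGR). The sketch below uses hypothetical endpoint values, not the banks' actual balance-sheet data:

```python
# Compound annual growth rate of notes outstanding between two years:
# CAGR = (end / start) ** (1 / years) - 1.
# The note-circulation figures below are hypothetical placeholders
# (millions of pounds), not data from the paper.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two observations."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

notes_2000, notes_2010 = 2_500.0, 3_666.0   # assumed endpoints
growth = cagr(notes_2000, notes_2010, 10)
print(f"Average annual growth: {growth:.1%}")  # 3.9%
```

A 3.9 percent compound rate over a decade implies total note circulation grew by roughly 47 percent between the endpoints.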
Unfortunately, there are several reasons this system does not capture the full advantages of free banking. First, the Bank of England has forbidden new banks from issuing notes. Second, since private notes are redeemable for Bank of England notes, the central bank retains a strong influence over the money supply. Third, on weekends (from Friday to Sunday) note-issuing banks are required to hold 100 percent reserves for their notes outstanding in the form of "UK public sector liabilities."52 This leaves the banks only four days to invest the capital they gain from their banknotes and thins their potential profits.

The British government has recently been hostile in its treatment of note-issuing banks. It has repeatedly threatened to require that private banks hold 100 percent reserves on their banknotes seven days a week, rather than the current three.53

Figure 5: Private Notes in Circulation in Scotland and Northern Ireland, 2000–2010. Source: Author's calculation based on the balance sheets of each bank. Ulster Bank is a subsidiary of the Royal Bank of Scotland and is included as part of their total. No data were available for Northern Bank, which is a subsidiary of Denmark's Danske Bank Group. Data are presented in millions of British pounds sterling.

The banks have resisted, saying that if they are required to hold 100 percent reserves at all times, they will be unable to profit from their note issuance and will be forced to stop issuing banknotes. If private notes were issued in the United States, would the Fed be similarly hostile? The uncertain legal situation in the United States is a substantial barrier to the issuance of private banknotes, but it is not insurmountable.
The potential profits from private note issue may be sufficient to entice some entrepreneurial bank to overcome this cost and enter the market.

Hong Kong

The monetary system in Hong Kong is a semiprivate system where the money supply and foreign exchange rate are managed by a currency board, the Hong Kong Monetary Authority (HKMA). Since the 1860s, however, the vast majority of Hong Kong's currency has been composed of privately issued banknotes. Early notes were redeemable for Mexican (and later American) silver dollars. In 1935, the government established its own independent monetary unit, the Hong Kong dollar (HKD). The value of the HKD was originally pegged at a fixed exchange rate to the British pound sterling, but since 1971 it has been pegged to the U.S. dollar at varying rates. The HKMA was established in 1993 to ensure the stability of both the currency and the financial system.54 The government has traditionally minted coins up to $10 and has sometimes printed notes in denominations of $1, $5, and $10. Before seizing authority over Hong Kong in 1997, the People's Republic of China committed to the "Basic Law," guaranteeing that Hong Kong be allowed to function as a capitalist society until at least the year 2047.55 This law is presumed to protect the independence of the HKMA until that time. The currency of Macau is managed through a similar system to that of Hong Kong but on a smaller scale. Three Hong Kong banks currently issue private banknotes: the Hong Kong Shanghai Banking Corporation (HSBC), Standard Chartered Bank, and the Bank of China. They each issue several denominations ranging from $20 HKD to $1000 HKD. All three banks have seen consistent growth over the past two decades in their quantity of notes in circulation. In addition, the HKMA issues coins and some banknotes with a $10 HKD denomination. In 2010, government-issued banknotes represented approximately 3.7 percent of the supply of paper currency.
Figure 6 shows the growing supply of private notes issued in Hong Kong from 1993 to 2010 in terms of millions of HKD.56 The quantity of private notes outstanding increased at an average annual rate of 7.3 percent during this period. Despite this high rate of money growth, Hong Kong has experienced low levels of inflation. Unlike Scotland and Northern Ireland, banks in Hong Kong are required to hold 100 percent reserves for any banknotes they issue. Hong Kong's economy has been spectacularly successful not only because of its monetary policy but also because of its liberal economic policies of free trade, light regulation, and low taxes. This special administrative region of the People's Republic of China earned the top ranking in the Economic Freedom of the World 2010 Annual Report,57 coming in first in both the size of government and freedom of international trade categories. Hong Kong also ranked 3rd in the category Regulation of Credit, Labor, and Business, 10th in Access to Sound Money, and 16th in Legal Structure and Security of Property Rights. These free-market policies have caused explosive growth in Hong Kong's economy, facilitating the growth of its GDP from $28.8 billion in 1980 to $224.5 billion in 2010 (measured in 2010 U.S. dollars).58 Despite its small size, Hong Kong boasts the world's 35th largest economy, and the Hong Kong dollar is the world's 8th most widely traded currency.59 Hong Kong banks account for roughly 5 percent of worldwide currency exchange, the 6th most of any country.60

Figure 6: Banknotes in Circulation in Hong Kong, 1993–2010. Source: Author's calculation based on the annual reports of the issuing banks and the Hong Kong Monetary Authority.

Local Currencies

One exception to the Federal Reserve's banknote monopoly is the use of local currencies.
Local currencies are usually "fiduciary money," meaning that they are neither redeemable for any specific asset nor guaranteed by the government as a means of legal tender. Their only value is derived from the expectation that they will be accepted by other parties as a form of payment. This is generally achieved by establishing a group of businesses that promise to accept the currency. There are dozens of local currency systems in the United States and hundreds in operation around the world.61 Most local currencies are established to encourage citizens to "buy local." They are described as "tools of community empowerment" intended to "support economic and social justice, ecology, community participation and human aspirations."62 Local currencies have been used since at least the 1930s and endorsed since the 1960s by economists E. F. Schumacher, Robert Swann, and Jane Jacobs as a means of creating sustainable local development (though the effectiveness of these efforts is debatable). Two of the best-known local currencies in the United States are BerkShares and Ithaca Hours. BerkShares are a local currency issued since 2006 in the Berkshire region of Massachusetts by the nonprofit organization BerkShare, Inc. The notes have featured local heroes Norman Rockwell, Herman Melville, Robyn Van En, W. E. B. Du Bois, and the Stockbridge Indians. One BerkShare can be purchased for $0.95 at local banks but has a spending power of $1 at participating businesses. This creates a 5 percent discount on local purchases, which the participating firms hope will lead to increases in sales and therefore profits. BerkShares appear to have gained fairly widespread acceptance in the region. Five local banks exchange BerkShares for U.S.
dollars at a dozen branches, almost 400 local businesses accept BerkShares as payment, and over 2.7 million BerkShares have been put into circulation.63 Over the past few years, the BerkShares system has been the subject of dozens of reports in newspapers, on television, and online. The Ithaca Hours currency is produced by the nonprofit organization Ithaca Hours, Inc., of Ithaca, New York. It is thought to be the oldest local currency system still in operation and has inspired dozens of similar local currencies around the world. The Ithaca Hour is valued at a fixed rate of 10 U.S. dollars to 1 Ithaca Hour, and over $10 million worth of notes have been issued since its creation in 1991. The organization reports that over 900 businesses accept Ithaca Hours, and there are currently over $100,000 worth of notes outstanding.64 Like BerkShares, Ithaca Hours have been widely publicized in the mainstream and popular presses. These examples send somewhat mixed messages to for-profit banks considering issuing their own banknotes. On one hand, local currencies show that privately produced banknotes can gain some degree of popular acceptance in small communities. Additionally, each local currency has its own system of acceptance and exchange, which might prove instructive for the introduction of private banknotes. On the other hand, local currencies are clearly intended to be local and stay local. As such, they provide little insight on how a privately issued currency could achieve widespread adoption, nor do they clarify the legal questions regarding private notes. Additionally, local currencies do not present a significant challenge to the Federal Reserve's banknote monopoly. Privately produced banknotes that are intended as substitutes for Federal Reserve notes will not necessarily be granted the same exemptions from regulation nor be as well received by the government.

Reviving Private Note Issue

U.S.
banks could profit by issuing their own private notes redeemable for Federal Reserve notes. This practice is currently legal in the United States, which raises the question: why have no banks issued their own notes? One possibility is that, in their judgment, there is simply no profit to be had in such an endeavor. This section estimates the potential profits from the issue of private banknotes in the United States, concluding that private notes could create sizable profits for U.S. banks and sizable benefits for American consumers. Because of the preliminary nature of this exercise, we make three profit estimates: a base case, a best case, and a worst case. The base case is computed using the cost and revenue numbers we consider most likely. The best case assumes low-end costs and high-end revenues, while the worst case assumes high-end costs and low-end revenues. Assumptions and computations are explained so that, should the reader disagree with the assumptions, he might make similar calculations under numbers he deems reasonable. We also consider potential barriers to entry. First we address the declining use of cash due to emergent payment technologies, noting that despite common perceptions, the use of paper currency has been stable or increasing in the United States over the past decade, as was shown to be the case in Hong Kong, Scotland, and Northern Ireland. Second we consider the willingness of American consumers to accept and use these new products, and the level of market penetration that can be expected. Considering the role of money as a medium of exchange, it seems that private notes are unlikely to be successful unless they gain some critical level of public adoption. The issuance of private currency is technically legal, but the practical status is unclear since no firms have chosen to enter the market.
We assume that private note issuers will be able to gain whatever legal exemptions the Fed has granted to local currencies; however, it is possible that large-scale currency producers might be treated differently since they would pose a threat to the Fed’s de facto monopoly. Were the U.S. government to prohibit the issue of private notes, then any bank that had already entered the market would lose its initial investment and all future profits.

Potential Profit

The issuance of private banknotes appears to be a significant profit opportunity for American banks. Banks earn profits on the spread between the revenue they earn on their investments and the costs of raising the funds they invest. For note-issuing banks, the return on investment need only be higher than the cost of keeping notes in circulation in order for the bank to turn a profit. This simple formula of revenues minus costs has been used to estimate the historical profits of banks in the United States and Scotland, and it is used by the Federal Reserve to calculate profits on its own note issue. This section uses that formula to estimate the potential profit on private note issue in the U.S. market.65 Since the Federal Reserve traditionally invests only in U.S. Treasuries, the Fed estimates its annual revenues by multiplying the yield on Treasuries by the quantity of its notes outstanding.66 There are currently about $1 trillion U.S. Federal Reserve notes outstanding. The current yield-to-maturity on 30-year U.S. Treasuries is about 4.2 percent. Using these numbers yields a quick approximation of $42 billion in annual revenue for the Fed, which could be captured by private banks, but this overestimates potential profit in several ways. First, it is a revenue calculation, from which we must subtract all costs associated with issuing notes and keeping them in circulation in order to estimate profits.
Second, private banks will not capture the entire banknote market currently monopolized by the Fed. Third, $42 billion represents one year of the Fed’s potential revenue under the current system. To judge whether private currency would be a profitable investment, we must calculate the net present value (NPV) of the profits, first putting the expected future profits in terms of today’s dollars, then accounting for upfront costs such as legal services and marketing. An NPV greater than zero indicates that the issuance of banknotes would be a profitable opportunity for a private bank. We begin by estimating the size of the potential market for private banknotes in terms of the current quantity of Federal Reserve notes outstanding. We calculate the expected money supply for each of the next 10 years and assume a constant growth rate after that time. The Federal Reserve currently has roughly $1 trillion worth of notes outstanding worldwide.67 The optimal rate of money growth should be equal to the growth rate of the U.S. economy, which typically averages 2 to 3 percent growth per year (although actual growth may be lower in the coming decade). In the base case we assume a rate of 2 percent annual growth in the currency base for the first 10 years and no growth thereafter. For the worst case, we assume 2 percent growth for the first 10 years and a growth rate of negative 2 percent per year forever after, and for the best case we assume 2 percent annual growth forever. Private banks, however, could not expect to capture all or even most of this market, at least not in the near future. Approximately 60 percent of these notes are currently thought to be outside the United States.68 U.S.
banks are unlikely to capture this international market because foreigners may hesitate to accept banknotes that are difficult or impossible to redeem in their home countries. Therefore, we assume that only 40 percent of U.S. Federal Reserve notes currently circulate inside of the United States and that domestic banks will only be able to capture a portion of this domestic market. What level of adoption could banks expect? In the long term, the rate could be very high. Private banks in Scotland and Northern Ireland dominate the market with an estimated 95 percent adoption rate. Only a small percentage of transactions are conducted using Bank of England notes. The same is true in Hong Kong, where the government produces only a small percentage of the note supply. It is unlikely that U.S. banks could capture similarly dominant market shares in the near future; Hong Kong, Scotland, and Northern Ireland each have long histories of private notes. Customers in these regions are already accustomed to using private notes, and businesses are in the habit of accepting them without worry. Introducing a new private note in the United States would be much more difficult and costly since Americans are not familiar with these products, and there is no existing network of businesses willing to accept them. Large, expensive marketing campaigns would likely be necessary to build customer awareness. Banks would need to build a network of businesses willing to accept private notes by paying the businesses (and maybe even the note holders). Since the rate of market penetration private-note issuing banks would achieve in the United States is difficult to estimate, we make three estimates, each over a 10-year term. For the base case, we assume private banks will be able to capture 5 percent of the market for banknotes over a 10-year period. For the best case, we assume 10 percent market penetration, and 1 percent for the worst case.
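As a rough numerical sketch of these assumptions, the implied quantities of private notes at each terminal penetration rate can be computed directly. The $1 trillion currency base, 40 percent domestic share, and 10/5/1 percent terminal rates are the paper's figures; the code and the choice to evaluate at today's base (ignoring growth) are our simplifications:

```python
# Implied private notes outstanding at terminal market penetration,
# evaluated at today's currency base. All input figures are the
# paper's assumptions.
BASE_TODAY = 1_000_000_000_000   # ~$1 trillion Federal Reserve notes
DOMESTIC_SHARE = 0.40            # share thought to circulate in the U.S.

def private_notes(terminal_penetration, base=BASE_TODAY):
    """Dollar value of private notes at a given terminal penetration."""
    return base * DOMESTIC_SHARE * terminal_penetration

for label, rate in [("best", 0.10), ("base", 0.05), ("worst", 0.01)]:
    notes = private_notes(rate)
    print(f"{label:>5} case: ${notes / 1e9:.0f} billion in private notes")
```

The worst-case figure reproduces the $4 billion quantity discussed below, and the base case yields the $20 billion used in the subsequent reserve calculation.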
These rates are very low compared to countries that currently have private notes, but they are still fairly high in total quantity. For example, the estimate of 1 percent indicates that private banks will be able to issue $4 billion in private notes. This is a large figure indeed, considering that there are $0 in private notes in the United States today, but it seems to be an achievable goal over 10 years considering sales of other popular products. For example, the new video game Modern Warfare 3 generated sales of more than $1 billion in its first 16 days on the market.69 Although we assume a base rate of 5 percent market penetration after 10 years, we do not assume that adoption will be linear over this period. It is likely that few customers will choose to adopt a new currency when it is first introduced. These early adopters might be attracted to the new designs, the excitement of being the first to use a new currency, or any monetary reward for holding private money instead of U.S. Federal Reserve notes. Whatever the reason, each person who adopts the new currency and uses it in trade will make the currency easier to use for the next user. We therefore assume that market penetration will be low in early years but will increase at an accelerating rate. These increases could continue past our 10-year timeframe, but to keep these projections realistic, we assume that adoption will level off after the first 10 years. The rate of market penetration will therefore take on an S-curve as shown in Figure 7. Adoption will be low in early years, then high for a few years, then level out by year 10. After year 10, the market penetration will stay constant as a portion of the money supply (which may be growing, shrinking, or constant). An alternative estimate, assuming a linear adoption, would increase the projected NPV of an investment in private currency since profits would be higher in the early years of the project.
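One simple way to generate an S-shaped adoption path of the kind described above is a logistic curve. This is our illustration only; the paper's actual year-by-year schedule is given in Table 1, and the midpoint and steepness parameters here are assumed values chosen so adoption levels off by year 10:

```python
import math

def s_curve_penetration(year, terminal_rate, midpoint=5.0, steepness=1.1):
    """Illustrative S-curve adoption path: near zero at launch, fastest
    growth around year `midpoint`, leveling off toward `terminal_rate`.
    Adoption is held constant after year 10, as the paper assumes."""
    if year >= 10:
        return terminal_rate
    return terminal_rate / (1 + math.exp(-steepness * (year - midpoint)))

# Base-case terminal penetration of 5 percent:
for y in range(11):
    print(f"year {y:2d}: {s_curve_penetration(y, 0.05):.2%}")
```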
For calculation purposes, we estimate the percentage adoption in each year according to Table 1. For each year, we show the small percentage adopted out of some target rate of adoption after 10 years (the terminal rate).

[Figure 7. Market Penetration over 10 Years Assuming S-curve Adoption. Source: best-case market penetration rates given in Table 1.]

Table 1
Rates of Market Penetration

Year        Percent of terminal    Best case    Base case    Worst case
0           0.0%                   0.0%         0.0%         0.0%
1           2.5%                   0.3%         0.1%         0.0%
2           7.5%                   0.8%         0.4%         0.1%
3           15.0%                  1.5%         0.8%         0.2%
4           27.5%                  2.8%         1.4%         0.3%
5           50.0%                  5.0%         2.5%         0.5%
6           72.5%                  7.3%         3.6%         0.7%
7           85.0%                  8.5%         4.3%         0.9%
8           92.5%                  9.3%         4.6%         0.9%
9           97.5%                  9.8%         4.9%         1.0%
10          100.0%                 10.0%        5.0%         1.0%
Terminal    100.0%                 10.0%        5.0%         1.0%

The quantity of private notes outstanding in each year is calculated by multiplying the total notes outstanding by the 40 percent share circulating within the country, and then by the market penetration rate in that year. For example, $1 trillion in U.S. notes outstanding times 40 percent circulating in the United States times a rate of 5 percent would equal a total of $20 billion private notes outstanding. Now that we have an estimate of the quantity of private notes, we can estimate the expected revenues and costs. First, we must recognize that not all funds will be available for investment. As with deposits, some percentage of banknote funds must be held in reserve to satisfy banknote redemptions in each period. We assume a reserve rate of 10 percent, consistent with the current required reserve ratio for FDIC member banks.70 Thus, a quantity of $20 billion notes outstanding and a reserve rate of 10 percent indicate a total of $18 billion worth of funds available for investment. We calculate the annual revenue on these notes by multiplying the quantity of funds available for investment by the bank’s rate of return.
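The chain of calculations just described can be written out directly. A sketch using the paper's base-case figures (terminal penetration of 5 percent, a 10 percent reserve ratio, and a 5 percent rate of return):

```python
TOTAL_NOTES = 1_000_000_000_000   # ~$1 trillion Federal Reserve notes
DOMESTIC_SHARE = 0.40             # share circulating inside the U.S.
RESERVE_RATIO = 0.10              # required reserve ratio
RETURN_ON_ASSETS = 0.05           # assumed rate of return on investments

def note_revenue(penetration):
    """Returns (notes outstanding, investable funds, annual revenue)."""
    private_notes = TOTAL_NOTES * DOMESTIC_SHARE * penetration
    investable = private_notes * (1 - RESERVE_RATIO)   # net of reserves
    return private_notes, investable, investable * RETURN_ON_ASSETS

notes, funds, revenue = note_revenue(0.05)  # base-case terminal penetration
print(f"Private notes outstanding: ${notes / 1e9:.0f} billion")
print(f"Funds available to invest: ${funds / 1e9:.0f} billion")
print(f"Annual revenue at 5%:      ${revenue / 1e9:.1f} billion")
```

This reproduces the $20 billion of notes and $18 billion of investable funds from the worked example above.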
Estimating the rate of return on assets can be quite subjective, but the economic literature on bank profits often assumes a return of around 5 percent, which appears to be in line with current rates.71 Annual revenues over assets for the top four banks in 2010 were 5.9 percent for Bank of America, 4.8 percent for J.P. Morgan Chase, 3.2 percent for Citigroup, and 7.4 percent for Wells Fargo.72 The simple average of these is 5.3 percent, and these four banks together account for over 60 percent of all assets and deposits in the U.S. banking system. One might expect these banks to be the most likely to issue private currency, so it seems reasonable to use a 5 percent rate of return for the base- and best-case estimates. If private banknotes are instead issued by smaller regional banks, then their local reputations might allow them to earn even higher rates of return.73 On the other hand, it may be that note-issuing banks would receive diminishing returns on their investments, or that they would choose to invest in safer assets. As a safer form of investment, the banks could purchase 30-year U.S. Treasury bonds with a current yield-to-maturity of 4.2 percent. This rate will, therefore, be used as our worst-case estimate rate of return. Another potential source of revenue is interest paid on reserves. Cash reserves are often held in accounts at one of the regional Federal Reserve banks rather than being kept in the private bank’s vault. In the past, the Fed has not paid interest on these deposits, but it began doing so (at up to 0.25 percent) to incentivize banks to hold more cash during the financial crisis of 2007–2009.74 Although this interest provides revenue for commercial banks, it is uncertain how long the Fed will continue to pay it. We assume in the best case that interest is paid on reserves at a rate of 0.25 percent, but that no interest is paid on reserves in the base and worst cases. We now consider the potential costs.
The most obvious cost is the physical production and distribution of banknotes. The Fed estimates the production cost of an individual note at about $0.045.75 We assume a production cost of $0.045 per note. Private note producers may be able to produce banknotes more cheaply than the government, but they may also wish to enhance their notes with additional features that might increase the cost of production. Given this cost per note, we must also know the distribution of notes in denominations of $1, $10, $100, et cetera. We assume that the distribution is approximately the same as the current money supply, although banks may discover a more profitable allocation over time. One final consideration is the durability of individual notes. The Fed replaces roughly 4.4 percent of the currency base each year, indicating that each Federal Reserve note lasts about 20 years on average.76 Since our current estimate covers only a 10-year span, we assume there are no replacement costs for the first 10 years and that future annual replacement costs equal the cost of production of one-twentieth of the notes outstanding in every year after year 10. The cost of producing banknotes might change over time, but it is difficult to tell whether it would rise or fall. On one hand, competition among the suppliers of banknotes might reduce the cost of production, assuming banks would have multiple potential printing companies competing for their business. On the other hand, private banks might choose to improve the aesthetics or security features of their currencies in order to attract customers. They might offer new pictures and designs, as banks in Hong Kong, Scotland, and Northern Ireland have done, and this might be costly. Banks might choose more elaborate security features in order to prevent counterfeiting, or they might create verification mechanisms that allow shopkeepers to check the authenticity of notes before accepting them.
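The production and replacement cost assumptions above can be sketched numerically. The $0.045 production cost per note and the 20-year note life are the paper's figures; the $30 average face value is a hypothetical stand-in for the denomination mix, which the paper assumes matches the current money supply:

```python
# Annual note-replacement cost after year 10, when one-twentieth of the
# notes outstanding are assumed to wear out each year.
COST_PER_NOTE = 0.045      # dollars to print one note (Fed's estimate)
AVG_FACE_VALUE = 30.0      # hypothetical average denomination
NOTE_LIFE_YEARS = 20       # implied by the Fed's 4.4% annual replacement

def annual_replacement_cost(value_outstanding):
    """Cost of reprinting one-twentieth of the physical notes each year."""
    physical_notes = value_outstanding / AVG_FACE_VALUE
    return (physical_notes / NOTE_LIFE_YEARS) * COST_PER_NOTE

cost = annual_replacement_cost(20_000_000_000)  # $20 billion outstanding
print(f"Annual replacement cost: ${cost / 1e6:.1f} million")
```

Even on $20 billion of notes outstanding, replacement printing is a small expense relative to investment revenues, which is why the paper's profit estimates are not sensitive to it.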
Of course, these costs too are likely to fall over time, but costs might rise or fall in the medium run, depending on the elasticities of the cost of production and consumer demand. The cost of keeping money in circulation is also likely to change, since no system currently exists for the clearing of private banknotes. This may require substantial investments in infrastructure depending on how private note clearing is designed and instituted. Note-issuing banks might be able to clear their notes through the Federal Reserve’s existing system, or they might find it necessary to build a new system altogether. The simplest possibility is that the Fed could continue to facilitate note redemption. This could require new systems to sort banknotes and deliver them to the bank by which they were issued. Building such a system might be costly for any individual bank, but, if coordinated through the Federal Reserve, the creation of a clearing system for private banknotes would involve the collective action of many member banks rather than being borne by any one bank alone. In effect, the Fed already has a more complex system in place for the clearing of checks. A check requires verification of not only the bank but also the individual account holder and the amount of funds available in the individual’s account. This verification process is more complex than would be necessary for a note-clearing system, which would only need to identify the issuing bank and not any particular account. A system for clearing banknotes could merely be a simplified version of the existing check clearing system. Or, if necessary for the short term, banknotes could be verified as checks written by the issuing bank. The bank need only hold its reserves in a simple checking account, and its banknotes would effectively become checks payable from that account. This option would allow the clearing of notes with no new investment in infrastructure.
Beyond the costs of producing and issuing banknotes, introducing a new medium of exchange would undoubtedly require an extensive marketing campaign. Although the use of Federal Reserve notes is ubiquitous throughout the country, Americans are unfamiliar with private banknotes and might be hesitant to adopt them. Customers would need to be informed about this new type of money, how to obtain it, and how to use it. Most of all, they would need to be assured that these new banknotes would be accepted at regular places of business. The amount of money spent on such a campaign would depend on the form of advertisement, the effectiveness of the ads, and the market share the company expected to capture. The marketing initiative might come in one large wave or successive small ones. One possibility is that a major bank might attempt to roll out its new notes across the entire nation at once with a nationwide advertising campaign. It is difficult to estimate the cost of such an endeavor. Recent national advertising campaigns with similar reach have included $100 million efforts by Nike, Yahoo!, and Sprint; a $150 million Tide campaign; and a $200 million effort to promote the Nintendo Wii. Therefore, a national campaign to introduce private notes is likely to be priced in the hundreds of millions of dollars. We assume a base-case marketing expense of $200 million. In the best case, we assume $100 million, and in the worst case $300 million, which would be among the most expensive national marketing campaigns ever. Alternative estimates, spreading this expense over time, might increase the expected NPV of the investment in private currency.
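The claim in the last sentence can be checked with a quick present-value comparison. The $200 million budget and 5 percent discount rate are the paper's base-case figures; the equal five-year split is our hypothetical alternative schedule:

```python
RATE = 0.05  # discount rate, equal to the assumed return on assets

def present_value(payments, rate=RATE):
    """payments[t] is the outlay in year t, with year 0 = today."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments))

upfront = present_value([200e6])    # $200M spent immediately
spread = present_value([40e6] * 5)  # $40M per year, years 0 through 4
print(f"Upfront PV: ${upfront / 1e6:.1f} million")
print(f"Spread PV:  ${spread / 1e6:.1f} million")
```

Because the spread schedule has the lower present value, the same nominal budget costs less in today's dollars, which raises the project's NPV.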
In addition to the initial marketing campaign, private banknotes would need to be continually marketed to encourage their continued adoption over time. This is especially true if adoption is likely to be low in early years and increase over time as more customers begin using the notes. Thus, we assume a variable annual marketing expense of 5 percent of the value of new banknotes issued in any year.77 Another method of encouraging private note adoption might simply be to pay customers to use them. The first step might be to compensate businesses that accept private banknotes. For example, businesses could be paid to put up signs saying “We accept B. of A. Bucks.” Consumers would know where their private notes would be accepted and would be more willing to withdraw and use private banknotes. Any business that promised to accept private notes could be paid to advertise their involvement. Perhaps banks could distribute banknote readers to these businesses in order to verify the authenticity of their notes. Smartphone applications for the blind are already being used to read Federal Reserve notes and verify their denominations.78 Alternatively, rather than paying a fixed fee to stores for accepting private notes, it might be possible to pay each business based on the quantity of private notes they earn (measured by how much they return to the bank). If so, businesses could pass the savings on to their consumers by offering discounts to customers using private money—as is done with the BerkShares local currency. Another way to incentivize the adoption of private money would be to pay the customer directly via interest or cash bonuses. A bonus could be paid each time a customer chooses to withdraw private notes from the bank instead of U.S. dollars. For example, when withdrawing money at an ATM, the screen might ask, “Would you like $101 B. of A. 
Bucks instead of $100 regular dollars?” Customers who fear that their favorite store might not accept the private money might find the risk worthwhile for an extra dollar. Similar programs are already being used by banks to attract depositors. For example, Happy State Bank of Texas offers its customers a monetary reward for using its ATM machines. The machines are filled with twenty-dollar bills, but they occasionally pay out a fifty-dollar bill for no extra charge.79 The problem with paying an up-front cash bonus is that it encourages note cycling. Customers can make money simply by withdrawing banknotes, depositing the banknotes back into their bank accounts, and immediately withdrawing them again. For a cash bonus to be useful, it must not be easily exploitable. Potential ways to avoid this exploitation include paying a bonus when notes are returned to the bank rather than when they are withdrawn, or having bonuses randomly assigned or paid out as a lottery. These strategies, however, generally suffer from the same shortcoming as the withdrawal bonus: they depend on the amount of banknotes deposited or withdrawn rather than the time the dollars are in circulation. It is therefore unclear whether simple cash bonuses can effectively be used to encourage withdrawals while also keeping banknotes in circulation. The most effective incentive for customers to both withdraw and use private banknotes would be to pay them interest for as long as the notes they withdraw remain in circulation. Assuming the bank earns 5 percent on its investment, some portion of this revenue could be passed on to the consumer. The rate paid on notes might not even need to be very high in order to induce customers to hold private notes. Savings accounts in the United States currently pay 0.5 percent in interest per year or less.
If private notes paid the same rate as savings accounts or higher, then customers could effectively have their own savings accounts at home and earn interest without even going to the bank or opening an account. There is precedent for interest-bearing notes in United States history. Early U.S. notes did pay interest, and small-denomination bonds were sometimes traded as currency.80 Using an interest-bearing note in exchange can be difficult, however, because the value of the note would be constantly changing. The bank would prefer to pay its customers for using private notes while still making its notes easy for customers to trade. An easier way to reward customers might be to pay interest to the withdrawer of the note rather than to the person who redeems it. In this case, the individual who redeems the banknote is paid its stated face value, so the note will always be traded at face value. When the note is returned to the bank, the individual who initially withdrew it receives some interest payment according to the amount of time the note was in circulation. In this way, customers have an incentive to withdraw private notes but not to redeem them immediately. The withdrawer can hold the note as a savings bond, knowing that it will pay interest when returned to the bank, or he can pass the note on through exchange. The longer the note remains in circulation, the more money he gets when the note is finally returned to the bank. In this way, customers have an incentive to withdraw notes and keep them outside the bank, so note cycling is avoided. Interest-bearing currency might also be used as a hedge against inflation. Rather than paying interest at a prescribed rate, the issuing bank could make payments based on the CPI or some other measure of inflation. The downside to interest-bearing currency is that the benefit is not highly visible to consumers. Rather than being tempted by an immediate cash bonus, the customer is offered a future payment of unknown amount. 
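A minimal sketch of the "pay the withdrawer" mechanism described above. The 0.5 percent rate matches the paper's base-case assumption; simple, non-compounding interest is our simplifying choice:

```python
def withdrawer_interest(face_value, years_in_circulation, annual_rate=0.005):
    """Interest owed to the original withdrawer once the note returns to
    the bank. The note itself always trades at face value; only the
    withdrawer's payout depends on time in circulation."""
    return face_value * annual_rate * years_in_circulation

# A $100 note that circulates for 3 years before being redeemed:
print(f"Payout to withdrawer: ${withdrawer_interest(100.0, 3.0):.2f}")
```

The longer the note stays out of the bank, the larger the eventual payout, so the withdrawer has no incentive to cycle notes back immediately.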
Clearly the uncertainty regarding the time and amount of this payment makes it less attractive, but there is reason to think that such a mechanism would still be effective. After all, similar incentives are commonly used in payment systems such as credit card rewards and airline miles. Discover Card pays “cash back” bonuses at the end of each billing period. The company clearly believes that customers value these future payments and even advertises itself as “the card that pays you back.” Airline miles function in a similar fashion. Each can be redeemed for an unknown amount at some time in the future. The number of miles is known to the customer, but the future value of an airline ticket is unknown since the price of tickets constantly changes. These programs are effective for attracting customers, and it seems reasonable that interest-bearing banknotes would do so as well. In fact, interest on private notes would be better in at least one meaningful way, since interest would accrue for as long as the note remained in circulation. The customer can simply withdraw the note and spend it, and he will continue to earn interest with no further effort of his own. For our profit estimates, we assume that in the base and worst cases the bank pays 0.5 percent interest per year on its notes, and in the best case pays no interest on its banknotes. Banks should also expect some initial legal expenses. The legal aspects of launching private currency are discussed further below, but for now we assume at least some cost associated with establishing note-issuing departments and becoming regulatory and tax compliant. Since there is also a threat of legal action should the government oppose private-note issue, we assume additional legal expenses to fund this potential litigation. For the worst-case scenario, we assume
such litigation will occur and will be one of the most costly lawsuits in U.S. history, and arrive at a total legal cost of $100 million. In the base and best cases, we assume total legal costs of $50 million. Additionally, we expect there will be other expenses necessary to initiate the production and distribution of banknotes which are not included in this calculation. We assume these other initial expenses will be $100 million in the best case and $200 million in the base and worst cases. Last, we include the standard business expenses of taxes and operating costs. For taxes we assume a flat corporate tax rate of 35 percent on net earnings. This seems a safe assumption, since all large banks can be expected to pay the highest marginal corporate tax rate. A more onerous tax may be applied to the volume of notes outstanding. Under 12 U.S.C. §541, the government assesses a semi-annual tax of 0.5 percent (totaling 1 percent per year) on the outstanding notes of all national banks.81 Recalling our assumption of a 5 percent return on assets, this tax would amount to approximately 20 percent of annual revenues, which is quite large. The tax does not apply to state-chartered banks, however, so it might be avoided if the major U.S. banks issue notes through state-chartered subsidiaries. For profit calculations, we assume a 1 percent tax for the worst case, 0.5 percent for the base case, and no tax in the best case. For administrative expenses we look to the Federal Reserve. On the $1 trillion in notes outstanding, the Board of Governors had a 2010 operating budget of $444.2 million.82 That is an average of 0.0444 cents per dollar outstanding. To simplify these calculations, we assume an annual operating expense of 0.05 cents per dollar outstanding.
If, for example, a bank were able to put $1 billion worth of notes into circulation, we would assume an operating cost of $500,000 per year.83 Using all of these figures, we can now estimate the potential profit from private-note issue. Since we have assumed a low rate of early adoption, net cash flows would be low in early years and high in later years. In the base case, expected cash flows are negative in the first 5 years, but rise to $429.1 million by year 10. In the worst case, they are negative for 6 years and reach only $39.5 million by year 10. In the best case, cash flows are negative for the first 3 years, but rise to more than $1.26 billion per year by year 10. Clearly there is large variation here because of the wide range of assumptions necessary to make these calculations. Even the worst-case estimates, however, include cash flows of tens of millions of dollars per year, while the best-case estimates are over one and a quarter billion. One could easily imagine that once 10 percent of the population became comfortable using private notes, many more Americans would become accepting of the idea. If market penetration could reach the levels it has in Hong Kong, Northern Ireland, and Scotland, annual cash flows might easily be in the tens of billions. Of course, cash flow alone does not indicate whether a project will be profitable. To make this assessment, we must calculate the NPV of issuing private notes. The “time value of money” dictates that dollars received today are more valuable than dollars received in the future. We must divide the expected cash flows by some discount factor in order to find the present value of these future payments. The NPV is the present value of future cash flows minus any up-front investment. It represents the current value of all future profits.
If the NPV is positive, then the future cash flows from the project are worth more than the initial investment, indicating that private note production will be a profitable endeavor. Our estimates assume that the discount rate for finding the present value of cash flows is equal to the expected rate of return on assets. In each case, we find a positive NPV, indicating that even under our worst-case assumptions, the production of private banknotes would still turn a profit. In the base-case scenario, we find an NPV of over $6.0 billion. This case assumes an up-front investment of $450 million, comprised of $200 million for marketing plus $50 million in legal expenses plus $200 million in other expenses. Market penetration is assumed to grow to 5 percent over 10 years, while the total money supply will grow by 2 percent per year for 10 years and then level off. We assume a rate of return on assets of 5 percent and a reserve requirement of 10 percent. Expenses are assumed to be 0.05 percent for administrative expenses and 0.5 percent for taxes and interest (as percentages of notes outstanding), plus marketing expenses of 5 percent on all new notes. Given these assumptions, we find a base-case NPV of $6.0 billion. This figure represents the present value of the future profits from private note issue and is quite large relative to the $450 million cost of up-front investment. Tables of these calculations are presented in Appendix A. NPV is the best criterion for judging the profitability of a potential project, but it is not the only one. Another important statistic is the internal rate of return (IRR). If the IRR is greater than the discount rate for a project, then the project is expected to earn a profit. In the base case, we assumed a discount rate of 5 percent. We find an IRR of 37.3 percent, which is higher than the discount rate, indicating that the project will turn a profit. The IRR is also useful because it can be used to compare dissimilar projects.
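The evaluation criteria used here (NPV, IRR, and, below, the payback period) can be implemented in a few lines. The cash flows in this sketch are hypothetical placeholders with the paper's general shape (losses early, profits later), not the actual appendix figures:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 flow (usually the
    negative up-front investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection. Assumes the conventional
    pattern of negative early flows and positive later flows, so NPV
    falls as the discount rate rises."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_year(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns
    non-negative, or None if the investment is never recovered."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None

# Hypothetical flows in $ millions: an up-front investment followed by
# losses in early years and growing profits later.
flows = [-450, -120, -60, 30, 120, 250, 380, 500]

print(f"NPV at 5%: ${npv(0.05, flows):.1f} million")
print(f"IRR: {irr(flows):.1%}")
print(f"Payback in year: {payback_year(flows)}")
```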
Firms often demand an IRR in the range of 30 to 40 percent for risky projects, and the estimated profit from private-note issue appears to be in that range. Another criterion for project evaluation is the payback period, or the length of time the project will take to pay back its original investment. Some firms require a short payback period for all projects in order to reduce the uncertainty of their investments. Because we have assumed a low rate of market penetration for private notes in the years following their introduction, the payback period will be longer. In the base case, the original investment will be paid back sometime between years six and seven, which is longer than many firms would like. Some slight changes in our assumptions, however, could greatly reduce the payback period. For example, if the firm chooses an alternative marketing plan, spreading its marketing investment over several years, the payback period will be much shorter. The best- and worst-case NPV estimates illustrate the extremes of our prior assumptions. The best-case scenario assumes a total investment of $250 million today, made up of $100 million in marketing expenses plus $50 million in legal costs plus $100 million in other expenses. Annual expenses are 5 percent of newly issued notes for marketing, 0.05 percent of notes outstanding for administrative expenses, and 0 percent for interest or taxes. The money supply is assumed to grow at an annual rate of 2 percent indefinitely. We assume 10 percent market penetration after 10 years, with an interest rate of 5 percent on investments and 0.25 percent on reserves. This scenario has an NPV of $29.0 billion and an IRR of 68.9 percent. Even the worst-case scenario can be expected to be profitable.
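The payback-period criterion discussed above is simple to compute: accumulate cash flows on top of the up-front investment year by year, and report the first year the running total turns non-negative. The flows here are hypothetical stand-ins, so the answer differs from the base case's six-to-seven-year payback.

```python
def payback_period(cash_flows, investment):
    """First year in which cumulative cash flow recovers the initial investment.

    Returns None if the investment is never paid back within the horizon.
    """
    cumulative = -investment
    for year, cf in enumerate(cash_flows, start=1):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None

# Hypothetical profile: slow early adoption makes for a long payback.
flows = [-50, -40, -20, -10, -5, 10, 60, 150, 300, 429]
print(payback_period(flows, 450))  # → 10
```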
We have assumed an up-front investment of $600 million in marketing, legal, and other expenses, and expenses of 0.5 percent of note issue for interest, 1.0 percent for taxes, and 0.05 percent for administrative expenses, plus annual marketing expenses of 5 percent of the value of new notes. With market penetration of only 1 percent after 10 years, a rate of return of 4.2 percent, and a long-term money growth rate of negative 2 percent, the worst-case NPV is still positive at $66.2 million, with an IRR of 5.2 percent. Of course, these numbers are only estimates. We have tried to make reasonable and realistic assumptions, but many could be questioned, and changing them might lead to very different results. Recall that the best-case scenario assumes the best outcomes for all variables (high revenue, low costs, etc.) while the worst case assumes bad outcomes for all variables. The most likely outcomes lie somewhere between these two extremes. One potential objection to these NPV estimates is that even the worst-case estimate assumes a market penetration rate of 1 percent, which still might be considered very high in absolute terms. It indicates that banks would have almost $5 billion worth of private notes in circulation in 10 years. This is quite ambitious, considering there are none in circulation today. As previously discussed, however, private notes need not be initially introduced on a national scale. Banks would likely choose to test market their new currency in a small area before deciding on a nationwide rollout. Or smaller local banks might be able to introduce private notes intended to circulate only in their own region.
In such cases, the previous analysis could still be applied on a smaller scale, using appropriate levels of investment and rates of market penetration. In addition to the question of scale, there are other factors which would affect the expected profitability of private notes. We now discuss a few of these legal and logistical challenges.

Barriers to Private Note Issue

First let us consider the question of the future demand for currency. It is often asserted that new methods of payment, especially debit cards and online transactions, have revolutionized commerce in the United States, and that fewer Americans use cash for ordinary transactions. If banks believe the use of banknotes in the United States is declining, they may be hesitant to enter this shrinking market. The situation, however, may not be as bad as is commonly perceived. Although new forms of payment have become more common, they have not replaced cash at the rate one might expect. A 2009 Fed survey of consumer payment choice found that "During this period of relatively severe economic slowdown, consumers not only got and held more cash, but they also shifted toward using cash and related instruments for more of their monthly payments. The number of cash payments by consumers increased by 26.9 percent."84 A 2010 Fed study, Non-Cash Payment Trends in the United States 2006–2009, examined the increasing use of electronic payment systems, finding that both the number and value of ATM withdrawals rose over the period.85 A similar 2008 study noted that electronic transactions, including debit cards and wire transfers, have increased dramatically but have primarily replaced the use of checks rather than cash.86 Indeed, monetary economist Scott Sumner recently responded to the notion that the United States is moving toward an all-credit economy on his blog The Money Illusion: If we were moving to a credit economy then the demand for currency and base money would be declining. But it isn't, [. . .]
even the currency component of the base is larger than in the 1920s, even as a share of GDP! It is not true that the various forms of electronic money and bank credit are significantly reducing the demand for central bank produced money.87 Figure 8 shows the growing quantity of U.S. banknotes from 1990 to 2010.88 The volume of Federal Reserve notes outstanding is represented by the solid line, with the scale on the right y axis in terms of billions of notes. The dotted line represents the value of notes outstanding with the scale on the left y axis in billions of dollars. Both the volume and value of notes have consistently increased over the past two decades. The quantity of notes outstanding has grown at an average annual rate of 4 percent, while the value of notes has grown at 6.4 percent. The value of notes is increasing faster because of an increase in the portion of hundred-dollar bills and a decrease in small denomination notes, especially ones, tens, and twenties. This evidence is consistent with the currency demand in countries where private notes are currently used. As shown in Figures 5 and 6, the market for private banknotes in Hong Kong, Scotland, and Northern Ireland has been growing consistently for at least the last decade. A 2009 Bank of England study, The Future for Cash in the UK, found that "the value of cash transactions increased by around 14 percent over the period 1996–2008."89 There is no reason to believe that these countries were unaffected by the new payment technologies deployed in the United States.

[Figure 8: Quantity and Value of Federal Reserve Notes Outstanding, 1990–2010. Source: "Currency in Circulation: Value," The Federal Reserve, coin_currcircvalue.htm, and "Currency in Circulation: Volume," The Federal Reserve, serve.gov/paymentsystems/coin_currcircvolume.htm.]

Another objection to the possibility of privately issued notes is that such a system would be "too chaotic" for consumers.
Americans are used to a single currency, the argument goes, and they would be unable to adopt a system of multiple private currencies. This objection is unfounded for at least two reasons. First, customers have no trouble using private notes in Hong Kong, Scotland, or Northern Ireland. Second, this objection overlooks the institutions that simplify note trading. As with any market, it need not be the case that every buyer is informed, but only the marginal buyer. When a few market participants are closely monitoring the value of banknotes, other consumers can assume that the notes are accurately priced. This was true historically of the Suffolk system and the national banking system in the United States. All brands of private notes were traded at their par prices, and customers traded notes indiscriminately without checking each note individually. Newspapers and banks published daily lists of the note issuers they considered reliable. A final reason not to fear complexity in a private system is that only a few major banks are likely to issue notes. As discussed, four U.S. banks control the majority of deposits in the banking system. It is these banks that are most likely to enter the currency market, at least on the national scale. Some small banks might also choose to enter the market at a regional level. A small bank might have trouble getting its notes accepted outside of its home region, but it may have the advantage of its reputation in the local market. Again, the problem of complexity presents little trouble for consumers who are likely to know the local brand. Would Americans be willing to adopt privately printed banknotes as a medium of exchange? It seems clear from the example of local currencies that private banknotes can easily be adopted on a very small scale. The difficulty comes in transitioning these notes into the mass market. Rather than committing a large initial investment, a better strategy might be to introduce private notes slowly, concentrating efforts in smaller cities or regions. After all, the more private notes are used in a given area, the more likely customers in that area are to adopt them. For example, if a small area had a high adoption rate, then consumers in that area would be much more likely to use private notes because they can be assured that other people and businesses are willing to accept them. If adoption occurs at a low rate across the country, however, then private notes are unlikely to gain momentum in any area since they will be unable to become sufficiently common as a means of exchange in any locale. Considering this fact, the best opportunity for the success of private notes might be to first introduce them in a medium-sized town where the local economy is mostly self-contained. If the town is too large, then it will be difficult to get enough businesses to accept the private notes. If the town is too small, then the economy will be reliant on trade from other regions in which private notes have not yet been adopted. Consequently, the optimal strategy might be to find a small number of medium-sized towns in which private notes could be first introduced. If private notes are adopted in these towns, their use could spill over to other nearby towns, big and small alike. Under this strategy, optimal use of marketing resources might not be to spend all $200 million on an initial national advertising campaign, but to use smaller, focused campaigns over the course of several years. Spreading this expense over several years would also increase expected profits. The threat of legal action may be the greatest barrier to entering the market for private banknotes. Despite the apparent legality of private issue, the threat of litigation may make note production prohibitively costly.
The U.S. government has been particularly hostile to parties impinging on its note-issue monopoly. Bank managers may be afraid that the Fed, Treasury, or Department of Justice will attempt to prohibit or ban the issue of private currency, tax away the profits of private issuers, or even bring criminal charges against them. If so, the bank’s up-front investment, as well as all future profits, would be lost. The government’s hostility toward private currency producers is clearly illustrated in the case of the Liberty Dollar. Beginning in 1998, Liberty Services (formerly the National Organization to Repeal the Federal Reserve Act, or “NORFED”), run by economist Bernard von NotHaus, produced small discs made of gold and silver. The company termed its products “Liberty Dollars,” with the disclaimer that they were not intended to be used as coins or legal currency. Despite this, von NotHaus was arrested in 2009 and charged with the manufacture and possession of coins of gold or silver that resemble U.S. currency.90 He was convicted on both counts and currently faces fines of up to $250,000, five years in prison, and the confiscation of up to seven million dollars’ worth of Liberty Dollars.91 The Justice Department called Liberty Services’ coin production “insidious” and claimed it presented “a clear and present danger to the economic stability of this country.” The district attorney went so far as to call the production of coins “a unique form of domestic terrorism.”92 Liberty Services also issued promissory notes redeemable for gold and silver, but these notes were not the subject of any legal action against von NotHaus or Liberty Services. Given the overzealous nature with which these regulations are enforced, it is understandable that banks would be hesitant to enter the market for private currency. 
Von NotHaus was not charged for producing promissory notes, and it seems that note issuers who gain approval from the Federal Reserve, as local currency issuers have done, would likely be immune from prosecution.93 Still, the von NotHaus trial sent a strong signal regarding the government's position on private currency production. If a bank were to produce private banknotes, government litigation could possibly eliminate the bank's entire future income stream from the project. As noted, it is likely that adoption will be slow initially, so most value created by the project would come from future revenue. Consequently, even a small probability of government interference could eliminate potential profits from the introduction of private banknotes. As in the extreme example of the von NotHaus case, government legal action could even result in confiscation of the company's assets and criminal charges against the bank's managers. The Department of Justice might also choose to prosecute the producers of private banknotes under anti-counterfeiting statutes, which prohibit the production of cards or advertisements similar to U.S. currency.94 Considering these outcomes, the potential for government action appears to be the greatest barrier to the introduction of private money. Even if private banks are able to issue banknotes without interference from the federal government, there are further legal questions that would need to be resolved. First, would national banks issue notes through their current charters, or would they open state subsidiaries for their issuing activities? As previously discussed, because of the federal tax levied on outstanding national banknotes, it might be in the banks' best interest to issue notes through state-chartered subsidiaries.
Alternatively, since national banks are already quite involved with the legislative and lobbying process relating to financial regulation, perhaps they would be able to convince Congress to repeal the national note-issue tax altogether. Second, how would a clearinghouse for private banknotes operate? Would banks be allowed to use the Fed's check-clearing system, or would it be necessary to develop a new system from scratch? Perhaps some combination of these would be optimal, such as using the check-clearing system in the short term, while building infrastructure for a more efficient note-clearing system. Third, would banknote liabilities be covered by FDIC deposit insurance? Although these obligations are not traditionally thought of as deposits, the case could be made that they face the same uncertainty of redemption as bank deposits and should therefore receive the same guarantee. FDIC insurance would likely hasten the adoption of private notes, since their value would be "backed by the full faith and credit of the United States government." Such a guarantee, however, would erode the economic stability engendered by private currency since banks would be less accountable for the riskiness of their investing and note-issuing activities. Another possible reason banks have not yet issued their own notes is that they may simply be unaware of the legality of private-note issue. In his 2001 study "Note Issue by Banks: A Step toward Free Banking in the United States," Kurt Schuler found that representatives of the finance industry were unaware that U.S. banks could legally issue private notes. He therefore concluded that "[i]gnorance, rather than a judgment by banks that note issue would not be profitable, seems responsible for the Federal Reserve's continuing monopoly on note issue."95 Yet in the 10 years since that study was published, the Fed's de facto monopoly power remains unchallenged.
Instead, it seems likely that despite the apparent legality of private note issue, banks have chosen to refrain from entering this potential market because of the uncertain profits and the potential for legal action. Considering these challenges, banks wishing to take advantage of the market for private currency must proceed with caution. As discussed, large banks would likely begin operations in small regional test markets before going nationwide. This strategy would also allow them to test the legal implications before committing to significant levels of investment. In addition, banks would likely seek a summary opinion from the judiciary or a statement from Congress clarifying the laws governing private currency before any major investment is undertaken. A reduction or elimination of the annual tax on notes outstanding might also encourage banks to enter the market for private notes.

Commodity-based Currency

Another way banks could enter the market for private money would be to create a commodity-based currency (CBC) for international trade. A bank could issue banknotes redeemable for a valuable commodity like gold or silver. Such a currency would have the advantage of holding its value better than dollar-based banknotes and would be less susceptible to government manipulation. Given the current legal environment, it might be difficult to introduce a CBC within the United States, but such a currency would likely be attractive to the international markets for banknotes and currency. An international CBC would have several advantages over banknotes based on a fiat currency. First, it would not be subject to the same legal constraints as domestic currency. Second, it could be introduced gradually rather than requiring widespread adoption.
Third, the costs would be lower, since international transactions do not generally require that physical cash change hands. Fourth, the international currency market is potentially much larger than the domestic market and therefore offers significantly higher profit potential.

Acceptability versus Value

The greatest benefit of commodity-based currency is its stable store of value. The fundamental quality of money is that it is commonly accepted as a medium of exchange. But in the choice between types of money, value stability may be the most important quality. There is a long and ongoing discussion regarding the tradeoff between degree of acceptability and stability of value. Milton Friedman and Friedrich von Hayek famously debated the relative importance of these forces in the context of money adoption.96 Friedman considered money's role as a medium of exchange its primary and most important quality. For example, despite the erosion of the value of the U.S. dollar, it remains the most common medium of exchange in international trade, notwithstanding the existence of more stable currencies such as the Swiss franc. Friedman argued that the dollar is commonly used because so many people are already willing to accept it. This is the quality of money as a medium of exchange, a property sometimes described as "network effects." The larger the network of dollar users, the greater the advantage to any new user. Because the U.S. dollar is so commonly accepted, it is easier for a trader to transact in dollars rather than a more stable, but less common, currency like the Swiss franc.97 Friedman therefore concluded that the exchange value (or network effects) of money greatly outweighs its importance as a store of value.
Hayek argued that although its acceptance as a medium of exchange is the defining characteristic of money, its role as a store of value is sometimes more important. In his book The Denationalization of Money, Hayek posited a private fiat currency which he called the Swiss ducat. The ducat could be issued in quantities sufficient to facilitate wide exchange, but the annual growth in the quantity of ducats would be sufficiently low to maintain a stable value. Hayek's example of the Swiss ducat is clearly based on the real example of the Swiss franc, which is administered in a similarly stable manner. The Swiss franc provides a real-world example of Hayek's point that traders often value a currency for its stability rather than its widespread acceptance or network effects. The stable long-term value of the Swiss franc provides a substantial benefit to international traders. The volume of exchange in Swiss francs is small compared with that of U.S. dollars, but it is quite large relative to the size of the country and has a disproportionate influence in international trade. Despite Switzerland having only the 19th-largest economy in terms of GDP, the Swiss franc is the 4th most commonly used currency in international trade and the 6th most widely held foreign reserve currency.98 Swiss banks account for roughly 5 percent of all currency exchange, the 5th most of any country.99 The example of the Swiss franc demonstrates that there is a significant portion of international traders who value the store-of-value aspect of money over its network effects. In fact, any existing network or set of consistent trade partners might find it advantageous to switch their regular trades to a stable currency, like the Swiss franc, rather than using a currency that is more widely accepted but less consistent in value.
The stable value of the Swiss franc comes from the long-run perspective of the Swiss government, but a commodity-based currency might provide an even more stable source of value. If a more stable currency were available, how much market share would it take from the Swiss franc? What about from the dollar? Although Friedman used the dollar as an example of an unstable currency, its value has been relatively stable in recent decades compared with other major currencies. The U.S. dollar is a "vehicle" currency that is used as a channel for international trade between nations that do not use the dollar as their official currency.100 The U.S. dollar is also considered a "safe-haven" currency in which investors choose to hold their funds in times of crisis.101 Would a CBC provide a more stable medium of exchange than the U.S. dollar or the Swiss franc? One classic argument against a gold-based currency is that it is subject to the whims of new gold discoveries that would decrease the value of the currency already in circulation by increasing the money supply. This argument, however, is both theoretically and historically unsound. First, the production of gold is endogenous, meaning it both affects and is affected by the price of gold. When the price of gold rises, more gold is discovered. Although any single discovery of gold may be a random accident, more people will be prospecting for gold if the price is high, so more gold is likely to be found. There are real costs to exploring for gold, mining it, and bringing it to market. Any gold producer must weigh the marginal costs of additional prospecting against the expected marginal benefits. When the price of gold rises, gold producers put more money into exploration and production. Additionally, a mine may hold several veins of gold, some of which are cheap to harvest and some more expensive.
When the price of gold rises, the miner will find that harvesting from the more expensive parts of their mine becomes profitable, and the production and transportation costs become smaller relative to the potential revenue. Therefore, a rising price of gold causes more gold to be produced. Conversely, a fall in the price of gold means that gold production is less profitable, so less gold will be produced. In this way, the price of gold often determines gold production rather than being determined by it. Indeed, according to a study by Hugh Rockoff, most gold strikes during the 19th century were the result of increases in prospecting rather than accidental discoveries.102 Second, even when a sizable goldmine is discovered, it is unlikely to lead to significant inflation. The greatest examples of gold discoveries during the era of the gold standard took place in California in the 1840s and Australia in the 1850s. Each of these discoveries resulted in a gold rush that drew people and resources into mining and poured large amounts of gold into the world market. But despite their significant size, these discoveries had little effect on the worldwide price of gold. The study “Money, Inflation, and Output Under Fiat and Commodity Standards,” by Arthur Rolnick and Warren Weber, examines prices in 15 industrialized nations in the 19th century and finds that the major gold discoveries caused price inflation of only 1.75 percent per year. This compares favorably to the average inflation rate of 9.17 percent per year in these same countries following their adoptions of fiat monetary standards.103 Even in recent decades of low inflation in the United States, the period we call the Great Moderation, the Fed has pursued a target inflation rate of 2 percent, which it has often exceeded. Thus, even the least stable periods under the gold standard had lower inflation rates than the most stable period of central banking in the United States. 
The longest sustained inflation under a commodity standard occurred in 16th-century Spain after Columbus' discovery of the Americas. Spain imported large amounts of precious metals (especially silver) from its American colonies, which caused substantial increases in its domestic money supply. As a result, the price level in Spain from 1525 to 1600 more than tripled. In his paper, "The Price Revolution: A Monetary Interpretation," Douglas Fisher documents how arbitrage between nations caused this inflation to spread from Spain throughout the rest of Europe. "What is particularly striking about this event—recognizing that the point of impact of the inflow of specie from the Americas was, for the most part, Spain—is the roughly parallel rise of Spanish and other European price levels."104 Although the length and breadth of inflation in this period may appear striking, the rates of inflation are actually quite moderate compared with those experienced under central banking. A tripling of the price level over 75 years implies a compound rate of inflation of 1.48 percent (a level far below the target rate of most central banks), and the annual rates during Spain's "Price Revolution" were actually slightly lower than this. In comparison, Spain's average rate of inflation over the past 45 years has been almost 8.0 percent.105 Again, we see that the least stable periods under a gold standard provided lower rates of inflation than even the theoretically optimal case of central banking, and far lower than the actual historical rates of inflation. A bank producing a CBC might also choose to issue redeemable banknotes.
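The compound-rate arithmetic above can be checked directly: a tripling of the price level over 75 years corresponds to an annual inflation rate of 3^(1/75) − 1.

```python
# Annual compound inflation implied by a tripling of prices over 75 years.
rate = 3 ** (1 / 75) - 1
print(round(rate * 100, 2))  # → 1.48, matching the figure cited in the text

# Sanity check: compounding that rate for 75 years triples the price level.
print(round((1 + rate) ** 75, 6))  # → 3.0
```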
Many countries are "dollarized," meaning the dollar replaces or circulates alongside the local currency. This may be partly because of trade with the United States, but much of it is clearly due to the dollar's relatively stable value. After all, most transactions occur within a single country and have nothing to do with the United States. It seems quite logical that a CBC would be attractive for trades within dollarized countries, and could tap into a large international market for paper currency (considering that 60 percent of U.S. dollars are currently used outside the United States). If the private CBC were both more reliable and more accessible than the dollar, then it is likely that the adoption of the new CBC would be very high. A stable CBC might be more useful than the dollar in these countries. According to the essay "Dollarization," by Alberto Alesina and Robert J. Barro, the main advantage to dollarization is that it binds the government to a stable monetary policy of not overstimulating the economy or inflating away its debts.106 A study of 21 Latin American countries from 1960 to 2003 found that dollarization significantly reduced inflation,107 and OECD studies showed that "countries allowing citizens to legally hold foreign currency tend to have lower average rates of inflation."108 A stable CBC might provide similarly beneficial results. Several countries are officially dollarized. Ecuador and El Salvador both use the U.S. dollar as their official currency. Panama has its own official currency, the balboa, which is officially fixed to the U.S. dollar at a 1-to-1 ratio, and dollars circulate in the country as the primary means of exchange. Peru and Uruguay were highly dollarized through the middle of the first decade of the 21st century, but have since experienced some degree of de-dollarization due to recent financial reforms.
These nations have become dollarized partly because of strong trading ties to the United States, but also because the citizens of these countries cannot trust their central banks to maintain a stable national currency. A stable CBC might be able to capture a sizable share of the currency markets in these nations. Several other countries are partially dollarized. For example, a study, "Exchange Rate Movements in a Dollarized Economy: The Case of Cambodia," by Sok Heng Lay, Makoto Kakinaka, and Koji Kotani, found that in Cambodia, "[t]he dollarization index [percentage of dollars in the money supply] has surged from around 55 percent in 1998 to around 80 percent in 2007."109 The authors note that U.S. dollars tend to be used more by wealthy Cambodians because dollar denominations are rather large relative to local prices, while Cambodian riels are used primarily by the poor. Wealthy Cambodians also have more access to dollars through international trade. This makes the lower economic class most likely to be adversely affected by the government's inflationary habits. These poorer Cambodians might be prime customers for a CBC, since they are in need of a stable currency. A 1999 study by the International Monetary Fund found 52 countries that were at least moderately dollarized.110 Although the term "dollarized" refers to the U.S. dollar, it can also be applied generally to any country in which a foreign currency is used as a medium of exchange. Several other currencies, such as the euro and Chinese yuan, play strong roles in regional trade for the same reason.111 An international CBC might capture market share from these other currencies as well. We can see that international commodity money would have great benefits for individuals and national economies. These benefits would be particularly significant for large companies participating in international trade.
There are several major international banks that could produce such a currency with very little legal expense or threat of litigation. Commodity-based currency would also be very useful in less-developed countries. International commodity money thus appears to be a great potential opportunity for profit.

Establishing a Commodity-based Currency

A commodity-based currency is fundamentally simpler than fiat-based private banknotes because it is backed by a physical good rather than a government guarantee. Banknotes redeemable for gold or silver existed long before central banks issued paper fiat currency. Nevertheless, introducing a CBC today, when all of the world’s currencies are based on fiat standards, might still be difficult. As with private banknotes, the difficulty lies in creating a critical mass of acceptance so that individual traders are willing to use the new money because they are confident that other traders will accept it in exchange. This problem may be smaller for a CBC since international trading groups are likely to have fewer transactions and partners. Unlike private notes, a CBC would not face the domestic legal challenges of any particular country. The process of introducing a CBC would be different from the process of introducing private notes. First, customers will likely make their deposits in money, not the commodity, so the bank would need to convert at least some of whatever currency is deposited into the commodity. Second, the bank would need to physically store some of the commodity on location in case its customers choose to redeem their currency for the actual commodity. Third, the supply of CBC would likely be partly in the form of banknotes and partly held as deposits. For banknotes, the bank would face the same costs of production described earlier. For deposits, the bank would profit on the spread between the rates on loans and deposits.
Let us assume that a bank was to offer a new currency based on gold because the long-term value of gold has been very stable relative to most fiat currencies. Since no major CBC exists today, the bank would likely take deposits in other currencies. Per standard banking practice, part of the deposit would be lent out or invested, while the rest of the money would be used to buy gold to be held on reserve. Customers would expect to be able to redeem their deposits in gold (or possibly in another currency, but the value at the time of redemption would be based on the value of gold during the period and not on the value of any other currency). In this way, customers could take advantage of the stability of the commodity and need not worry about national politics or international demand affecting any particular currency. The amount of physical gold held by the bank could vary depending on how its customers expected to be paid. If customers expect to redeem their notes for actual gold bars, then obviously the bank would need to keep gold on hand. This would require physical storage of gold at some or all branch banks. For large quantities of gold, storage and security might be quite costly. Gold storage would require a larger storage area and better security features than a standard branch bank, but any bank with safety deposit boxes is already equipped with this capacity. A bank might reduce these costs by limiting redemption to only a few main locations. The bank might also require advance warning from any customer requesting physical delivery. Indeed, a two-day notice was once required by goldsmiths for the redemption of gold-denominated promissory notes.
Such a practice would allow banks sufficient time to transport gold between facilities and require less gold to be kept on site at any individual bank. Most CBC users would likely value the stability of the currency rather than the commodity itself. As such, most customers would be expected to hold their gold-denominated currency in the bank or to redeem it for some alternative currency rather than physical gold. In this case, the bank would not need to keep much physical gold on hand. Instead, the bank might choose to own large quantities of gold but have it stored in a private vault, such as the New York Federal Reserve Bank. Alternatively, they might simply buy and hold futures contracts on gold so that they receive the stable value of gold without taking ownership of the gold itself.112 Unlike private banknotes, introducing a CBC is less likely to require major legal battles. One might worry that private notes redeemable for gold would still be subject to U.S. currency regulations, but this does not appear to be the case. Recalling the case of Liberty Dollar, von NotHaus and Liberty Services produced promissory notes redeemable for gold—but these notes were not the subject of litigation. Only the production of coins similar to U.S. coins was deemed illegal, although clearly the degree of similarity is subject to interpretation. This example does not guarantee that there will be no legal action, but it appears to be less of a concern. Additionally, the international market for a CBC will likely be a much bigger target than the domestic market. International trade is conducted in all types of currencies and is not subject to domestic currency regulations. It seems that banks could immediately profit by entering this market. An international CBC might be easier to introduce than private banknotes since it would be marketed to many small markets rather than a single large one.
In the case of private banknotes, an individual is unlikely to use a banknote unless his trading partners are willing to accept it. In the U.S. economy that includes a vast array of businesses and individuals, including restaurants, gas stations, grocery stores, and any other business that might accept cash. If most of these establishments are unwilling to accept private banknotes, then consumers are unlikely to carry them. In contrast, international trade is composed mostly of large transactions between known parties. Rather than dealing with many random businesses, an international trader generally has a small set of trading partners with whom he regularly deals. Transactions are often large in value and involve planning and preparation, including the specification of the currency to be used in the exchange.113 This level of planning indicates that traders would value a stable currency in which to transact. These factors make CBCs potentially more profitable than domestic private banknotes. The market for a CBC is larger, while the costs of entering the market, money production, and potential litigation are all lower. The next section estimates the potential profit to private banks on the introduction of a CBC.

Potential Profit

The potential profit from establishing a CBC can be calculated in a manner similar to that of the domestic market for private notes. We consider the potential world market for a CBC in two parts. First, we estimate the potential deposits from international traders who might use a stable CBC. Second, we estimate the potential quantity of notes that might replace U.S. dollars in dollarized countries. We then use these quantities to estimate the potential profit from creating a CBC. As before, we begin by estimating the size of the market in international trade and the potential penetration of the new currency. Let us consider a new commodity-based currency as a substitute for U.S. dollars in international exchange.
What volume of international transactions in U.S. dollars would be captured by a CBC? Let us assume that some portion of U.S. dollar deposits held outside the United States is used for trade, while the rest is held as a store of value. The same is true of the Swiss franc. The Bank of International Settlements reports that there are $9.655 trillion worth of deposits in U.S. dollars held outside the United States, and $392.8 billion in deposits of Swiss francs (measured in terms of 2010 USD) outside of Switzerland.114 That is a total of $10.0487 trillion in U.S. dollar and Swiss franc foreign deposits (if we include foreign deposits in all currencies, the number would be $16.6371 trillion). To this figure we can add the potential market for CBC banknotes as a replacement for cash dollars. In the previous section we assumed that 60 percent of the total $1 trillion of U.S. dollars in circulation are currently outside the United States. Although a CBC would be unlikely to replace all U.S. dollars, it might capture some of the market from other currencies, such as the euro and yuan. Adding this potential market to the previous estimate of foreign deposits values the potential market for a CBC at $10.6487 trillion. Using this quantity estimate, we can calculate the potential profit from a CBC. Let us assume that a CBC issued by private banks can capture just 1 percent of the potential market. Although some portion of the currency is expected to be backed by physical gold reserves, the costs of storage are negligible because a single gold bar is valued at roughly $760,000 and yet can be stored for only a few dollars.115 Going forward, we follow the profit model described in the previous section with all rates, such as return on assets, annual marketing and administrative expenses, and cost of production equal to those of the base case.
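The market-size arithmetic above can be sketched in a few lines. The figures are the ones quoted in the text (in billions of 2010 USD); the 1 percent penetration rate is the illustrative assumption used in the profit estimate, and a small rounding difference against the text’s $10,648.7 billion total reflects rounding in the underlying deposit figures.

```python
# Back-of-the-envelope sizing of the potential CBC market,
# using the figures quoted in the text (billions of 2010 USD).
usd_foreign_deposits = 9_655.0    # USD deposits held outside the United States
chf_foreign_deposits = 392.8      # Swiss franc deposits outside Switzerland
usd_cash_abroad = 0.60 * 1_000.0  # ~60% of the $1 trillion of U.S. currency

market = usd_foreign_deposits + chf_foreign_deposits + usd_cash_abroad
capture = 0.01 * market           # illustrative 1 percent penetration

print(f"Potential market: ${market:,.1f} billion")
print(f"1 percent capture: ${capture:,.2f} billion")
```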
We assume a reserve ratio of 10 percent, money supply growth of 2 percent per year for 10 years, and 0 percent growth thereafter, and that money is adopted in an S-curve. We assume a total upfront investment of $5 billion. Using these figures, we find an NPV of $67.3 billion and an IRR of 42.3 percent. These calculations are presented in Appendix B. The large potential market for a CBC causes the NPV estimate of $67.3 billion for a CBC to be much higher than the best-case profit estimate for private banknotes. The assumptions involved, however, are less certain. The assumption of 1 percent market penetration is used simply for example. The reader is encouraged to repeat these calculations using assumptions and estimates he deems most appropriate. There certainly appears to be a large potential market for a stable international currency, as evidenced by the use of the U.S. dollar and Swiss franc in international trade. Establishing a CBC would require only a small initial investment but may yield large potential profits.

Conclusion

For almost a century, the Federal Reserve and central banks around the world have inflated away the values of their currencies without achieving noticeable improvements in economic stability or performance. Their inflationary policies act as hidden taxes on consumers, create deadweight economic losses, and misallocate scarce resources. By contrast, the American experiences of 19th century free banking and national banking were periods of strong economic growth and consistently stable prices. If private banks were to resume the production of currency, they could help stabilize the values of currencies in the United States and abroad. The provision of banknotes by U.S.
commercial banks could provide the first step toward true competition in currency. Banks could issue private notes redeemable for U.S. Federal Reserve notes. Competing banks would have the incentives to maintain the value of their banknotes by limiting the extent of their note issuance and to provide banknotes with features that best satisfy consumer preference. This type of semiprivate system is already employed in Hong Kong, Scotland, and Northern Ireland, and private banknotes are heavily favored over government-issued notes in these regions. The issuance of private banknotes appears to be technically legal in the United States, although the government’s avid prosecution of currency-related crimes might lead one to question whether private note issue would be allowed in practice. U.S. banks could potentially earn billions of dollars in annual profits by issuing private banknotes. The greatest obstacles to the introduction of private notes appear to be the threat of legal action and the challenge of marketing this new product to American consumers. Costs of potential litigation appear to be small relative to potential profits from note issue, but the downside risk of losing all future profits is considerable. Targeted marketing campaigns would likely be the most cost-effective promotional strategy, and might lead to the highest rates of adoption. If these challenges are carefully negotiated, U.S. banks may then capitalize on this potential opportunity for substantial profit. To encourage their adoption, Congress should clarify the legal status of private banknotes and eliminate the taxes on nationally chartered banks. Nationally chartered banks are required to pay a semiannual tax of 0.5 percent (1 percent annually) on the value of their notes outstanding, while no federal tax applies to note issue by state-chartered banks (although taxes may apply at the state level).
Repealing the federal tax would put state and national banks on equal footing and avoid wasteful losses from regulatory arbitrage. Additionally, Congress could declare it legal for banks to issue promissory banknotes redeemable for Federal Reserve notes, and also establish that private banknotes are neither counterfeit nor similar reproductions of Federal Reserve notes. The creation of a commodity-based currency would be a boon to firms and consumers in the United States and abroad. Considering the widespread usage of the U.S. dollar in foreign transactions, and the disproportionately large reserve holdings and number of international transactions in Swiss francs and Hong Kong dollars, there appears to be a large potential market for a stable international currency. Private banks may be able to serve this market by providing a currency whose value is based on gold or a similar commodity. Before the era of fiat currencies, gold-backed currencies helped enable international trade with minimal disturbances in the long-term price level. Creating an international CBC would avoid the legal obstacles facing a domestic private currency. Although the U.S. government has limited authority over international transactions, a policy of nonintervention should be encouraged. A CBC might not be as immediately adopted by domestic consumers, but allowing it in the United States would be an important step toward its use in international transactions. Congress needs to guarantee the feasibility of private money for the benefit of consumers in the United States and around the world by clarifying the legal status of private banknotes; eliminating the tax on nationally chartered banks; and prohibiting the Fed, Treasury, and Justice Department from taking action against private money producers. After such actions are taken, private banks will find it in their own interests to enter the market for private currency. 
Appendix A: Estimating the Profits on Private Banknotes

This appendix describes the model used to estimate the potential profits from private-note issue. The model is based on Bell’s “Profits on National Bank Notes” and other similar works. It is used to estimate annual profits over the first 10 years of private-note issue.116 We then estimate a terminal value based on year-10 profits and use these estimates to calculate the NPV of the potential project. The first step of the analysis is to estimate annual revenues. We assume that the bank will be able to capture some percentage of the domestic market for U.S. dollar bills. Let N_t represent the total quantity of notes outstanding worldwide in year t, and let h represent the portion of bills currently outside the United States. If N grows at some annual rate g for each year t, then let the domestic circulation of dollars be calculated as N^d_t = N_0(1 − h)(1 + g)^t. The bank will be able to capture some portion j_t of the domestic circulation. The value of j_t in each year is based on an S-curve adoption calculated from the estimated adoption j_10 in year 10, as shown in Figure 7. Thus, the bank’s potential market in year t is calculated in equation A.1:

(A.1) N^b_t = j_t N^d_t = j_t N_0(1 − h)(1 + g)^t.

The potential market N^b_t acts as a revenue base for the bank. For each private note the bank issues, it gains $1 to invest. Some of the bank’s new investments will be loans to customers. The bank must maintain some portion r of liquid reserves for its notes outstanding. The total funds net of reserves can be invested by the bank at some rate of return r_r, and any interest on reserves is paid at a rate of r_i. The annual revenue of the bank in year t is calculated in equation A.2:

(A.2) R_t = [(1 − r)r_r + r·r_i]N^b_t.

Earnings in each year can be calculated by subtracting annual expenses from the annual revenue R_t. These expenses include marketing, M_t; administration, A_t; note production, P_t; and any interest, I_t, paid on notes outstanding.
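The S-curve adoption path j_t used in equation A.1 comes from the paper’s Figure 7, which is not reproduced here. A logistic curve rescaled so that penetration hits j_10 exactly in year 10 is one plausible stand-in; the midpoint and steepness below are illustrative choices, not values from the paper.

```python
import math

# Hypothetical logistic S-curve for note adoption, rescaled so the
# penetration rate equals j10 exactly in year 10. The midpoint and
# steepness (k) are assumed for illustration only.
def adoption(t, j10=0.05, midpoint=5.0, k=1.0):
    s = lambda x: 1.0 / (1.0 + math.exp(-k * (x - midpoint)))
    return j10 * s(t) / s(10)

# Slow start, rapid middle years, plateau near j10 by year 10.
path = [adoption(t) for t in range(1, 11)]
```

Any curve with this slow-start/plateau shape would serve; the path mainly determines how quickly notes approach their year-10 level.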
Marketing and note production expenses are calculated as percentages m and p of new notes entered into circulation, such that M_t = m(N^b_t − N^b_{t−1}) and P_t = p(N^b_t − N^b_{t−1}). Administration and interest expenses are assumed to be proportional to notes outstanding, such that A_t = aN^b_t and I_t = iN^b_t. Subtracting annual expenses from annual revenues yields annual earnings before taxes EBT_t as shown in equation A.3:

(A.3) EBT_t = R_t − (M_t + A_t + P_t + I_t) = R_t − (m + p)(N^b_t − N^b_{t−1}) − (a + i)N^b_t.

The tax T^b_t on banknotes in circulation and the corporate income tax T_c are then taken out. The tax on banknotes is a percentage b of notes outstanding: T^b_t = bN^b_t. The corporate taxes are a percentage T_c of EBT_t less the note tax, such that T^c_t = T_c(EBT_t − T^b_t). This yields the annual after-tax profit, as calculated in equation A.4. Annual profits can be used as the cash flow in each year for calculating NPV:

(A.4) Π_t = EBT_t − T^b_t − T^c_t = (EBT_t − bN^b_t)(1 − T_c).

The potential NPV of the private issue of banknotes is equal to the sum of the present values of cash flows in each year less any necessary investment at time t = 0. Cash flows will be discounted by a discount rate that is assumed to be equal to the bank’s rate of return on investment r = r_r. A terminal value is added in year 10 to account for all cash flows after that year. In most NPV calculations, terminal value is calculated as a perpetuity of the final year’s cash flow. In this case, however, long-term costs might be different from final-year costs. We assume that notes in circulation will deteriorate and wear out at some rate, δ, in each year, and will need to be replaced at that rate at production cost p. Terminal value is therefore calculated as V_T = Π_T / (r − g_T), where Π_T = (EBT_10 − (δp + b)N^b_10)(1 − T_c), and g_T is the terminal growth rate of cash flows after year 10. The present value of all cash flows is calculated in equation A.5:

(A.5) PV_0 = [Σ_{k=1}^{10} Π_k / (1 + r)^k] + V_T / (1 + r)^10.
Although such a project may require upfront investments in equipment or infrastructure, the primary up-front costs considered here are legal, L_0; marketing, M_0; and any other costs, K_0. Since these costs are technically initial expenses rather than investments, they can be discounted by the corporate tax rate (since expenses will not be taxed and will essentially create a rebate or tax shield). Subtracting these from the present value of all future cash flows, the NPV is calculated in equation A.6:

(A.6) NPV = PV_0 − (L_0 + M_0 + K_0)(1 − T_c).

These formulas are used to calculate NPV under three scenarios: a best case, a base case, and a worst case. Variables listed in Table A.2 take a different value in each scenario. Spreadsheets for each scenario are presented in Tables A.3 to A.5. The resulting NPVs are approximately $29.1 billion in the best case, $6.0 billion in the base case, and $66.2 million in the worst case. The internal rate of return (IRR) is also calculated for each scenario. These results are 68.9 percent in the best case, 37.3 percent in the base case, and 5.2 percent in the worst case.

Table A.1 Parameters Used in All NPV Estimates

  Symbol  Description                                                  Value
  N_0     U.S. currency base in year 0                                 1,000,000,000,000
  g       Annual growth in U.S. currency base in years 1 to 10         2.0%
  h       Percentage of dollar bills outside of the United States      60.0%
  r       Reserve ratio                                                10.0%
  p       Note production cost per note                                4.5%
  a       Administrative cost per note                                 0.05%
  δ       Terminal note replacement rate                               5.0%
  m       Annual marketing expense as percentage of new notes issued   5.0%
  T_c     Corporate tax rate as percentage of earnings (EBT)           35.0%

Table A.2 Variables Used in Best, Base, and Worst Case NPV Estimates

  Symbol  Description                                              Best case  Base case  Worst case
  j_10    Portion of base captured by private notes by year 10     10.0%      5.0%       1.0%
  r       Rate of return on investment                             5.0%       5.0%       4.2%
  i       Interest paid on notes as percentage of notes
          outstanding                                              0.0%       0.5%       0.5%
  r_i     Interest paid by Fed on bank reserves                    0.25%      0.0%       0.0%
  b       Banknote tax as percentage of notes outstanding          0.0%       0.5%       1.0%
  g_T     Annual growth rate of cash flows after year 10           2.0%       0.0%       -2.0%
  L_0     Legal expenses at time t = 0, in millions                $50.00     $50.00     $100.00
  M_0     Marketing expenses at time t = 0, in millions            $100.00    $200.00    $300.00
  K_0     Other expenses at time t = 0, in millions                $100.00    $200.00    $200.00

Table A.3 Base Case Profits from Private Note Issue (Dollars in Millions): a year-by-year spreadsheet of notes outstanding, revenues, expenses, taxes, and discounted cash flows under the base-case parameters of Tables A.1 and A.2, yielding an NPV of $6,029.0 million and an IRR of 37.3 percent.

Table A.4 Best Case Profits from Private Note Issue (Dollars in Millions): the same spreadsheet under the best-case parameters, yielding an NPV of $29,058.7 million and an IRR of 68.9 percent.

Table A.5 Worst Case Profits from Private Note Issue (Dollars in Millions): the same spreadsheet under the worst-case parameters, yielding an NPV of $66.2 million and an IRR of 5.2 percent.

Appendix B: Estimating the Profit on Commodity-based Currency

To estimate the potential NPV of a commodity-based currency, we adapt the model used in the previous NPV calculations for private banknotes. This section will not re-explain the entire model, only the adjustments. First, we estimate the percentage of international deposits currently held in U.S. dollars and Swiss francs that might be captured by a CBC. Data from the Bank of International Settlements puts these numbers at $9.655 trillion in foreign deposits of U.S.
dollars and $392.8 billion worth of Swiss francs (measured in 2010 USD). That gives a total of N_D = $10.0487 trillion. Next we estimate the potential market for CBC banknotes, N_N, based on the circulation of U.S. dollars outside the United States. Since the percentage h of dollars outside the United States is estimated to be 60 percent of the total $1 trillion in circulation, the value of dollars outside the United States is N_N = 0.6($1,000 billion) = $600 billion. The sum of the markets for international deposits and banknotes is N_0 = N_D + N_N and is used as the potential currency base that might be captured by a CBC. Given our estimates for the markets for currency and deposits, N_0 = $10.6487 trillion. We assume that a CBC may capture 1 percent of this potential market. We assume that no interest is paid on banknotes or reserves and that note production costs apply only to banknotes, not to reserves. We then estimate the potential NPV from creating a CBC using the profit model from private banknotes, assuming base-case values with the exception of initial marketing, legal, and other expenses. For these costs we assume a total up-front expense of E_0 = L_0 + M_0 + K_0 = $5 billion. The resulting NPV for creating a CBC is approximately $67.3 billion, with an IRR of 42.3 percent, as shown in Table B.1.
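As a check on the mechanics (though not on the exact spreadsheet figures), the Appendix A model can be coded directly. The logistic S-curve below is an assumption standing in for the unspecified Figure 7 path, so the outputs only approximate the reported results (roughly $6.0 billion base-case NPV for private notes and $67.3 billion for the CBC).

```python
import math

# Sketch of the Appendix A profit model (eqs. A.1-A.6). Defaults are the
# base-case parameters; the logistic S-curve is an illustrative assumption.
def npv(N0, j10, upfront, g=0.02, h=0.0, res=0.10, rr=0.05, ri=0.0,
        i=0.0, b=0.0, m=0.05, p=0.045, a=0.0005, delta=0.05,
        Tc=0.35, gT=0.0):
    s = lambda t: 1.0 / (1.0 + math.exp(-(t - 5.0)))  # assumed S-curve
    pv, Nb_prev = 0.0, 0.0
    for t in range(1, 11):
        Nb = j10 * (s(t) / s(10)) * N0 * (1 - h) * (1 + g) ** t  # eq. A.1
        R = ((1 - res) * rr + res * ri) * Nb                     # eq. A.2
        EBT = R - (m + p) * (Nb - Nb_prev) - (a + i) * Nb        # eq. A.3
        pv += (EBT - b * Nb) * (1 - Tc) / (1 + rr) ** t          # eq. A.4
        Nb_prev = Nb
    # After the loop, EBT and Nb hold their year-10 values.
    term = (EBT - (delta * p + b) * Nb) * (1 - Tc) / (rr - gT)   # terminal value
    return pv + term / (1 + rr) ** 10 - upfront * (1 - Tc)       # eqs. A.5, A.6

# Base-case private notes: domestic market only, 5% penetration by year 10.
notes = npv(N0=1e12, j10=0.05, upfront=450e6, h=0.60, i=0.005, b=0.005)
# CBC: $10.6487 trillion potential base, 1% penetration, $5 billion up front.
cbc = npv(N0=10.6487e12, j10=0.01, upfront=5e9)
```

With zero penetration, the model reduces to the after-tax up-front cost, which provides a quick sanity check on the implementation.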
Table B.1 Estimated Profits from Commodity-based Currency (Dollars in Millions): a year-by-year spreadsheet of the CBC market, revenues, expenses, taxes, and discounted cash flows, yielding an NPV of $67,336.1 million and an IRR of 42.3 percent on an initial investment of $5,000.0 million.

Notes

1.
For example, Bradford DeLong explains, “That exchange rates were stable under the pre–World War I gold standard is indisputable. Devaluations were few among the industrial powers, and rare. Exchange rate risk was rarely a factor in economic decisions.” However, DeLong disputes that such a system could be replicated today. J. Bradford DeLong, “VIII. The Pre–World War I Gold Standard,” Slouching Towards Utopia?: The Economic History of the Twentieth Century, http://econ161.berkeley.edu/tceh/slouch_gold8.html.

2. This essay uses the term “price stability” as shorthand for the appropriate relative price level when the money supply is allowed to grow at its natural rate. In systems of decentralized money creation, the quantity of money in the economy is determined by supply and demand, so the aggregate price level might be rising or falling depending on resources, preferences, and productivity, as described in George A. Selgin, Less Than Zero: The Case for a Falling Price Level in a Growing Economy (London: Institute of Economic Affairs, 1997).

3. As discussed later, government banknotes in Hong Kong compose only 3.7 percent of the supply of paper currency. Regarding Scotland and Northern Ireland, “[t]oday Scottish banknotes make up about 95 percent of Scotland’s circulating paper money, and do so despite not being legal tender.” William D. Lastrapes and George Selgin, “Banknotes and Economic Growth” (working paper, 2007): 27, files/banknotes_growth.pdf.

4. Generally, default implies bankruptcy, but this is not always the case. Default means that a firm (or government) is unable to fulfill a specific obligation, while bankruptcy is a legal term indicating that the present value of the firm’s debt exceeds its expected future income, and it is therefore unable to continue as a going concern. Put differently, default pertains to a specific obligation at a specific time, whereas bankruptcy pertains to a set of expected future outcomes.
A firm may default on a specific obligation without entering a state of bankruptcy. 5. For examples of the law of adverse clearings, see Lawrence H. White, Competition and Currency (New York: New York University Press, 1989); and George A. Selgin, The Theory of Free Banking: Money Supply under Competitive Note Issue (Lanham, MD: Rowman & Littlefield, 1988). 6. “The over-expansive bank in a free banking system will sooner or later be disciplined by a loss of its reserves. . . . In order to re-establish its initial equilibrium position following a period of overissue, the expansive bank must reverse course. It must pursue a relatively restrictive policy for a period. During this period of underissue, it will enjoy positive clearings against the other banks and so may replenish its own reserves.” Lawrence H. White, Free Banking in Britain: Theory, Experience and Debate 1800–1845, 2nd ed. (London: Institute of Economic Affairs, 1995), p. 15. 7. Arthur J. Rolnick, Bruce D. Smith, and Warren E. Weber, “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825–58),” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (Summer 1998): 11–21. 8. Gary Gorton and Donald J. Mullineaux, “The Joint Production of Confidence: Endogenous Regulation and Nineteenth Century Commercial-Bank Clearinghouses,” Journal of Money, Credit and Banking 19, no. 4 (November 1987): 457–68. 9. For examples, see Spurgeon Bell, “Profit on National Bank Notes,” American Economic Review 2, no. 1 (March 1912): 38–60; C. A. E. Goodhart, “Profit on National Bank Notes, 1900–1913,” Journal of Political Economy 73, no. 5 (October 1965): 516–22; John A. James, “The Conundrum of the Low Issue of National Bank Notes,” Journal of Political Economy 84, no. 2 (April 1976): 359–68; and Howard Bodenhorn and Michael Haupert, “Was There a Note Issue Conundrum in the Free Banking Era?” Journal of Money, Credit and Banking 27, no. 3 (August 1995): 702–12. 10.
Private banks can also inhibit the central bank’s influence over the money supply through interest rates. Suppose the Federal Reserve were able to reduce the interest rate available to U.S. banks, but banks still refused to lend at the lower rate. The banks might choose simply to hold larger reserves rather than lend them out. This is, in fact, what happened in the recession of 2007–2008. Banks reduced the Fed’s control by refusing to make new loans. The banks’ choice not to lend was due in part to the Fed’s new policy of paying interest on reserves and in part to uncertain economic conditions, which raised fears of loan defaults. 11. In his book The Theory of Free Banking: Money Supply under Competitive Note Issue, George Selgin described the change in a bank’s reserves in response to changes in the demand and velocity of money. He considered two components of bank reserves, the “average net reserves” necessary to cover the bank’s normal average redemptions and “precautionary reserves” needed in case the bank’s redemptions are unusually high (p. 64). The bank will increase its reserve ratio in response to an increase in turnover of its notes since “[p]recautionary reserves rises or falls along with changes in the total volume of gross bank clearings” (p. 66). Further, “it follows that reserve needs are affected by changes in the demand for inside money even when these changes affect all banks simultaneously and uniformly” (p. 67). 12. Lawrence R. Cordell and Kathleen Kuester King, “A Market Evaluation of Risk-based Capital Standards for the U.S. Financial System,” Journal of Banking & Finance 19, no. 3–4 (June 1995): 531–62; Robert F. Engle, “Risk and Volatility: Econometric Models and Financial Practice,” American Economic Review 94, no.
3 (June 2004): 405–20; Andrea Resti and Andrea Sironi, “The Risk-weights in the New Basel Capital Accord: Lessons from Bond Spreads Based on a Simple Structural Model,” Journal of Financial Intermediation 16, no. 1 (January 2007): 64–90. 13. As discussed later, government banknotes compose only 3.7 percent of the supply of paper currency in Hong Kong. “Today Scottish banknotes make up about 95 percent of Scotland’s circulating paper money, and do so despite not being legal tender,” William D. Lastrapes and George Selgin, “Banknotes and Economic Growth” (working paper, 2007): 27, terry.uga.edu/~selgin/files/banknotes_growth.pdf. 14. Stanley Fischer, in “Central Bank Independence Revisited,” American Economic Review 85, no. 2 (May 1995): 201–6, states, “[t]he case for central-bank independence (CBI), while not a new one, has been strengthened by a growing body of empirical evidence, by recent developments in economic theory, and by the temper of the times. The case is a strong one, which is becoming part of the Washington orthodoxy.” 15. Milton Friedman, “The Lag in Effect of Monetary Policy,” Journal of Political Economy 64, no. 5 (October 1961): 447–66. 16. As the Fed itself describes, “Even the most up-to-date data on key variables like employment, growth, productivity, and so on, reflect conditions in the past, not conditions today; that’s why the process of monetary policymaking has been compared to driving while looking only in the rearview mirror.” Federal Reserve Bank of San Francisco, “About the Fed,” frbsf.org/publications/federalreserve/monetary/formulate.html. 17. Thomas F. Cooley and Gary D. Hansen, “The Inflation Tax in a Real Business Cycle Model,” American Economic Review 79, no. 4 (September 1989): 733–48. 18. Thomas F. Cooley and Gary D. Hansen, “The Welfare Costs of Moderate Inflation,” Journal of Money, Credit and Banking 23, no. 3 (August 1991): 483–503. See also Patricia Reagan and René M.
Stulz, “Contracting Costs, Inflation, and Relative Price Variability,” Journal of Money, Credit and Banking 25, no. 3 (August 1993): 585–601. 19. Armen A. Alchian and Reuben A. Kessel, “Redistribution of Wealth through Inflation,” Science 130, no. 3375 (September 1959): 535–39. See also Matthias Doepke and Martin Schneider, “Inflation and the Redistribution of National Wealth,” Journal of Political Economy 114, no. 6 (December 2006): 1069–97. 20. Steven Horwitz, “The Costs of Inflation Revisited,” The Review of Austrian Economics 16, no. 1 (2003): 77–95. 21. This must be true since bond prices and interest rates move inversely. 22. Of course, the recent debt ceiling debate and problems in Greece have caused many to reconsider the potential for U.S. Treasury default. 23. Eldar Shafir, Peter Diamond, and Amos Tversky, “Money Illusion,” Quarterly Journal of Economics 112, no. 2 (May 1997): 341–74; and Ernst Fehr and Jean-Robert Tyran, “Does Money Illusion Matter?” American Economic Review 91, no. 5 (December 2001): 1239–62. 24. The value of each currency is calculated from the World Bank’s GDP deflator (indicator NY.GDP.DEFL.KD.ZG) for each currency. This measure is given as an annual rate of inflation, and is defined as “[i]nflation as measured by the annual growth rate of the GDP implicit deflator shows the rate of price change in the economy as a whole. The GDP implicit deflator is the ratio of GDP in current local currency to GDP in constant local currency.” To create an index for each currency, we assume a value of 100 percent for the year 1970. For each subsequent year we divide by 1 + i, where i is the GDP-deflator inflation rate. 25. Edgar A. Ghossoub and Robert R. Reed, “Economic Development and the Welfare Cost of Inflation” (working paper, 2010): 18, cba.ua.edu/~rreed/EconomicDevelopmentWelfarecostsAugust2009.pdf. 26. George Selgin, William D. Lastrapes, and Lawrence H. White, “Has the Fed Been a Failure?” Journal of Macroeconomics (2012), dx.doi.
org/10.1016/j.jmacro.2012.02.003. The paper concludes, “[A]vailable research does not support the view that the Federal Reserve System has lived up to its original promise. Early in its career, it presided over both the most severe inflation and the most severe (demand-induced) deflations in post–Civil War U.S. history. Since then, it has tended to err on the side of inflation. . . . Although a genuine improvement did occur during the sub-period known as the ‘Great Moderation,’ that improvement, besides having been temporary, appears to have been due mainly to factors other than improved monetary policy. Finally, the Fed cannot be credited with having reduced the frequency of banking panics or with having wielded its last-resort lending powers responsibly. In short, the Federal Reserve System, as presently constituted, is no more worthy of being regarded as the last word in monetary management than the National Currency system it replaced almost a century ago.” 27. These sources include Andrew Atkeson and Patrick J. Kehoe, “Deflation and Depression: Is There an Empirical Link?” American Economic Review 94, no. 2 (May 2004): 99–103; John W. Keating and John V. Nye, “Permanent and Transitory Shocks in Real Output: Estimates from Nineteenth-Century and Postwar Economies,” Journal of Money, Credit, and Banking 30, no. 2 (May 1998): 231–51; Christina D. Romer, “Is the Stabilization of the Postwar Economy a Figment of the Data?” American Economic Review 76, no. 3 (June 1986a): 314–34; Christina D. Romer, “Spurious Volatility in Historical Unemployment Data,” Journal of Political Economy 94, no. 1 (February 1986b): 1–37; and Christina D. Romer, “Changes in Business Cycles: Evidence and Explanations,” Journal of Economic Perspectives 13, no. 2 (Spring 1999): 23–44. 28. “The modern view of central banks as sources of monetary stability is in essence a historical myth.” George Selgin, “Central Banks as Sources of Financial Instability,” The Independent Review 14, no.
4 (Spring 2010): 486. 29. Remarks by Ben S. Bernanke before the National Economists Club, Washington, D.C., November 21, 2002, federalreserve.gov/boarddocs/speeches/2002/20021121/default.htm. 30. In 2008, the Federal Reserve provided loans of up to $12.9 billion to Bear Stearns and up to $85 billion to American International Group (AIG) before eventually deciding to purchase toxic financial assets from these firms. To facilitate these asset purchases, the New York Federal Reserve Bank created three limited liability corporations, which took the names Maiden Lane, Maiden Lane II, and Maiden Lane III (after the street on which the NYFRB is located). Maiden Lane was created to facilitate the purchase of Bear Stearns by JPMorgan Chase by purchasing $30 billion in assets from Bear Stearns. Maiden Lane II and III were created to purchase almost $50 billion in assets, mostly mortgage-backed securities and credit default swaps, from AIG. In addition, the Federal Reserve purchased $1.25 trillion in agency mortgage-backed securities between 2008 and 2010. Further details of these transactions are available at federalreserve.gov/newsevents/reform_bearstearns.htm, federalreserve.gov/newsevents/reform_aig.htm, and federalreserve.gov/newsevents/reform_mbs.htm. 31. George Selgin and Lawrence H. White, “How Would the Invisible Hand Handle Money?” Journal of Economic Literature 32, no. 4 (December 1994): 1718–49. 32. The Coinage Act of 1792 authorized the creation of a national mint and directed it to produce coins of gold, silver, and copper. The base unit of coinage was the U.S. dollar. Coins were minted in nine denominations, from eagles valued at 10 dollars down to half-cents worth one two-hundredth of a dollar. 33. Data on the number of state-chartered banks is taken from Warren E. Weber, “Early State Banks in the United States: How Many Were There and Where Did They Exist?” Federal Reserve Bank of Minneapolis Quarterly Review 30, no. 2 (September 2006): 28–40. Data on the quantity of notes issued is taken from John G. Gurley and E. S.
Shaw, “The Growth of Debt and Money in the United States, 1800–1950: A Suggested Interpretation,” Review of Economics and Statistics 39, no. 3 (August 1957): 250–62. 34. John Wilson Million, “Debate on the National Bank Act of 1863,” Journal of Political Economy 2, no. 2 (March 1894): 264. 35. Data on U.S. GDP and CPI are taken from the MeasuringWorth historical database at www.measuringworth.com. The original source for GDP data is Louis Johnston and Samuel H. Williamson, “What Was the U.S. GDP Then? Annual Observations in Table and Graphical Format 1790 to the Present,” MeasuringWorth (2011). The original source for CPI data is Lawrence H. Officer, “The Annual Consumer Price Index for the United States, 1774–2010,” MeasuringWorth (2012). 36. For information on “wildcat” banking, see Arthur J. Rolnick and Warren E. Weber, “Causes of Free Bank Failures: A Detailed Examination,” Journal of Monetary Economics 14, no. 3 (November 1984): 267–91. See also Hugh Rockoff, “The Free Banking Era: A Reexamination,” Journal of Money, Credit and Banking 6, no. 2 (May 1974): 141–67. For information on inflation and economic stability, see Selgin et al., “Has the Fed Been a Failure?”; Romer, “Is the Stabilization of the Postwar Economy a Figment of the Data?”; Romer, “Spurious Volatility in Historical Unemployment Data”; and Romer, “Changes in Business Cycles: Evidence and Explanations.” 37. Richard H. Timberlake, Monetary Policy in the United States: An Intellectual and Institutional History (Chicago: University of Chicago Press, 1978), p. 83. 38. Ibid., Table 7.1, p. 90. 39. For more information, see Charles W. Calomiris and Charles M. Kahn, “The Efficiency of Self-Regulated Payment Systems: Learning from the Suffolk System,” National Bureau of Economic Research working paper no. 5542 (January 1996). 40. Andrew T. Young and John A. Dove, “Policing the Chain Gang: Panel Cointegration Analysis of the Stability of the Suffolk System, 1825–1858” (working paper, 2010): 6–7, http://ssrn.
41. Wesley C. Mitchell, “Greenbacks and the Cost of the Civil War,” Journal of Political Economy 5, no. 2 (March 1897): 117–56. 42. Ibid., 126. 43. Timberlake, Table 7.1, p. 90. 44. Million, p. 256. 45. See Allen N. Berger, Richard J. Herring, and Giorio P. Szegö, “The Role of Capital in Financial Institutions,” Journal of Banking and Finance 19, no. 3–4 (June 1995): 401. See also Milton Friedman and Anna Jacobson Schwartz, A Monetary History of the United States, 1867–1960 (Princeton: Princeton University Press, 1963), p. 20. 46. Million, p. 295. 47. Section 2 of the order reads, “[a]ll.” 48. “The Fed has invested more than $2 trillion in a range of unprecedented programs, first to restore the financial system and then to encourage economic expansion.” “Federal Reserve (The Fed),” New York Times, September 21, 2011, http:// topics.nytimes.com/top/reference/timestopics/ organizations/f/federal_reserve_system/index. html. “The Federal Reserve, already arguably the most powerful agency in the U.S. government, will get sweeping new authority to regulate any company whose failure could endanger the U.S. economy and markets under the Obama administration’s regulatory overhaul plan.” Patricia Hill, “Federal Reserve to Gain Power under Plan,” Washington Times, June 16, 2009, tontimes.com/news/2009/jun/16/plan-gives-fedsweeping-power-over-companies/. 49. Kurt Schuler, “Note Issue by Private Banks: A Step toward Free Banking in the United States?” Cato Journal 20, no. 3 (Winter 2001): 454. 50. “Today Scottish banknotes make up about 95 percent of Scotland’s circulating paper money, and do so despite not being legal tender.” William D. Lastrapes and George Selgin, “Banknotes and Economic Growth,” (working paper, 2007), p. 27, banknotes_growth.pdf. 51. An article in the Financial Times describes “At present, the Scottish and Irish banks provide cover for their issued bank notes during the week with other assets, on which they can obtain a return. 
The Treasury estimates they are realizing a collective advantage worth £80m a year over non-issuing institutions, which are obliged to hold Bank of England notes on which they receive no interest.” Andrew Bolger, “Banking: Big Players in the European Game,” Financial Times, October 11, 2005, ft.com/cms/s/1/95049c02-3a66-11da-b0d3-00000e2511c8.html#axzz1aPdyqt6M. 52. “The key feature of this is to protect note holders’ interests in the event of the failure of a note issuing bank, by requiring full backing of the note issue at all times by UK public sector liabilities and ring-fencing those assets for the interests of note holders.” U.K. Payment Council, The Future of Cash in the UK (2010), p. 6, paymentscouncil.org.uk/files/payments_council/future_of_cash2.pdf. 53. “Under Threat,” The Economist, February 7, 2008, 19; “Salmond Voices Bank Notes Fears,” BBC News, February 4, 2008, news.bbc.co.uk/1/hi/uk_news/scotland/7224987.stm. 54. The Hong Kong Monetary Authority lists its “Key Functions” as monetary stability, banking stability, maintaining its status as an international financial center, and managing its exchange fund. Hong Kong Monetary Authority, “Monetary Stability,” hkma.gov.hk/eng/key-functions/monetary-stability.shtml. 55. For more information, see the Basic Law of the Hong Kong Special Administrative Region of the People’s Republic of China, basiclaw.gov.hk/en/index/. 56. Data on private banknotes are taken from the financial statements of the issuing banks. Data on government banknotes are available from the Hong Kong Monetary Authority annual reports. 57. James Gwartney, Robert Lawson, and Joshua Hall, Economic Freedom of the World: 2010 Annual Report (Vancouver: Fraser Institute, 2010). 58. According to 2010 World Bank GDP indicator NY.GDP.MKTP.CD. 59. GDP measured according to 2010 World Bank GDP indicator NY.GDP.MKTP.CD. International reserves measured according to 2010 Currency Composition of Official Foreign Exchange Reserves (COFER) from the International Monetary Fund. 60.
See Bank for International Settlements, “Report on Global Exchange Market Activity in 2010,” Monetary and Economic Department (2010), Table B.7, p. 17. 61. For further information, a list of local currencies in the United States is available on Wikipedia at en.wikipedia.org/wiki/List_of_community_currencies_in_the_United_States. A worldwide list is available at en.wikipedia.org/wiki/Local_currency. 62. Quotations are from the websites for BerkShares and Ithaca Hours, respectively. See berkshares.org and ithacahours.org. 63. See the BerkShares website, berkshares.org. 64. See the Ithaca Hours website, ithacahours.org. 65. Should the reader find these estimates unrealistic, he can feel free to use the same method to calculate higher or lower potential profits using numbers of his choice. 66. Interest income to the Fed is described as, “[t]he Federal Reserve issues non-interest-bearing obligations (currency) and uses the proceeds to acquire interest-bearing assets.” Michael J. Lambert and Kristin D. Stanton, “Opportunities and Challenges of the U.S. Dollar as an Increasingly Global Currency: A Federal Reserve Perspective,” Federal Reserve Bulletin (September 2001): 567–75. 67. The total value of Federal Reserve notes in circulation was $1.035 trillion as of December 31, 2011. The value of currency in circulation is available at Board of Governors of the Federal Reserve System, “Currency and Coin Services,” federalreserve.gov/paymentsystems/coin_currcircvalue.htm. 68. Linda S. Goldberg, “Is the International Role of the Dollar Changing?” Federal Reserve Bank of New York, Current Issues 16, no. 1 (January 2010): 1–7, papers.ssrn.com/sol3/papers.cfm?abstract_id=1550192. 69. Matt Peckham, “Bye-Bye Avatar: Modern Warfare 3 Takes $1 Billion Record,” PC World, December 12, 2011, pcworld.com/article/246024/byebye_avatar_modern_warfare_3_takes_1_billion_record.html. 70. As of December 29, 2011, banks with more than $71.0 million in liabilities are required to hold 10 percent of those liabilities on reserve. Current reserve requirements are available on the Federal Reserve website. 71.
Bell, “Profit on National Bank Notes,” p. 44. 72. According to Yahoo! Finance (finance.yahoo.com), Bank of America earned revenues of $134.2 billion in 2010 on total assets of $2,264.9 billion, JPMorgan Chase earned $102.7 billion on $2,117.6 billion, Citigroup earned $60.6 billion on $1,913.9 billion, and Wells Fargo earned $93.2 billion on $1,258.1 billion. Note that these ratios of revenue over assets differ from the commonly quoted ratio “Return on Assets,” in which “return” is net of costs. 73. This may be true since “small banks are more profit efficient than large banks,” and “[s]mall banks in non-metropolitan areas [non-MSA] are consistently more profit efficient than small banks in MSAs.” Aigbe Akhigbe and James E. McNulty, “The Profit Efficiency of Small U.S. Commercial Banks,” Journal of Banking & Finance 27, no. 2 (February 2003): 307. 74. According to a press release, “The Federal Reserve Board on Monday announced that it will begin to pay interest on depository institutions’ required and excess reserve balances. . . . The interest rate paid on required reserve balances will be the average targeted federal funds rate established by the Federal Open Market Committee over each reserve maintenance period less 10 basis points.” Federal Reserve, press release, October 6, 2008. 75. The Bureau of Engraving and Printing lists the actual production costs of Federal Reserve Notes at $44.85 per thousand notes. Performance and Accountability Report (Washington: Bureau of Engraving and Printing, 2010), p. 25, moneyfactory.gov/images/2010_BEP_CFO.pdf. 76. The Bureau of Engraving and Printing lists the currency spoilage rates for the years 2006 to 2010 as 4.3 percent, 4.4 percent, 4.2 percent, 4.6 percent, and 10.9 percent, respectively. The high 2010 rate was due to the replacement of the redesigned $100 note. Excluding the 2010 outlier, the average of the four preceding years is 4.4 percent.
Performance and Accountability Report (Washington: Bureau of Engraving and Printing, 2010), p. 26, moneyfactory.gov/images/2010_BEP_CFO.pdf. 77. Although the assumption of marketing expenses as 5 percent of sales may be low for some industries, it appears to be quite substantial for the commercial banking industry. For example, Bank of America spent almost $2 billion on marketing in 2010 on revenues of over $111 billion, a rate of less than 2 percent. Indeed, an assumed rate of 5 percent creates annual marketing expenses of $500 million in some years according to our net present value (NPV) estimates. 78. For example, the Money Reader app from LookTel is available for only $1.99. See Tiffany Kaiser, “Money Reader App Helps Blind iPhone Users Recognize U.S. Currency,” Daily Tech, March 10, 2011, dailytech.com/Money+Reader+App+Helps+Blind+iPhone+Users+Recognize+US+Currency+/article21096.htm. 79. Despite its name, Happy State Bank and Trust Company is a member of the FDIC. 80. “A large fraction of these notes had looked like currency and had been treated as such—to a limited extent as hand-to-hand currency and to a large extent as bank reserves. . . . Most of them were also interest-bearing and all of them were only legal tender for payments due to and from the government.” Timberlake, p. 85. 81. See 12 U.S.C. §541, which reads, “In lieu of all existing taxes, every association shall pay to the Treasurer of the United States, in the months of January and July, a duty of one-half of 1 per centum each half year upon the average amount of its notes in circulation.” 82. Annual Report: Budget Review (Board of Governors of the Federal Reserve System, 2010), p. 9, rptcongress/budgetrev10/ar_br10.pdf. 83. This estimate assumes that administrative costs are equal to the expenses of the Federal Reserve’s Board of Governors.
It does not include the expenses for the Fed’s 12 regional Reserve Banks, since these banks also engage in activities not related to money production, such as bank regulation and supervision and, in the case of the New York branch, open market operations. 84. Kevin Foster, Erik Meijer, Scott Schuh, and Michael A. Zabek, “The 2009 Survey of Consumer Payment Choice,” Federal Reserve Bank of Boston, Public Policy Discussion Papers (2009), p. 16. 85. See Noncash Payment Trends in the United States: 2006–2009, Federal Reserve System, p. 18, pdf/press/2010_payments_study.pdf. 86. See Geoffrey R. Gerdes and Kathy C. Wang, “Recent Payment Trends in the United States,” Federal Reserve Bulletin (October 2008): A75–A106, pp. 85–87. 87. Scott Sumner, “A Critique of the ‘Credit Economy’ Hypothesis,” The Money Illusion, September 25, 2011, themoneyillusion.com/?p=11077. 88. “Currency in Circulation: Value,” The Federal Reserve, federalreserve.gov/paymentsystems/coin_currcircvalue.htm; and “Currency in Circulation: Volume,” The Federal Reserve, federalreserve.gov/paymentsystems/coin_currcircvolume.htm. 89. See Strategic Cash Group, The Future for Cash in the UK (London: Payments Council, March 2010), p. 14. 90. According to the U.S. Mint’s website, “under 18 U.S.C. §486 it is a Federal crime to utter or pass, or attempt to utter or pass, any coins of gold or silver intended for use as current money, except as authorized by law. According to the National Organization for the Repeal of the Federal Reserve Act and the Internal Revenue Code (NORFED) website, ‘Liberty merchants’ are encouraged to accept NORFED ‘Liberty Dollar’ medallions and offer them as change in sales transactions of merchandise or services.
Further, NORFED tells ‘Liberty associates’ that they can earn money by obtaining NORFED ‘Liberty Dollar’ medallions at a discount and then can ‘spend [them] into circulation.’ Therefore, NORFED’s ‘Liberty Dollar’ medallions are specifically intended to be used as current money in order to limit reliance on, and to compete with the circulating coinage of the United States. Consequently, prosecutors with the United States Department of Justice have concluded that the use of NORFED’s ‘Liberty Dollar’ medallions violates 18 U.S.C. §486.” See U.S. Mint, “NORFED’s ‘Liberty Dollars’,” usmint.gov/consumer/?action=archives#NORFED. 91. As reported by the Associated Press, “Federal prosecutors on Monday tried to take a hoard of silver ‘Liberty Dollars’ worth about $7 million that authorities say was invented by an Indiana man to compete with U.S. currency.” Tom Breen, “Feds Seek $7M in Privately Made ‘Liberty Dollars’,” Associated Press, April 4, 2011. 92. These quotations are available from several sources, including the New York Sun, which quotes, “‘[a] unique form of domestic terrorism’ is the way the U.S. Attorney for the Western District of North Carolina, Anne M. Tompkins, is describing attempts ‘to undermine the legitimate currency of this country.’” The Justice Department press release quotes her as saying, “[w]hile these forms of anti-government activities do not involve violence, they are every bit as insidious and represent a clear and present danger to the economic stability of this country.” For more information, see “A ‘Unique’ Form of ‘Terrorism’,” New York Sun, March 20, 2011, nysun.com/editorials/a-unique-form-of-terrorism/87269/. 93. For example, the statute on “Tokens or paper used as money” (18 U.S.C. §491) applies only to those “not lawfully authorized” to produce paper money. Since the producers of local currencies have gained such authorization, it seems reasonable to presume that other private banks would be able to do so as well. 94.
For more information, see the statute on “Imitating obligations or securities; advertisements,” 18 U.S.C. §475. 95. Schuler, p. 461. 96. This debate is discussed by William Luther, who states, “[A]ccording to Friedman, Hayek erred in believing that the mere admission of competing private currencies will spontaneously generate a more stable monetary system. In Friedman’s view, network effects, to use the modern term, discourage an alternative system from emerging in general and prevent Hayek’s system from functioning as desired in particular.” William J. Luther, “Friedman Versus Hayek on Private Outside Monies: New Evidence for the Debate” (working paper, 2011), papers.ssrn.com/sol3/papers.cfm?abstract_id=1831347. 97. Ranaldo and Söderlind document the Swiss franc as a “safe haven” currency. They find that when U.S. stocks decline in value, the value of the Swiss franc rises relative to the U.S. dollar. Angelo Ranaldo and Paul Söderlind, “Safe Haven Currencies,” Review of Finance 14, no. 3 (2010): 385–407. 98. International reserves, measured according to 2010 Currency Composition of Official Foreign Exchange Reserves (COFER) from the International Monetary Fund. 99. See Bank for International Settlements, “Report on Global Exchange Market Activity in 2010,” Monetary and Economic Department (2010), Table B.7, p. 17. 100. Linda S. Goldberg and Cédric Tille, “Vehicle Currency Use in International Trade,” Federal Reserve Bank of New York Staff Report no. 200 (January 2005), newyorkfed.org/research/staff_reports/sr200.html. 101. Robert N. McCauley and Patrick McGuire, “Dollar Appreciation in 2008: Safe Haven, Carry Trades, Dollar Shortage, and Overhedging,” BIS Quarterly Review (December 2009): 85–92. 102. Hugh Rockoff, “Some Evidence on the Real Price of Gold, Its Costs of Production, and Commodity Prices,” in A Retrospective on the Classical Gold Standard, 1821–1931, ed. Michael D. Bordo and Anna J. Schwartz (Chicago: University of Chicago Press, 1984), 613–50. 103. Arthur J.
Rolnick and Warren E. Weber, “Money, Inflation, and Output under Fiat and Commodity Standards,” Journal of Political Economy 105, no. 6 (December 1997): 1308–21. 104. Douglas Fisher, “The Price Revolution: A Monetary Interpretation,” Journal of Economic History 49, no. 4 (December 1989): 883–902. 105. Calculated from the World Bank’s GDP deflator (indicator NY.GDP.DEFL.KD.ZG). 106. Alberto Alesina and Robert J. Barro, “Dollarization,” American Economic Review 91, no. 2 (May 2001): 381–85. 107. “Data from the World Bank World Development Indicators for 21 Latin American countries from 1960 to 2003 are analyzed. . . . Data suggests that dollarization significantly reduces inflation.” Sophia Castillo, “Dollarization and Macroeconomic Stability in Latin America,” Oshkosh Scholar I (Oshkosh, WI: The University of Wisconsin, 2006), p. 51, minds.wisconsin.edu/handle/1793/6679. 108. Elham Mafi-Kreft, “The Relationship Between Currency Competition and Inflation,” Kyklos 56, no. 4 (November 2003): 475–90. 109. Sok Heng Lay, Makoto Kakinaka, and Koji Kotani, “Exchange Rate Movements in a Dollarized Economy: The Case of Cambodia,” Economics and Management Series, International University of Japan (November 2010), p. 6. 110. Tomás J. Baliño, Adam Bennett, and Eduardo Borensztein, Monetary Policy in Dollarized Economies (Washington: International Monetary Fund, 1999). 111. Rebecca Hellerstein and William Ryan, “Cash Dollars Abroad,” Federal Reserve Bank of New York, Staff Report no. 400 (2011), p. 9. 112. Buying gold futures would involve a future transaction in some other currency, so the bank would likely need to hedge its risk in the other currency as well. 113. For a discussion of international trade networks, see James E. Rauch, “Business and Social Networks in International Trade,” Journal of Economic Literature 39, no. 4 (December 2001): 1177–1203. 114. “Statistical Annex,” BIS Quarterly Review (Basel: Bank for International Settlements, September 2011), Table 5.A, p. A24, bis.org/publ/qtrpdf/r_qa1109.pdf. 115.
The price of gold as of September 2011 is roughly $1,900 per ounce. Since a single gold bar weighs 400 ounces, each bar is worth roughly $760,000. 116. Bell, “Profit on National Bank Notes.”

RELATED STUDIES FROM THE POLICY ANALYSIS SERIES

665. The Inefficiency of Clearing Mandates by Craig Pirrong (July 21, 2010)
660. Lawless Policy: TARP as Congressional Failure by John Samples (February 4, 2010)
659. Globalization: Curse or Cure? Policies to Harness Global Economic Integration to Solve Our Economic Challenge by Jagadeesh Gokhale (February 1, 2010)
648. Would a Stricter Fed Policy and Financial Regulation Have Averted the Financial Crisis? by Jagadeesh Gokhale and Peter Van Doren (October 8, 2009)
646. How Urban Planners Caused the Housing Bubble by Randal O’Toole (October 1, 2009)
637. Bright Lines and Bailouts: To Bail or Not To Bail, That Is the Question by Vern McKinley and Gary Gegenheimer (April 20, 2009)
634. Financial Crisis and Public Policy by Jagadeesh Gokhale (March 23, 2009)

RELATED STUDY FROM THE WORKING PAPER SERIES

2. Has the Fed Been a Failure? by George A. Selgin, William D. Lastrapes, and Lawrence H. White (November 9, 2010)

RECENT STUDIES IN THE CATO INSTITUTE POLICY ANALYSIS SERIES

697. If You Love Something, Set It Free: A Case for Defunding Public Broadcasting by Trevor Burrus (May 21, 2012)
696. Questioning Homeownership as a Public Policy Goal by Morris A. Davis (May 15, 2012)
695. Ending Congestion by Refinancing Highways by Randal O’Toole (May 15, 2012)
694. The American Welfare State: How We Spend Nearly $1 Trillion a Year Fighting Poverty—and Fail by Michael Tanner (April 11, 2012)
693. What Made the Financial Crisis Systemic? by Patric H. Hendershott and Kevin Villani (March 6, 2012)
692. Still a Better Deal: Private Investment vs. Social Security by Michael Tanner (February 13, 2012)
668. Fiscal Policy Report Card on America’s Governors: 2010 by Chris Edwards (September 30, 2010)
667. Budgetary Savings from Military Restraint by Benjamin H. Friedman and Christopher Preble (September 23, 2010)
666. Reforming Indigent Defense: How Free Market Principles Can Help to Fix a Broken System by Stephen J. Schulhofer and David D. Friedman (September 1, 2010)
665. The Inefficiency of Clearing Mandates by Craig Pirrong (July 21, 2010)
664. The DISCLOSE Act, Deliberation, and the First Amendment by John Samples (June 28, 2010)
663. Defining Success: The Case against Rail Transit by Randal O’Toole (March 24, 2010)
662. They Spend WHAT? The Real Cost of Public Schools by Adam Schaeffer (March 10, 2010)
661. Behind the Curtain: Assessing the Case for National Curriculum Standards by Neal McCluskey (February 17, 2010)
User interaction with the ScrollPane component
ScrollPane component parameters
Create an application with the ScrollPane component
Create a ScrollPane instance using ActionScript

Components such as the ScrollPane and the UILoader have complete events that allow you to determine when content has finished loading. If you want to set properties on the content of a ScrollPane or UILoader component, listen for the complete event and set the property in the event handler. For example, the following code creates a listener for the Event.COMPLETE event and an event handler that sets the alpha property of the ScrollPane's content to .5:

function spComplete(event:Event):void {
    aSp.content.alpha = .5;
}
aSp.addEventListener(Event.COMPLETE, spComplete);

If you specify a location when loading content to the ScrollPane, you must specify the location (X and Y coordinates) as 0, 0. For example, the following code loads the ScrollPane properly because the box is drawn at location 0, 0:

var box:MovieClip = new MovieClip();
box.graphics.beginFill(0xFF0000, 1);
box.graphics.drawRect(0, 0, 150, 300);
box.graphics.endFill();
aSp.source = box; // load ScrollPane

For more information, see the ScrollPane class in the ActionScript 3.0 Reference for the Adobe Flash Platform.

User interaction with the ScrollPane component

A ScrollPane can be enabled or disabled. A disabled ScrollPane doesn't receive mouse or keyboard input. A user can use the following keys to control a ScrollPane when it has focus:

Key          Description
Down Arrow   Content moves up one vertical line scroll.
Up Arrow     Content moves down one vertical line scroll.
End          Content moves to the bottom of the ScrollPane.
Left Arrow   Content moves to the right one horizontal line scroll.
Right Arrow  Content moves to the left one horizontal line scroll.
Home         Content moves to the top of the ScrollPane.
PageDown     Content moves up one vertical scroll page.
PageUp       Content moves down one vertical scroll page.
A user can use the mouse to interact with the ScrollPane both on its content and on the vertical and horizontal scroll bars. The user can drag content by using the mouse when the scrollDrag property is set to true. The appearance of a hand pointer on the content indicates that the user can drag the content. Unlike most other controls, actions occur when the mouse button is pressed and continue until it is released. If the content has valid tab stops, you must set scrollDrag to false. Otherwise all mouse hits on the contents will invoke scroll dragging.

ScrollPane component parameters

You can set the following parameters for each ScrollPane instance in the Property inspector or in the Component inspector: horizontalLineScrollSize, horizontalPageScrollSize, horizontalScrollPolicy, scrollDrag, source, verticalLineScrollSize, verticalPageScrollSize, and verticalScrollPolicy. Each of these parameters has a corresponding ActionScript property of the same name. For information on the possible values for these parameters, see the ScrollPane class in the ActionScript 3.0 Reference for the Adobe Flash Platform. You can write ActionScript to control these and additional options for a ScrollPane component using its properties, methods, and events.

Create an application with the ScrollPane component

The following procedure explains how to add a ScrollPane component to an application while authoring. In this example, the ScrollPane loads a picture from a path specified by the source property.

1. Create a new Flash (ActionScript 3.0) document.
2. Drag the ScrollPane component from the Components panel to the Stage and give it an instance name of aSp.
3. Open the Actions panel, select Frame 1 in the main Timeline, and enter the following ActionScript code:

import fl.events.ScrollEvent;

aSp.setSize(300, 200);

function scrollListener(event:ScrollEvent):void {
    trace("horizontalScPosition: " + aSp.horizontalScrollPosition +
        ", verticalScrollPosition = " + aSp.verticalScrollPosition);
};
aSp.addEventListener(ScrollEvent.SCROLL, scrollListener);

function completeListener(event:Event):void {
    trace(event.target.source + " has completed loading.");
};
// Add listener.
aSp.addEventListener(Event.COMPLETE, completeListener);

aSp.source = "";

4. Select Control > Test Movie to run the application.

The example creates a ScrollPane, sets its size, and loads an image to it using the source property. It also creates two listeners. The first one listens for a scroll event and displays the image's position as the user scrolls vertically or horizontally. The second one listens for a complete event and displays a message in the Output panel that says the image has completed loading.

Create a ScrollPane instance using ActionScript

This example creates a ScrollPane using ActionScript and places a MovieClip (a red box) in it that is 150 pixels wide by 300 pixels tall.

1. Drag the ScrollPane component from the Components panel to the Library panel.
2. Drag the DataGrid component from the Components panel to the Library panel.

import fl.containers.ScrollPane;
import fl.controls.ScrollPolicy;
import fl.controls.DataGrid;
import fl.data.DataProvider;

var aSp:ScrollPane = new ScrollPane();
var aBox:MovieClip = new MovieClip();
drawBox(aBox, 0xFF0000); // draw a red box
aSp.source = aBox;
aSp.setSize(150, 200);
aSp.move(100, 100);
addChild(aSp);

function drawBox(box:MovieClip, color:uint):void {
    box.graphics.beginFill(color, 1);
    box.graphics.drawRect(0, 0, 150, 300);
    box.graphics.endFill();
}
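The parameters listed earlier can also be set in code. As a small sketch (an assumption, not from the original page, reusing the aSp instance from the first procedure), the scroll behavior can be tuned in ActionScript:

```actionscript
import fl.controls.ScrollPolicy;

// Sketch only: tune scrolling on the aSp ScrollPane instance.
aSp.scrollDrag = true;                        // let the user drag the content
aSp.verticalLineScrollSize = 8;               // pixels moved per arrow-key press
aSp.verticalScrollPolicy = ScrollPolicy.AUTO; // show the scroll bar only when needed
```

Remember that scrollDrag should stay false if the content has valid tab stops, as noted above.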
Java Exercises: Get last modified time of a file

Java Input-Output: Exercise-7 with Solution

Write a Java program to get the last modified time of a file.

Sample Solution:

Java Code:

import java.io.File;
import java.util.Date;

public class Example7 {
    public static void main(String[] args) {
        File file = new File("test.txt");
        Date date = new Date(file.lastModified());
        System.out.println("\nThe file was last modified on: " + date + "\n");
    }
}

Sample Output:

The file was last modified on: Thu Jan 01 05:30:00 IST 1970

Note that File.lastModified() returns 0 when the file does not exist, and 0 milliseconds corresponds to the Unix epoch, which is why the sample run above (where test.txt did not exist) shows a 1970 date.
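As an alternative sketch (not part of the original exercise), the java.nio.file API added in Java 7 performs the same lookup but throws an exception for a missing file instead of silently returning the epoch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

public class LastModifiedNio {
    public static void main(String[] args) throws IOException {
        // Create the file first so the lookup succeeds.
        Path file = Paths.get("test.txt");
        Files.write(file, "hello".getBytes());

        // Unlike File.lastModified(), this throws IOException if the file is missing.
        FileTime time = Files.getLastModifiedTime(file);
        System.out.println("The file was last modified on: " + time);
    }
}
```

FileTime prints as an ISO-8601 instant, so no java.util.Date conversion is needed.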
Opened 6 years ago
Closed 6 years ago

#16934 closed Uncategorized (invalid)

Tutorial 3: error in html editing

Description

In the section of tutorial 3 where we edit a new html file with:

{% if latest_poll_list %}
    <ul>
    {% for poll in latest_poll_list %}
        <li><a href="/polls/{{ poll.id }}/">{{ poll.question }}</a></li>
    {% endfor %}
    </ul>
{% else %}
    <p>No polls are available.</p>
{% endif %}

I kept getting an error saying that the {% else %} statement was an invalid block. I believe that it is supposed to be (% else %) as opposed to {% else %}. If someone could please respond to this as soon as possible it would be greatly appreciated.

Change History (2)

comment:1 Changed 6 years ago by

comment:2 Changed 6 years ago by

The tutorial appears to be correct. Perhaps you have a typo or something missing in your own code. In any case, please do not use Trac for posting support questions -- instead use the django-users mailing list. Please do reopen this ticket if it appears that there is an actual error in the tutorial.
This article uses the Mozilla XForms extension for Firefox (see Resources), but the concept should work in any XForms-capable browser.

What you're trying to accomplish

The idea is to create a page that enables the user to request a specific feed to read based on its URL, displaying its information on the page, as shown in Figure 1.

Figure 1. The reader

The page also includes a button that lets the reader switch to an editor, as shown in Figure 2.

Figure 2. The editor

You start with the most basic form (see Listing 1).

Listing 1. The most basic form

The form doesn't specify any visible controls, but it does create the model and the instance, which will ultimately define the data the user sees. The blankfeed.xml file is a placeholder that contains only enough structure to make sure the controls you ultimately build can bind properly, as shown in Listing 2.

Listing 2. The blankfeed.xml file

Now you add the form that requests a new feed (see Listing 3).

Listing 3. Requesting a new feed

The form doesn't actually take the browser to a new page. Instead, it acts as a way to execute a bit of JavaScript code. The user enters a URL in the targeturl field, and when he or she submits the form, either by clicking the Enter key or by clicking the Submit button, the browser executes the onsubmit handler. That script gets a reference to the src attribute of the instance by referencing its id attribute, content. From there, it sets that value to match the text entered in the targeturl field. Once that value changes, the instance gets reloaded.

You can load the page and submit the form, but in order to see any changes, you'll have to add some visible controls. Creating the reader part of the page is very straightforward. You are dealing with a well-known format, in this case RSS 1.0 (see Listing 4).

Listing 4. The target file

The file includes a variety of namespaces, including RSS, the Dublin Core, and RDF. You'll need to take that into account when creating the XForms form. The reader form itself is very straightforward (see Listing 5).

Listing 5. The reader form

First, you display a title on the page by working your way down into the title included as part of the channel element. Notice the use of namespace prefixes. Even though the elements in the RSS namespace didn't have prefixes in the original file, they still belong to that namespace. Because this file uses XHTML as the default namespace, you'll need to use the rss: prefix to specify the RSS elements. You'll use the repeat element to loop through each of the rss:item elements. Finally, use the output element to display individual items and their information. The result, styled using CSS, looks like Figure 3.

Figure 3. The reader

Okay, that was simple enough. You can, of course, do more complex formatting and layout, but this is the basic method. But what about editing? Editing items is also fairly straightforward, as you can see in Listing 6.

Listing 6. Editing items

Once again, you loop through the individual rss:item elements, this time using input elements rather than output elements. You'll give the repeat element an id value so you can refer to it later. The result looks like Figure 4.

Figure 4. The editor

Now, this makes it easy to edit visible data, but what about data that's not immediately obvious, such as the rdf:about attribute or the rdf:Seq list that appears at the top of the file?

Automatically updating values

RSS 1.0 includes a separate section that lists the URLs of individual items, and an attribute that also lists the URL. You need to make sure that if the URL is changed in the form, this information propagates to these nodes. To do that, you can use a combination of techniques (see Listing 7).

Listing 7. Updating values

It's easy to keep the rdf:about attribute in sync; you can use a bind element. This element makes sure that no matter what you do, the value of a particular item's rdf:about attribute always matches the value of its rss:link child.
The rdf:Seq list is another matter; because they are in different parts of the DOM, you can't use a simple bind element. For this list, you will need to listen for the event that signifies that the value of the rss:link field has been changed, and when it has, you need to manually reset the rdf:li element, using the index of the repeat element -- remember how you gave it an id value to refer to it later? -- to coordinate positions.

Now you need to save the feed. Saving the feed is a matter of creating a submission element that sends the instance data to a script that can save it (see Listing 8).

Listing 8. Creating a submission element

You don't want the entire page to go anywhere when the user saves the feed, so just replace the instance. The target of this submission is a simple script that reads the data, saves it, and echoes it back out for the instance to re-populate itself (see Listing 9).

Listing 9. The script to save the data

In this case, you're saving the file to a specific location, independent of the actual URL for the feed. In an actual application, you'll likely want to base the location of the saved file on the actual feed itself. Most feeds include their original location. For example, your sample feed lists the original location in the rdf:RDF/rss:link element. You can also use the PUT method if your server supports it.

Now let's look at adding items to the feed. Adding new items is a matter of adding an "insert" button with the appropriate action (see Listing 10).

Listing 10. Adding new items

When the user clicks the Add New Item button, it clones the last rdf:RDF/rss:item element and adds the clone before the first existing one. Because it includes the old data, however, you need to use the setvalue element to clear out the existing data. The one exception to this rule is the dc:date value, which you can set using the now() function, part of XForms.
(You may also want to set this value when any of the other data changes, but I'll leave that as an exercise for you, the reader.) You also need to add the rdf:li node to the top of the file and clear its rdf:resource attribute. Finally, you need to give the user the ability to delete individual items (see Listing 11).

Listing 11. Deleting individual items

Because you're including the trigger right in the repeat, deciding which element to delete is easy; it's always the current one.

Deciding which form to show

The final step is to give the user control over which form to use. When the form first loads, you want the user to see the "reader," but you want him or her to have the ability to change over to the "editor" view and back (see Listing 12).

Listing 12. Choosing the form

The switch element works like a case statement in other programming languages. Here you're starting with a case of "read" -- using selected="true" -- and adding a trigger that enables the user to toggle that value to edit. When the user clicks it, the case, and therefore the view, changes. The edit case is similar, toggling over to the reader form.

XForms provides an excellent basis for editing RSS, Atom, and other XML-based syndication formats. In this article, you created a form that enables the user to choose a feed, read it, edit it, and save it to a file. In a production application, you will also need to determine the version of the feed at hand and alter your forms accordingly.

Learn

- Read the XForms specification.
- Read the XForms 2.0 specification.
- Learn the basics of XForms: Part 1, Part 2, and Part 3.
- Learn more about the switch/case elements.
- Learn more about R.
- Let Skimstone explain to you what XForms is.
- See the Skimstone introduction to XForms.

Get products and technologies

- Download Mozilla Firefox.
- Download the XForms extension.
- Get MozzIE, an open source control that allows you to render XForms in IE.
Discuss

- Participate in the discussion forum.
- Atom and RSS forum: Find tips, tricks, and answers about Atom, RSS, or other syndication topics in this forum.
Occasionally you may want to drop the index column of a pandas DataFrame in Python. Since pandas DataFrames and Series always have an index, you can't actually drop the index, but you can reset it by using the following bit of code:

df.reset_index(drop=True, inplace=True)

For example, suppose we have the following pandas DataFrame with an index of letters:

import pandas as pd

#create DataFrame
df = pd.DataFrame({'points': [25, 12, 15, 14, 19, 23, 25, 29],
                   'assists': [5, 7, 7, 9, 12, 9, 9, 4],
                   'rebounds': [11, 8, 10, 6, 6, 5, 9, 12]})

#set index of DataFrame to be random letters
df = df.set_index([pd.Index(['a', 'b', 'd', 'g', 'h', 'm', 'n', 'z'])])

#display DataFrame
df

   points  assists  rebounds
a      25        5        11
b      12        7         8
d      15        7        10
g      14        9         6
h      19       12         6
m      23        9         5
n      25        9         9
z      29        4        12

We can use the reset_index() function to reset the index to be a sequential list of numbers:

#reset index
df.reset_index(drop=True, inplace=True)

#display DataFrame
df

   points  assists  rebounds
0      25        5        11
1      12        7         8
2      15        7        10
3      14        9         6
4      19       12         6
5      23        9         5
6      25        9         9
7      29        4        12

Notice that the index is now a list of numbers ranging from 0 to 7. As mentioned earlier, the index is not actually a column. Thus, when we use the shape command, we can see that the DataFrame has 8 rows and 3 columns (as opposed to 4 columns):

#find number of rows and columns in DataFrame
df.shape

(8, 3)

Bonus: Drop the Index When Importing & Exporting

Often you may want to reset the index of a pandas DataFrame after reading it in from a CSV file. You can quickly reset the index while importing it by using the following bit of code:

df = pd.read_csv('data.csv', index_col=False)

And you can make sure that an index column is not written to a CSV file upon exporting by using the following bit of code:

df.to_csv('data.csv', index=False)

Additional Resources

How to Set Column as Index in Pandas
How to Drop Rows with NaN Values in Pandas
How to Sort Values in a Pandas DataFrame
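One variation worth knowing (a sketch, not from the original article): passing drop=False instead keeps the old index as a regular column rather than discarding it:

```python
import pandas as pd

df = pd.DataFrame({'points': [25, 12], 'assists': [5, 7]})
df = df.set_index(pd.Index(['a', 'b']))

# drop=False moves the old letter index into a column named "index"
df2 = df.reset_index(drop=False)
print(df2.columns.tolist())   # ['index', 'points', 'assists']
print(df2['index'].tolist())  # ['a', 'b']
```

If the index had a name, reset_index() would use that name for the new column instead of "index".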
Introduction

Visual Studio 2005 includes the Toolstrip container and Toolstrip controls; used together, these controls allow the end user to drag and drop toolstrips to the edges of the screen such that at run time, the user can reposition the toolbars to the top, bottom, left, or right edges of the work area.

Figure 1: Dropping Toolstrips into side and top panels of the Toolstrip Container.

Using the control is simple enough; just drop the toolstrip container control into a panel or form; doing so will place the control on the panel or form and expose the toolstrip container tasks (Figure 2):

Figure 2: Dropping the Toolstrip Container into a Form.

By default, the container will not dock to the form or panel when it is first added, but the option to dock the toolstrip container is available in the toolstrip container tasks. The developer is also able to limit the locations where the end user is permitted to drop the toolbars by either checking or un-checking each of the checkboxes (top, bottom, left, or right) displayed in the toolstrip container tasks. If the developer wanted to limit the toolbar to display on the top or bottom of the panel only, that may be accomplished by removing the checkmarks in the left and right panel visibility options.

The area inside the toolstrip container is itself a container; to that end, additional controls may be added to it in lieu of the underlying panel or form hosting the toolstrip container. Figure 3 (below) shows an example of a set of controls added to the panel. Naturally one could also just as easily add controls programmatically to this container at runtime.

Figure 3: Adding Controls to the Toolstrip Container panel at Design Time.

The toolstrips that may be added to the toolstrip containers allow the user to add a variety of controls to the toolstrip; the available options are limited to the following control types:

Figure 4: Adding Controls to the Toolstrip at Design Time.
If the developer needs to add controls other than those shown in Figure 4 (such as a date time picker, checkboxes, radio buttons, etc.) a different approach will have to be adopted. There are certainly a number of approaches that one can adopt to build a dockable container that may be subsequently used as a tool palette. While building your own dockable container does offer you the opportunity to construct a palette that may contain controls other than those available in the standard toolstrip's optional controls; in most cases, the standard toolstrip control options are more than adequate. Figure 5 shows a notional custom dockable palette with a calendar control added and Figure 6 shows a custom dockable palette with a date time picker control added; both items that can't be immediately added to the standard toolstrip control.

Figure 5: Custom Dockable Palette with a Calendar Control.

Figure 6: Custom Dockable Palette with a Date Time Picker Control.

Again, in most cases you can likely get along fine with a standard toolstrip control and you can provide the ability to dock the control through the use of the toolstrip container control. However, if you need to build a custom dockable palette of some sort, the rest of this article will address one approach to building just such a control.

Getting Started

The solution contains a single project for a C# windows desktop application. The project is called "DockToolstrip" and the project contains two forms. The first form provides an example of a standard toolstrip container and a couple of toolstrips in use; the second form is used to show one approach to creating a custom dockable toolbar that may contain any other type of control.

Figure 7: Solution Explorer with Project Visible.

Code: Form 1

There really isn't any code necessary to drive the effects required to drag and drop the toolstrips onto different areas of the toolstrip container.
In order to implement the controls, one need only add the toolstrip container control to the target panel or form, dock into that container and then add one or more toolstrips to the toolstrip container. Adding the controls to the toolstrips and writing the event handlers associated with those controls involves the same processes used to perform those tasks for a stationary toolstrip.

Form 1 contains a split panel; on the left hand side are a few buttons included just for eye wash, on the right side panel, a toolstrip container was added and docked to the right hand panel. Two separate toolstrips were added and a few controls were added and given icons and simple event handlers. A picture box control was docked full into the toolstrip menu container and a picture was centered in the picture box. At run time, the user may drag the toolstrips to the top, bottom, left, or right sides of the container and release them. Doing so will move the toolstrip to a docked position in the vicinity of the release point. Again, no code was added to make any of that work; the functionality is derived entirely from the controls.

Code: Form 2

This form is a little more interesting to look at. The form contains a flow layout panel control which serves as the dockable palette; a standard panel is also contained in the form; this panel is used to contain whatever controls might be added to the form; this panel and its control collection will have to be moved in response to the movement of the flow layout panel control. That is, if the flow layout panel gets docked to the top of the container, the standard panel will have to be docked beneath that panel, or if the flow layout panel gets docked to the left hand side of the container, the standard panel will have to fill the region to the right of the flow layout panel. This is easily accomplished by using the dock properties of both controls along with some code to handle a drag and drop.
The allow drop property on the container has to be set to true to support the drag and drop actions; by default, this property is set to false.

Figure 8: Setting the Form2 Allow Drop Property to True.

The code behind this form begins with the standard imports and default constructor:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

namespace DockToolstrip
{
    public partial class Form2 : Form
    {
        // default constructor
        public Form2()
        {
            InitializeComponent();
        }

The next bit of code is necessary to support the drag and drop functionality; this code is an event handler for the mouse down event for the flow layout panel control. This code registers the beginning of a drag event whenever the user holds down the mouse button on the flow layout panel control:

// set up DoDragDrop on panel1 mousedown
private void panel1_MouseDown(object sender, MouseEventArgs e)
{
    this.DoDragDrop(panel1, DragDropEffects.All);
}

Following this code is the drag drop event handler for Form 2. When the user drops the dragged object onto the form, this section of code will execute. The code is annotated, but the gist of it is that, on drop, the drop point is captured and used to populate a point object that defines that drop location. Once the point is captured, the flow layout panel and the standard panel are undocked; following that, the drop point is evaluated to determine its proximity to each edge in the container. If the point is within 100 pixels of an edge, the dock property of the flow layout panel is set to dock to that nearest edge (top, left, bottom, or right).
The panel itself is also resized as a function of this process, if the panel is docked to the top or bottom of the container, its height property is limited to the height of the tallest control plus a few (10) extra pixels to buffer the controls from the edges. Alternatively, one could use padding to keep the controls away from the edges. If the flow layout control is docked to the sides of the container, the width of the widest control should be used to determine the width of the panel. Lastly, if the panel is not dropped within 100 pixels of an edge, the panel is resized to contain the controls and dropped at the drop location without docking it to an edge. private void Form2_DragDrop(object sender, DragEventArgs e) // get the drop position int dropX = e.X; int dropY = e.Y; // create point at drop location Point dropLocation = new Point(dropX, dropY); Point dropPoint = new Point(); // convert point to client area position dropPoint = this.PointToClient(dropLocation); // prep for the drop panel1.Location = dropPoint; panel1.Dock = DockStyle.None; panel2.Dock = DockStyle.None; // test the position of the drop to determine // whether or not to dock the panel to the top; // reset the height if (dropPoint.Y < 100) { panel1.Dock = DockStyle.Top; panel1.Height = button1.Height + 10; panel2.Dock = DockStyle.Fill; return; } // whether or not to dock the panel to the bottom; if (dropPoint.Y > this.Height - 100) panel1.Dock = DockStyle.Bottom; // whether or not to dock the panel to the left side; // reset the width of the panel if (dropPoint.X < 100) panel1.Dock = DockStyle.Left; panel1.Width = button1.Width + 10; // whether or not to dock the panel to the right side; if (dropPoint.X > this.Width - 100) panel1.Dock = DockStyle.Right; } // if the panel is not near an edge, resize it // and drop it at the drop location without // specific docking panel1.Width = button1.Width + 10; panel1.Height = (button1.Height * 6) + 50; } After handling the drop event, the next bit of 
could is also included to support the drag and drop functionality. /// <summary> /// Handle drag enter event when moving dockable /// control around /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void Form2_DragEnter(object sender, DragEventArgs e) if (e.Data.GetDataPresent(DataFormats.Text)) e.Effect = DragDropEffects.Copy; else e.Effect = DragDropEffects.Move; The rest of the code contained in this demo form is really just there to provide a little eye wash; for example the next code up handles the paint events for the two panels by drawing a gradient in each panel whenever it is moved (and the paint event is fired). // paint gradients in the panels - just eye wash private void panel1_Paint(object sender, PaintEventArgs e) System.Drawing.Drawing2D.LinearGradientBrush gradBrush; Point start = new Point(0, 0); Point end = new Point(panel1.Width, panel1.Height); gradBrush = new System.Drawing.Drawing2D.LinearGradientBrush(start, end, Color.Black, Color.LightSteelBlue); Graphics g = panel1.CreateGraphics(); g.FillRectangle(gradBrush, new Rectangle(0, 0, panel1.Width, panel1.Height)); private void panel2_Paint(object sender, PaintEventArgs e) Point end = new Point(panel2.Width, panel2.Height); gradBrush = new System.Drawing.Drawing2D.LinearGradientBrush(start, end, Color.Black, Color.LightSlateGray); Graphics g = panel2.CreateGraphics(); g.FillRectangle(gradBrush, new Rectangle(0, 0, panel2.Width, panel2.Height)); The last bit of code on any consequence in this demo is a click event handler for the flow layout panel used as a custom palette. This click event handler is set to return without doing anything in response to a click event registered on the flow layout control panel. The purpose of this is to prevent the flow layout panel from resizing in response to being clicked without the intent of dragging. 
private void panel1_Click(object sender, EventArgs e)
{
    return;
}

The rest of the code contained in this form's partial class is just there to handle click events from the controls in the dockable palette. The code described previously is all that is needed to support dragging and dropping the custom palette.

Summary

This article was intended to demonstrate the use of the toolstrip container and toolstrips with regards to dragging and dropping toolstrips and docking those toolstrips to the edges of the toolstrip container control. Alternatively, the second form contained in the application is used to demonstrate an approach to building a dockable custom tool palette. The approach given was very simple to implement; there are a variety of other things that could be done to make for a more versatile dockable palette; for example, the control can be made to show scroll bars and allow scrolling.
{
  "action": "new_branch",
  "branch": "f33",
  "namespace": "rpms",
  "repo": "rust-bodhi-cli",
  "create_git_branch": true
}

The branch in PDC already exists, you can now create it yourself as follows:

Check in the project's settings if you have activated the git hook preventing new git branches from being created and if you did, de-activate it. Then simply run in cloned repository:

git checkout -b <branch_name> && git push -u origin <branch_name>

<branch_name> is the name of the branch you requested. You only need to do this once and you can then use fedpkg as you normally do.

Metadata Update from @limb:
- Issue close_status updated to: Invalid
- Issue status updated to: Closed (was: Open)
C# - The Type System

The Type System

At the center of the Microsoft .NET Framework is a universal type system called the .NET Common Type System (CTS). In addition to defining all types, the CTS also stipulates the rules that the Common Language Runtime (CLR) follows with regard to applications declaring and using these types. In this tutorial, we'll look at this new type system so that you can learn the types available to C# developers and understand the ramifications of using the different types in C# programs.

We'll begin by exploring the concept that every programming element is an object in .NET. We'll then look at how .NET divides types into the two categories, value types and reference types, and we'll discover how boxing enables a completely object-oriented type system to function efficiently. Finally, we'll cover how type casting works in C#, and we'll start looking at namespaces.
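As a quick illustrative sketch (not part of the original tutorial text), the value/reference split and the boxing mentioned above look like this in C#:

```csharp
using System;

class TypeSystemDemo
{
    static void Main()
    {
        int n = 42;               // a value type (System.Int32)
        object boxed = n;         // boxing: the value is copied into an object on the heap
        int unboxed = (int)boxed; // unboxing: an explicit cast back to the value type

        // Every element is an object: even a literal exposes System.Object members.
        Console.WriteLine(42.ToString()); // prints "42"
        Console.WriteLine(unboxed == n);  // prints "True"
    }
}
```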
http://www.brainbell.com/tutors/C_Sharp/The_Type_System.htm
Introduction to the C Programming Language

Why C? It is a fast and powerful "mid-level" language; a standard for program development with wide acceptance; it is everywhere (portable); it supports a modular programming style; it is useful for all applications; C is the native language of UNIX; it is easy to interface with system devices/assembly routines; and C is terse.

C Program Structure
• Canonical First Program
• Header Files
• Names in C
• Comments
• Symbolic Constants

Canonical First Program
• The following program is written in the C programming language:

#include <stdio.h>
main()
{
    /* My first program */
    printf("Hello World! \n");
}

• C is case sensitive. All commands in C must be lowercase. C has a free-form line structure: white space is ignored, statements can continue over many lines, and multiple statements can be on the same line. The end of each statement must be marked with a semicolon.
• printf() is actually a function (procedure) in C that is used for printing variables and text. and for the present the \ followed by the n character represents a newline character. an associated header file must be included. Text to be displayed by printf() must be enclosed in double quotes.h> is to allow the use of the printf statement to provide program output. For each function built into the language. This has to do with the \ and % characters. } • The purpose of the statement #include <stdio. There are some exceptions however. 8 C Programming . clear screen. comments are useful for a variety of reasons.e. • 9 C Programming . Primarily they serve as internal documentation for program structure and functionality. what follows the \ character will determine what is printed (i..Canonical First Program Output & Comments • • Thus the program prints Hello World! And the cursor is set to the beginning of the next line. As will be discussed later. etc. clear line.) /* My first program */ Comments can be inserted into C programs by bracketing text with the /* and */ delimiters. As we shall see later on. a tab. To use any of the standard functions. and cover a range of areas: string handling. mathematics.Header Files • Header files contain definitions of functions and variables which can be incorporated into any C program by using the pre-processor #include statement.h> #include <math. The use of the double quotes "" around the filename informs the compiler to start the search in the current directory for the specified file. 10 C Programming • • • .h and generally reside in the /usr/include subdirectory. etc. For example.h" The use of angle brackets <> informs the compiler to search the compiler’s include directories for the specified file.h> should be at the beginning of the source file. because the declaration for printf() is found in the file stdio. Standard header files are provided with each compiler.h> #include "mylib.h. #include <string. 
All header files have the extension . the line #include <stdio. printing and reading of variables. the appropriate header file should be included. data conversion. to use the function printf() in a program. This is done at the beginning of the C source file. while • • • 11 C Programming . Some users choose to adopt the convention that variable names are all lower case while symbolic names for constants are all upper case. char. summary exit_flag i Jerry7 Number_of_moves _id You should ensure that you use meaningful (but short) names for your identifiers. int. Example keywords are: if. Example: distance = speed * time. C only has 29 keywords. and may be followed by any combination of characters. or the digits 0-9.Names in C • Identifiers in C must begin with a character or underscore. underscores. Keywords are reserved identifiers that have strict meaning to the C compiler. The reasons for this are to make the program easier to read and self-documenting. else. meaning that the text wrong is interpreted as a C statement or variable. /* this comment is inside */ wrong */ • 12 C Programming . Comments may not be nested one inside the another. and in this example.Comments • The addition of comments inside programs is desirable. the first occurrence of */ closes the comment statement for the entire line. */ • Note that the /* opens the comment field and the */ closes the comment field. /* Computational Kernel: In this section of code we implement the Runge-Kutta algorithm for the numerical solution of the differential Einstein Equations. generates an error. In the above example. /* this is a comment. Comments may span multiple lines. These may be added to C programs by enclosing them as follows. modification changes. 13 C Programming . revisions… Best programmers comment as they write the code.Why use comments? • • • Documentation of variables and functions and their usage Explaining difficult sections of code Describes the program. date. author. not after the fact. 
Once this substitution has taken place by the preprocessor. #define N 3000 #define FALSE 0 #define PI 3. All # statements are processed first. values cannot be assigned to symbolic constants.14159 #define FIGURE "triangle" Note that preprocessor statements begin with a # symbol. Traditionally. and the symbols (like N) which occur in the C program are replaced by their value (like 3000). Implemented with the #define preprocessor directive. preprocessor constants are written in UPPERCASE. Preprocessor statements are handled by the compiler (or preprocessor) before the program is actually compiled. 14 C Programming • • • • . In the program itself. In general. the program is then compiled.Symbolic Constants • Names given to values that cannot be changed. preprocessor statements are listed at the beginning of the source file. and are NOT terminated by a semicolon. This acts as a form of internal documentation to enhance program readability and reuse. 2f is %. } The tax on 72. float tax.Use of Symbolic Constants • Consider the following program which defines a constant called TAXRATE.balance. #include <stdio. tax).10 main () { float balance.10.2f\n".10 is 7. tax = balance * TAXRATE. balance = 72. Considering the above program as an example. printf("The tax on %.h> #define TAXRATE 0. what changes would you need to make if the TAXRATE was changed to 20%? 15 C Programming .21 • The whole point of using #define in your programs is to make them easier to read and modify. Use of Symbolic Constants • Obviously. • 16 C Programming . the answer is one. You would change it to read #define TAXRATE 0. where the #define statement which declares the symbolic constant and its value occurs. you would hard code the value 0.20 in your program.20 Without the use of symbolic constants. and this might occur several times (or tens of times)..Variables. Declaring Variables • A variable is a named memory location in which data of a certain type can be stored. • • • 18 C Programming . 
User defined variables must be declared before they can be used in a program. thus the name. The contents of a variable can change. Remember that C is case sensitive. It is possible to declare variables elsewhere in a program. main() { int sum. they are considered different variables in C. so even though the two variables listed below have the same name. sum Sum The declaration of variables is done after the opening brace of main(). but lets start simply and then get into variations later on. All variables in C must be declared before use. It is during the declaration phase that the actual memory for the variable is reserved. Get into the habit of declaring variables using lowercase characters. height. 19 C Programming .Basic Format • The basic format for declaring variables is data_type var. float. char midinit. an integer. Examples are int i. character. • where data_type is one of the four basic types.k. var.j. or double type. float length. …. Unsigned integers(positive values only) are also supported. In addition.Basic Data Types: INTEGER • INTEGER: These are whole numbers. both positive and negative. The keyword used to define integers is int • An example of an integer value is 32. there are short and long integers. An example of declaring an integer variable called age is int age. These specialized integer types will be discussed later. • 20 C Programming . 932 x 105).Basic Data Types: FLOAT • FLOATING POINT: These are numbers which contain fractional parts. An example of declaring a float variable called x is float x. • 21 C Programming . and can be written in scientific notation.73 and 1. The keyword used to define float variables is float • Typical floating point values are 1. both positive and negative.932e5 (1.. x=34. slope=tan(rise/run). sum=a+b. the equal sign should be read as “gets”.8. • When used in this manner. j=j+3. midinit='J'. For example: i=0. Note that when assigning a character value the character should be enclosed in single quotes. 
the assignment operator is the equal sign = and is used to give a variable the value of an expression. 25 C Programming . • two things actually occur. Such as x=y=z=13.The Assignment Operator Evaluation • In the assignment statement a=7.0. 26 C Programming . The integer variable a gets the value of 7. This allows a shorthand for multiple assignments of the same value to several variables in a single statement. and the expression a=7 evaluates to 7. the user should not assume that variables are initialized to some default value “automatically” by the compiler.Initializing Variables • C Variables may be initialized with a value when they are declared. Consider the following declaration. Programmers must ensure that variables have proper values before they are used in expressions. int count = 10. which declares an integer variable count which is initialized to 10. • In general. 27 C Programming . printf("value of letter is %c\n".12.01e-10. float money=44. } • which produces the following output: value value value value of of of of sum is 33 money is 44. double pressure. printf("value of money is %f\n". letter='E'.letter).119999 letter is E pressure is 2.010000e-10 28 C Programming .pressure).h> main () { int sum=33.money). char letter.Initializing Variables Example • The following example illustrates the two methods for variable initialization: #include <stdio. /*assign double value */ printf("value of sum is %d\n".sum). printf("value of pressure is %e\n". /* assign character value */ pressure=2. The expression a%b is read as “a modulus b” and evaluates to the remainder obtained after dividing a by b. For example 7 % 2 1 12 % 3 0 29 C Programming . For example: 1/2 0 3/2 1 The modulus operator % only works with integer operands. The above example shows the prefix form of the increment/decrement operators. i=i-1. --i. is equivalent to is equivalent to i=i+1. 30 C Programming .Increment/Decrement Operators • In C.respectively. They can also be used in postfix form. 
specialized operators have been set aside for the incrementing and decrementing of integer variables. The increment and decrement operators are ++ and -. i--. • is equivalent to is equivalent to i=i+1. i=i-1. as follows i++. These operators allow a form of shorthand in C: ++i. n 1. n 1 • • 31 C Programming .Prefix versus Postfix • The difference between prefix and postfix forms shows up when the operators are used as part of a larger expression. k is incremented before the expression is evaluated. Then in the following statement a=++m + ++n. k is incremented after the expression is evaluated. a 0 then m 1. then a 2 whereas in this form of the statement a=m++ + n++. – If k++ is used in an expression. m 1. – If ++k is used in an expression. Assume that the integer variables m and n have been initialized to zero. the following statement k=k+5. can alternatively be written as variable op= expression. The general syntax is variable = variable op expression. common forms are: += -= Examples: j=j*(3+x). For example. a=a/(s-5). can be written as k += 5. *= /= %= • • • • j *= 3+x. 32 C Programming . a /= s-5.Advanced Assignment Operators • A further example of C shorthand are operators which combine an arithmetic operation and a assignment together in one form. The associativity is also shown. Operators with higher precedence are employed first. associativity determines the direction in which the expression will be evaluated.++ -* / % + = R L L R L R R L • 33 C Programming .Precedence & Associativity of Operators • The precedence of operators determines the order in which operations are performed in an expression. Operators higher up in the following diagram have higher precedence. C has a built-in operator hierarchy to determine the precedence of operators. If two operators in an expression have the same precedence. . 4 3 • The programmer can use parentheses to override the hierarchy and force a desired order of evaluation.4) 3 * -1 -3 34 C Programming .4 7 . 
Expressions enclosed in parentheses are evaluated first. For example: (1 + 2) * (3 .4 1 + 6 .Precedence & Associativity of Operators Examples • This is how the following expression is evaluated 1 + 2 * 3 . 147. 35 C Programming .483. • There are unsigned versions of all three types of integers. • typically has a range of 0 to 65. only a range of positive values. long int national_debt. It is possible in C to specify that an integer be stored in more memory locations thereby increasing its effective range and allowing very large integers to be stored. • long int variables typically have a range of +-2. This value differs from computer to computer and is thus machine-dependent.648. • There are also short int variables which may or may not have a smaller range than normal int variables. This is accomplished by declaring the integer variable to have type long int.The int Data Type • A typical int variable is in the range +-32.535. Negative integers cannot be assigned to unsigned integers. All that C guarantees is that a short int will not take up more bytes than int.767. For example unsigned int salary. The float and double Data Types • As with integers the different floating point types available in C correspond to different ranges of values that can be represented. the accuracy of the stored real values increases as you move down the list. • 36 C Programming . More importantly. The actual ranges and accuracy are machine-dependent. the number of bytes used to represent a real value determines the precision to which the real value is represented. though. The more bytes used the higher the number of decimal places of accuracy in the stored value. The three C floating point types are: float double long double • In general. Internally. . For example the ASCII code associates the character ‘m’ with the integer 109. / 0 1 2 3 4 5 6 7 8 9 : . . . < = > ? : . if x is a double and i an integer. This automatic conversion takes place in two steps. 
what is the type of the expression x+i In this case. i will be converted to type double and the expression will evaluate as a double. First. The type hierarchy is as follows long double double unsigned long long unsigned 39 int C Programming • • .Automatic Type Conversion • How does C evaluate and type expressions that contain a mixture of different data types? For example. In the second step “lower” types are promoted to “higher” types. A temporary copy of i is converted to a double and used in the expression evaluation. The expression itself will have the type of its highest operand. NOTE: the value of i stored in memory is unchanged. all floats are converted to double and all characters and shorts are converted to ints. but result is machine-dependent 40 C Programming . • A conversion occurs. then x=i. For example.Automatic Type Conversion with Assignment Operator • Automatic conversion even takes place if the operator is the assignment operator. • i is promoted to a double and resulting value given to x • On the other hand say we have the following expression: i=x. This creates a method of type conversion. if x is double and i an integer. (char) 3 + 'A' x = (float) 77.Type Casting • Programmers can override automatic type conversion and explicitly cast variables to be of a certain type when used in an expression. (double) k * 57 41 C Programming . For example. The general syntax is (type) expression • Some examples. (double) i • will force i to be of type double. Input and Output • • • • • • • Basic Output printf Function Format Specifiers Table Common Special Characters for Cursor Control Basic Output Examples Basic Input Basic Input Example 42 C Programming . Basic Output • Now. • which produced this output: value of sum is 33 • The first argument of the printf function is called the control string. it starts printing the text in the control string until it encounters a % character. In a previous program. When the printf is executed. 
we saw this example print("value of sum is %d\n". At the end of the control statement. 43 C Programming . The % sign is a special character in C and marks the beginning of a format specifier. When a format specifier is found. displays its value and continues on. The d character that follows the % indicates that a (d)ecimal integer will be displayed. printf reads the special character \n which indicates print the new line character. printf looks up the next argument (in this case sum). let us look more closely at the printf() statement.sum). A format specifier controls how the value of a variable will be displayed on the screen. or function calls -. • where the control string consists of 1) literal text to be displayed. and 3)special characters. Unpredictable results if argument type does not “match” the identifier. The arguments can be variables.argument list). 44 C Programming . 2) format specifiers.printf Function • General form of printf function printf(control string. Number of arguments must match the number of format identifiers.anything that produces a value which can be displayed. constants. expressions. . leg1+leg2). printf(“to shining “).100000e+24 ? 63 try it yourself 47 C Programming . num1=10.’C’). printf(“It was %f miles”. printf(“%d\n”.big).4.5).’A’. printf(“From sea \n”). printf(“%c %c %c”. printf(“%d\t%d\n”. printf (“C”).Basic Output Examples printf(“ABC”).’B’.num1. printf(“\007 That was a beep\n”). big=11e+23. num2=33.’?’).700012 miles 10 33 1.3. ABC (cursor after the C) 5 (cursor at start of next line) A B C From sea to shining C From sea to shining C It was 557. printf (“C”).num2). leg2=357. printf(“to shining \n“).’?’). printf(“From sea ”). leg1=200. printf(“%e \n”. printf(“%d \n”. printf(“%c \n”. scanf("%d". the format specifier %d shows what data type is expected. The following program illustrates the use of this function.h> main() { int pin.} • What happens in this program? An integer called pin is defined.&pin). #include <stdio. 
has a control string and an address list. This is confirmed with the second printf statement. The &pin argument specifies the memory location of the variable the input will be placed in.Basic Input • There is a function in C which allows the programmer to accept input from a keyboard. (Much more with & when we get to pointers…) 48 C Programming . It is the address operator. The scanf routine. the variable pin will be initialized with the input integer. After the scanf routine completes.pin). The & character has a very special meaning in C. A prompt to enter in a number is then printed with the first printf statement. printf("Please type in your PIN\n"). which accepts the response. printf("Your access code is %d\n". In the control string. 49 C Programming • .h> main() { int pin. If you are inputting values for a double variable.&pin). use the %lf format identifier.Basic Input Example #include <stdio. White space is skipped over in the input stream (including carriage return) except for character input.} • A session using the above code would look like this Please type your PIN 4589 Your access code is 4589 • The format identifier used for a specific C data type is the same as for the printf statement. A blank is valid character input. with one exception. printf("Your access code is %d\n". scanf("%d".pin). printf("Please type in your PIN\n"). . In a conditional loop the iterations are halted when a certain condition is true. In C.Introduction to Program Looping • Program looping is often desirable in coding in any language to have the ability to repeat a block of statements a number of times. there are statements that allow iteration of this type. Specifically. An unconditional loop is repeated a set number of times. 51 C Programming . there are two classes of program loops -.unconditional and conditional. Thus the actual number of iterations performed can vary each time the loop is executed. this expression will evaluate to TRUE. If a is less than 4. 
If not it will evaluate to FALSE.Relational Operators • Our first use of these operators will be to set up the condition required to control a conditional loop. Exactly what does it mean to say an expression is TRUE or FALSE? C uses the following definition – FALSE means evaluates to ZERO – TRUE means evaluates to any NON-ZERO integer(even negative integers) • 52 C Programming . Such as a < 4 • which reads a “less than” 4. Relational operators allow the comparison of two expressions. 53 C Programming . exit loop and move on to next line of code. exit the for loop. 2) The test expression is evaluated.for Loop • The for loop is C’s form of an unconditional loop. • The operation for the loop is as follows 1) The initialization expression is evaluated. If it is FALSE. ++i) sum = sum+i. The basic syntax of the for statement is. body of the loop is executed. 54 C Programming . increment expr) program statement. 3) Assume test expression is TRUE. Execute the program statements making up the body of the loop. 4) Evaluate the increment expression and return to step 2. test expr. i<6. If it is TRUE. • Here is an example sum=10. for (i=0. for (initialization expression. 5) When test expression is FALSE. .for Loop Example • Sample Loop: sum = 10. i<6. ++i) sum=sum+i. for (i=0. 2 2 . TRUE 4 4 ) } 5 5 3 3 FALSE 56 C Programming .for Loop Diagram • The following diagram illustrates the operation of a for loop for ( { 1 1 . x. and incrementation.) product*=i++. x-=5) { z=sqrt(x). } – Control expressions can be any valid expression. x!=65.i<=6. – Any of the control expressions can be omitted (but always need the two semicolons for syntax sake).z). not . enclose them in brackets (USE INDENTATION FOR READABILITY) for (x=100. testing. for (i=1.General Comments about for Loop • Some general comments regarding the use of the for statement: – Control expressions are separated by . – If there are multiple C statements that make up the loop body. 
Don’t necessarily have to perform initialization. product=1. 57 C Programming . printf("The square root of %d is %f\n". y). – Can string together multiple expressions in the for statement by separating them by commas for (x=1.General Comments about for Loop Continued • Some general comments regarding the use of the for statement: – Since test performed at beginning of loop. for (y=10.y!=x.++y) printf ("%d". body may never get executed x=10. 58 C Programming .++x) z=x%y.y=5.x+y<100. skip over the loop. 3) If it is TRUE. loop body is executed. Its format is while(control expression) program statement. • The while statement works as follows: 1) Control expression is evaluated (“entry condition”) 2) If it is FALSE. 4) Go back to step 1 59 C Programming .while Loop • The while loop provides a mechanism for repeating C statements while a condition is true. factorial=1. Otherwise: infinite loop. while (i<=n) { factorial *= i. • • Will this loop end? j=15. 60 C Programming . while (j--) ….while Loop Example • Example while loop i=1. the control expression must be altered in order to allow the loop to finish. } Programmer is responsible for initialization and incrementation. At some point in the body of the loop. i=i+1. while (control expression). go back to step 1. exit loop. This guarantees that the loop is executed at least once. The syntax of the do while statement is do program statement. 2) The control expression is evaluated (“exit condition”). 3) If it is TRUE. • • 61 C Programming . and it works as follows 1) The body of the loop is executed. If it is FALSE.do while Loop • The do while statement is a variant of the while statement in which the condition test is performed at the “bottom” of the loop. there are three decision making statements.Introduction to Decision Making Statements • Used to have a program execute different statements depending on certain conditions. In a sense. 
if if-else switch execute a statement or not choose to execute one of two statements choose to execute one of a number of statements 65 C Programming . In C. makes a program “smarter” by allowing different choices to be made. the body of the if is executed. the body of the if is skipped. Avoid trying to compare real variables for equality. or you may encounter unpredictable results.if Statement • The if statement allows branching (decision making) depending upon a condition. The basic syntax is if (control expression) program statement. it makes it very difficult to compare such types for equality. • • 66 C Programming . There is no “then” keyword in C! Because of the way in which floating point types are stored. • If the control expression is TRUE. If it is FALSE. Program code is executed or skipped. if Statement Examples • Theses code fragments illustrate some uses of the if statement – Avoid division by zero if (x!=0) y/=x. – Customize output if (grade>=90) printf("\nCongratulations!"). 67 C Programming . – Nested ifs if (letter>='A') if (letter>='Z') printf("The letter is a capital \n").grade). printf("\nYour grade is "%d". Some examples if (x<y) if (letter == 'e') { min=x. The syntax of the if-else statement is if (expression) statement1. else statement2. else ++vowel_count. ++e_count.if-else Statement • Used to decide between two courses of action. else ++other_count. statement1 is executed. statement2 is skipped. If the expression is TRUE. 68 C Programming • • • . } min=y. statement1 is skipped. If the expression is FALSE. statement2 is executed.. 2) A match is looked for between this expression value and the case constants. execute the default statement.switch Statement Operation • The switch statement works as follows 1) Integer control expression is evaluated. If a match is not found. . If a match is found. break. break. case 'b': ++b_count. } 73 C Programming . 
case 'c': case 'C': /* multiple values.switch Statement Example: Characters switch(ch) { case 'a': ++a_count. same statements */ ++c_count. break. case 'D': display_errors().switch Statement Example: Menus • A common application of the switch statement is to control menu-driven software: switch(choice) { case 'S': check_spelling(). case 'C': correct_errors(). } 74 C Programming . break. default: printf("Not a valid option\n"). break. The two symbols used to denote this operator are the ? and the :. This conditional expression operator takes THREE operands.Conditional Operator • Short-hand notation for an if-else statement that performs assignments. and the third after the :. the second operand between the ? and the :. The first operand is placed before the ?. Consider the example on the next page: 75 C Programming . The general syntax is thus condition ? expression1 : expression2. If the condition is FALSE (zero). expression1 is evaluated and the result of the evaluation becomes the result of the operation. then expression2 is evaluated and its result becomes the result of the operation. • If the result of condition is TRUE (non-zero). then s=-1. short-hand code to perform the same task is even=(number%2==0) ? 1 : 0. then s=x*x. If x is greater than or equal to zero. 76 C Programming • . else even=0. • Identical.Conditional Operator Examples s = (x<0) ? -1 : x*x. • If x is less than zero. The following code sets the logical status of the variable even if (number%2==0) even=1. either TRUE (i.. the logical expression is FALSE. exp1 || exp2 Will be TRUE if either (or both) exp1 or exp2 is TRUE. Otherwise. || ! 77 C Programming . When expressions are combined with a logical operator.e. 1) or FALSE (i.e.Logical Operators • These operators are used to create more sophisticated conditional expressions which can then be used in any of the C looping or decision making statements we have just discussed. it is FALSE. Otherwise.. 0) is returned. 
!exp Negates (changes from TRUE to FALSE and visa versa) the expression. Symbol Usage && Operation Operator LOGICAL AND LOGICAL OR LOGICAL NOT exp1 && exp2 Requires both exp1 and exp2 to be TRUE to return TRUE. done=0. while(!done) { … } 78 C Programming . has the highest precedence and is always performed first in a mixed expression.Logical Operators Precedence • The negation operator. !. Some typical examples using logical operators: • if (year<1900 && year>1799) printf("Year in question is in the 19th century\n"). if (ch=='a' || ch=='e' || ch='i' || ch='o' || ch='u') ++vowel_count. The remaining logical operators have a precedence below relational operators. Array Variables • • • • • • • • • • Introduction to Array Variables Array Variables Example Array Elements Declaring Arrays Initializing Arrays during Declaration Using Arrays Multi-dimensional Arrays Multi-dimensional Array Illustration Initializing Multi-dimensional Arrays Using Multi-dimensional Arrays 79 C Programming . Arrays offer a solution to this problem. 80 C Programming . • It becomes increasingly more difficult to keep track of the IDs as the number of variables increase. This might look like int id1 = 101.Introduction to Array Variables • Arrays are a data structure which hold multiple values of the same data type. Her first approach might be to create a specific variable for each user. Consider the case where a programmer needs to keep track of the ID numbers of people within an organization. and 2) there is an ordered method for extracting individual data items from the whole. • int id2 = 232. int id3 = 231. Arrays are an example of a structured variable in which 1) there are a number of pieces of data contained in the variable name. Thus. 81 C Programming • • . and uses an indexing system to find each variable stored within it. /* declaration of array id */ id[0] = 101. Each piece of data in an array is called an element. we declared an array called id. 
id[1] = 232.Array Variables Example • An array is a multi-element box. id[2] = 231. indexing starts at zero. which has space for three integer variables. must be declared before they can be used. array id has three elements. In C. a bit like a filing cabinet. After the first line. Arrays. The replacement of the previous example using an array looks like this: int id[3]. In the first line. each element of id is initialized with an ID number. like other variables in C. Array Elements • The syntax for an element of an array called a is a[i] where i is called the index of the array element. The array element id[1] is just like any normal integer variable and can be treated as such. In memory. one can picture the array id as in the following diagram: • • id 101 232 231 id[2] id[0] id[1] 82 C Programming . After the declaration. you cannot assume that the elements have been initialized to zero. distance[66]. Arrays are declared along with all other variables in the declaration section of the program and the following syntax is used type array_name[n].Declaring Arrays • Arrays may consist of any of the valid data types. • During declaration consecutive memory locations are reserved for the array and all its elements. • where n is the number of elements in the array. 83 C Programming . Some examples are int float final[160]. Random junk is at each element’s memory location. 2. static float height[5]={6.2.323}.0.g. • Some rules to remember when initializing during declaration 1 If the list of initial elements is shorter than the number of array elements. its elements are automatically initialized to zero.7. static int a[]={-6.3.2.7.4. The initial values are enclosed in braces.3.6.. static int value[9] = {1.12.18.19.3.5. the remaining elements are initialized to zero.6.Initializing Arrays during Declaration • If the declaration of an array is preceded by the word static.9}.2. In the following declaration. 2 If a static array is not initialized at declaration manually. 
a has size 5.
84 C Programming

Using Arrays
• Recall that indexing is the method of accessing individual array elements. Thus grade[89] refers to the 90th element of the grades array.
• A common programming error is out-of-bounds array indexing. Consider the following code:
int grade[3];
grade[5] = 78;
The result of this mistake is unpredictable and machine and compiler dependent. You could write over important memory locations. Often run-time errors result.
• Array variables and for loops often work hand-in-hand, since the for loop offers a convenient way to successively access array elements and perform some operation with them. Basically, the for loop counter can do double duty and act as an index for the array, as in the following summation example:
int total=0, i;
int grade[4]={93,94,67,78};
for (i=0; i<4; ++i)
   total += grade[i];
85 C Programming

Multi-Dimensional Arrays
• Multi-dimensional arrays have two or more index values which are used to specify a particular element in the array. For this 2D array element,
image[i][j]
the first index value i specifies a row index, while j specifies a column index.
• Declaring multi-dimensional arrays is similar to the 1D case:
int a[10];           /* declare 1D array */
float b[3][5];       /* declare 2D array */
double c[6][4][2];   /* declare 3D array */
• Note that it is quite easy to allocate a large chunk of consecutive memory with multi-dimensional arrays. Array c contains 6x4x2=48 doubles.
86 C Programming

Multi-Dimensional Array Illustration
• A useful way to picture a 2D array is as a grid or matrix. As before, 2D arrays are stored by row, which means that in memory the 0th row is put into its memory locations, the 1st row then takes up the next memory locations, the 2nd row takes up the next memory locations, and so on.
87 C Programming

Initializing Multi-Dimensional Arrays
• This procedure is entirely analogous to that used to initialize 1D arrays at their declaration. For example, this declaration
static int age[2][3]={4,8,12,19,6,-1};
will fill up the array age as it is stored in memory. That is, the array is initialized row by row. Thus, the above statement is equivalent to:
age[0][0]=4;  age[0][1]=8;  age[0][2]=12;
age[1][0]=19; age[1][1]=6;  age[1][2]=-1;
• As before, if there are fewer initialization values than array elements, the remainder are initialized to zero.
• To make your program more readable, you can explicitly put the values to be assigned to the same row in inner curly brackets:
static int age[2][3]={{4,8,12},{19,6,-1}};
• In addition, if the number of rows is omitted from the actual declaration, it is set equal to the number of inner brace pairs:
static int age[][3]={{4,8,12},{19,6,-1}};
88 C Programming

Using Multi-Dimensional Arrays
• Again, as with 1D arrays, for loops and multi-dimensional arrays often work hand-in-hand. In this case, though, loop nests are what is most often used. Some examples:
Summation of array elements
double temp[256][3000], sum=0;
int i, j;
for (i=0; i<256; ++i)
   for (j=0; j<3000; ++j)
      sum += temp[i][j];
Trace of Matrix
int voxel[512][512][512];
int i, j, k, trace=0;
for (i=0; i<512; ++i)
   for (j=0; j<512; ++j)
      for (k=0; k<512; ++k)
         if (i==j && j==k)
            trace += voxel[i][j][k];
89 C Programming

Strings
• Arrays of Characters
• Initializing Strings
• Copying Strings
• String I/O Functions
• More String Functions
• More String Functions Continued
• Examples of String Functions
• Character I/O Functions
• More Character Functions
• Character Functions Example
90 C Programming
Arrays of Characters
• Strings are 1D arrays of characters. As with all C variables, strings must be declared before they are used. Unlike other 1D arrays, the number of elements set for a string during declaration is only an upper limit. The actual strings used in the program can have fewer elements.
• Strings must be terminated by the null character '\0', which is (naturally) called the end-of-string character. String constants marked with double quotes automatically include the end-of-string character. Don't forget to count the end-of-string character when you calculate the size of a string.
• Consider the following code:
static char name[18] = "Ivanova";
The string called name actually has only 8 elements. They are
'I' 'v' 'a' 'n' 'o' 'v' 'a' '\0'
• Notice another interesting feature of this code: the curly braces are not required for string initialization at declaration, but can be used if desired (but don't forget the end-of-string character).
91 C Programming

Initializing Strings
• Initializing a string can be done in three ways: 1) at declaration, 2) by reading in a value for the string, and 3) by using the strcpy function.
• Direct initialization using the = operator is invalid. The following code would produce an error:
char name[34];
name = "Erickson";   /* ILLEGAL */
• To read in a value for a string, use the %s format identifier:
scanf("%s",name);
The end-of-string character will automatically be appended during the input process. Note that the address operator & is not needed for inputting a string variable (explained later).
92 C Programming

Copying Strings
• The strcpy function is one of a set of built-in string handling functions available for the C programmer to use. To use these functions, be sure to include the string.h header file at the beginning of your program.
• The syntax of strcpy is
strcpy(string1,string2);
When this function executes, string2 is copied into string1 at the beginning of string1. The previous contents of string1 are overwritten.
• In the following code, strcpy is used for string initialization:
#include <stdio.h>
#include <string.h>
main ()
{
   char job[50];
   strcpy(job,"Professor");
   printf("You are a %s \n",job);
}
You are a Professor
93 C Programming
String I/O Functions
• There are special functions designed specifically for string I/O. They are
gets(string_name);
puts(string_name);
• The gets function reads in a string from the keyboard. When the user hits a carriage return, the string is inputted. The carriage return is not part of the string, and the end-of-string character is automatically appended.
• The function puts displays a string on the monitor. It does not print the end-of-string character, but does output a carriage return at the end of the string.
• Here is a sample program demonstrating the use of these functions:
char phrase[100];
printf("Please enter a sentence\n");
gets(phrase);
puts(phrase);
• A sample session would look like this:
Please enter a sentence
The best lack all conviction, while the worst are passionate.
The best lack all conviction, while the worst are passionate.
94 C Programming

More String Functions
• Included in the string.h header file are several more string-related functions that are free for you to use, among them strcat, strchr, strcmp, strcpy, strlen, strncat, strncmp, strncpy, strrchr, and strstr.
95 C Programming

More String Functions Continued
• Most of the functions on the previous page are self-explanatory. The UNIX man pages provide a full description of their operation. Take for example strcmp, which has this syntax:
strcmp(string1,string2);
• It returns an integer that is less than zero, equal to zero, or greater than zero depending on whether string1 is less than, equal to, or greater than string2.
• String comparison is done character by character using the ASCII numerical code.
96 C Programming

Examples of String Functions
• Here are some examples of string functions in action:
static char s1[]="big sky country";
static char s2[]="blue moon";
static char s3[]="then falls Caesar";

Function                 Result
strlen(s1)               15   /* e-o-s not counted */
strlen(s2)               9
strcmp(s1,s2)            negative number
strcmp(s3,s2)            positive number
strcat(s2," tonight")    blue moon tonight
97 C Programming

Character I/O Functions
• Analogous to the gets and puts functions, there are the getchar and putchar functions, specially designed for character I/O. The following program illustrates their use:
#include <stdio.h>
main()
{
   int n;
   char lett;
   n=45;
   putchar('?');
   lett=getchar();
   putchar(lett);
   putchar(n-2);
   putchar('\n');
}
• A sample session using this code would look like:
?f
f+
98 C Programming

More Character Functions
• As with strings, there is a library of functions designed to work with character variables. The file ctype.h contains the declarations of these functions, among them isalpha, isdigit, isspace, tolower, and toupper.
99 C Programming

Character Functions Example
• In the following program, character functions are used to convert a string to all uppercase characters:
#include <stdio.h>
#include <ctype.h>
main()
{
   char name[80];
   int loop;
   printf ("Please type in your name\n");
   gets(name);
   for (loop=0; name[loop] != 0; loop++)
      name[loop] = toupper(name[loop]);
   printf ("You are %s\n",name);
}
• A sample session using this program looks like this:
Please type in your name
Dexter Xavier
You are DEXTER XAVIER
100 C Programming

Math Library Functions
• "Calculator-class" Functions
• Using Math Library Functions
101 C Programming

"Calculator-class" Library Functions
• You may have started to guess that there should be a header file called math.h which contains definitions of useful "calculator-class" mathematical functions. Well there is! Some functions found in math.h are
acos   asin   atan   cos    sin    tan
cosh   sinh   tanh   exp    log    log10
pow    sqrt   ceil   floor  erf    gamma
j0     j1     jn     y0     y1     yn
102 C Programming

Using Math Library Functions
• The following code fragment uses the Pythagorean theorem c^2 = a^2 + b^2 to calculate the length of the hypotenuse given the other two sides of a right triangle:
double c, a, b;
c=sqrt(pow(a,2)+pow(b,2));
• Typically, to use the math functions declared in the math.h include file, the user must explicitly load the math library during compilation. On most systems the compilation would look like this:
cc myprog.c -lm
103 C Programming

Introduction to User-defined Functions
• A function in C is a small "sub-program" that performs a particular task, and supports the concept of modular programming design techniques.
• We have already been exposed to functions. The main body of a C program, identified by the keyword main and enclosed by left and right braces, is a function. It is called by the operating system when the program is loaded, and when terminated, returns to the operating system.
• We have also seen examples of library functions which can be used for I/O, mathematical tasks, and character/string handling.
• But can the user define and use their own functions? Absolutely YES!
105 C Programming

Reasons for Use
• In modular programming, the various tasks that your overall program must accomplish are assigned to individual functions, and the main program basically calls these functions in a certain order. There are many good reasons to program in a modular style:
– Don't have to repeat the same block of code many times in your code. Make that code block a function and call it when needed.
– Function portability: useful functions can be used in a number of programs.
– Supports the top-down technique for devising a program algorithm. Make an outline and hierarchy of the steps needed to solve your problem and create a function for each step.
– Easy to debug. Get one function working well, then move on to the others.
– Easy to modify and expand. Just add more functions to extend program capability.
– Make program self-documenting and readable.
– For a large programming project, you will code only a small fraction of the program.
106 C Programming

User-defined Function Usage
• In order to use functions, the programmer must do three things:
– Define the function
– Declare the function
– Use the function in the main code
• In the following pages, we examine each of these steps in detail.
107 C Programming

Function Definition
• The function definition is the C code that implements what the function does. Function definitions have the following syntax:
return_type function_name (data type variable name list)   /* function header */
{
   local declarations;
   function statements;
}
108 C Programming

Function Definition Example 1
• Here is an example of a function that calculates n!
int factorial (int n)
{
   int i, product=1;
   for (i=2; i<=n; ++i)
      product *= i;
   return product;
}
109 C Programming

Function Definition Example 2
• Some functions will not actually return a value or need any arguments. For these functions the keyword void is used. Here is an example:
void write_header(void)
{
   printf("Navier-Stokes Equations Solver ");
   printf("v3.45\n");
   printf("Last Modified: ");
   printf("12/04/95 - viscous coefficient added\n");
}
• The 1st void keyword indicates that no value will be returned. This makes sense because all this function does is print out a header statement.
• The 2nd void keyword indicates that no arguments are needed for the function.
110 C Programming

return Statement
• A function returns a value to the calling program with the use of the keyword return, followed by a data variable or constant value. The return statement can even contain an expression. Some examples:
return 3;
return n;
return ++a;
return (a*b);
• When a return is encountered, the following events occur: 1) execution of the function is terminated and control is passed back to the calling program, and 2) the function call evaluates to the value of the return expression.
• If there is no return statement, control is passed back when the closing brace of the function is encountered ("falling off the end").
111 C Programming

return Statement Examples
• The data type of the return expression must match that of the declared return_type for the function.
float add_numbers (float n1, float n2)
{
   return n1 + n2;   /* legal */
   return 6;         /* illegal, not the same data type */
   return 6.0;       /* legal */
}
• It is possible for a function to have multiple return statements. For example:
double absolute(double x)
{
   if (x>=0.0)
      return x;
   else
      return -x;
}
112 C Programming

Using Functions
• This is the easiest part! To invoke a function, just type its name in your program and be sure to supply arguments (if necessary). When your program encounters a function invocation, control passes to the function. When the function is completed, control passes back to the main program. In addition, if a value was returned, the function call takes on that return value.
• To invoke our write_header function, use this statement:
write_header();
• A statement using our factorial program would look like
number=factorial(9);
In the above example, upon return from the factorial function, the statement factorial(9) evaluates to 362880, and that integer is assigned to the variable number.
113 C Programming

Considerations when using Functions
• Some points to keep in mind when calling functions (your own or library's):
– The number of arguments in the function call must match the number of arguments in the function definition.
– The type of the arguments in the function call must match the type of the arguments in the function definition.
– The actual arguments in the function call are matched up in-order with the dummy arguments in the function definition.
– The actual arguments are passed by-value to the function. The dummy arguments in the function are initialized with the present values of the actual arguments. Any changes made to the dummy argument in the function will NOT affect the actual argument in the main program.
114 C Programming

Using Function Example
• The independence of actual and dummy arguments is demonstrated in the following program.
#include <stdio.h>
int compute_sum(int n)
{
   int sum=0;
   for(; n>0; --n)
      sum += n;
   printf("Local n in function is %d\n",n);
   return sum;
}
main()
{
   int n=8, sum;
   printf ("Main n (before call) is %d\n",n);
   sum=compute_sum(n);
   printf ("Main n (after call) is %d\n",n);
   printf ("\nThe sum of integers from 1 to %d is %d\n",n,sum);
}
Main n (before call) is 8
Local n in function is 0
Main n (after call) is 8
The sum of integers from 1 to 8 is 36
115 C Programming

Introduction to Function Prototypes
• Function prototypes are used to declare a function so that it can be used in a program before the function is actually defined. Consider the program on the previous page. All the secondary functions are defined first, and then we see the main program that shows the major steps in the program. In some sense, it reads "backwards".
• This example program can be rewritten using a function prototype as follows:
#include <stdio.h>
int compute_sum(int n);   /* Function Prototype */
main()
{
   int n=8, sum;
   printf ("Main n (before call) is %d\n",n);
   sum=compute_sum(n);
   printf ("Main n (after call) is %d\n",n);
   printf ("\nThe sum of integers from 1 to %d is %d\n",n,sum);
}
int compute_sum(int n)
{
   int sum=0;
   for(; n>0; --n)
      sum += n;
   printf("Local n in function is %d\n",n);
   return sum;
}
116 C Programming

Function Prototypes
• Now the program reads in a "natural" order. You know that a function called compute_sum will be defined later on, and you see its immediate use in the main program. Perhaps you don't care about the details of how the sum is computed, and you won't need to read the actual function definition. The function definitions can then follow the main program.
• As this example shows, a function prototype is simply the function header from the function definition with a semi-colon attached to the end. Function prototypes should be placed before the start of the main program.
• The prototype tells the compiler the number and type of the arguments to the function and the type of the return value. In addition to making code more readable, the use of function prototypes offers improved type checking between actual and dummy arguments. In some cases, the type of actual arguments will automatically be coerced to match the type of the dummy arguments.
• In fact, if you look at one of the include files -- say string.h -- you will see the prototypes for all the string functions available!
117 C Programming

Recursion
• Recursion is the process in which a function repeatedly calls itself to perform calculations. Typical applications are games and sorting trees and lists.
• Recursive algorithms are not mandatory; usually an iterative approach can be found. The following function calculates factorials recursively:
int factorial(int n)
{
   int result;
   if (n<=1)
      result=1;
   else
      result = n * factorial(n-1);
   return result;
}
118 C Programming

Storage Classes
• Every variable in C actually has two attributes: its data type and its storage class. The storage class refers to the manner in which memory is allocated for the variable. The storage class also determines the scope of the variable, that is, what parts of a program the variable's name has meaning. In C, the four possible storage classes are
– auto
– extern
– static
– register
119 C Programming

auto Storage Class
• This is the default classification for all variables declared within a function body [including main()].
• Automatic variables are truly local. They are unknown to other functions. They exist and their names have meaning only while the function is being executed.
• When the function is exited, the values of automatic variables are not retained. They are recreated each time the function is called. They are normally implemented on a stack.
120 C Programming

extern Storage Class
• In contrast, extern variables are global. If a variable is declared at the beginning of a program outside all functions [including main()], it is classified as an external by default.
• External variables can be accessed and changed by any function in the program. Their storage is in permanent memory, and thus they never disappear or need to be recreated.
• What is the advantage of using global variables? It is a method of transmitting information between functions in a program without using arguments.
121 C Programming

extern Storage Class Example
• The following program illustrates the global nature of extern variables:
#include <stdio.h>
int a=4, b=5, c=6;   /* default extern */
int sum(void);
int prod(void);
main()
{
   printf ("The sum is %d\n",sum());
   printf ("The product is %d\n",prod());
}
int sum(void)
{
   return (a+b+c);
}
int prod(void)
{
   return (a*b*c);
}
The sum is 15
The product is 120
• There are two disadvantages of global variables versus arguments. First, if a local variable has the same name as a global variable, only the local variable is changed while in the function. Once the function is exited, the global variable has the same value as when the function started. This is the concept of local dominance. Second, the function is much less portable to other programs.
122 C Programming

char and int Formatted Output Example
• This program and its output demonstrate various-sized field widths and their variants.
#include <stdio.h>
main()
{
   char lett='w';
   int i=1, j=29;
   printf ("%c\n",lett);
   printf ("%4c\n",lett);
   printf ("%-3c\n\n",lett);
   printf ("%d\n",i);
   printf ("%d\n",j);
   printf ("%10d\n",j);
   printf ("%010d\n",j);
   printf ("%-010d\n",j);
   printf ("%2o\n",j);
   printf ("%2x\n",j);
}
w
   w
w

1
29
        29
0000000029
29
35
1d
126 C Programming

f Format Identifier
• For floating-point values, in addition to specifying the field width, the number of decimal places can also be controlled. A sample format specifier would look like this:
%10.4f
where 10 is the field width and 4 is the number of decimal places. Note that a period separates the two numbers in the format specifier. Don't forget to count the column needed for the decimal point when calculating the field width.
• We can use the above format identifier as follows:
printf("%10.4f", 1.0/3.0);
____0.3333
where _ indicates the blank character.
127 C Programming

e Format Identifier
• When using the e format identifier, the second number after the decimal point determines how many significant figures (SF) will be displayed. For example,
printf("%10.4e", 1.0/3.0);
3.333e-01
Note that only 4 significant figures are shown.
• It is possible to print out as many SFs as you desire.
But it only makes sense to print out as many SFs as match the precision of the data type. Remember that now the field size must include the actual numerical digits as well as columns for '.', 'e', and '+00' in the exponent.
• The following table shows a rough guideline applicable to some machines:
Data Type     # Mantissa bits   Precision (#SF)
float         16                ~7
double        32                ~16
long double   64                ~21
128 C Programming

Real Formatted Output Example
#include <stdio.h>
main()
{
   float x=333.123456;
   double y=333.1234567890123456;
   printf ("%f\n",x);
   printf ("%.1f\n",x);
   printf ("%20.3f\n",x);
   printf ("%-20.3f\n",x);
   printf ("%f\n",y);
   printf ("%.9f\n",y);
   printf ("%20.3f\n",y);
   printf ("%020.3f\n",y);
   printf ("%.20f\n",y);
   printf ("%.3e\n",y);
}
333.123444
333.1
             333.123
333.123
333.123457
333.123456789
             333.123
0000000000000333.123
333.12345678901232304270
3.331e+02
129 C Programming

s Format Identifier
• For strings, the field length specifier works as before and will automatically expand if the string size is bigger than the specification. A more sophisticated string format specifier looks like this:
%6.3s
where 6 is the field width and the value after the decimal point (here 3) specifies the maximum number of characters printed.
• For example,
printf("%3.4s\n","Sheridan");
Sher
130 C Programming

Strings Formatted Output Example
#include <stdio.h>
main()
{
   static char s[]="an evil presence";
   printf ("%s\n",s);
   printf ("%7s\n",s);
   printf ("%20s\n",s);
   printf ("%-20s\n",s);
   printf ("%.5s\n",s);
   printf ("%.12s\n",s);
   printf ("%15.12s\n",s);
   printf ("%-15.12s\n",s);
   printf ("%3.12s\n",s);
}
an evil presence
an evil presence
    an evil presence
an evil presence
an ev
an evil pres
   an evil pres
an evil pres
an evil pres
131 C Programming

Formatted Input
– The scanf format string can contain ordinary characters in addition to format identifiers. These "normal" characters will not be read in as input.
– As with formatted output, a field width can be specified for inputting values. The field width specifies the number of columns used to gather the input.
– An asterisk can be put after the % symbol in an input format specifier to suppress the input.
132 C Programming

Formatted Input Examples
#include <stdio.h>
main()
{
   int i;
   char lett;
   char word[15];
   scanf("%d , %*s %c %5s",&i,&lett,word);
   printf("%d \n %c \n %s\n",i,lett,word);
}
45 , ignore_this C read_this
45
C
read_

#include <stdio.h>
main()
{
   int m, n, o;
   scanf("%d : %d : %d",&m,&n,&o);
   printf("%d \n %d \n %d\n",m,n,o);
}
10 : 15 : 17
10
15
17
133 C Programming

Introduction to Pointers
• Pointers are an intimate part of C and separate it from more traditional programming languages. Pointers make C more powerful, allowing a wide variety of tasks to be accomplished.
• If you remember the following simple statement, working with pointers should be less painful:
POINTERS CONTAIN MEMORY ADDRESSES, NOT DATA VALUES!
135 C Programming

Memory Addressing
POINTERS CONTAIN MEMORY ADDRESSES, NOT DATA VALUES!
• When you declare a simple variable, like
int i;
a memory location with a certain address is set aside for any values that will be placed in i. We thus have the following picture:
memory location FFD2:   ?     <- variable name i
• After the statement i=35, the location corresponding to i will be filled:
memory location FFD2:   35    <- variable name i
136 C Programming

The Address Operator
• You can find out the memory address of a variable by simply using the address operator &. Here is an example of its use:
&v
The above expression should be read as "address of v", and it returns the memory address of the variable v.
• The following simple program demonstrates the difference between the contents of a variable and its memory address:
#include <stdio.h>
main()
{
   float x;
   x=2.171828;
   printf("The value of x is %f\n",x);
   printf("The address of x is %X\n",&x);
}
The value of x is 2.171828
The address of x is EFFFFBA4
137 C Programming

Pointer Variables
• A pointer is a C variable that contains memory addresses. Like all other C variables, pointers must be declared before they are used. The syntax for pointer declaration is as follows:
int *p;
double *offset;
Note that the prefix * defines the variable to be a pointer. In the above example, p is the type "pointer to integer" and offset is the type "pointer to double".
• Once a pointer has been declared, it can be assigned an address. This is usually done with the address operator:
int count;
int *p;
p=&count;
The pointer p contains the memory address of the variable count. After this assignment, we say that p is "referring to" the variable count or "pointing to" the variable count.
138 C Programming

Pointer Arithmetic
• A limited amount of pointer arithmetic is possible:
– Integers and pointers can be added and subtracted from each other, and pointers can be incremented and decremented.
– In addition, different pointers can be assigned to each other.
• The "unit" for the arithmetic is the size of the variable being pointed to, in bytes. Thus, incrementing a pointer-to-an-int variable automatically adds to the pointer address the number of bytes used to hold an int (on that machine).
• Some examples:
int *p, *q;
p=p+2;
q=p;
139 C Programming

Indirection Operator
• The indirection operator, *, can be considered as the complement to the address operator. It returns the contents of the address stored in a pointer variable. It is used as follows:
*p
The above expression is read as "contents of p". What is returned is the value stored at the memory address p.
• Consider the sample code:
#include <stdio.h>
main()
{
   int a=1, b=78;
   int *ip;
   ip=&a;
   b=*ip;   /* equivalent to b=a */
   printf("The value of b is %d\n",b);
}
The value of b is 1
• Note that b ends up with the value of a, but it is done indirectly, by using a pointer to a.
140 C Programming

"Call-by-Reference" Arguments
• We learned earlier that if a variable in the main program is used as an actual argument in a function call, its value won't be changed no matter what is done to the corresponding dummy argument in the function. What if we would like the function to change the main variable's contents?
– To do this, we use pointers as dummy arguments in functions and indirect operations in the function body. (The actual arguments must then be addresses.)
– Since the actual argument variable and the corresponding dummy pointer refer to the same memory location, changing the contents of the dummy pointer will -- by necessity -- change the contents of the actual argument variable.
141 C Programming

"Call-by-Reference" Example
• The classic example of "call-by-reference" is a swap function designed to exchange the values of two variables in the main program. Here is a swapping program:
#include <stdio.h>
void swap(int *p, int *q);
main()
{
   int i=3, j=9876;
   swap(&i,&j);
   printf("After swap, i=%d j=%d\n",i,j);
}
void swap(int *p, int *q)
{
   int temp;
   temp=*p;
   *p=*q;
   *q=temp;
}
After swap, i=9876 j=3
142 C Programming

Pointers and Arrays
• Although this may seem strange at first, in C an array name is an address. In fact, it is the base address of all the consecutive memory locations that make up the entire array.
• We have actually seen this fact before: when using scanf to input a character string variable called name, the statement looked like
scanf("%s",name);
NOT
scanf("%s",&name);
143 C Programming

• Given this fact, we can use pointer arithmetic to access array elements. The following two statements do the exact same thing:
a[5]=56;
*(a+5)=56;

Pointers and Arrays Examples
• The next examples show how to sum up all the elements of a 1D array using pointers:
– Normal way
int a[100], i, sum=0;
for(i=0; i<100; ++i)
   sum += a[i];
– Other way
int a[100], i, sum=0, *p;
for(i=0; i<100; ++i)
   sum += *(a+i);
– Another way
int a[100], sum=0, *p;
for(p=a; p<&a[100]; ++p)
   sum += *p;
145 C Programming

Arrays as Function Arguments
• When you are writing functions that work on arrays, it is convenient to use pointers as arguments. The alternative is to use global array variables or -- more horribly -- to pass all the array elements to the function.
• Consider the following function designed to take the sum of elements in a 1D array of doubles:
double sum(double *dp, int n)
{
   int i;
   double res=0.0;
   for(i=0; i<n; ++i)
      res += *(dp+i);
   return res;
}
• Note that all the sum function needed was a starting address in the array and the number of elements to be summed together (n). A very efficient argument list. Once the function has the base address of the array, it can use pointer arithmetic to work with all the array elements.
146 C Programming

Arrays as Function Arguments Example
• Considering the previous example, in the main program the sum function could be used as follows:
double position[150], length;
length=sum(position,150);       /* sum entire array */
length=sum(position,75);        /* sum first half */
length=sum(&position[10],10);   /* sum from element 10 to element 20 */
147 C Programming

Pointers and Character Strings
• As strange as this sounds, we can use pointers to work with character strings, in a similar manner that we used pointers to work with "normal" arrays. Thus, a string constant -- such as "Happy Thanksgiving" -- is treated by the compiler as an address (just like we saw with an array name). The value of the string constant address is the base address of the character array.
• This is demonstrated in the following code:
#include <stdio.h>
main()
{
   char *cp;
   cp="Civil War";
   printf("%c\n",*cp);
   printf("%c\n",*(cp+6));
}
C
W
148 C Programming

Pointers and Character Strings Example
• Another example illustrates easy string input using pointers:
#include <stdio.h>
main()
{
   char *name;
   printf("Who are you?\n");
   scanf("%s",name);
   printf("Hi %s welcome to the party, pal\n",name);
}
Who are you?
Seymour
Hi Seymour welcome to the party,
pal 149 C Programming .name). . test scores. char grade. structure int test[3]. ad final course grade. GPA. }. keyword float gpa. Class. It is not a variable declaration. char class. A structure data type called student can hold all this information: struct student { char name[45]. 151 C Programming . Consider the data a teacher might need for a high school student: Name.Introduction to Structures • A structure is a variable in which different types of data can be stored together in one variable name. • member name & type The above is a declaration of a data type called student. but a type declaration. data type name int final. final score. the standard syntax is used: struct student • Lisa. struct playing_card { int pips.card2. Consider the following structure representing playing cards. You can declare a structure type and variables simultaneously. Bart. Homer.card3. char *suit. } card1.Structure Variable Declaration • To actually declare a structure variable. 152 C Programming . pips=card1. card1. 153 C Programming • • • • .pips=2. each member of card3 gets assigned the value of the corresponding member of card1. In other words. It would be done this way: card1.Structure Members • The different variable types stored in a structure are called its members. Say we wanted to initialize the structure card1 to the two of hearts. just like with other variable types: card3 = card1. Structure variables can also be assigned to each other. For example the following code: card2.pips+5. would fill in the card3 pips member with 2 and the suit member with “Hearts”. would make card2 the seven of some suit. Once you know how to create the name of a member variable. To access a given member the dot notation is use. The “dot” is officially called the member access operator.suit="Hearts". it can be treated the same as any other variable of that type. This is similar to the initialization of arrays.name="banana". with each value separated by a comma.100. dinner_course. 
• The same member names can appear in different structures. For example:

    struct fruit {
        char *name;
        int calories;
    } snack;

    struct vegetable {
        char *name;
        int calories;
    } dinner_course;

There will be no confusion to the compiler because when the member name is used it is prefixed by the name of the structure variable:

    snack.name="banana";
    dinner_course.name="broccoli";

Initializing Structure Members

• Structure members can be initialized at declaration. This is similar to the initialization of arrays; the initial values are simply listed inside a pair of braces, with each value separated by a comma. The structure declaration is preceded by the keyword static:

    static struct student Lisa = { "Simpson", 3, 100, 87, 92, 96, 95, 'A' };

Structures Example

• What data types are allowed for structure members? Anything goes: basic types, arrays, pointers, strings, even other structures. You can even make an array of structures.
• Consider the program on the next few pages, which uses an array of structures to make a deck of cards and deal out a poker hand.

    #include <stdio.h>

    struct playing_card {
        int pips;
        char *suit;
    } deck[52];

    void make_deck(void);
    void show_card(int n);

    main()
    {
        make_deck();
        show_card(5);
        show_card(19);
        show_card(26);
        show_card(37);
        show_card(51);
    }

Structures Example Continued

    void make_deck(void)
    {
        int k;

        for(k=0; k<52; ++k) {
            if (k>=0 && k<13) {
                deck[k].pips=k%13+2;
                deck[k].suit="Hearts";
            }
            if (k>=13 && k<26) {
                deck[k].pips=k%13+2;
                deck[k].suit="Clubs";
            }
            if (k>=26 && k<39) {
                deck[k].pips=k%13+2;
                deck[k].suit="Spades";
            }
            if (k>=39 && k<52) {
                deck[k].pips=k%13+2;
                deck[k].suit="Diamonds";
            }
        }
    }

More on Structures Example Continued

    void show_card(int n)
    {
        switch(deck[n].pips) {
        case 11:
            printf("%c of %s\n",'J',deck[n].suit);
            break;
        case 12:
            printf("%c of %s\n",'Q',deck[n].suit);
            break;
        case 13:
            printf("%c of %s\n",'K',deck[n].suit);
            break;
        case 14:
            printf("%c of %s\n",'A',deck[n].suit);
            break;
        default:
            printf("%d of %s\n",deck[n].pips,deck[n].suit);
            break;
        }
    }

The output of the program:

    7 of Hearts
    8 of Clubs
    2 of Spades
    K of Spades
    A of Diamonds

Structures within Structures

• As mentioned earlier, structures can have as members other structures. Say you wanted to make a structure that contained both date and time information. One way to accomplish this would be to combine two separate structures, one for the date and one for the time:

    struct date {
        int month;
        int day;
        int year;
    };

    struct time {
        int hour;
        int min;
        int sec;
    };

    struct date_time {
        struct date today;
        struct time now;
    };

• This declares a structure whose elements consist of two other previously declared structures.

Initializing Structures within Structures

• Initialization could be done as follows:

    static struct date_time veteran = {{11,11,1918},{11,11,11}};

which sets the today element of the structure veteran to the eleventh of November, 1918. The now element of the structure is initialized to eleven hours, eleven minutes, eleven seconds.
• Each item within the structure can be referenced if desired. For example:

    if (veteran.today.month == 12)
        printf("Wrong month!\n");
    ++veteran.now.sec;

Pointers to Structures

• One can have pointer variables that contain the address of complete structures, just like with the basic data types. Structure pointers are declared and used in the same manner as "simple" pointers:

    struct playing_card *card_pointer;

    card_pointer=&down_card;
    (*card_pointer).pips=8;
    (*card_pointer).suit="Clubs";

• The type of the variable card_pointer is "pointer to a playing_card structure".
• The above code has indirectly initialized the structure down_card to the Eight of Clubs through the use of the pointer card_pointer.
Pointers to Structures: ->

• In C, there is a special symbol -> which is used as a shorthand when working with pointers to structures. It is officially called the structure pointer operator. Its syntax is as follows:

    (*struct_ptr).member    is the same as    struct_ptr->member

• Thus, the last two lines of the previous example could also have been written as:

    card_pointer->pips=8;
    card_pointer->suit="Clubs";

• Question: What is the value of *(card_pointer->suit+2)? Answer: 'u'
• As with arrays, use structure pointers as arguments to functions working with structures. This is efficient, since only an address is passed, and can also enable "call-by-reference" arguments.

Unions

• Introduction to Unions
• Unions and Memory
• Unions Example

Introduction to Unions

• Unions are C variables whose syntax looks similar to structures, but which act in a completely different manner. A union is a variable that can take on different data types in different situations. The union syntax is:

    union tag_name {
        type1 member1;
        type2 member2;
        ...
    };

• For example, the following code declares a union data type called intfloat and a union variable called proteus:

    union intfloat {
        float f;
        int i;
    };

    union intfloat proteus;

Unions and Memory

• Once a union variable has been declared, the amount of memory reserved is just enough to be able to represent the largest member. (Unlike a structure, where memory is reserved for all members.) In the previous example, 4 bytes are set aside for the variable proteus, since a float will take up 4 bytes and an int only 2 (on some machines).
• Data actually stored in a union's memory can be the data associated with any of its members. But only one member of a union can contain valid data at a given point in the program.
• It is the user's responsibility to keep track of which type of data has most recently been stored in the union variable.

Unions Example

• The following code illustrates the chameleon-like nature of the union variable proteus defined earlier.

    #include <stdio.h>
    main()
    {
        union intfloat {
            float f;
            int i;
        } proteus;

        proteus.i=4444;                                      /* Statement 1 */
        printf("i:%12d f:%16.10e\n",proteus.i,proteus.f);

        proteus.f=4444.0;                                    /* Statement 2 */
        printf("i:%12d f:%16.10e\n",proteus.i,proteus.f);
    }

    i:        4444 f:6.2273703755e-42
    i:  1166792216 f:4.4440000000e+03

• After Statement 1, the data stored in proteus is an integer and the float member is full of junk.
• After Statement 2, the data stored in proteus is a float, and the integer value is meaningless.
Introduction to File Input and Output

• So far, all the output (formatted or not) in this course has been written out to what is called standard output (which is usually the monitor). Similarly, all input has come from standard input (usually associated with the keyboard). The C programmer can also read data directly from files and write directly to files.
• To work with files, the following steps must be taken:
  1 Declare variables to be of type FILE.
  2 Connect the internal FILE variable with an actual data file on your hard disk. This association of a FILE variable with a file name is done with the fopen() function.
  3 Perform I/O with the actual files using the fprintf() and fscanf() functions.
  4 Break the connection between the internal FILE variable and the actual disk file. This disassociation is done with the fclose() function.

Declaring FILE variables

• Declarations of the file functions highlighted on the previous page must be included into your program. This is done in the standard manner by having

    #include <stdio.h>

as the first statement in your program.
• The first step in using files in C programs is to declare a file variable. This variable must be of type FILE (which is a predefined type in C) and it is a pointer variable. For example, the following statement

    FILE *in_file;

declares the variable in_file to be a "pointer to type FILE".

Opening a Disk File for I/O

• Before using a FILE variable, it must be associated with a specific file name. The fopen() function performs this association and takes two arguments: 1) the pathname of the disk file,
and 2) the access mode, which indicates how the file is to be used. Thus,

    in_file = fopen("myfile.dat","r");

connects the variable in_file to the disk file myfile.dat for read access. Thus, myfile.dat will only be read from.
• Two other access modes can be used:

    "w"    indicating write-mode
    "a"    indicating append-mode

Reading and Writing to Disk Files

• The functions fprintf and fscanf are provided by C to perform the analogous operations to the printf and scanf functions, but on a file. These functions take an additional (first) argument, which is the FILE pointer that identifies the file to which data is to be written or from which it is to be read. Thus the statement

    fscanf(in_file,"%f %d",&x,&m);

will input -- from the file myfile.dat -- real and integer values into the variables x and m respectively.

Closing a Disk File

• The fclose function in a sense does the opposite of what the fopen does: it tells the system that we no longer need access to the file. This allows the operating system to clean up any resources or buffers associated with the file. The syntax for file closing is simply

    fclose(in_file);

• feof takes one argument -- the FILE pointer -- and returns a nonzero integer value (TRUE) if an attempt has been made to read past the end of a file. It returns zero (FALSE) otherwise. A sample use:

    if (feof(in_file))
        printf("No more data \n");

Sample File I/O Program

• The program on the next few pages illustrates the use of the file I/O functions. It is an inventory program that reads from the following file

    lima beans
    1.47 5 5
    boneless chicken
    4.76 5 10
    Greaters ice-cream
    3.20 10 5
    thunder tea
    2.58 12 10

which contains stock information for a store. The program will output those items which need to be reordered because their quantity is below a certain limit.
Sample File I/O Program: main

    #include <stdio.h>
    #include <ctype.h>
    #include <string.h>

    struct goods {
        char name[20];
        float price;
        int quantity;
        int reorder;
    };

    FILE *input_file;

    void processfile(void);
    void getrecord(struct goods *recptr);
    void printrecord(struct goods record);

    main()
    {
        char filename[40];

        printf("Example Goods Re-Order File Program\n");
        printf("Enter database file \n");
        scanf("%s",filename);
        input_file = fopen(filename,"r");
        processfile();
    }

Sample File I/O Program: processfile

    void processfile(void)
    {
        struct goods record;

        while (!feof(input_file)) {
            getrecord(&record);
            if (record.quantity <= record.reorder)
                printrecord(record);
        }
    }

Sample File I/O Program: getrecord

    void getrecord(struct goods *recptr)
    {
        int loop=0,number,toolow;
        float cost;
        char buffer[40],ch;

        ch=fgetc(input_file);
        while (ch!='\n') {              /* read the name up to the newline */
            buffer[loop++]=ch;
            ch=fgetc(input_file);
        }
        buffer[loop]=0;
        strcpy(recptr->name,buffer);
        fscanf(input_file,"%f",&cost);
        fscanf(input_file,"%d",&number);
        fscanf(input_file,"%d",&toolow);
        ch=fgetc(input_file);           /* consume the trailing newline */
        recptr->price = cost;
        recptr->quantity = number;
        recptr->reorder = toolow;
    }

Sample File I/O Program: printrecord

    void printrecord(struct goods record)
    {
        printf("\nProduct name \t%s\n",record.name);
        printf("Product price \t%f\n",record.price);
        printf("Product quantity \t%d\n",record.quantity);
        printf("Product reorder level \t%d\n",record.reorder);
    }

Sample File I/O Program: sample session

    Example Goods Re-Order File Program
    Enter database file
    food.dat

    Product name        lima beans
    Product price       1.470000
    Product quantity    5
    Product reorder level   5

    Product name        boneless chicken
    Product price       4.760000
    Product quantity    5
    Product reorder level   10

Dynamic Memory Allocation

• Introduction to Dynamic Memory Allocation
• Dynamic Memory Allocation: sizeof
• Dynamic Memory Allocation: calloc
• Dynamic Memory Allocation: free

Introduction to Dynamic Memory Allocation

• A common programming problem is knowing how large to make arrays when they are declared. Consider a grading program used by a professor which keeps track of student information in structures. We want this program to be general-purpose, so we need to make arrays large enough to handle the biggest possible class size:

    struct student class[600];

• But when a certain upper-level class has only seven students, this approach can be inelegant and extremely wasteful of memory, especially if the student structure is quite large itself. Thus, it is desirable to create correct-sized array variables at runtime.
• The C programming language allows users to dynamically allocate and deallocate memory when required. The functions that accomplish this are calloc(), which allocates memory to a variable; sizeof(),
which determines how much memory a specified variable occupies; and free(), which deallocates the memory assigned to a variable back to the system.

Dynamic Memory Allocation: sizeof

• The sizeof() function returns the memory size (in bytes) of the requested variable type. This call should be used in conjunction with the calloc() function call, so that only the necessary memory is allocated, rather than a fixed size. Consider the following code fragment:

    struct time {
        int hour;
        int min;
        int sec;
    };

    int x;

    x=sizeof(struct time);

• x now contains how many bytes are taken up by a time structure (which turns out to be 12 on many machines).
• sizeof can also be used to determine the memory size of basic data type variables as well. For example, it is valid to write sizeof(double).

Dynamic Memory Allocation: calloc

• The calloc function is used to allocate storage to a variable while the program is running. The function takes two arguments that specify the number of elements to be reserved, and the size of each element in bytes (obtained from sizeof). The function returns a pointer to the beginning of the allocated storage area in memory. The storage area is also initialized to zeros.

    struct time *appt;

    appt = (struct time *) calloc(100,sizeof(struct time));

• The above function call will allocate just enough memory for one hundred time structures, and appt will point to the first in the array.
• The code (struct time *) is a type cast operator which converts the pointer returned from calloc to a pointer to a structure of type time.
• Now the array of time structures can be used, just like a statically declared array:

    appt[5].hour=10;
    appt[5].min=30;
    appt[5].sec=0;

Dynamic Memory Allocation: free

• When the variables are no longer required, the space which was allocated to them by calloc should be returned to the system. This is done by:

    free(appt);

Command-Line Arguments

• Introduction to Command-Line Arguments
• Command-Line Arguments Example
• Command-Line Arguments Sample Session

Introduction to Command-Line Arguments

• In every program you have seen so far, the main function has had no dummy arguments between its parentheses. The main function is allowed to have dummy arguments, and they match up with command-line arguments used when the program is run.
• The two dummy arguments to the main function are called argc and argv.
  - argc contains the number of command-line arguments passed to the main program, and
  - argv[] is an array of pointers-to-char, each element of which points to a passed command-line argument.

Command-Line Arguments Example

• A simple example follows, which checks to see if only a single argument is supplied on the command line when the program is invoked:

    #include <stdio.h>
    main(int argc, char *argv[])
    {
        if (argc == 2)
            printf("The argument supplied is %s\n", argv[1]);
        else if (argc > 2)
            printf("Too many arguments supplied.\n");
        else
            printf("One argument expected.\n");
    }

• Note that *argv[0] is the program name itself,
and *argv[n] is the last argument. If no arguments are supplied, argc will be one. Thus for n arguments, argc will be equal to n+1, which means that *argv[1] is a pointer to the first "actual" argument supplied.

Command-Line Arguments: Sample Session

• A sample session using the previous example follows:

    a.out help
    The argument supplied is help

    a.out help verbose
    Too many arguments supplied.

    a.out
    One argument expected.

Operator Precedence Table

    Precedence   Description                  Represented by
        1        Parenthesis                  ( )  [ ]
        1        Structure Access             .   ->
        2        Unary                        !  ++  --  -  *  &
        3        Multiply, Divide, Modulus    *  /  %
        4        Add, Subtract                +  -
        5        Shift Right, Left            >>  <<
        6        Greater, Less Than, etc.     >  <  >=  <=
        7        Equal, Not Equal             ==  !=
        8        Bitwise AND                  &
        9        Bitwise Exclusive OR         ^
       10        Bitwise OR                   |
       11        Logical AND                  &&
       12        Logical OR                   ||
       13        Conditional Expression       ? :
       14        Assignment                   =  +=  -=  etc.
       15        Comma                        ,
C# Hello World

C# is one of the languages provided by Microsoft to work with .Net. This language encompasses a rich set of features, which allows developing different types of applications. C# is an object-oriented programming language and resembles several aspects of the C++ language. In this tutorial, we see how to develop our first application. This will be a basic console application; we will then explore different data types available in the C# language as well as the control flow statements.

Building the first console application

A console application is an application that can be run in the command prompt in Windows. For any beginner on .Net, building a console application is ideally the first step to begin with. In our example, we are going to use Visual Studio to create a console type project. Next, we are going to use the console application to display a message "Hello World". We will then see how to build and run the console application. Let's follow the below mentioned steps to get this example in place.

Step 1) The first step involves the creation of a new project in Visual Studio. For that, once Visual Studio is launched, you need to choose the menu option New->Project.

Step 2) The next step is to choose the project type as a Console application. Here, we also need to mention the name and location of our project.

- In the project dialog box, we can see various options for creating different types of projects in Visual Studio. Click the Windows option on the left-hand side.
- When we click the Windows option in the previous step, we will be able to see an option for Console Application. Click this option.
- We then give a name for the application, which in our case is DemoApplication. We also need to provide a location to store our application.
- Finally, we click the 'OK' button to let Visual Studio create our project.

If the above steps are followed, you will get the below output in Visual Studio.
Output:

- A project called 'DemoApplication' will be created in Visual Studio. This project will contain all the necessary artifacts required to run the Console application.
- The main code file, called Program.cs, is the default code file which is created when a new application is created in Visual Studio. This file will contain the necessary code for our console application.

Step 3) Now let's write our code, which will be used to display the string "Hello World" in the console application. All the below code needs to be entered into the Program.cs file. The code will be used to write "Hello World" when the console application runs.

C# Hello World Program

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DemoApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Hello World");
            Console.ReadKey();
        }
    }
}

Code Explanation:

- The first lines of code are default lines entered by Visual Studio. The 'using' statement is used to import existing .Net modules into our console application. These modules are required for any .Net application to run properly. They contain the bare minimum code to make a program work on a Windows machine.
- Every application belongs to a class. C# is an object-oriented language, and hence, all code needs to be defined in a self-sustaining module called a 'Class'. In turn, every class belongs to a namespace. A namespace is just a logical grouping of classes.
- The Main function is a special function which is automatically called when a console application runs. Here you need to ensure you enter the code required to display the required string in the console application.
- The Console class is available in .Net and allows one to work with console applications. Here we are using an inbuilt method called 'Write' to write the string "Hello World" to the console.
- We then use the Console.ReadKey() method to read any key from the console.
By entering this line of code, the program will wait and not exit immediately. The program will wait for the user to enter any key before finally exiting. If you don't include this statement in the code, the program will exit as soon as it is run.

Step 4) Run your .Net program. To run any program, you need to click the Start button in Visual Studio. If the above code is entered properly and the program is executed successfully, the following output will be displayed.

Output:

From the output, you can clearly see that the string "Hello World" is displayed properly. This is because the Console.Write statement causes this string to be sent to the console.

Summary

- A Console application is one that can be made to run at the command prompt on a Windows machine.
- The Console.Write method can be used to write content to the console.
Julian, Thanks, if you can, please give me a couple of weeks to think about it and work on some diagrams. I also want to speak with people that use SVN that I work with to make sure I am not coming up with ideas in a bubble. Also if anyone has advice or opinions on this forum, feel free to share them in the mean time. I will do my best to research what is already requested on the forum to come up with the requirement or desired capability. Thanks, Danny On 11/3/2017 7:06 AM, Julian Foad wrote: > Just picking up on one small point here... > > Daniel J. Lacks, PhD wrote: >>>> The stash would work similar to a commit >>>> except it would check-in code perhaps in a hidden or protected branch >>>> [...] >>> >>> Making namespaces of branches that are 'hidden' or 'protected' is >>> something that can potentially be done with server authz rules, but >>> is this important for you? Why? [...] >> >> [...] Configuring authz rules is not something the typical user [...] > > Completely agree. What I meant was: Do you really need these stashes > to be 'hidden' or 'protected'? Why, what for? Do the authz rules > provide the semantics you need? (If so, we could build this in to the > feature.) > > My take on this is that "I don't want my private work to be seen by > everybody by default" and "other people shouldn't be able to write to > MY shelves area" are the sort of reactions that potential users have, > that aren't real requirements. Another is "I don't want my repository > growing bigger and bigger with temporary work; I need these to be > completely deleted". That one is totally different and has huge > implications for the design, but again I think it is a bogus > "requirement" that is really only a nice-to-have. > > I can only think of these in the abstract, so far. If your real-life > experience sheds light on them, that would be very valuable input. > > - Julian --- This email has been checked for viruses by Avast antivirus software. 
This is an archived mail posted to the Subversion Dev mailing list.
Functions in Python

Functions are used to group together a certain number of related instructions. These are reusable blocks of code written to carry out a specific task. A function might or might not require inputs. Functions are only executed when they are specifically called. Depending on the task a function is supposed to carry out, a function might or might not return a value.

In this module, we will learn all about functions in Python to get started with them. Following is the list of all the topics that we will cover in this module, in case you want to jump to a specific one.

Python Functions
- What is a function in Python?
- Define a function
- Call a function
- Adding docstring in function
- Scope of variable

So, without any further delay, let's get started.

What is a Function in Python?

A function in Python is a set of related statements that are grouped together to carry out a specific task. Including functions in our program helps in making the program much more organized and manageable. Especially if you are working on a large program, having smaller and modular chunks of code will increase the readability of the code along with providing re-usability of code.

There are basically the following three types of functions:
- Built-in functions (already created, i.e., predefined)
- User-defined functions (created by users according to their requirements)
- Anonymous functions (functions having no name)

Define a function

While defining a function in Python you need to follow the following set of rules:
- The def keyword is used to start the function definition.
- The def keyword is followed by the function name, which is followed by parentheses containing the arguments passed by the user; use a colon at the end.
- After adding the colon, start the body of the function with an indented block on a new line.
- The return statement sends a result object back to the caller.
A return statement with no argument is equivalent to return None.

Syntax for writing a function:

def function_name(arguments):
    statement(s)
    return

Call a function

Defining a function is not all you have to do in order to start using the functions in your program. Defining a function only structures the code blocks and gives the function a name. To execute a function, you have to call it. Only when specifically called will a function execute and give the required output.

There are two ways you can call a function after you have defined it. You can either call it from another function or you can call it from the Python prompt.

Example:

def printOutput(str):
    # This function will print the passed string
    print(str)
    return

# calling the function
printOutput("Welcome to Intellipaat")

Output:

Welcome to Intellipaat

Adding docstring in function

The first statement or string in any function (an optional statement) is called a docstring. It is used to briefly and crisply describe what a function does. Docstring is short for documentation string. Even though including a docstring in your function is optional, it is considered good practice, as it increases the readability of the code and makes it easy to understand. We use triple quotes around the string to write a docstring. A docstring can also extend up to multiple lines.

Example: In the example for calling a function, we used a comment to describe what the function was going to do; we will do the same in this example, only this time we will use a docstring to describe what the function will do.

def printOutput(str):
    """This function will print the passed string"""
    print(str)
    return

# calling the function
printOutput("Welcome to Intellipaat")

Output:

Welcome to Intellipaat

Scope of variable

The scope of a variable refers to the part of the program where the variable is recognizable. As we have already discussed local and global variables in the Python variables module of this Python tutorial, we know that the variables defined inside a function only have a local scope.
Meaning, a variable defined inside a function is only recognizable inside that function. The lifetime of a variable refers to the time period for which the variable exists in memory. Variables defined inside a function only exist as long as the function is being executed. So, the lifetime of a variable defined inside a function ends when we return from the function or when the control comes out of the function.

Example:

def func():
    x = 5
    print("value of x inside the function", x)

x = 10
# calling the function
func()
print("value of x outside the function", x)

Output:

value of x inside the function 5
value of x outside the function 10

This brings us to the end of this module. The next module highlights the lambda function in Python, see you there!
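Putting the pieces of this module together, the short example below (hypothetical names, not from the tutorial itself) defines a documented function, calls it, uses its return value, and shows that the docstring is reachable at runtime through the __doc__ attribute:

```python
def add_numbers(a, b):
    """Return the sum of the two arguments."""
    return a + b

# Calling the function and storing the returned result
result = add_numbers(3, 4)
print("result:", result)        # result: 7

# The docstring is stored on the function object itself
print(add_numbers.__doc__)      # Return the sum of the two arguments.
```

Unlike the printOutput examples above, this function returns a value to the caller instead of printing inside the function, which is what lets the caller reuse the result in further computation.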
Burrow

Burrow is a permissioned Ethereum smart-contract blockchain node which provides transaction finality and high transaction throughput on a proof-of-stake Tendermint consensus engine.

Introduction

This chart bootstraps a burrow network on a Kubernetes cluster using the Helm package manager.

Installation

Prerequisites

To deploy a new blockchain network, this chart requires that two objects be present in the same Kubernetes namespace: a configmap should house the genesis file, and each node should have a secret to hold any validator keys. The provided script, addresses.sh, automatically provisions a number of files using the burrow toolkit, so please first ensure that `burrow --version` matches the `image.tag` in the configuration. This sequence also requires that the jq binary is installed.

Two files will be generated. The first of note is setup.yaml, which contains the two necessary Kubernetes specifications to be added to the cluster:

curl -LO
CHAIN_NODES=4
CHAIN_NAME="my-release-burrow"
./initialize.sh
kubectl apply --filename setup.yaml

Please note that the variable $CHAIN_NAME should be the same as the helm release name specified below, with the -burrow suffix. The other file, addresses.yaml, contains the equivalent validator addresses to set in the charts.

Deployment

To install the chart with the release name my-release with the set of custom validator addresses:

helm install <helm-repo>/burrow --name my-release --values addresses.yaml

The configuration section below lists all possible parameters that can be configured during installation. Please also see the runtime configuration section for more information on how to set up your network properly.

Uninstall

To uninstall/delete the my-release deployment:

$ helm delete my-release

This command removes all the Kubernetes components associated with the chart and deletes the release.
To remove the configmap and secret created in the prerequisites, follow these steps:

kubectl delete secret ${CHAIN_NAME}-keys
kubectl delete configmap ${CHAIN_NAME}-genesis

Configuration

The following table lists the configurable parameters of the Burrow chart and its default values. Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install <helm-repo>/burrow --name my-release \
  --set=image.tag=0.23.2,resources.limits.cpu=200m -f addresses.yaml

Alternatively, append additional values to the YAML file generated in the prerequisites. For example,

helm install <helm-repo>/burrow --name my-release -f addresses.yaml

Runtime

It is unlikely that you will want to deploy this chart with the default runtime configuration. When booting permissioned blockchains in a cloud environment there are three predominant considerations in addition to the normal configuration of any cloud application:

- What access rights to place on the ports?
- What is the set of initial accounts and validators for the chain?
- What keys should the validating nodes have?

Each of these considerations will be dealt with in more detail below.

Port Configuration

Burrow utilizes three different ports by default:

Peer: Burrow's peer port is used for P2P communication within the blockchain network as part of the consensus engine (Tendermint) to perform bilateral gossiping communication.

Info: Burrow's info port is used for conducting remote procedures.

GRPC: Burrow's grpc port can be used by JavaScript libraries to interact with the chain over websockets.

The default configuration for the chart sets up the port access rights in the following manner:

Peer: Peer ports are only opened within the cluster. By default, there is no P2P communication exposed to the general internet. Each node within the cluster has its own distinct peer service built by the chart, which utilizes a ClusterIP service type.

Info: The info port is only opened within the cluster.
By default, there is no info communication exposed to the general internet. There is one info service built by the chart, which utilizes a ClusterIP service type. The default info service used by the chart is strongly linked to node number 000 and is not load balanced across the nodes by default, so as to reduce any challenges with tooling that conducts long-polling after sending transactions. The chart offers an ingress which is connected to the info service, but this is disabled by default.

- GRPC: The grpc port is only opened within the cluster. By default, there is no grpc communication exposed to the general internet. There is one grpc service built by the chart, which utilizes a ClusterIP service type. The default grpc service is load balanced across the nodes within the cluster, because libraries which utilize this port typically do so over websockets, and the service is able to utilize a sessionAffinity setting.

In order to expose the peers to the general internet, change peer.service.type to NodePort. It is not advised to run P2P traffic through an ingress or other load-balancing service, as there is uncertainty with respect to the IP address which the blockchain node advertises and gossips. As such, the best way to expose P2P traffic to the internet is to utilize a NodePort service type. While such service types can be a challenge to work with in many instances, the P2P libraries that these blockchains utilize are very resilient to movement between machine nodes. The biggest gotcha with NodePort service types is to ensure that the machine nodes have proper egress within the cloud or data-center provider. As long as the machine nodes do not have egress restrictions preventing the use of NodePort service types, the P2P traffic will be exposed fluidly.

To expose the info service to the general internet, change the default rpcInfo.ingress.enabled to true and add the appropriate fields to the ingress for your Kubernetes cluster.
This will allow developers to connect to the info service from their local machines. To disable load balancing on the grpc service, change rpcGRPC.service.loadBalance to false.

Genesis

Burrow initializes any single blockchain via a genesis.json, which defines what validators and accounts are given access to the permissioned blockchain when it is booted. The chart imports the genesis.json file as a Kubernetes configmap and then mounts it in each node deployment.

Validator Keys

NOTE: The chart has not been security audited, and as such one should use the validator keys functionality of the chart at one's own risk.

Burrow blockchain nodes need to have a key available to them which has been properly registered within the genesis.json initial state. The registered key is what enables a blockchain node to participate in the P2P validation of the network. The chart imports the validator key files as Kubernetes secrets, so the security of the blockchain is only as strong as the cluster's integrity.
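To illustrate the runtime options above, a values override might look like the following. This is a hypothetical sketch — the parameter paths are taken from the prose in this section, and the exact structure should be verified against the chart's values.yaml:

```yaml
# Hypothetical override file (override.yaml); verify paths against values.yaml.
peer:
  service:
    type: NodePort      # expose P2P traffic beyond the cluster
rpcInfo:
  ingress:
    enabled: true       # expose the info service through an ingress
rpcGRPC:
  service:
    loadBalance: false  # pin grpc traffic instead of load balancing it
```

It would then be applied alongside the generated addresses, e.g. helm install <helm-repo>/burrow --name my-release -f addresses.yaml -f override.yaml.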
Challenges of event-based microservices

For all the advantages it provides, a microservices architecture introduces a whole new category of challenges as the project and the engineering department scale. Many of them revolve around communication between microservices, and event-based communication is an especially interesting topic.

Feed feature

To better describe these problems and their solutions, I'll use Brainly's questions feed feature as the main theme. It's a central part of Brainly's main page, presented in Fig. 1. From the user's perspective, it contains a list of the latest questions. Users can filter them by subject, school level and several other parameters. From the backend perspective, the feed microservice needs to access a lot of data available in several other microservices and combine it together. We achieved such integration by using events. Let's see how it looks in practice.

Assume that we always want to have a list of the latest questions in the Feed microservice, sorted by the time when they were asked. As shown in Fig. 2, whenever a user asks a question, the client makes an HTTP request to the Questions & Answers microservice. In this request it sends the question content. The question is saved into the microservice's local database, but a QuestionCreated event is also published to the queue server. This event contains details about the newly created question, like its ID, content and creation date.

Next, every microservice subscribed to the QuestionCreated event will receive it. Microservices can use it to perform some calculations, update their local state, or publish further events. In our example, the Feed microservice can use it to build its own database of the latest questions, sorted by creation date. The Popularity Tracking microservice also consumes it, and as a result it may publish a QuestionBecamePopular event. When a user wants to fetch the latest questions, he or she makes an HTTP request to the Feed microservice.
It will then use data from its local database to provide a response to that request.

That was a rough description of how the Feed microservice fits into the context of our infrastructure. In the next section, I'll start with the perfect-world model of asynchronous communication. In the sections after that, I'll confront this model with real-world problems, so that at the end we'll have a more robust application.

Perfect world model

Let's start with the perfect-world model and see it in action. When a user makes a request to add a question, the request goes to the Question & Answer service. This service publishes a QuestionCreated event, which is forwarded through the queue server to the Feed service. The Feed service then inserts a new row into its private database. Now it has another question in its database that can be provided to users.

Immediately after adding the question, the user realizes that there was a mistake in the question's content and decides to delete it. The user performs a delete request to the Question & Answer service. Now a QuestionDeleted event is forwarded, and both the Question & Answer and Feed services delete the row corresponding to the deleted question. As a result, both services are eventually consistent.

Consumer error handling

In the perfect world, the above model would be enough. In reality, however, many things can and will go wrong. To begin with, let's see what happens when the Feed microservice encounters an error while handling a QuestionDeleted event. Assume that when the Feed service received the QuestionDeleted event, its database was down for maintenance. As a result, the service won't be able to remove the question from its database. If the service crashes, the QuestionDeleted event will be lost forever, and our system will remain inconsistent. Users won't be happy, because they will see a non-existent question on the feed.

Retrying on error

A solution to that problem is to add an acknowledgement mechanism (most queue servers provide such a feature).
If the service handles an event successfully, it sends back an acknowledgement and the event is removed from the queue server. Otherwise, the event returns to the queue and is sent to the consumer again. To prevent putting a high load on the consumer by constantly resending the same event, exponential backoff can be applied: the message is delayed by an interval that increases after every retry. E.g. after the first failure the queue server may wait 1 second before retrying; after the second, 2 seconds; after the third, 4 seconds; and so on.

Dead letter queue

What if the cause of the error is not temporary? E.g. there was a bug in the Question & Answer service and it produced a broken QuestionDeleted event? Or database maintenance takes much longer than expected? The queue server would keep retrying such an event for hours, days or even longer. We shouldn't lose events in that case either, but we also don't want to waste resources retrying the same event over and over again. In such cases, our service should give up and send the event to a separate queue, usually called a dead letter queue. Such a queue should have no subscribers. Its only purpose is to keep troublesome events for further examination. After examination, such events can be discarded or sent back to the service — e.g. once the database is up again, or once the service logic has been adjusted to handle the broken events. The dead letter queue should be constantly monitored and connected to an alerting system. If any events end up there, the engineering team should be notified and take action.

Event delivery semantics

Another complication of event-based integration comes from the fact that exactly-once delivery is hard to implement (e.g. RabbitMQ doesn't provide it at the time of writing). Therefore you have two options to choose from: at-most-once delivery or at-least-once delivery.
At most once delivery

With at-most-once delivery, your event will be delivered once or not delivered at all. You get this behaviour if your service publishes an event but doesn't care about the result: if the event wasn't published successfully, it is lost. We haven't found any use cases for it, but if you don't mind losing events and don't want to invest time in building a solution for at-least-once delivery, this model might be a good choice.

At least once delivery

At-least-once delivery ensures that an event will be delivered by requiring confirmation from the queue server that the event was published successfully. If such a confirmation doesn't arrive, or contains an error, the event is published again. Unfortunately, just like the event itself, the confirmation can get lost or arrive late. That will cause the same event to be published two or more times. Therefore, correct handling of duplicated events is essential when using event-based communication with at-least-once delivery semantics.

Duplicated events

Idempotent events

The first technique that can be applied to properly handle duplicated events is to make them idempotent. An event is idempotent when, no matter how many duplicates of it arrive at a service, the resulting state of the service is the same as if only one copy had arrived. Let's see an example, looking at a non-idempotent event first.

On Brainly, users can thank other users for giving a good answer. Assume that a particular answer has received 15 thanks so far. When another user thanks for an answer, a ThanksForAnswerGiven event is published:

```json
{
  "thankingUserId": 1234,
  "answerId": 444
}
```

If the Feed microservice wants to keep track of the number of thanks for answers, it subscribes to the ThanksForAnswerGiven event and increments a thanks counter by one whenever it receives this event. However, if the same event arrives twice, the counter will be incremented twice as well.
As a result, the Feed microservice will end up with an inconsistent state — 17 thanks instead of 16. To solve this problem you can change the structure of ThanksForAnswerGiven and make it idempotent:

```json
{
  "thankingUserId": 1234,
  "answerId": 444,
  "totalThanks": 16
}
```

In this form, the event carries information about what the new state looks like, instead of information about how to change the state. Thanks to that, no matter how many times the event arrives, the resulting state will be the same as if only one copy had arrived.

Unfortunately, idempotent events make your system vulnerable to out-of-order events. E.g. if an answer is thanked twice, two ThanksForAnswerGiven events will be published — the first with totalThanks equal to 16 and the second with totalThanks equal to 17. If these events arrive in the wrong order, the consumer's state will be set to 17 first and to 16 after that. As a result, the system will remain inconsistent. In the Events Ordering section I'll describe how to handle such cases.

Event unique identifier

Idempotent messages are not always feasible. Making the QuestionCreated event idempotent would mean squeezing into the event all questions asked so far, together with the newly asked question. Because of that, we use another mechanism for handling duplicated events. Whenever an event is created, it is given a unique identifier. If the event is retransmitted, its duplicate will have the same identifier. This allows a consumer to discard events with identifiers that it has already consumed.

For event identifiers we use Universally Unique Identifiers (UUIDs). They can be generated without any centralised party assigning them, and at the same time the chance of a collision for UUID v4 is negligible in most cases. The drawback is that, to distinguish duplicates from genuine events, consumers need to keep a history of the IDs of all events that they have already seen.
Theoretically, a consumer should keep the full history of events, but in practice the last couple of hours should be sufficient in most cases.

Events Ordering

The lack of exactly-once delivery isn't the only challenge you must face when implementing event-based communication. Another one is the ordering of events — you can't expect events to arrive at your service in the same order in which they were published. Why does this cause problems? Let's get back to our example of a user asking a question and deleting it shortly afterwards. It results in a QuestionCreated event being published and, just after it, a QuestionDeleted event. However, these events don't have to arrive at the Feed microservice in the same order. The service may consume the QuestionDeleted event first and then QuestionCreated. After receiving QuestionDeleted, the service will try to delete the question from its database. The question won't exist there yet, so nothing will happen. Next, the QuestionCreated event will arrive, and the question will be added to the service's database. As a result, the question will stay in the database forever.

To allow your consumer microservice to tell which event came first, you'll need to include some additional information in the events that are published. At Brainly we tried two approaches: the first was a global event counter, but in the end a Unix timestamp proved sufficient for our needs.

Global event counter

In this solution, we used Redis to maintain a counter. Before publishing an event, the microservice sends an INCR command to Redis. This command atomically increments the counter and returns its new value. This value is then included inside the published event under an orderNumber field. Thanks to that, if event B was published after event A, then event B must have an orderNumber greater than event A's. While this works well for events inside a single service, it becomes problematic if you want to maintain order between events published by different services.
In that case, every microservice would have to send INCR commands to the same single Redis instance. That Redis would become a single point of failure and a bottleneck for the whole system.

Unix timestamp

The limitations of the previous approach led us to another solution: we decided to include the current Unix timestamp in every event when it is published. Thanks to that, if two events are published one after another, the second event should have a bigger timestamp than the first one. Fig. 10 shows how it would work in ideal circumstances.

This solution is simple to implement, but it is far from perfect. First, the local clocks of different hosts may diverge. Fig. 11 shows a situation where the Popularity Tracking microservice's local time is ahead of the Q&A microservice's. As a result, a QuestionPopular event will have a timestamp greater than that of a QuestionCreated event published after it, even though it happened earlier. The second problem is the limited precision of the timestamp. E.g. with seconds precision, two events published during the same second will have identical timestamps. It's therefore important to analyse the frequency of published events and adjust the timestamp precision accordingly; this can reduce the number of timestamp collisions to a negligible amount.

There are other, more sophisticated solutions for determining event order, such as Lamport timestamps or vector clocks. However, the Unix timestamp happened to be good enough for our current needs. Its simplicity and low implementation cost are worth the price of a small number of timestamp collisions and out-of-order events (resulting from imperfect clock synchronisation).

Event Sourcing

The unique identifier and timestamp included in events provide enough information to properly handle situations where events are duplicated or arrive out of order. In the rest of this section, I'll describe how we applied event sourcing to take advantage of this information. Let's get back to our Feed example.
As we’ve seen before, it consumes QuestionCreated and QuestionDeleted events. This is the structure of this events together with timestamp and UUID: { "name": "QuestionCreated", "uuid": "4fe61897-84e7-41c1-88a7-26f5b28e3d6d", "timestamp": 1531686216, "payload": { "questionId": 1234, "content": "Question content" } }{ "name": "QuestionDeleted", "uuid": "461c2188-31a2-445f-97eb-45e59ee5c1cc", "timestamp": 1531686365, "payload": { "questionId": 1234 } } After receiving any of this events Feed microservice inserts them into the event store. Event store enforces the constraint that every event must have a unique uuid field. Thanks to that Feed microservice will discard any duplicates. Next, if microservice needs to perform any logic on Question entity it performs the following steps: - Fetch all events related to single question (e.g. with ID 1234) from the event store - Sort them by timestamp - Apply all events on Questionentity to recreate its state Here is example Go code that recreates Question entity from events (assuming that it receives events that are already sorted by timestamp: // Package with two example events package eventimport "time"type Event interface{} type QuestionCreated struct { ID int Content string Timestamp time.Time } type QuestionDeleted struct { ID int Timestamp time.Time } // Package with event-sourced entity package entity import ( "fmt" "time" "github.com/k3nn7/example/event" ) type Question struct { ID int Content string CreatedAt time.Time IsDeleted bool } func NewQuestion(events []event.Event) (*Question, error) { q := new(Question) for _, e := range events { err := q.applyEvent(e) if err != nil { return q, err } } return q, nil } func (q *Question) applyEvent(e event.Event) error { switch v := e.(type) { case event.QuestionCreated: q.ID = v.ID q.CreatedAt = v.Timestamp q.Content = v.Content case event.QuestionDeleted: q.IsDeleted = true default: return fmt.Errorf("invalid event %T", e) } return nil } Now microservice can execute logic 
on the Question entity. If the Question needs to be modified, it should be done by generating and applying events on it. E.g.:

```go
func (q *Question) Delete() {
	e := event.QuestionDeleted{ID: q.ID, Timestamp: time.Now()}
	q.applyEvent(e)
}
```

When persisting the Question, all newly applied events should be added to the event store. Later, the Question can be recreated to the same state by reading and applying all the events.

All the above requirements regarding the event store led us to choose PostgreSQL for that purpose. It allowed us to implement the logic for inserting and fetching events in a simple and optimal way. To store events, we used a single table, events, with the following schema:

- question_id — ID of the question that this event is related to
- timestamp — to order events
- uuid — to discard duplicates
- payload — the payload of the event in JSON format
- event_name — to know which structure the payload should be decoded into

Such a table structure allowed us to easily get all events for a single question in the right order, with the following SQL:

```sql
SELECT event_name, payload
FROM events
WHERE question_id = $1
ORDER BY timestamp
```

Inserting a new event is even simpler, because all the required data is available in every event.

Command Query Responsibility Segregation (CQRS)

Event-sourced entities allowed us to properly order events and reconstruct the Question entity from them. However, storing entities as a stream of events is cumbersome when you need to query data. For example, in Brainly's questions feed, a user requests the 20 latest questions every time he or she enters the main page.
Assuming that the feed microservice has access only to the event store, it would have to:

- Read all the events from the event store and reconstruct Question entities from them
- Sort the Question entities in memory by creation date
- Return the latest Question entities

In our case, where millions of users are making requests for a feed that covers hundreds of millions of questions, something like that would be almost impossible — especially when users expect to receive a response within several milliseconds. That's why we decided to take advantage of CQRS. Let's see how we implemented it in the feed microservice.

Assume that a QuestionDeleted event was just published for the question with ID 1234. As described before, every time a new event arrives it is appended to the event store. After that, all events for this particular question are fetched from the event store (together with the newly appended event). Next, these events are applied to the Question entity to bring it to its most recent state. Now this entity is used to create a new object — ReadQuestion — which has the following structure:

```go
type ReadQuestion struct {
	ID         int
	Content    string
	IsDeleted  bool
	CreatedAt  time.Time
	HasAnswers bool
}
```

Notice that even though some fields of ReadQuestion are the same as those of Question, they have different purposes. The ReadQuestion structure is optimised for querying. E.g. the Question aggregate may contain an array of Answer entities, but ReadQuestion has only the boolean field HasAnswers, because that provides enough information for further queries. ReadQuestion is persisted in PostgreSQL as well, but in a separate table — read_questions — which has the following structure:

- id — ID of the question
- content — content of the question
- is_deleted — is the question deleted?
- has_answers — does the question contain any answers?
- created_at — date when the question was asked

Such a form of storing ReadQuestion instances allows us to query them with simple and efficient SQL:

```sql
SELECT id, content
FROM read_questions
ORDER BY created_at DESC
LIMIT 10
```

ReadQuestion versioning & optimistic locking

We've covered a lot of ground so far, but there are still some nasty pitfalls worth mentioning. Our microservices rarely run as a single instance; instead, they have at least three instances to distribute the load and provide redundancy. Unfortunately, this doesn't come for free. Once again, let's see what happens when QuestionCreated and QuestionDeleted events are published shortly one after another. Assume that both events are about the question with ID 123. There is a big chance that they will be consumed by different instances of the feed microservice. Both instances will append the consumed events to the event store and then read all the events for question 123.

Let's assume that there are no events for question 123 in the event store so far. Instance #1 receives the QuestionCreated event and saves it in the event store. Next, it fetches all events for question 123 and receives only QuestionCreated. It recreates the Question entity and creates a ReadQuestion from it. But in the meantime, before instance #1 saves its ReadQuestion into the read_questions table, instance #2 does the following: it receives the QuestionDeleted event and saves it to the event store. After fetching all events for question 123, it receives both the QuestionCreated and QuestionDeleted events and uses them to recreate the Question entity and a ReadQuestion. Now, what happens if instance #2 saves its ReadQuestion to the database before instance #1 does? We end up with an inconsistent system, because the ReadQuestion saved by instance #1 didn't include the QuestionDeleted event. Such an inconsistency may last for a long time. We used two mechanisms to solve this problem — versioned entities and optimistic locking.
Versioned entities have one additional field — Version. It is a simple integer equal to the number of events that were used to recreate the entity. Thanks to that, the entity with the most up-to-date state will always have the biggest version. Here's a simple example of how it can be done in Go:

```go
type Question struct {
	ID        int
	Content   string
	IsDeleted bool
	Version   int
}

func (q *Question) applyEvent(e event.Event) error {
	switch v := e.(type) {
	case event.QuestionCreated:
		q.ID = v.ID
		q.Content = v.Content
	case event.QuestionDeleted:
		q.IsDeleted = true
	default:
		return fmt.Errorf("invalid event %T", e)
	}
	q.Version++
	return nil
}
```

The Version field was added to the ReadQuestion structure as well, and its value is simply copied from Question. Next, when a ReadQuestion is persisted, we need to make sure that we're not persisting a ReadQuestion with a version smaller than the version that already exists in the database. It is crucial to check the version and perform the update atomically; otherwise, the race condition that we tried to prevent would still occur. In PostgreSQL we can achieve that with the following SQL statement:

```sql
INSERT INTO read_questions (id, content, version)
VALUES (3124, 'Question content', 2)
ON CONFLICT (id) DO UPDATE
SET content = 'Question content', version = 2
WHERE read_questions.version < 2
```

In the microservice logic, we check afterwards whether any rows were changed. If not, it means that we tried to persist a ReadQuestion with a smaller version than the one that already exists. In that case, an OptimisticLockError is returned. This error is handled as described in the Consumer Error Handling section, so it causes the whole process of consuming the event to be retried. That is fine, because the event was already persisted in the event store, so on the retry it will simply be discarded as a duplicate.

Summary

Microservices and event-based communication provide a lot of benefits when it comes to the scalability of teams and systems.
Unfortunately, it doesn’t come for free, and attached costs may be severely underestimated. In this article, I only scratched the surface of challenges that you’ll probably find when working with the distributed system. But I hope that you’ll find some of this solutions useful. Please leave a comment if you have different experiences that you would like to share or if this article was somehow helpful for you.
, Bill

-----Original Message-----
From: Greg Chicares [mailto:chicares@...]
Sent: Wednesday, September 18, 2002 8:42 PM
To: Sternbach, William [IT]; 'MinGW-users@...'
Subject: Re: [Mingw-users] GCC 3.1 compiler has greater default precision than GCC 2.95.2-1 and MSVC.

"Sternbach, William [IT]" wrote:
>
> This is an example of differing results between GCC 3.2 and MSVC.
>
> MSVC, gcc 2.95.2, and gcc 2.95.3_6 all calculate this result:
> c:\>test_msvc.exe
> Floating point running total = 333333329567959050000000.000000.

If you want to get that value using gcc-3.1, reduce the hardware precision, e.g. by setting the floating-point control word to 0x027f. Version 3.1 is what I have installed right now; I'm guessing 3.2 works the same way.

> gcc 3.2 and a 10 year old version of Borland Turbo C++ for DOS calculate
> this result:
> C:\>test_gcc32.exe
> Floating point running total = 333333329567901210000000.000000
>
> The source code follows: [snip]

For serious numerical work, don't use any compiler's defaults: set the floating-point control word yourself. Try this instead:

```c
#include <stdio.h>

int main()
{
    double fp_accum = 0.0;
    double fp_i;
    volatile unsigned short int cw;

    __asm__ volatile("fstcw %0" : "=m" (*&cw));
    printf("Floating point control word was %x\n", cw);

    /* Set floating-point control word to intel default. */
    cw = 0x037f;
    __asm__ volatile("fldcw %0" : : "m" (*&cw));
    printf("Floating point control word is now %x\n", cw);

    for (fp_i = 0.1234567890; fp_i < 100000000; ++fp_i)
    {
        fp_accum = fp_accum + (fp_i * fp_i);
    }
    printf("Floating point running total = %f\n", fp_accum);
    return 0;
}
```

```
C:/tmp[0]$path=(/gcc-2.95.2-1/bin/ /bin /usr/bin C:/WINNT/system32 C:/WINNT C:/WINNT/System32/Wbem)
C:/tmp[0]$gcc -ansi -pedantic -W -Wall fptest.c
C:/tmp[0]$./a
Floating point control word was 27f
Floating point control word is now 37f
Floating point running total = 333333329567948650000000.000000
C:/tmp[0]$path=(/mingw-gcc-3.1/bin/ /bin /usr/bin C:/WINNT/system32 C:/WINNT C:/WINNT/System32/Wbem)
C:/tmp[0]$gcc -ansi -pedantic -W -Wall fptest.c
C:/tmp[0]$./a
Floating point control word was 37f
Floating point control word is now 37f
Floating point running total = 333333329567948650000000.000000
```

Same answer with 2.95.2-1 and 3.1. If 3.2 gives a different answer, that could be interesting.

Version 5.5.1 of the borland compiler gives

    Floating point running total = 333333329567948652700000.000000

at least when you do

```c
#include <float.h> // nonstandard _control87()
_control87(0x037f, 0xffff);
```

up front.

> Do you have any ideas as to why MSVC, GCC 2.95.2, GCC 2.95.3_6 produce the
> correct output,

Why would that be 'correct'?

> while GCC 3.2 produces different output?

Not having 3.2 yet, I'll answer for 3.1. Results differ because the developers set the precision bits to '11' by default, giving you a 64-bit significand, which is the very best the hardware can do. I think they also added a bunch of C99 stuff so that you can set the precision in a strictly conforming program with no asm.

---

"Sternbach, William [IT]" <william.sternbach@...> wrote:
>,
> Current default set by runtime is 64-bit mantissa. MSVC default is 53-bit

Older versions of mingw runtime used MSVC default.

Note: The FP defaults are set at startup by runtime code.
Even with GCC-2.95.3 and a current (2.0 or greater) mingw runtime, the precision defaults to 64 bits. That is the internal precision used by the FPU; the results get truncated when they are stored as double or float variables. Look in fenv.h for an easy way to change it using fesetenv:

/* The C99 standard (7.6.9) allows us to define implementation-specific
   macros for different fp environments */

/* The default Intel x87 floating point environment (64-bit mantissa) */
#define FE_PC64_ENV ((const fenv_t *)-1)

/* The floating point environment set by MSVCRT _fpreset (53-bit mantissa) */
#define FE_PC53_ENV ((const fenv_t *)-2)

/* The FE_DFL_ENV macro is required by standard. fesetenv will use the
   environment set at app startup. */
#define FE_DFL_ENV ((const fenv_t *)0)
https://sourceforge.net/p/mingw/mailman/mingw-users/thread/20020919024208.74485.qmail@web14508.mail.yahoo.com/
CC-MAIN-2017-04
refinedweb
673
70.09
A class for storing all dataflow entries per (S,G).

#include <mfea_dataflow.hh>

Member descriptions:

- Constructor for a given dataflow table, source and group address.
- Get the address family.
- Find a MfeaDfe dataflow entry. Note: either is_threshold_in_packets or is_threshold_in_bytes (or both) must be true. Note: either is_geq_upcall or is_leq_upcall (but not both) must be true.
- Insert a MfeaDfe dataflow entry. [inline]
- Test if there are MfeaDfe entries inserted within this entry.
- Get the list of MfeaDfe dataflow entries for the same (S,G).
- Get a reference to the dataflow table this entry belongs to.
- Remove a MfeaDfe dataflow entry.
http://xorp.org/releases/current/docs/kdoc/html/classMfeaDfeLookup.html
CC-MAIN-2019-22
refinedweb
112
79.97
I also thought it was interesting that they added things explicitly for FP: "The functional module is intended to contain tools for functional-style programming." Currently all it does is curry functions.

I see this as a way of quelling the FP advocates' criticisms of Python. No language changes, just a module with some FP tools in it. I don't expect to see language-level changes, like tail call optimization.

Depressingly, TCO is just as much an OO thing as it is an FP thing, at least at the technical level (of course, culturally...). Felleisen talks about this in section 3 (starts at page 51) of this presentation.

The proposed removal of lambda, map, filter, etc. wasn't due to any rejection of FP concepts - they were either ugly (lambda) or redundant with the addition of list and generator comprehensions. (Note that all that was proposed for map/filter/reduce was taking them out of the default namespace - there is nothing requiring them there rather than in a separate module a la functional.) Most of Python's recent features seem to be moving in a more functional direction, not less. Since 2.0 we've had nested scopes, list comprehensions borrowed from Haskell, lazy sequences via generators, and various other minor additions (like partial).

At last night's ACCU (Silicon Valley) meeting, Guido gave a talk about Python 3000. It was a warmup for his keynote speech to be given at the ACCU's annual conference, which will be held in Oxford in about two weeks. (Python 3000 aka Python 3.0 or Py3k is the nickname for the version that is finally "allowed" to break backwards compatibility and fix a number of inconsistencies, though it won't be a wholesale language redesign. See the Py3k PEP for details.)
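The redundancy argument above is easy to see side by side; a small sketch (the example values are our own):

```python
nums = [1, 2, 3, 4, 5, 6]

# The functional builtins that were proposed for demotion...
squares_fp = list(map(lambda n: n * n, nums))
evens_fp = list(filter(lambda n: n % 2 == 0, nums))

# ...and the comprehension forms that make them redundant:
squares = [n * n for n in nums]
evens = [n for n in nums if n % 2 == 0]

# Generator expressions give the lazy sequences mentioned above:
lazy_squares = (n * n for n in nums)
```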
To address your specific concerns, Li: (As far as I understand the Py3k plans ...)

- Python will not become a functional programming language.
- A standard requiring TCO implementation is unlikely.
- Lambda will neither be removed, nor is it likely to be extended to allow multiple statements.
- List comprehensions will likely become syntactic sugar for a list view of a generator expression.
- Several of the builtins that currently work with or return lists will be changed to return either iterators or views.

IMHO, the last two points combine to permit a few slightly more FP-like programming idioms than are currently common in Python.

I'm inclined to believe the conditional expression syntax was designed to discourage use. Yuck. Well, naturally you could just use a function:

def iif(condition, trueAnswer, falseAnswer):
    if condition:
        return trueAnswer
    else:
        return falseAnswer

logging = 1
level = iif(logging, 1, 0)

But then that means that both the true and false branches have to be eagerly evaluated. Perhaps a lazy evaluation of function parameters is in order? Nah, too much like FP.
So, it isn't really lazy evaluation that causes you to lose understanding of a function's behavior, it's the existence of side effects. Right? ...either the stream has no contents. Or perhaps an infinite recursion. Or maybe you just don't want the performance hit. Most typical case that I constantly encounter is null objects. string x = (obj == null)? "" : obj.ToString(); Anyhow, you have a case here where the true or false branch may be invalid if eagerly evaluated. That's probably the more immediate concern in this particular evaluation construct. Well, computational effort is a side effect of a sort, as is halting, and lazy evaluation has substantial effects on both of those. And there's no way to statically determine if side effects exist in a hybrid language like Python. But sure, if Python was a different language, lazy evaluation might work just fine ;) Personally, though, my programs are primarily motivated by the existance of side effects. If it wasn't for side effects, I could just *imagine* I had created a working algorithm, without bother to run it. He is planning to add generic functions to Python3000. What kind of genericity are we talking about? Any links? (I checked, and it's April 10th already, so I guess this isn't an April Fool's joke...) Guido van Rossum's Weblog Python 3000 - Adaptation or Generic Functions? Dynamic Function Overloading I'm going to stick to the term "(dynamically, or run-time) overloaded functions" for now, despite criticism of this term; the alternatives "generic functions", "multi-methods" and "dispatch" have been (rightly) criticized as well.
http://lambda-the-ultimate.org/node/1402
CC-MAIN-2022-40
refinedweb
921
53.21
PyCharm provides the following intention actions:

- Fetch External Resource. PyCharm downloads the referenced file and associates it with the URL (or the namespace URI). The error highlighting disappears, and the XML file is validated according to the downloaded schema or DTD. (The associations of URLs and namespace URIs with schema and DTD files are shown on the Schemas and DTDs page of the Settings dialog.)
- Manually Setup External Resource. Use this option when you already have an appropriate schema or DTD file available locally. The Map External Resource dialog will open, and you'll be able to select the file for the specified URL or namespace URI. The result of the operation is the same as in the case of fetching the resource.
- Ignore External Resource. The URL or the namespace URI is added to the Ignored Schemas and DTDs list. (This list is shown on the Schemas and DTDs page of the Settings dialog.) The error highlighting disappears. PyCharm won't validate the XML file; however, it will still check that the XML file is well-formed.

There is one more intention action that you may find useful: Add Xsi Schema Location for External Resource. This intention action lets you complete your root XML elements: if the namespace is already specified, PyCharm can insert the corresponding xsi:schemaLocation attribute for you.
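For illustration, a root element carrying both the namespace declaration and the xsi:schemaLocation hint that such an intention produces might look like this (the namespace URI and file name here are made-up examples):

```xml
<catalog xmlns="http://example.com/catalog"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://example.com/catalog catalog.xsd">
    <!-- document content validated against catalog.xsd -->
</catalog>
```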
https://www.jetbrains.com/help/pycharm/2016.3/referencing-xml-schemas-and-dtds.html
CC-MAIN-2017-04
refinedweb
214
67.35
I'm not sure this is issue35542. I think this happens because, while logging, the recursion limit is hit and a RecursionError is raised. The RecursionError is then handled by logging's error handling and cleared. On subsequent calls the exception is not set anymore, because `tstate->overflowed` equals 1, so we exit early before setting the exception again. This goes on until the recursion-overflow condition passes, which aborts the interpreter.

I think there are two ways to solve the issue: either handle RecursionError explicitly in the logging module, so we don't clear it inadvertently (there is no way to recover from it anyway), or check whether the exception has been cleared and set it again.

Handling it explicitly in the logging module would not help for code doing this elsewhere:

def rec():
    try:
        rec()
    except:
        rec()

rec()

I can submit a patch if you want.
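The first option (handling RecursionError explicitly so it is never swallowed) could be sketched in a custom handler like this; the class name and emit body are our own illustration, not the actual logging-module patch:

```python
import logging

class NonSwallowingHandler(logging.Handler):
    """Sketch: re-raise RecursionError instead of routing it through
    handleError(), which would clear it and defeat the interpreter's
    recursion guard. (Illustrative only.)"""

    def emit(self, record):
        try:
            msg = self.format(record)
            # ... write msg to the real destination here ...
        except RecursionError:
            raise  # there is no way to recover from this anyway
        except Exception:
            self.handleError(record)
```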
https://bugs.python.org/msg337843
CC-MAIN-2020-05
refinedweb
141
68.4
Matplotlib Tutorial 2 – Legends, titles, and labels

In this tutorial, we're going to cover legends, titles, and labels within Matplotlib. A lot of times, graphs can be self-explanatory, but having a title on the graph, labels on the axes, and a legend that explains what each line is can be necessary.

Comment List

This series is so cool! Very easy to follow and relevant, not a ton of filler. You've gained a follower!

Software engineer reviewing the code: plt is not a descriptive name. Better to use pyplot. It always tells what it is.

Simple, clear, straightforward and very useful. It helped me to solve a problem in no time. Thank you very much!

Thank you thank you thank you!

What are the prerequisites to become a Python data analyst? Is the Django REST API compulsory? Please give detailed information.

How to plot a graphic in R 3?

Just getting started. Looking forward to an interesting course.

Still a good tutorial in 2020. Thank you very much for all of these efforts 🙂

If I want to use the legend only for the green line..

Great Sir! I wish you had magnified the screen a little, so students like me who need bigger letter sizes were comfortable with the visuals.

Love this tutorial series: it's the best one for me. I'm just learning.

I am watching this for the second time now. I created some graphs around a year ago and now I'm a little rusty and need to go over all the contents. Thanks again… happy to contribute to your channel.

Thank you sir! I've been binge watching your stuff for 3 days now!! Good work!

Sir, if I want my red button clicked in a window by my mouse, and then I want it to click on my window canvas at a particular x, y as a red color, how should I approach it?

This is a great teaching video. I'm using Jupyter Notebook. My output is fixed and does not have zoom in and out capabilities. Please tell me what I am missing. Thank you.

Excuse me?? It's 4 years ago, but now I can't use plot(x, y, label='something'). How to label the lines now???
matplotlib.org also provides something like plot(x, y, 'something'), but this stuff still doesn't work.

You are saving my life right now, thank you so much for putting these up!!

I'm looking for a way to illustrate a magnitude spectrum with 0 dB at the top and with marks for -3 dB, -6 dB, etc. Does anyone have something? I searched and tried a lot, but never got a satisfactory result. Thanks!

Short videos and easy to understand.

I forgot to put in the plt.legend() line, or else it won't display the legends. 🙂 Works now though!

Thanks, can you help me out with adding data labels at the end of a bar graph…

Still a good tutorial in 2019. Good work!

Just what I need, thanks. Thank you.

Short, simple and useful videos, that's all we need, thank you very much!

I am running the code below and I don't get the labels printed. Any idea?

plt.plot([0,1,2,3], [0,5,7,4], label='First line')
plt.plot([4,1,2,3], [0,9,7,4], label='second line')
plt.show()

Thanks for this man.

Hi, I am getting an error which says str object not callable..

What kind of editor are you using for this (like Jupyter Notebook or what)?

Hi sentdex~ I have a very simple and quick question regarding the legend… How can I control the location of the legend??

How do I run the code when I have finished it in a .txt file?

So glad to find your demo here, you made it easy and practical, thank you very much.

I have learned so much from watching your Python videos. Thank you so much for spending time developing these super awesome videos. Very concise and very practical.

ModuleNotFoundError: No module named 'matplotlib.pyplot'; 'matplotlib' is not a package

Here is an example with axis labels and titles: Thank you😊

Is there any way to add data from a file?

Excellent tutorial.

Can anyone help me, please? How would I change the font size of xlabel and ylabel?

Thank you for the helpful video.
You are the legend 🙂

How to change the color of spines or axes?

Hey, I am getting an error and having no luck tracking down why on Stack Overflow. Could you please take five minutes out of your time to dissect it and offer some advice? Total code length is 32 lines.

File "C:\Users\a***rg**\Anaconda3\lib\site-packages\matplotlib\transforms.py", line 1050, in __init__
    raise ValueError("'transform' must be an instance of "
ValueError: 'transform' must be an instance of 'matplotlib.transform.Transform'

import matplotlib.pyplot as plt
import datetime as t

incur1 = 'nzd'
incur2 = 'usd'
time = t.datetime
first_exRtRate1 = 0.712
label1 = (incur1 + '/' + incur2)
label2 = (incur2 + '/' + incur1)
x = []
y = []
x2 = []
y2 = []
plt.ion()
plt.plot(x, y, label = label1)
plt.plot(x2, y2, label = label2)
plt.xlabel ('Time')
plt.ylabel ('Ex Rate')
plt.title (+incur1+ '/' +incur2+ '/nEx Rate Data')
plt.legend()
time = t.datetime
x.append(time)
y.append(first_exRtRate1)
plt.show()

Thumbs up.

Which IDE is he using, anyone??

Thank you so much.

Great video. How can you place the legend outside of the chart area? Thanks.

Sentdex, awesome and that's what I needed. I did get this error: ImportError: No module named Tkinter
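For readers following along, here is a minimal, self-contained version of the kind of script the tutorial builds; the data values and output file name are our own, and the Agg backend is used so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without a GUI/display
import matplotlib.pyplot as plt

x = [1, 2, 3]
y = [5, 7, 4]
x2 = [1, 2, 3]
y2 = [10, 14, 12]

# label= names each line; plt.legend() is what actually draws the box
plt.plot(x, y, label="First Line")
plt.plot(x2, y2, label="Second Line")

plt.xlabel("Plot Number")
plt.ylabel("Important var")
plt.title("Interesting Graph\nCheck it out")
plt.legend()

plt.savefig("legend_example.png")
```

Forgetting the plt.legend() call is the most common reason the label= arguments appear to do nothing, as several commenters above discovered.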
http://openbootcamps.com/matplotlib-tutorial-2-legends-titles-and-labels/
CC-MAIN-2021-10
refinedweb
910
77.03
Since one of .NET's goals is to support a common paradigm for application programming, it must specify and utilize programming concepts consistently. In this section, we will examine four core Microsoft .NET languages, including Managed C++, VB.NET, C#, and J#, and several core programming concepts that all .NET languages support, including:

- Namespace: mitigates name collisions.
- Interface: specifies the methods and properties that must be implemented by objects that expose the interface.
- Encapsulation: in object-oriented languages, allows a class to combine all its data and behavior.
- Inheritance: allows a class to inherit from a parent class so that it can reuse rich functionality that the parent class has implemented, thus reducing development effort and programming errors.
- Polymorphism: permits developers to specify or implement behaviors in a base class that can be overridden by a derived class. This is a very powerful feature because it allows developers to select the correct behavior based on the referenced runtime object.
- Exception handling: allows us to write easier-to-understand code because it allows us to capture all errors in a common, understandable pattern, totally opposite to that of nine levels of nested conditional blocks.

Although this is not a complete list of concepts that .NET supports, it includes all the major .NET concepts that we want to cover in this section. We will show you examples of all these features in Managed C++, VB.NET, C#, and J#. These concepts are nothing new: we're merely demonstrating how they're represented in all core Microsoft .NET languages. Before we start, you should first understand what our examples will accomplish. First, we will create a namespace, called Lang, that encapsulates an interface, ISteering. Then we will create two classes: Vehicle, which is an abstract base class that implements ISteering, and Car, which is a derivative of Vehicle. We will support an entry point that instantiates and uses Car within a try block. We will unveil other details as we work through the examples.
Managed C++ is Microsoft's implementation of the C++ programming language with some newly added keywords and features to support .NET programming. This allows you to use C++ to develop managed objects, which are objects that run in the CLR. Using Managed C++, you can obtain the performance[1] that is inherent in C++ programs, and at the same time, you can also take advantage of CLR features.[2]

[1] You can easily mix managed and unmanaged code in C++ programs. The unmanaged code will perform better. See this chapter's example code, which you can download from.

[2] However, if you look carefully at the features and new keywords (__abstract, __box, __delegate, __gc, __nogc, __pin, etc.) that have been added to Microsoft C++, we doubt that you'll want to use Managed C++ to write new code for the CLR, especially when you have C#.

Now let's look at an example that includes all the concepts we want to examine. As you can see in the following code listing, we start off creating a new namespace, Lang, which envelops everything except main( ). With the exception of the first line and special keywords, the code listing conforms perfectly to the C++ standard:

#using <mscorlib.dll>
using namespace System;

namespace Lang
{

Next, we specify an interface, called ISteering. If you are a C++ programmer, you will immediately notice that there are two new keywords in the following code listing, __gc and __interface. The new keyword __interface allows you to declare an interface, which is basically equivalent to an abstract base class in C++. In other words, the two method prototypes are specified, but not implemented here. The class that implements this interface provides the implementation for these methods:

__gc __interface ISteering
{
   void TurnLeft( );
   void TurnRight( );
};

If you are a COM programmer, you know that in COM you have to manage the lifetimes of your objects and components yourself.
Even worse, you also have to rely on your clients to negotiate and interoperate correctly with your COM components; otherwise, extant references will never be reclaimed. Managed C++ removes this problem by adding a new keyword, __gc, that tells the CLR to garbage-collect the references to your interface when they are no longer in use. Aside from these two keywords, the previous code listing requires no other explanation for programmers who have experience with C-like languages.

Now that we have an interface, let's implement it. The following code listing is a Managed C++ class (as indicated by the __gc) that implements our ISteering interface. One thing to notice is that this class is an abstract base class, because the ApplyBrakes( ) method is a pure virtual function, as indicated by the =0 syntax. Vehicle doesn't provide the implementation for this method, but its derived class must supply the implementation:

__gc class Vehicle : public ISteering
{
public:
   void TurnLeft( )
   {
      Console::WriteLine("Vehicle turns left.");
   }
   void TurnRight( )
   {
      Console::WriteLine("Vehicle turns right.");
   }
   virtual void ApplyBrakes( ) = 0;
};

Since Vehicle is an abstract base class and can't be instantiated, we need to provide a Vehicle derivative, which we will call Car. As you can see in the following listing, everything about the class is C++, with the exception of the keyword __gc. Note that the ApplyBrakes( ) function first dumps a text message to the console and then immediately creates and throws an exception, notifying an exception handler that there has been a brake failure:

__gc class Car : public Vehicle
{
public:
   void ApplyBrakes( )
   {
      Console::WriteLine("Car trying to stop.");
      throw new Exception("Brake failure!");
   }
};

} // This brace ends the Lang namespace.

What is special here is that the Exception class is a part of the .NET Framework, specifically belonging to the System namespace.
This is great because this class works exactly the same way in all languages, and there's no longer a need to invent your own exception hierarchy. Now that we have a concrete class, we can write the main( ) function to test our Car class. Notice that we have added a try block that encapsulates the bulk of our code so that we can handle any exceptions in the catch block. Looking carefully at the following code listing, you'll see that we've instantiated a new Car on the managed heap, but we've actually referred to this Car instance using a Vehicle pointer. Next, we tell the vehicle to TurnLeft( ); there's no surprise here because we've implemented this method in Vehicle. However, in the following statement, we tell the Vehicle that we're applying the brakes, but ApplyBrakes( ) is not implemented in Vehicle. Since this is a virtual method, the correct vptr and vtbl[3] will be used, resulting in a call to Car::ApplyBrakes( ). Of course, Car::ApplyBrakes( ) will throw an exception, putting us into the catch block. Inside the catch block, we convert the caught exception into a string and dump it out to the console.

[3] Many C++ compilers use vtbls (a vtbl is a table of function pointers) and vptrs (a vptr is a pointer to the vtbl) to support dynamic binding or polymorphism.

We can do this because Exception is a class in the .NET Framework, and all classes in the framework must derive from System.Object, which implements a rudimentary ToString( ) function to convert any object into a string:

void main( )
{
   try
   {
      Lang::Vehicle *pV = 0;   // Namespace qualifier
      pV = new Lang::Car( );   // pV refers to a car
      pV->TurnLeft( );         // Interface usage
      pV->ApplyBrakes( );      // Polymorphism in action
   }
   catch(Exception *pe)
   {
      Console::WriteLine(pe->ToString( ));
   }
}

Notice that you don't have to deallocate your objects on the managed heap when you've finished using them, because the garbage collector will do that for you in .NET.
Although this is a simple example, we have used Managed C++ to illustrate all major object-oriented programming concepts, including namespaces, interfaces, encapsulation, inheritance, polymorphism, and exception handling. Next, we demonstrate that you can translate this code into any other .NET language because they all support these concepts. Specifically, we'll show you this same example in VB.NET, C#, J#, and IL, just to prove that these concepts can be represented the same way in all languages that target the CLR.

Microsoft has revamped VB and added full features for object-oriented programming. The new VB language, Visual Basic .NET (or VB.NET), allows you to do all that you can do with VB, albeit much more easily. If you are a VB programmer with knowledge of other object-oriented languages, such as C++ or Smalltalk, then you will love the new syntax that comes along with VB.NET. If you are a VB programmer without knowledge of other object-oriented languages, you will be surprised by the new VB.NET syntax at first, but you will realize that the new syntax simplifies your life as a programmer.[4]

[4] To learn more about VB.NET, see O'Reilly's VB.NET Language in a Nutshell, Second Edition, by Steven Roman, PhD., Ron Petrusha, and Paul Lomax, or Programming Visual Basic .NET, Second Edition, by Jesse Liberty.

In addition to the VB-style Rapid Application Development (RAD) support, VB.NET is a modernized language that gives you full access to the .NET Framework. The VB.NET compiler generates metadata and IL code, making the language an equal citizen to that of C# or Managed C++. Unlike VB versions prior to VB6, there will be no interpreter in VB.NET, so there should be no violent arguments about performance drawbacks of VB versus another language. Perhaps the most potent feature is that now you can write interfaces and classes that look very similar to those written in other .NET languages.
The new syntax allows you to inherit from base classes, implement interfaces, override virtual functions, create an abstract base class, and so forth. In addition, it also supports exception handling exactly as C# and Managed C++ do, making error handling much easier. Finally, VB.NET ships with a command-line compiler, vbc.exe, introduced in Chapter 2. Let's see how to translate the previous Managed C++ program into VB.NET so that you can see the striking conceptual resemblance. First, we'll start by defining a namespace called Lang:

Imports System

Namespace Lang

Next, we specify the ISteering interface, which is easy to do in VB.NET since the syntax is very straightforward, especially when you compare it with Managed C++. In the following code listing, you'll notice that instead of using opening and closing braces as in Managed C++, you start the interface definition by using the appropriate VB.NET keyword, Interface, and end it by prefixing the associated keyword with the word End. This is just normal VB-style syntax and shouldn't surprise any VB programmer:

Interface ISteering
    Sub TurnLeft( )
    Sub TurnRight( )
End Interface

With our interface specified, we can now implement it. Since our Vehicle class is an abstract base class, we must add the MustInherit keyword when we define it, explicitly telling the VB.NET compiler that this class cannot be instantiated. In VB.NET, the Class keyword allows you to define a class, and the Implements keyword allows you to implement an interface.
Another thing that you should be aware of is that ApplyBrakes( ) is not implemented in this class, and we have appropriately signaled this to the VB.NET compiler by using the MustOverride keyword:

MustInherit Class Vehicle
    Implements ISteering

    Public Sub TurnLeft( ) Implements ISteering.TurnLeft
        Console.WriteLine("Vehicle turns left.")
    End Sub

    Public Sub TurnRight( ) Implements ISteering.TurnRight
        Console.WriteLine("Vehicle turn right.")
    End Sub

    Public MustOverride Sub ApplyBrakes( )
End Class

As far as language differences go, you must explicitly describe the access (i.e., public, private, and so forth) for each method separately. This is different from C++, where all members take on the previously defined access type. Now we are ready to translate the concrete Car class. In VB.NET, you can derive from a base class by using the Inherits keyword, as shown in the following code. Since we have said that ApplyBrakes( ) must be overridden, we provide its implementation here. Again, notice that we're throwing an exception:

Class Car
    Inherits Vehicle

    Public Overrides Sub ApplyBrakes( )
        Console.WriteLine("Car trying to stop.")
        Throw New Exception("Brake failure!")
    End Sub
End Class

End Namespace

Now that we have all the pieces in place, let's define a module with an entry point, Main( ), that the CLR will execute. In Main( ), you'll notice that we're handling exceptions exactly as we did in the Managed C++ example. You should also note that this code demonstrates the use of polymorphism, because we first create a Vehicle reference that refers to a Car object at runtime.
We tell the Vehicle to ApplyBrakes( ), but since the Vehicle happens to be referring to a Car, the object that is stopping is the target Car object:

Public Module Driver
    Sub Main( )
        Try
            Dim v As Lang.Vehicle   ' namespace qualifier
            v = New Lang.Car        ' v refers to a car
            v.TurnLeft( )           ' interface usage
            v.ApplyBrakes( )        ' polymorphism in action
        Catch e As Exception
            Console.WriteLine(e.ToString( ))
        End Try
    End Sub
End Module

This simple program demonstrates that we can take advantage of .NET object-oriented features using VB.NET. Having seen this example, you should see that VB.NET is very object oriented, with features that map directly to those of Managed C++ and other .NET languages.

As you've just seen, VB.NET is a breeze compared to Managed C++, but VB.NET is not the only simple language in .NET: C# is also amazingly simple. Developed from the ground up, C# supports all the object-oriented features in .NET. It maps so closely to the Java and C++ languages that if you have experience with either of these languages, you can pick up C# and be productive with it immediately. Microsoft has developed many tools using C#; in fact, most of the components in Visual Studio .NET and the .NET class libraries were developed using C#. Microsoft is using C# extensively, and we think that C# is here to stay.[5]

[5] To learn more about C#, check out O'Reilly's C# Essentials, Second Edition, by Ben Albahari, Peter Drayton, and Brad Merrill; the forthcoming C# in a Nutshell, Second Edition, by Peter Drayton, Ben Albahari, and Ted Neward; and Programming C#, Third Edition, by Jesse Liberty.

Having said that, let's translate our previous program into C# and illustrate all the features we want to see. Again, we start by defining a namespace. As you can see, the syntax for C# maps really closely to that of Managed C++:

using System;

namespace Lang
{

Following is the ISteering interface specification in C#.
Since C# was developed from scratch, we don't need to add any funny keywords like __gc and __interface, as we did in the Managed C++ version of this program:

interface ISteering
{
   void TurnLeft( );
   void TurnRight( );
}

Having defined our interface, we can now implement it in the abstract Vehicle class. Unlike Managed C++ but similar to VB.NET, C# requires that you explicitly notify the C# compiler that the Vehicle class is an abstract base class by using the abstract keyword. Since ApplyBrakes( ) is an abstract method (meaning that this class doesn't supply its implementation), you must make the class abstract; otherwise the C# compiler will barf at you. Put another way, you must explicitly signal to the C# compiler the features you want, including abstract, public, private, and so forth, each time you define a class, method, property, and so on:

abstract class Vehicle : ISteering
{
   public void TurnLeft( )
   {
      Console.WriteLine("Vehicle turns left.");
   }
   public void TurnRight( )
   {
      Console.WriteLine("Vehicle turn right.");
   }
   public abstract void ApplyBrakes( );
}

Here's our Car class, which derives from Vehicle and overrides the ApplyBrakes( ) method declared in Vehicle. Note that we are explicitly telling the C# compiler that we are indeed overriding a method previously specified in the inheritance chain. You must add the override modifier, or ApplyBrakes( ) will hide the one in the parent class. Otherwise, we are throwing the same exception as before:

class Car : Vehicle
{
   public override void ApplyBrakes( )
   {
      Console.WriteLine("Car trying to stop.");
      throw new Exception("Brake failure!");
   }
}

} // This brace ends the Lang namespace.

Finally, here's a class that encapsulates an entry point for the CLR to invoke.
If you look at this code carefully, you'll see that it maps directly to the code in both Managed C++ and VB.NET:

    class Drive
    {
        public static void Main( )
        {
            try
            {
                Lang.Vehicle v = null;   // Namespace qualifier
                v = new Lang.Car( );     // v refers to a car
                v.TurnLeft( );           // Interface usage
                v.ApplyBrakes( );        // Polymorphism in action
            }
            catch(Exception e)
            {
                Console.WriteLine(e.ToString( ));
            }
        }
    }

There are two other interesting things to note about C#. First, unlike C++ but similar to Java, C# doesn't use header files.[6]

[6] If you've never used C++, a header file is optional and usually contains class and type declarations. The implementation for these classes is usually stored in source files.

Second, the C# compiler generates XML documentation for you if you use XML comments in your code. To take advantage of this feature, start your XML comments with three slashes, as in the following examples:

    /// <summary>Vehicle Class</summary>
    /// <remarks>
    /// This class is an abstract class that must be
    /// overridden by derived classes.
    /// </remarks>
    abstract class Vehicle : ISteering
    {
        /// <summary>Add juice to the vehicle.</summary>
        /// <param name="gallons">
        /// Number of gallons added.
        /// </param>
        /// <return>Whether the tank is full.</return>
        public bool FillUp(int gallons)
        {
            return true;
        }
    }

These are simple examples using the predefined tags that the C# compiler understands. You can also use your own XML tags in XML comments, as long as your resulting XML is well formed. Given that you have a source code file with XML comments, you can automatically generate an XML-formatted reference document by using the C# compiler's /doc: option, as follows:

    csc /doc:doc.xml mylangdoc.cs

Although we didn't specify the types of our parameters in the XML comments shown previously, the C# compiler will detect the correct types and add the fully qualified types into the generated XML document.
For example, the following generated XML listing corresponds to the XML comments for the FillUp( ) method. Notice that the C# compiler added System.Int32 into the generated XML document:

    <member name="M:Lang.Vehicle.FillUp(System.Int32)">
        <summary>Add juice to the vehicle.</summary>
        <param name="gallons">
        Number of gallons added.
        </param>
        <return>Whether the tank is full.</return>
    </member>

Now that you have the generated XML document, you can write your own XSL document to translate the XML into any visual representation you prefer.

Shipped with .NET Framework 1.1 (and thus with Visual Studio .NET 2003), J# is a Java language that targets the CLR. For completeness, here's the same program in J#, demonstrating that J# also supports the same object-oriented features that we've been illustrating. We simply took the preceding C# program and made a few minor changes, resulting in the J# program that we are about to examine.

Let's first look at the namespace declaration. Instead of using the keyword namespace, Java uses the keyword package, which is conceptually equivalent to the namespace concept we've been observing, since the purpose of a package is to prevent name conflicts:

    package Lang;
    import System.Console;

The interface specification for ISteering in J# looks exactly equivalent to the one written in C#:

    interface ISteering
    {
        void TurnLeft( );
        void TurnRight( );
    }

For the Vehicle class, there are two changes, which are shown in bold. First, the keyword implements is used to declare that a class implements one or more interfaces.
Second, since Java requires thrown exceptions to be explicitly declared within the method signature, we've added this declaration in the ApplyBrakes( ) method:

    abstract class Vehicle implements ISteering
    {
        public void TurnLeft( )
        {
            Console.WriteLine("Vehicle turns left.");
        }
        public void TurnRight( )
        {
            Console.WriteLine("Vehicle turns right.");
        }
        public abstract void ApplyBrakes( ) throws Exception;
    }

There are also two changes for the Car class, which are shown in bold. The extends keyword is used to declare that a class derives from (or extends) another class. The declaration for ApplyBrakes( ) must match its parent's signature, so we've explicitly indicated that an exception may be thrown from this method, as shown in bold:

    // extends - used to derive from a base class.
    class Car extends Vehicle
    {
        public void ApplyBrakes( ) throws Exception
        {
            Console.WriteLine("Car trying to stop.");
            throw new Exception("Brake failure!");
        }
    }

Finally, we've made one minor change in the Drive class: we simply changed Main( ) to main( ), as required by J#:

    class Drive
    {
        public static void main( )
        {
            try
            {
                Lang.Vehicle v = null;   // Namespace qualifier
                v = new Lang.Car( );     // v refers to a car
                v.TurnLeft( );           // Interface usage
                v.ApplyBrakes( );        // Polymorphism in action
            }
            catch(Exception e)
            {
                Console.WriteLine(e.ToString( ));
            }
        }
    }

Like C#, J# supports all the object-oriented concepts we've been studying. Also, J# and C# are syntactically very similar.

Since all languages compile to IL, let's examine the IL code for the program that we've been studying. As explained in Chapter 2, IL is a set of stack-based instructions that supports an exhaustive list of popular object-oriented features, including the ones that we've already examined in this chapter. It is an intermediary step, gluing .NET applications to the CLR.

Let's start by looking at the namespace declaration. Notice the .namespace IL declaration allows us to create our Lang namespace.
Similar to C#, IL uses opening and closing braces:

    .namespace Lang
    {

Now for the ISteering interface. In IL, any type that is to be managed by the CLR must be declared using the .class IL declaration. Since the CLR must manage the references to an interface, you must use the .class IL declaration to specify an interface in IL, as shown in the following code listing:

    .class interface private abstract auto ansi ISteering
    {
        .method public hidebysig newslot virtual abstract
            instance void TurnLeft( ) cil managed
        {
        } // End of method ISteering::TurnLeft
        .method public hidebysig newslot virtual abstract
            instance void TurnRight( ) cil managed
        {
        } // End of method ISteering::TurnRight
    } // End of class ISteering

In addition, you must insert two special IL attributes:

interface
    Signals that the current type definition is an interface specification.

abstract
    Signals that there will be no method implementations in this definition and that the implementer of this interface must provide the method implementations for all methods defined in this interface.

Other attributes shown in this definition that aren't necessarily needed to specify an interface in IL include the following:

private
    Because we haven't provided the visibility of our interface definition in C#, the generated IL code shown here adds the private IL attribute to this interface definition. This means that this particular interface is visible only within the current assembly and no other external assembly can see it.

auto
    Tells the CLR to perform automatic layout of this type at runtime.

ansi
    Tells the CLR to use ANSI string buffers to marshal data across managed and unmanaged boundaries.

Now you know how to specify an interface in IL. Before we proceed further, let's briefly look at the attributes in the .method declarations, at least the attributes that we haven't examined, including:

newslot
    Tells the JIT compiler to reserve a new slot in the type's vtbl, which will be used by the CLR at runtime to resolve virtual-method invocations.
instance
    Tells the CLR that this method is an instance or object-level method, as opposed to a static or class-level method.

Having specified the ISteering interface in IL, let's implement it in our Vehicle class. As you can see in the following code fragment, there's no surprise. We extend the System.Object class (indicated by the extends keyword) and implement Lang.ISteering (as indicated by the implements keyword):

    .class private abstract auto ansi beforefieldinit Vehicle
        extends [mscorlib]System.Object
        implements Lang.ISteering
    {
        .method public hidebysig newslot final virtual
            instance void TurnLeft( ) cil managed
        {
            // IL code omitted for clarity
        } // End of method Vehicle::TurnLeft
        .method public hidebysig newslot final virtual
            instance void TurnRight( ) cil managed
        {
            // IL code omitted for clarity
        } // End of method Vehicle::TurnRight
        .method public hidebysig newslot virtual abstract
            instance void ApplyBrakes( ) cil managed
        {
        } // End of method Vehicle::ApplyBrakes
        // .ctor omitted for clarity
    } // End of class Vehicle

Notice also that this class is an abstract class and that the ApplyBrakes( ) method is an abstract method, similar to what we've seen in the previous examples. Another thing to note is the final IL attribute in the .method declarations for both TurnLeft( ) and TurnRight( ). This IL attribute specifies that these methods can no longer be overridden by subclasses of Vehicle. Having seen all these attributes, you should realize that everything in IL is explicitly declared so that all components of the CLR can take advantage of this information to manage your types at runtime.

Now let's look at the Car class that derives from the Vehicle class. You'll notice that in the ApplyBrakes( ) method implementation, the newobj instance IL instruction creates a new instance of the Exception class.
Next, the throw IL instruction immediately raises the exception object just created:

    .class private auto ansi beforefieldinit Car
        extends Lang.Vehicle
    {
        .method public hidebysig virtual
            instance void ApplyBrakes( ) cil managed
        {
            // IL code omitted for clarity
            newobj instance void
                [mscorlib]System.Exception::.ctor(class System.String)
            throw
        } // End of method Car::ApplyBrakes
        // .ctor omitted for clarity
    } // End of class Car
    } // End of namespace Lang

Finally, let's look at our Main( ) function, which is part of the Drive class. We've removed most of the IL code, which you've already learned, from this function to make the following code easier to read, but we've kept the important elements that must be examined. First, the .locals directive identifies all the local variables for the Main( ) function. Second, you can see that IL also supports exception handling through the .try instruction. In both the .try and catch blocks, notice that there is a leave.s instruction that forces execution to jump to the IL instruction on line IL_0024, thus leaving both the .try and catch blocks:

    .class private auto ansi beforefieldinit Drive
        extends [mscorlib]System.Object
    {
        .method public hidebysig static void Main( ) cil managed
        {
            .entrypoint
            // Code size 37 (0x25)
            .maxstack 1
            .locals (class Lang.Vehicle V_0,
                     class [mscorlib]System.Exception V_1)
            .try
            {
                // IL code omitted for clarity
                leave.s IL_0024
            } // End .try
            catch [mscorlib]System.Exception
            {
                // IL code omitted for clarity
                leave.s IL_0024
            } // End handler
            IL_0024: ret
        } // End of method Drive::Main
        // .ctor omitted for clarity
    } // End of class Drive

As you can see, all the major concepts that we've examined apply intrinsically to IL. Since you've seen Managed C++, VB.NET, C#, J#, and IL code that support these features, we won't attempt to further convince you that all these features work in other languages that target the CLR.
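The J# listing above is very close to standard Java. As a sketch (not taken from the book), here is the same Vehicle/Car example in plain Java, with the method names in Java's usual camelCase; the drive() helper is our own addition so the polymorphic dispatch and the caught exception message are easy to check:

```java
interface ISteering {
    void turnLeft();
    void turnRight();
}

abstract class Vehicle implements ISteering {
    public void turnLeft()  { System.out.println("Vehicle turns left."); }
    public void turnRight() { System.out.println("Vehicle turns right."); }
    // Subclasses must supply the implementation, as in the J# version.
    public abstract void applyBrakes() throws Exception;
}

class Car extends Vehicle {
    @Override
    public void applyBrakes() throws Exception {
        System.out.println("Car trying to stop.");
        throw new Exception("Brake failure!");
    }
}

class Drive {
    // Returns the caught message so the behavior can be verified.
    static String drive() {
        Vehicle v = new Car();      // v refers to a Car
        v.turnLeft();               // interface usage
        try {
            v.applyBrakes();        // polymorphism in action
            return "no failure";
        } catch (Exception e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(drive());
    }
}
```

Running main prints the two messages followed by the caught "Brake failure!" string, mirroring the behavior of every other version of this program in the chapter.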
http://etutorials.org/Programming/.NET+Framework+Essentials/Chapter+3.+.NET+Programming/3.2+Core+Features+and+Languages/
I am running a bash script (test.sh) and it loads in environment variables (from env.sh). That works fine, but I am trying to see if Python can just load in the variables already set in the bash script. Yes, I know it would probably be easier to just pass in the specific variables I need as arguments, but I was curious if it was possible to get the bash variables.

test.sh

    #!/bin/bash
    source env.sh
    echo $test1
    python pythontest.py

env.sh

    #!/bin/bash
    test1="hello"

pythontest.py

    # ?
    print test1   # (that is what I want)

You need to export the variables in bash, or they will be local to bash:

    export test1

Then, in python

    import os
    print os.environ["test1"]

There's another way using subprocess that does not depend on setting the environment. With a little more code, though. For a shell script that looks as follows:

    #!/bin/sh
    myvar="here is my variable in the shell script"
    function print_myvar() {
        echo $myvar
    }

You can retrieve the value of the variable or even call a function in the shell script, as in the following Python code:

    import subprocess

    def get_var(varname):
        CMD = 'echo $(source myscript.sh; echo $%s)' % varname
        p = subprocess.Popen(CMD, stdout=subprocess.PIPE, shell=True,
                             executable="/bin/bash")
        return p.stdout.readlines()[0].strip()

    def call_func(funcname):
        CMD = 'echo $(source myscript.sh; echo $(%s))' % funcname
        p = subprocess.Popen(CMD, stdout=subprocess.PIPE, shell=True,
                             executable="/bin/bash")
        return p.stdout.readlines()[0].strip()

    print get_var('myvar')
    print call_func('print_myvar')

Note that shell=True must be set so that the shell command in CMD is processed as is, and executable="/bin/bash" must be set to use process substitution, which is not supported by the default /bin/sh.

Assuming the environment variables that get set are permanent (which I think they are not), you can use os.environ:

    os.environ["something"]

If you are trying to source a file, the thing you are missing is set -a.
For example, to source env.sh, you would run: set -a; source env.sh; set +a. The reason you need this is that bash's variables are local to bash unless they are exported. The -a option instructs bash to export all new variables (until you turn it off with +a). By using set -a before source, all of the variables imported by source will also be exported, and thus available to python.

Credit: this command comes from a comment posted by @chepner as a comment on this answer.

Two additional options, which may or may not help in your specific situation:

Option 1: if env.sh only contains regular NAME=value lines, get python to read it (as a text file) instead. This only applies if you control the format of env.sh, and env.sh doesn't contain any real shell commands, and you control the containing shell script.

Option 2: In the shell script, once all the necessary variables are set, either save these to a file, or pipe them as stdin to your python script:

    #!/bin/bash
    source env.sh
    echo $test1
    set | python pythontest.py

or

    #!/bin/bash
    source env.sh
    echo $test1
    set > /tmp/$$_env
    python pythontest.py --environment=/tmp/$$_env

You can then read the file (or stdin), and parse it into e.g. a dictionary.
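Option 1 can be sketched in a few lines of Python 3. This is an illustrative parser (the function name and the strict NAME=value format are our assumptions, not part of the answers above); it skips blank lines and comments and strips simple quoting:

```python
def parse_env_file(path):
    """Read NAME=value lines from a simple env file into a dict."""
    variables = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments (incl. the shebang), and non-assignments
            if not line or line.startswith("#") or "=" not in line:
                continue
            name, _, value = line.partition("=")
            variables[name.strip()] = value.strip().strip('"').strip("'")
    return variables
```

For the env.sh above, parse_env_file("env.sh") would return {"test1": "hello"}. This only works while env.sh stays a plain list of assignments; any real shell logic breaks the assumption.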
https://techstalking.com/programming/python/read-bash-variables-into-a-python-script/
Hello Readers!! I hope you're having a good time learning about machine learning. Welcome to part 2 of the ML algorithm series, where I'll be discussing one of the most famous and conceptual algorithms, known as the Decision Tree Algorithm.

Decision Tree is a fairly basic term. Imagine a tree which has a lot of branches, just like the one in the figure. You are sitting on the top and wish to come down. The only catch is that if you reach a branch having no ends, you have to jump down from there. As you progress down, you decide which branch to choose at each level with the help of some deciding factors that you decide upon. This is the fundamental idea of a decision tree. Every branch has a subsequent YES branch and a NO branch, and you have to choose which one to jump onto.

Let's call all the branches NODES and the branches that have no other sub-branches LEAF NODES. LEAF nodes represent labels, i.e. the final result derived from our traversal of the tree. A decision tree works on how you place the nodes in the tree, which node is given a higher priority, and which nodes satisfy the conditions required. There are 2 popular methods of deciding the priority of a node, named Information Gain and the Gini index, which will be discussed in a special article with all the mathematical expressions for the same.

Here's an example for buying a car. Here we select a few features like Road Tested, Mileage, etc. to determine whether the car is good for buying or not, and assign each of them a priority. This is the basic logic behind a Decision Tree.

This algorithm is very useful as it is the basic foundation of all our decision making and also helps in generating new algorithms, like the Random Forest Classifier, by following the same fundamental rules. Now, let us try to implement this algorithm on a given dataset. Please note that implementation does not require the mathematical aspect of this algorithm. Let's import all the necessary modules in our iPython Notebook file!
    import numpy as np
    import pandas as pd
    from sklearn.cross_validation import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import NearestNeighbors
    from sklearn.metrics import accuracy_score
    from sklearn import tree

(Note: in newer versions of scikit-learn, train_test_split lives in sklearn.model_selection rather than sklearn.cross_validation.)

Now, we import our dataset into the notebook. The dataset we are using is the Balance Scale dataset, which is available for download (just download the balance-scale.data file from the Data Folder and rename it to DecisionTree.data.txt).

NOTE: Open DecisionTree.data.txt in a text editor and add the line "Label,A,B,C,D" (without the quotes) at the beginning of the file. This will serve as the header for the different columns!

Each example is classified as having the balance scale tip to the right, tip to the left, or be balanced. The attributes are the left weight (A), the left distance (B), the right weight (C), and the right distance (D).

    balance_data = pd.read_csv('DecisionTree.data.txt')
    balance_data.head()

After importing, we separate out the features and the labels of our data. We store the features in a numpy array (balance_data.values gives us the data in numpy array form) and name it X, and the labels are stored in another numpy array labelled y.

    X = balance_data.values[:, 1:5]  # This is numpy array slicing!
    y = balance_data.values[:, 0]

We use the method of cross validation under the sklearn module to split our data into two parts, i.e. the training part on which our model will be trained and the testing part where our model will be checked for accuracy.

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

We construct a classifier that uses the Decision Tree algorithm, which is already provided to us in the sklearn module. The 'fit' method trains the classifier with the training data and 'score' calculates the accuracy of our model. We use the criterion 'entropy' to use the information gain method.
Gini index is the default method, and you can skip the criterion argument inside the brackets if you want to use the Gini index.

    clf = DecisionTreeClassifier(criterion='entropy')
    clf.fit(X_train, y_train)

    DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=None,
                max_features=None, max_leaf_nodes=None, min_impurity_split=1e-07,
                min_samples_leaf=1, min_samples_split=2,
                min_weight_fraction_leaf=0.0, presort=False, random_state=None,
                splitter='best')

We construct a new example that contains data of a balance scale that is likely to tip towards "Right" and use the 'predict' method to analyse and find the result.

    example = np.array([[1, 4, 3, 2]])
    prediction = clf.predict(example)
    print(prediction)

    ['R']

Happy Learning 🙂
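The Information Gain and Gini index criteria mentioned above both score how "mixed" the labels at a node are. As a rough sketch (these helper functions are our own illustration, not part of the original post), here are the two impurity measures in plain Python; a perfectly pure node scores 0 under both:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels (information gain criterion)."""
    total = len(labels)
    probs = [count / total for count in Counter(labels).values()]
    return -sum(p * log2(p) for p in probs)

def gini(labels):
    """Gini impurity of a list of class labels (the sklearn default criterion)."""
    total = len(labels)
    probs = [count / total for count in Counter(labels).values()]
    return 1 - sum(p * p for p in probs)
```

For a 50/50 split such as ['R', 'R', 'L', 'L'], entropy is 1.0 and Gini impurity is 0.5; a pure node such as ['R', 'R'] scores 0 under both. A decision tree chooses the split whose child nodes reduce one of these measures the most.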
http://www.datascribble.com/blog/machine-learning/first-step-decision-trees/
When doing a memory calculation for indexes, are both primary and secondary records used in the calculation on a given node, or just primary records?

Thanks, Douglas

There is a section for calculating the size of the primary index and another section for calculating the size of secondary indexes. You would need to calculate both.

My question is: are secondary records indexed in the primary and secondary indexes? So for parameter R in the secondary index calculation (R - number of records/objects indexed in the secondary index), is it all the records on the node, or is it just the primary records on the node? From looking at the stats for the namespace, it looks like the primary index takes up space for both primary and secondary records, whereas the secondary indexes seem to only contain primary records.

The primary index is an index on all records in a namespace. Records indexed by a secondary index will be a subset of the primary index.

I'm now good on the behaviour of primary indexes. For secondary indexes: if you have a Set in a namespace that has a replication factor of 2, meaning roughly half the records will be considered primary for a given DB instance and the other half will be secondary records, and all records in that set have a value for an indexed bin, will both the primary and secondary records be indexed on that DB instance, or will only the records the node considers primary be indexed? Thanks.

The reason why I asked that was that I am always seeing a value that is significantly below the average calculation for a secondary string index (when I do stat namespace in aql).
https://discuss.aerospike.com/t/capacity-planning-indexes/1221
The wmemmove() function is defined in <cwchar> header file. wmemmove() prototype wchar_t* wmemmove( wchar_t* dest, const wchar_t* src, size_t count ); The wmemmove() function takes three arguments: dest, src and count. When the wmemmove() function is called, it copies count wide characters from the memory location pointed to by src to the memory location pointed to by dest. Copying is performed even if the src and dest pointer overlaps. This is because an intermediate buffer is created where the data are first copied to from src and then finally copied to dest. If count is equal to zero, this function does nothing. wmemmove() Parameters - dest: Pointer to the wide character array where the contents are copied to - src: Pointer to the wide character array from where the contents are copied. - count: Number of wide characters to copy from src to dest. wmemmove() Return value - The wmemmove() function returns dest. Example: How wmemmove() function works? #include <cwchar> #include <clocale> #include <iostream> using namespace std; int main() { setlocale(LC_ALL, "en_US.utf8"); wchar_t src[] = L"\u03b1\u03b2\u03b3\u03b8\u03bb\u03c9\u03c0"; wchar_t *dest = &src[2];// dest and src overlaps int count = 5; wmemmove(dest, src, count); wcout << L"After copying" << endl; for(int i=0; i<count; i++) putwchar(dest[i]); return 0; } When you run the program, the output will be: After copying αβγθλ
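To see why the overlap guarantee matters, here is a small additional sketch (an ASCII variant of the example above; the helper name overlap_demo is our own, not part of the library). The destination starts two characters into the source, yet the five copied characters arrive intact because wmemmove() behaves as if it copied through an intermediate buffer:

```cpp
#include <cwchar>
#include <string>

// Copy five wide characters into a region that overlaps the source.
// wmemmove() is safe here; wmemcpy() on the same arguments would be
// undefined behavior precisely because the regions overlap.
std::wstring overlap_demo() {
    wchar_t buf[] = L"1234567";
    std::wmemmove(buf + 2, buf, 5);  // buf becomes L"1212345"
    return std::wstring(buf, 7);
}
```

After the call, buf[2..6] holds the original buf[0..4] ("12345"), so the array reads "1212345"; a naive forward byte-by-byte copy would have clobbered the source before reading it.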
https://cdn.programiz.com/cpp-programming/library-function/cwchar/wmemmove
Before you start This tutorial demonstrates the creation of Memory Markup Language, or MemoryML, which is designed to be viewed by a fictional browser embedded in a device such as a video camera, video tape player, or DVD player. The modularization of XHTML allows you to choose which XHTML modules to support in an application. You can supplement those modules, and thus create new markup languages that fit seamlessly with XHTML. You should have a thorough understanding of XML and at least a basic understanding of XHTML and how it is used. You should also understand XML validation and be familiar with Document Type Definitions (DTDs) and namespaces. The Resources section provides links to tutorials that can help you get up to speed in any of these areas. You do not need any programming skills to understand this tutorial. This tutorial demonstrates the building of XHTML modules. To actually build these modules, you need only a text editor. To take it one step further and test your new modules, you need a validating parser, such as the Java APIs for XML Processing (JAXP) or Xerces. The Resources section lists several validating parsers.
http://www.ibm.com/developerworks/web/tutorials/wa-modular/
Logic dictates that we take a minute to familiarize ourselves with the terms and classes that implement this technology. This book excerpt is a primer for the terminology and main classes associated with disconnected data.

Latest .NET / C# Articles - Page 311

Introduction to LBXML Operator, A C# API-Based Tool for XML Insertion, Modification, Searching, and Removal
Using this API, you can touch or return any particular value between tags in an XML file after specifying conditions using C# rich data structures.

Sending E-Mail with System.Web.Mail
Discover how to send e-mail from within your .NET applications using the System.Web.Mail namespace.

C# FAQ 1.4 - How Do I Work with Namespaces?
Learn how to apply namespaces in your C# programs.

C# FAQ 1.5 - What is an Assembly?

Looking at Windows, Performance Counters, and More
Discover how to view system information, existing performance monitors, and more, all from your .NET applications.
http://www.codeguru.com/csharp/1860/
2.1: The "main" concern....

- Page ID - 29014

Basics of C++

C++ is a general-purpose programming language and is widely used nowadays for competitive programming. It has imperative, object-oriented and generic programming features. C++ runs on lots of platforms like Windows, Linux, Unix, Mac, etc. C++ is also an object-oriented programming language – as the name suggests, it uses objects in programming. All of these pieces will be discussed in this course.

Learning C++ programming can be simplified into a few steps:

- Writing your program in a text-editor and saving it with the correct extension (.CPP, .C, .CP)
- Compiling your program using a compiler or online IDE
- Understanding the basic terminologies

The "Hello World" program has become the traditional first program in many programming courses. All the code does is display the message "Hello World" on the screen. So, to help us begin our journey, here is the code:

    // Simple C++ program to display "Hello World"

    // Header file for input output functions
    #include <iostream>
    using namespace std;

    // main function -
    // where the execution of program begins
    int main()
    {
        // prints hello world
        cout << "Hello World" << endl;
        return 0;
    }

Let's take a look at this code and learn a bit about the terminology we will be using in this course:

- // Simple C++ program to display "Hello World": This line is a comment line. A comment is used to display additional information about the program. A comment does not contain any programming logic. When a comment is encountered by a compiler, the compiler simply skips that line of code. Any line beginning with // is a C++ comment, or multi-line comments can be placed between /* and */.
- #include: In C++, all lines that start with the pound (#) sign are called directives and are processed by the preprocessor, which is a program invoked by the compiler. We have discussed these concepts in the previous chapter.
- int main(): This line declares the main function, where execution of the program begins; this function returns integer data. The opening and closing braces that follow it delimit the function: everything between these two comprises the body of the main function.
- Braces enclose a code block; we will see more of these as we go along.
- cout << "Hello World" << endl;: This line tells the compiler to display the message "Hello World" on the screen. This line is called a statement in C++; specifically, this is an output statement. Every statement is meant to perform some task. A semi-colon ';' is used to end a statement; the semi-colon character at the end of a statement indicates that the statement is ending there. std::cout is used to identify the standard cout function; we will talk a bit more about this shortly. Everything followed by the character "<<" is displayed to the output device. Notice there are multiple << in the statement. Users can build complex output statements in this manner.
- return 0;: This statement returns the value 0 from main(), which conventionally indicates that the program ended successfully.

It is key that you get the fact that EVERY C++ program needs a main() function. Without main() your code won't even compile. Main should always be declared as returning an int, but it is allowable to return a different type, such as void.

Adapted from: "Writing first C++ program : Hello World example" by Harsh Agarwal, Geeks for Geeks is licensed under CC BY-SA 4.0
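Since std::cout came up above: the same message can be built without `using namespace std;` by qualifying each standard-library name explicitly. A small sketch (the build_greeting helper is our own addition, used so the message can be checked without capturing console output; an ostringstream accepts the same chained << calls that std::cout does):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Build the greeting with fully qualified names instead of
// "using namespace std;".
std::string build_greeting() {
    std::ostringstream out;
    out << "Hello" << ' ' << "World";  // chained << calls, as in the example
    return out.str();
}
```

A main function would then simply write it out with std::cout << build_greeting() << std::endl; and return 0, exactly as in the program above.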
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/02%3A_C_Basics/2.01%3A_The_%22main%22_concern....
I'm new to coding and not sure why the list is being typed as a tuple.

You've used the same name, list, for both a built-in type and a local variable. Don't re-use the built-in names. Quoting PEP-8:

If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. Thus class_ is better than clss. (Perhaps better is to avoid such clashes by using a synonym.)

Try:

    def funct2(list_):
        if type(list_) == list:
            ...

Or, better:

    def funct2(list_):
        if isinstance(list_, list):
            ...
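A minimal sketch of the isinstance version (the argument name items is our own choice of synonym, as PEP-8 suggests): because the built-in name list is no longer shadowed, the type check behaves as expected for lists, tuples, and anything else:

```python
def funct2(items):
    # "list" here still refers to the built-in type, because no
    # local variable named "list" shadows it inside this function.
    return isinstance(items, list)
```

funct2([1, 2, 3]) returns True while funct2((1, 2, 3)) returns False, which is exactly the distinction the original code lost by naming its parameter list.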
https://codedump.io/share/8HWLMrt3o9Sp/1/why-does-python-give-me-a-typeerror-for-this-function-new-to-coding
objp 1.3.0

Python<-->Objective-C bridge with a code generation approach

ObjP's goal is to create a two-way bridge between Python and Objective-C. Unlike PyObjC, which uses dynamic calls to methods at runtime, ObjP generates static code. It generates either Objective-C interfaces to Python code or Python modules to interface Objective-C code. The library is exceedingly simple and it's intended that way. Unlike PyObjC, there's no way ObjP could possibly wrap the whole Cocoa framework; there are way too many things to support. ObjP is made to allow you to bridge your own code. Also note that ObjP works on Python 3.2 and up.

The best way to learn how to use ObjP is, I think, to look at an example. There are many of them in the 'demos' subfolder. These are built using waf (it's already included in here, no need to install it). For example, if you want to build the simple demo, do:

    $ cd demos/simple
    $ ./waf configure build
    $ cd build
    $ ./HelloWorld

That program calls a simple Python script from Objective-C, and that Python script itself calls an Objective-C class.

Usage

There are two types of bridge: an Objective-C class wrapping a Python class (o2p) and a Python class wrapping an Objective-C class (p2o). To generate an o2p wrapper, you need a target class. Moreover, for this class' methods to be wrapped, you need to have its arguments and return value correctly annotated (you can browse the demos for good examples of how to do it). This is an example of a correctly annotated class:

    class Foo:
        def hello_(self, name: str) -> str:
            return "Hello {}".format(name)

To wrap this class, you'll use objp.o2p.generate_objc_code() in this fashion:

    import foo
    import objp.o2p
    objp.o2p.generate_objc_code(foo.Foo, 'destfolder')

This will generate "Foo.h|m" as well as "ObjP.h|m" in "destfolder". These source files directly use the Python API and have no other dependencies.
To generate a p2o wrapper, you either need an Objective-C header file containing an interface or protocol, or a Python class describing that interface:

    @interface Foo: NSObject {}
    - (NSString *)hello:(NSString *)name;
    @end

To generate a python wrapper from this, you can do:

    import objp.p2o
    objp.p2o.generate_python_proxy_code(['Foo.h'], 'destfolder/Foo.m')

This will generate the code for a Python extension module wrapping Foo. The name of the extension module is determined by the name of the destination source file. You can wrap more than one class in the same unit:

    objp.p2o.generate_python_proxy_code(['Foo.h', 'Bar.h'], 'destfolder/mywrappers.m')

Method name conversion

ObjP follows PyObjC's convention for converting method names. The ":" character being illegal in Python method names, they're replaced by underscores. Thus, a method

    - (BOOL)foo:(NSInteger)arg1 bar:(NSString *)arg2;

is converted to

    def foo_bar_(self, arg1: int, arg2: str) -> bool:

and vice versa. Note that if your method's argument count doesn't correspond to the number of underscores in your method name, objp will issue a warning and ignore the method.
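The underscore convention described above is easy to express in plain Python. The helpers below are our own illustration of the naming rule, not functions that objp actually exports:

```python
def selector_to_python(selector):
    """Objective-C selector -> Python method name ("foo:bar:" -> "foo_bar_")."""
    return selector.replace(":", "_")

def python_to_selector(name):
    """Python method name -> Objective-C selector ("foo_bar_" -> "foo:bar:")."""
    return name.replace("_", ":")

def argument_count(selector):
    # Each ":" in a selector marks one argument slot, which is why objp
    # warns when the underscore count and the argument count disagree.
    return selector.count(":")
```

For example, selector_to_python("foo:bar:") gives "foo_bar_", and argument_count("foo:bar:") is 2, matching the two arguments in the method signature above.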
The structure arguments allow you to transform tuples to native Objective-C structures and vice versa. Python has no "native" structure for points, sizes and rects, which is why we convert to/from tuples ((x, y), (w, h) and (x, y, w, h)). Like pyref, the ns* argument types have to be imported from objp.util.

Utilities

objp.util contains the pyref and ns* argument types, but it also contains two useful method decorators: dontwrap and objcname. A method decorated with dontwrap will be ignored by the code generator, and a method decorated with @objcname('some:selector:') will use this name in the generated Objective-C code instead of the automatically generated name.

Constant conversion

When having code in two different languages, we sometimes have to share constants between the two. To avoid having to manually maintain an Objective-C counterpart to your Python constants, objp offers a small utility, objp.const.generate_objc_code(module, dest). This takes all elements in module's namespace and converts them to an Objective-C constants unit at dest. int, float and str values are converted to #define <name> <str(value)> (with @"" around str values).

You can also have enum classes in your Python constants module. A class is considered an enum if it has integer members. For example:

    class Foo:
        Bar = 1
        Baz = 2

will be converted to:

    typedef enum {
        FooBar=1,
        FooBaz=2
    } Foo;

Because this function will choke on any value that it can't convert, it is recommended that you use it on modules specifically written for that purpose, for example a cocoa_const.py that imports from your real const unit. Since constants in Objective-C often have prefixes, you can also add them in that unit. It could look like this:

    from real_const import FOO as XZFOO, BAR as XZBAR, MyEnum as XZMyEnum

Changes

Version 1.3.0 -- 2012/09/27

- Added support for Python constant module conversion to Objective-C code.
Version 1.2.1 -- 2012/05/28

- Renamed the proxy's target member from py to _py to avoid a name clash with local variables when an argument is named py.

Version 1.2.0 -- 2012/02/01

- Added support for NSPoint, NSSize and NSRect structures.
- In ObjP_str_o2p(), when the string is nil, return Py_None instead of crashing.

Version 1.1.0 -- 2012/01/23

- Allow null items (with [NSNull null]) in p2o collection conversions.
- Added support for floats.
- p2o conversions returning NSObject subclasses can now convert None to nil.

Version 1.0.0 -- 2012/01/16

- Added support for protocols.
- Added support for __init__ method wrapping.
- Added bool and pyref argument types.
- Added support for creating a p2o instance that wraps a pre-existing objc instance.
- Added exception checking.
- Added GIL locking.
- Added inheritance support.
- Added multiple class wrapping in the same p2o module.

Version 0.1.1 -- 2012/01/08

- Fixed setup which was broken.
- Fixed o2p which was broken.

Version 0.1.0 -- 2012/01/05

- Initial Release

Author: Hardcoded Software
License: BSD License
https://pypi.python.org/pypi/objp/1.3.0
Using mono

This program, when run from the command line, opens a window on my display. I start my server with the shell command xsp4.

Code:
    //compilation:
    //  mcs hello.cs -pkg:dotnet
    //run at command line:
    //  mono hello.exe
    using System;
    using System.Windows.Forms;

    public class HelloWorld : Form
    {
        static public void Main()
        {
            Application.Run(new HelloWorld());
        }

        public HelloWorld()
        {
            Text = "Hello Mono World";
        }
    }

I open my browser and connect to the server. And finally the question: what is a somefile.aspx that uses this program?
http://forums.devshed.com/asp-programming-51/complete-web-example-969676.html
Contents

- Documentation
- Configuration
- The tree view
- Filtering publishes
- Help, no actions are showing up!
- Managing actions
- Reference
- open_publish()
- Installation, Updates and Development
- Configuration Options
- Release Notes History

The Shotgun Loader lets you quickly overview and browse the files that you have published to Shotgun. A searchable tree view navigation system makes it easy to quickly get to the task, shot or asset that you are looking for, and once there the loader shows a thumbnail based overview of all the publishes for that item. Through configurable hooks you can then easily reference or import a publish into your current scene.

Overview Video

The following video gives a quick overview of features and functionality.

Documentation

This document describes functionality only available if you have taken control over a Toolkit configuration. Please refer to the Shotgun Integrations User Guide for details.

Configuration

The loader is highly configurable and you can set it up in many different ways. There are two main configuration areas:

- Setting up what tabs and what content to display in the left hand side tree view.
- Controlling which actions to display for different publishes and controlling what the actions actually do.

The following sections give a high level overview of how you can configure the loader.

The tree view

The tree view is highly configurable and you can control the content of the various tabs using standard Shotgun filter syntax. Each tab consists of a single Shotgun API query which is grouped into a hierarchy. You can add arbitrary filters to control which items are being shown, and you can use the special keywords {context.entity}, {context.project}, {context.project.id}, {context.step}, {context.task} and {context.user} to scope a query based on the current context.
Each of these keywords will be replaced with the relevant context information: either None, if that part of the context is not populated, or a standard Shotgun link dictionary containing the keys id, type and name.

By default, the loader will show assets and shots belonging to the current project. By reconfiguring, this could easily be extended to, for example, show items from other projects (or a specific asset library project). You could also use filters to only show items with a certain approval status, or group items by status or by other Shotgun fields.

Below are some sample configuration settings illustrating how you could set up your tree view tabs:

    # An asset library tab which shows assets from a specific
    # Shotgun project
    caption: Asset Library
    entity_type: Asset
    hierarchy: [sg_asset_type, code]
    filters:
    - [project, is, {type: Project, id: 123}]

    # Approved shots from the current project
    caption: Shots
    hierarchy: [project, sg_sequence, code]
    entity_type: Shot
    filters:
    - [project, is, '{context.project}']
    - [sg_status_list, is, fin]

    # All assets for which the current user has tasks assigned
    caption: Assets
    entity_type: Task
    hierarchy: [entity.Asset.sg_asset_type, entity, content]
    filters:
    - [entity, is_not, null]
    - [entity, type_is, Asset]
    - [task_assignees, is, '{context.user}']
    - [project, is, '{context.project}']

Filtering publishes

It is possible to apply a Shotgun filter to the publish query that the loader carries out when it loads publish data from Shotgun. This is controlled via the publish_filters parameter and can be used, for example, to hide publishes that have not been approved or whose associated review version has not been approved.

Help, no actions are showing up!

The loader comes with a number of different actions for each engine. For example, in the case of Nuke, there are two actions: "import script" and "create read node".
Actions are defined in hooks, meaning that you can modify their behaviour or add additional actions if you want to. Then, in the configuration for the loader, you can bind these actions to the publish types you use. Binding an action to a publish type means that the action will appear on the actions menu for all items of that type inside the loader. As an example, by default, the mappings for Nuke are set up like this:

    action_mappings:
      Nuke Script: [script_import]
      Rendered Image: [read_node]

If you are finding that no action menus are showing up, it may be because you have chosen different names for the publish types that you are using. In that case, go into the config and add those types in order to have them show up inside the loader.

Managing actions

For each application that the loader supports, there is an actions hook which implements the actions that are supported for that application. For example, with something like Maya, the default hook implements the reference, import and texture_node actions, each carrying out specific Maya commands to bring content into the current Maya scene. As with all hooks, it is perfectly possible to override and change these, and it is also possible to create a hook that derives from the built-in hook, making it easy to add additional actions without having to duplicate lots of code.

Once you have defined a list of actions in your actions hook, you can then bind these actions to Publish File types. For example, if you have a Publish File type in your pipeline named "Maya Scene", you can bind it in the configuration to the reference and import actions that are defined in the hook. By doing this, Toolkit will add a reference and an import action to each Maya Scene publish that is being shown.
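The lookup described above, from a publish's type to the actions shown on its menu, can be sketched in plain Python. This is an illustrative sketch, not loader code; the dict layout mirrors the standard publish fields but is an assumption for the example:

```python
# Illustrative sketch (not loader code): how an action_mappings setting
# resolves which actions to show for a given publish, based on its
# published file type.

ACTION_MAPPINGS = {
    "Nuke Script": ["script_import"],
    "Rendered Image": ["read_node"],
}

def actions_for_publish(sg_publish_data, mappings):
    publish_type = sg_publish_data["published_file_type"]["name"]
    # Publish types missing from the mapping yield no menu entries,
    # which is the "no actions are showing up" symptom described above.
    return mappings.get(publish_type, [])

print(actions_for_publish(
    {"published_file_type": {"name": "Nuke Script"}}, ACTION_MAPPINGS))  # ['script_import']
print(actions_for_publish(
    {"published_file_type": {"name": "Maya Scene"}}, ACTION_MAPPINGS))   # []
```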
Separating the Publish Types from the actual hook like this makes it easier to reconfigure the loader for use with a different publish type setup than the one that comes with the default configuration.

The loader uses Toolkit's second generation hooks interface, allowing for greater flexibility. This hook format uses an improved syntax. You can see this in the default configuration settings that are installed for the loader, looking something like this:

    actions_hook: '{self}/tk-maya_actions.py'

The {self} keyword tells Toolkit to look in the app's hooks folder for the hook. If you are overriding this hook with your own implementation, change the value to {config}/loader/my_hook.py. This tells Toolkit to use a hook called hooks/loader/my_hook.py in your configuration folder.

Another second generation hooks feature that the loader uses is that hooks no longer need to have an execute() method. Instead, a hook is more like a normal class and can contain a collection of methods that make sense to group together. In the case of the loader, your actions hook needs to implement the following two methods:

    def generate_actions(self, sg_publish_data, actions, ui_area)
    def execute_multiple_actions(self, actions)

For more information, please see the hook files that come with the app. The hooks also take advantage of inheritance, meaning that you don't need to override everything in the hook, but can more easily extend or augment the default hook in various ways, making hooks easier to manage.

Note that in versions previous to v1.12.0, the application invoked the execute_action hook to execute an action. Newer versions invoke the execute_multiple_actions hook. In order to provide backward compatibility with existing hooks, the default execute_multiple_actions hook invokes execute_action for each action provided.
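The backward-compatibility dispatch just described can be sketched without any sgtk dependency. This is an illustrative sketch; the class name and dict keys follow the method signatures above but are assumptions for the example:

```python
# Illustrative sketch (no sgtk dependency): how a multi-action hook can
# stay backward compatible by dispatching each action to the older
# single-action method.

class ActionsHookSketch:
    def __init__(self):
        self.log = []

    def execute_action(self, name, params, sg_publish_data):
        # Older, per-publish entry point.
        self.log.append((name, sg_publish_data["id"]))

    def execute_multiple_actions(self, actions):
        # Newer entry point: one call for a whole multi-selection,
        # forwarded action by action to the legacy method.
        for action in actions:
            self.execute_action(
                action["name"], action["params"], action["sg_publish_data"]
            )

hook = ActionsHookSketch()
hook.execute_multiple_actions([
    {"name": "reference", "params": None, "sg_publish_data": {"id": 101}},
    {"name": "import", "params": None, "sg_publish_data": {"id": 102}},
])
print(hook.log)  # [('reference', 101), ('import', 102)]
```

A real hook that only defines execute_action keeps working as long as it inherits the builtin execute_multiple_actions, which is why the inheritance requirement above matters.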
If the application is reporting that the execute_multiple_actions hook is not defined after upgrading to v1.12.0 or later, make sure that the actions_hook setting in your environment correctly inherits from the builtin hook {self}/{engine_name}_actions.py. To learn more about how you can derive custom hooks from the builtin ones, see our Toolkit reference documentation.

By using inheritance in your hook, it is possible to add additional actions to the default hooks like this:

    import sgtk
    import os

    # toolkit will automatically resolve the base class for you
    # this means that you will derive from the default hook that comes with the app
    HookBaseClass = sgtk.get_hook_baseclass()

    class MyActions(HookBaseClass):

        def generate_actions(self, sg_publish_data, actions, ui_area):
            """
            Returns a list of action instances for a particular publish.
            This method is called each time a user clicks a publish somewhere in
            the UI. The data returned from this hook will be used to populate the
            actions menu for a publish.

            The mapping between Publish types and actions are kept in a different
            place (in the configuration) so at the point when this hook is called,
            the loader app has already established *which* actions are appropriate
            for this object.

            The hook should return at least one action for each item passed in via
            the actions parameter. This method needs to return detailed data for
            those actions, in the form of a list of dictionaries, each with name,
            params, caption and description keys. Because you are operating on a
            particular publish, you may tailor the output (caption, tooltip etc)
            to contain custom information suitable for this publish.

            The ui_area parameter is a string and indicates where the publish is
            to be shown.
            - If it will be shown in the main browsing area, "main" is passed.
            - If it will be shown in the details area, "details" is passed.
            - If it will be shown in the history area, "history" is passed.

            Please note that it is perfectly possible to create more than one
            action "instance" for an action! You can for example do scene
            introspection - if the action passed in is "character_attachment" you
            may for example scan the scene, figure out all the nodes where this
            object can be attached and return a list of action instances:
            "attach to left hand", "attach to right hand" etc. In this case, when
            more than one object is returned for an action, use the params key to
            pass additional data into the run_action hook.

            :param sg_publish_data: Shotgun data dictionary with all the standard
                publish fields.
            :param actions: List of action strings which have been defined in the
                app configuration.
            :param ui_area: String denoting the UI Area (see above).
            :returns: List of dictionaries, each with keys name, params, caption
                and description
            """
            # get the actions from the base class first
            action_instances = super(MyActions, self).generate_actions(sg_publish_data, actions, ui_area)

            if "my_new_action" in actions:
                action_instances.append({
                    "name": "my_new_action",
                    "params": None,
                    "caption": "My New Action",
                    "description": "My New Action."
                })

            return action_instances

        def execute_action(self, name, params, sg_publish_data):
            """
            Execute a given action. The data sent to this method will represent
            one of the actions enumerated by the generate_actions method.

            :param name: Action name string representing one of the items returned
                by generate_actions.
            :param params: Params data, as specified by generate_actions.
            :param sg_publish_data: Shotgun data dictionary with all the standard
                publish fields.
            :returns: No return value expected.
            """
            # resolve local path to publish via central method
            path = self.get_publish_path(sg_publish_data)

            if name == "my_new_action":
                # do some stuff here!
                pass
            else:
                # call base class implementation
                super(MyActions, self).execute_action(name, params, sg_publish_data)

We could then bind this new action to a set of publish types in the configuration:

    action_mappings:
      Maya Scene: [import, reference, my_new_action]
      Maya Rig: [reference, my_new_action]
      Rendered Image: [texture_node]

By deriving from the hook as shown above, the custom hook code only needs to contain the added business logic, which makes it easier to maintain and update.

Reference

The following methods are available on the app instance.

open_publish()

Presents an 'Open File' style version of the Loader that allows the user to select a publish. The selected publish is then returned. The normal actions configured for the app are not permitted when run in this mode.

    app.open_publish( str title, str action, list publish_types )

Parameters and Return Value

- str title - The title to be displayed in the open publish dialog.
- str action - The name of the action to be used for the 'open' button.
- list publish_types - A list of publish types to use to filter the available list of publishes. If this is empty/None then all publishes will be shown.
- Returns: A list of Shotgun entity dictionaries that were selected by the user.

Example

    >>> engine = sgtk.platform.current_engine()
    >>> loader_app = engine.apps.get("tk-multi-loader2")
    >>> selected = loader_app.open_publish("Select Geometry Cache", "Select", ["Alembic Cache"])
    >>> print selected

Configuration Options

publish_filters

Type: list
Description: List of additional shotgun filters to apply to the publish listings. These will be applied before any other filtering takes place and would allow you to for example hide things with a certain status.

entity_mappings

Type: dict
Description: Associates entity types with actions. The actions are all defined inside the actions hook.
filter_publishes_hook

Type: hook
Default Value: {self}/filter_publishes.py
Description: Specify a hook that, if needed, can filter the raw list of publishes returned from Shotgun for the current location.

action_mappings

Type: dict
Description: Associates published file types with actions. The actions are all defined inside the actions hook.

menu_name

Type: str
Default Value: Load
Description: Name to appear on the Shotgun menu.

entities

Type: list
Default Value: [{u'caption': u'Project', u'publish_filters': [], u'type': u'Hierarchy', u'root': u'{context.project}'}, {u'publish_filters': [], u'entity_type': u'Task', u'hierarchy': [u'entity', u'content'], u'caption': u'My Tasks', u'filters': [[u'task_assignees', u'is', u'{context.user}'], [u'project', u'is', u'{context.project}']], u'type': u'Query'}]
Description: This setting defines the different tabs that will show up on the left hand side. Each tab represents a Shotgun query, grouped by some Shotgun fields to form a tree. This setting is a list of dictionaries; each dictionary in the list defines one tab. Dictionaries with their type key set to 'Hierarchy' should have the following keys: caption specifies the name of the tab; root specifies the path to the root of the project hierarchy to display. Dictionaries with their type key set to 'Query' should have the following keys: caption specifies the name of the tab; entity_type specifies the Shotgun entity type to display; filters is a list of standard API Shotgun filters; hierarchy is a list of Shotgun fields, defining the grouping of the tree. Optionally, you can specify a publish_filters key, containing Shotgun API filters to apply to the publishes listing as it is being loaded in the main view.

actions_hook

Type: hook
Default Value: {self}/{engine_name}_actions.py
Description: Hook which contains all methods for action management.

title_name

Type: str
Default Value: Loader
Description: Name to appear on the title of the UI Dialog.
download_thumbnails

Type: bool
Default Value: True
Description: Controls whether thumbnails should be downloaded from Shotgun or not. We strongly recommend that thumbnails are downloaded since this greatly enhances the user experience of the loader, however in some situations this may be difficult due to bandwidth or infrastructural restrictions.

Release Notes

v1.19.4 2019-Nov-12
Internal changes impacting metrics properties.
Details: No change in user-facing functionality.

v1.19.3 2019-Jan-16
Fixes button styling issues evident in Houdini 17.
Details: Houdini 17's stylesheet forces a specific size for QToolButton widgets. This change sets a minimum size for those buttons in the loader app so that they maintain a usable size.

v1.19.2 2018-Aug-06
Added Mp4 to valid extensions.

v1.19.1 2018-Jun-18
Small change to the publish type model as part of a fix for the reload menu option.

v1.19.0 2018-May-18
Adds support for importing publishes as clips in Nuke Studio and Hiero.

v1.18.4 2018-April-17
Adds support for complex filters to be used when setting the tab filters.

v1.18.3 2018-Jan-30
Adds support for version driven browsing.
Details: This fixes a bug which previously prevented you from configuring the loader in a way where you would browse Versions in the left hand side tree and, when you clicked a version, see the publishes associated with that version.

v1.18.2 2018-Jan-22
Solved problems loading utf-8 files.

v1.18.1 2017-Dec-06
Updated metrics logged.

v1.18.0 2017-Oct-06
Implements image sequence frame range detection in Nuke when no template is available.
Details: In a zero config project there is no access to templates. When we're in that situation, we now fall back on parsing the file name without a template to try to determine the start/end frame of an image sequence.

v1.17.7.17.5 2017-Aug-17
Stops app init when there's no GUI. This resolves issues when PySide is not available.

v1.17.4 2017-Aug-08
Improvement to the tk-flame functionalities and bug fixes.
v1.17.3 2017-Jun-20
Improve error handling in tk-flame actions.

v1.17.2 2017-Jun-15
Add support for Flame.

v1.17.1 2017-Jun-12
Adds texture node loading in 3dsmax.

v1.17.0 2017-June-9
A banner is now displayed to give a visual indication that a custom action has been executed.

v1.16.3 2017-May-16
Now supports project item display in hierarchy.

v1.16.2 2017-May-16
Hierarchy support and new default actions for maya, nuke and houdini.
Details:
- Adds hierarchy support to loader, including hierarchy based searching
- Updates the defaults to use a hierarchy tab instead of the shot and asset tabs
- Added houdini file COP support
- Added maya image plane support
- Added nuke open project support

v1.15.1 2017-May-04
Adds support for entity level actions.
Details: Added entity actions. You can now define custom actions on entities (the folders showing up in the loader) just like you can define actions on publishes. This makes it possible to create workflows where you operate on a shot or an asset. Quick examples of workflows that can be realised by adding custom entity actions include:
- Ability to trigger a remote transfer for a shot.
- Ability to change the status of something.
- Ability to operate on all publishes inside a shot or asset.
- Ability to find a cut for a sequence and load it in.

v1.141.14.1 2017-Jan-31
Rolls back the use of a hierarchy model for the loader tree view.
Details: Issues discovered on the back end result in the hierarchy model being unable to access some data in Shotgun when the default "Artist" permissions set is used. Until that has been resolved, the hierarchy model won't be used in the loader app.

v1.14.0 2017-Jan-30
Adds support for tk-photoshopcc.

v1.13.5 2017-Jan-30
New setting option to define left hand side tabs driven by the Shotgun hierarchy model.

v1.13.4 2017-Jan-20
Do not display empty action menus in the publish view.

v1.13.3 2017-Jan-18
Bug fix to better handle optimized shotgun model.
Details: Fixes an issue where subfolders were not appearing in the middle view in the UI when you clicked a node in the tree that had not yet been expanded.

v1.13.2 2017-Jan-16
Updated to use open sans font.

v1.13.0 2016-Dec-19
Updated to use v5 of the shotgun utils framework with improved data loading performance.

v1.12.4 2016-Dec-08
'Show in the file system' available only when a folder widget has some paths.
Details: Add 'Show in the file system' to a folder Actions menu only when there are some paths associated with its Shotgun entity. This is done at runtime when the folder is selected.

v1.12.3 2016-Nov-28
Left hand side tree view is now ordered in a case insensitive fashion.

v1.12.2 2016-Nov-22
Adds multi-selection support to the loader, UI tweaks, and the left hand tree view is now sorted alphabetically.
Details: Multiselection: upon selecting multiple items, a user can now right-click on a selection to see the set of actions that can be executed on all publishes. This change is backwards compatible with the existing hooks. To learn more about how you can leverage the new execute_multiple_actions hook method, please visit our support page.

v1.11.3 2016-Oct-25
Tab UI bug fix. Added support for context.project.id token in configuration.

v1.11.2 2016-Mar-23
Fixes "No data retrievers registered" warnings.
Details: The background task manager that uses the CachedShotgunSchema was not registered, which generated the following warning if the schema cache did not exist: "Shotgun Warning: No data retrievers registered with this schema manager. Cannot load shotgun schema". The background task manager is now registered correctly.

v1.11.1 2016-Mar-9
Only allows Alembic imports in Max 2016+.
Details: Native Alembic support was added to 3ds Max in the 2016 release. As a result, we have to exclude pre-2016 versions of 3ds Max from Alembic support.

v1.11.0 2016-Mar-3
Adds Alembic import support for tk-3dsmaxplus.
v1.10.5 2016-Feb-19
Added user activity metrics for particular actions.

v1.10.4 2016-Feb-02
QA fix: removes Alembic load support.

v1.10.3 2016-Feb-01
QA fix for UNC paths on Windows.

v1.10.2 2016-Jan-25
Adds support for loading Alembic caches into Nuke.

v1.10.1 2016-Jan-22
Adds support for loading Alembic caches into Houdini.

v1.10.0 2016-Jan-15
Adds support for context changes.

v1.9.1 2015-Nov-26
Fixed a minor regression causing some updates in Nuke to not appear in the UI.

v1.9.0 2015-Nov-23
Updated to use v4.x.x of the Shotgun Utils Framework.

v1.8.0 2015-Nov-10
Upgraded to use new versions of the frameworks.

v1.7.4
This fixes an issue where a tile would be missing in the loader if the user who created the shot or asset had been deleted.

v1.7.3
The 'filter_publishes' hook is now also applied to the version history view.

v1.7.2
Allow access to the status (sg_status_list) field in a filter_publishes hook.

v1.7.1
Show task information for publishes in the list view.

v1.7.0
Improved thumbnail performance. NOTE: this change is coupled with v2.4.0 of the shotgun-utils framework and will not run correctly without it. When the app update asks you if you want to install v2.4.0 of the shotgun utils framework, please make sure to accept the update.
Details: This implements async thumb loading, which was added to the Shotgun Utils Framework v2.4.0. Thumbnails will be loaded in a worker thread rather than in the main thread. This improves performance, especially when thumbnails are located on a remote storage.

v1.6.2
Added an improved publish quick-filter UI.
Details:
- Adds a case insensitive quickfilter UI which lets a user quickly cull the list of publishes based on an expression.
- Tweaked the nuke publish hook so that it handles upper case file extensions.
- Fixed an issue with the selection clearing at exit, which was causing errors in Maya.

v1.5.2
Added additional file formats to the default nuke hook.
v1.5.1
Added psd to the list of supported file extensions in the default nuke hook.

v1.5.0
Added a list view mode for more compact browsing.
Details: This adds a new mode for the main publish view. More compact than the thumbnail mode, this allows for fast vertical browsing of larger lists of items.

v1.4.2
Fixed a bug causing user thumbnails sometimes to show up instead of the main thumbnail.

v1.4.1
Fix 3ds Max when merging objects.
Details: Fixes crashing issues relating to Qt dialogs and Max modal.

v1.3.2
Added support for the new 3ds Max Engine supporting MaxPlus and Max version 2015.

v1.3.1
Minor improvements for the Open Publish version of the Loader UI.
Details:
- Actions defined for the main loader are no longer available when just opening a publish.
- Double-click on a publish or publish version will now perform the default open action.

v1.3.0
Added Mari support.
Details:
- Mari hooks now support loading published geometry into Mari.
- Maya hooks add support for loading Mari UDIM textures into Maya 2015.
- Added an 'open_publish' method to the app that opens the loader in a modal 'file-open' mode. This allows an artist to select a publish and then click the 'Open' button, which will return the selected publish to the calling code. This is used by the new Mari New Project command.
- Updated to require core v0.14.66.

v1.2.1
Fixed a bug causing publishes with the same name and type but with different tasks to be obscured in the main view.

v1.2.0
Various minor improvements and bug fixes.
Details:
- Added right-click refresh buttons in views.
- History UI and main UI now pass consistent Shotgun data to the actions hook.
- Multi entity link field values now formatted better.
- Changed the display name from "Publish Loader" to just "Loader".
- Added houdini default hooks and merge support for hip files.
- Fixes a bug causing duplicates to show up in the type list. Previously, the type list showed a raw dump of the shotgun data.
This meant that if you had two types in this list both named "maya anim", you would have two "maya anim" entries showing up, which is confusing. This is common if you still use the tank types (or in some cases have converted from the tank types). This change collapses type entries with the same name down to a single entry.
- Unix timestamps are now converted before being passed to hooks. The unix timestamp handling mitigates various shortcomings in the Qt/PySide/SG API serialization capabilities and means that all timestamps in the model data are cached as unix timestamps (number of seconds since 1 Jan 1970 in the UTC time zone). This means that all data currently being returned by the SG model comes back with this time data format rather than the standard SG timestamp (there is no easy way to slap an interface around the return data). This fix converts these unix timestamps back to SG timestamps prior to passing Shotgun data to the action hooks, making it more familiar for TDs and developers.
- Fixed pixmap:scaled errors.
- Version history now constrains by task and project. When displaying the version history based on an existing publish, the associated fields to display are now determined based on more commonality than before; previously, it was only name, type and associated entity. With this change, project and task are also included. This means that you can have two publishes with the same type and same name for the same shot, but with different tasks associated, and the loader will treat their version histories as completely independent.
- Added support for per-profile publish filters. Adds a new parameter to the config such that you can specify a publish filter for each tab on the left hand side.

v1.1.0
Updated to use v2.x of the Shotgun Utils framework and v1.x of the QT Widgets framework. Added an option to disable thumbnail loading.

v1.0.7
Bug fixes and UI improvements.
Details:
- Added 3dsmax reference action (XRef Scene) and modified the existing import action to perform a merge.
- Fixed incorrect sort order in publish history.
- Show publish name and type when in "show subfolders" mode to match default display behavior.
- Fixed broken Photoshop "open file" action.

v1.0.6
UI polish and minor tweaks.
Details:
- Changed the copy in some of the hooks.
- Fixed a typo in a tooltip.
- Added a try/catch around closeEvent() in the main dialog to ensure graceful exit.

v1.0.5
Added Maya import action.

v1.0.4
Improved breadcrumb behaviour.

v1.0.3
Fixed a bug where the wrong publishes would be displayed in some cases.

v1.0.2
Tweaked app manifest.

v1.0.1
Improved icons and information screens.

v1.0.0
Candidate for official first release.

v1.4.0
Updated Photoshop action to add as a layer.
Details:
- It is now possible to load another PSD, PSB or image in the current Photoshop document.
- The loaded media is embedded in a new layer.
- This works in all supported versions of Photoshop.
https://support.shotgunsoftware.com/hc/en-us/articles/219033078
10 July 2008 12:45 [Source: ICIS news]

LONDON (ICIS news)--Dow Chemical will buy US-based specialty chemicals maker Rohm and Haas for $18.8bn in what it describes as a "game changing move", the company said on Thursday.

Dow will pay $78/share for the Philadelphia-based company and contribute its coatings, biocides and personal care operations to the deal to create a $13bn (€8.3bn) turnover business.

The transaction marks a decisive move in Dow's transformation into an earnings growth company with reduced cyclicality, the chemical giant said in a statement.

"After an extensive analysis of acquisition opportunities in the marketplace, it became clear that Rohm and Haas is the ideal company to accelerate Dow's transformation," Dow chief executive Andrew Liveris said.

"The addition of Rohm and Haas' portfolio is game-changing for Dow, enabling us to accelerate Dow's transformation," he added.

Dow has yet to finalise a joint venture with the Petrochemicals Industries Company (PIC) of Kuwait.

But with the collective impact of these two deals, performance products and advanced materials will account for 69% of Dow's total sales on a 2007 pro forma basis, compared with 51% before, Dow said.

Financing of the acquisition includes equity investment from Berkshire Hathaway and the Kuwait Investment Authority.

Dow shares were trading at €21 on Europe's Xetra exchange, down €1.07 or 4.85%.
c4d.utils.GetBBox() Returns Odd Results

On 22/03/2018 at 17:19, xxxxxxxx wrote:

According to the documentation, c4d.utils.GetBBox() should return the center and radius of the object hierarchy. However, I'm getting unusable results. To test:

1. Create a cube
2. Make it editable
3. Move the cube's axis to its base
4. Add an Expression Tag to the cube with this expression:

    import c4d

    def main():
        obj = op.GetObject()
        mp, rad = c4d.utils.GetBBox(obj, c4d.Matrix())
        print "(mp: %s, rad: %s)" % (mp, rad)

The output is:

    (mp: Vector(0, -100, 0), rad: Vector(100, 100, 100))

Just because the axis has moved doesn't mean the global center point should move. Expected:

    (mp: Vector(0, 0, 0), rad: Vector(100, 100, 100))

Am I misunderstanding how to use this method? If not, any workarounds for making it usable?

On 25/03/2018 at 12:35, xxxxxxxx wrote:

Hi Donovan, thanks for writing us. With reference to your question, in order to get the proper results I'd suggest:

    def main():
        obj = op.GetObject()
        mpTrf = c4d.Matrix()
        mpTrf.off = obj.GetMp()
        mp, rad = c4d.utils.GetBBox(obj, mpTrf)
        print "(mp: %s, rad: %s)" % (mp, rad)

By using this approach you're providing the method with a transformation matrix whose position is the correct one for the bounding box. Moreover, it should be noted that by passing a different transformation matrix representing a different coordinate system, you can obtain the bounding-box information in that coordinate system.

Hope it helps.
Best, Riccardo
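For readers without Cinema 4D at hand, the center/radius convention GetBBox returns can be reproduced in a few lines of plain Python. This is only a sketch of the convention over a list of points, not the actual c4d implementation: mp is the box center, rad holds the half-extents along each axis.

```python
# Sketch of the center/radius bounding-box convention that
# c4d.utils.GetBBox() returns: the center of the axis-aligned box,
# plus the half-extent (radius) along each axis. Pure Python.

def get_bbox(points):
    """Return (center, radius) of the axis-aligned box around points.

    points: iterable of (x, y, z) tuples in a common coordinate system.
    """
    xs, ys, zs = zip(*points)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    center = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    radius = tuple((h - l) / 2.0 for l, h in zip(lo, hi))
    return center, radius


# A 200-unit cube whose axis sits at its base, so its corners span
# y in [-200, 0] -- the same setup as the original question:
cube = [(x, y, z) for x in (-100, 100) for y in (-200, 0) for z in (-100, 100)]
mp, rad = get_bbox(cube)
print(mp)   # → (0.0, -100.0, 0.0)
print(rad)  # → (100.0, 100.0, 100.0)
```

The printed values match the forum output: the geometric center really is 100 units below the moved axis, which is why the question's result is correct even though it looks odd.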
How freelancewritinggigs.com Exploited Its Own Writers

One of the main caveats that all freelance writers should have engraved onto their computer screens is "Never Give Away Your Work." Any prospective job poster who asks for a free sample should be categorically denied. There are some pointers on how to handle these requests in my Hubs: How To Beat The Send Me A Sample Scam, and Avoiding Freelance Slave Labor. That is why I am completely flabbergasted by the "writing contest" being run by freelancewritinggigs.com (FWJ), previously one of the most reputable online freelance writing job sites and communities.

In this particular case I'm going to spare the usual biting sarcasm and corrosive vitriol I usually ladle onto online scams. That is why her latest project simply floored me. After establishing a reputation as one of the champions crusading against online scams, Deborah decided one fine day to launch something called FWJ Idol. The idea sounded great, at least at first. A whole bunch of bloggers would audition, American Idol style, and the community would vote on the winning blogger, who would then become a paid regular contributor to the site.

However, as everyone knows, the devil is in the details. The way Deborah chose to structure this Idol contest opened her site up to widespread catcalls. Just like American Idol, the bloggers would audition, make a cut, do another audition, make another cut, and so on and so on and so on. That is the part that totally floored me and many others. After years of educating and informing her far-flung writing community on how to avoid online writing scams, Deborah (I like to believe "unwittingly", but I do have to doubt how skewed the thought processes were that allowed her to come to these outrageous conclusions) managed to set up one of the biggest sucker traps I've seen in my years of cruising the freelance writing market.
The 22 initial contestants keep writing and writing and writing all sorts of very high quality content which is posted to the site, and keep getting eliminated, with the survivors getting to write and write and write some more! Of the 22 contestants, 21 will have participated in this marathon of blogging, which has now run for over two agonizing and painful months, for absolutely nothing. Thousands and thousands of quality, unique words for no pay and not even any credit, as the sucker... er... contestants are only identified by numbers!

In the meantime this FWJ Idol has turned into nothing more than a free content mega-stuffer for the site. Deborah has been lambasted for preying on the very members of her own community who have supported her for so long, and her "solution" has been that she will delete the blogs of the losing participants when the winner is selected. This "solution" is so disingenuous that it boggles the imagination. Those pages will not only live on in Google caches long after they're gone, but the net benefit in pageviews and pagerank from all of this unique content will persist indefinitely: all at the expense of the bloggers who wrote great content for free. Let's not even get into the fact that the voting process was set up in such a blatantly amateurish manner that the dedicated attempts to "game" the polls have succeeded in distorting the process to the point that whoever wins will always be under suspicion.

Deborah, Deborah, Deborah... in the famous words of Jay Leno when he interviewed Hugh Grant shortly after his Hollywood arrest for picking up a cheap prostitute in his car: "What were you thinking?" The only ways out of this sticky wicket that I can come up with would be to either retroactively pay a reasonable per-word rate to all participants (which would then allow her to keep the content online) or to just cancel the whole mess right now and apologize profusely to everyone for her ghastly lapse in judgment.
Whatever the answer, Deborah Ng needs to go into damage control mode now, as this misbegotten and profoundly flawed FWJ Idol is a fubar blunder of historic proportions.

I was just looking at that site for use in an existing hub and as a possible venue for my own stuff. Thanks for the heads-up. (...sigh...)

As a faithful reader of this superlative site for years, when I first realized what was going on with FWJ Idol you could have knocked me over with a feather. :(

What a shame, I had no idea that was going on over there. Deb is a personal friend of mine. I am not sure what is going on but I will continue to believe that she is completely upfront and ethical. She has, as you say, helped many, many bloggers and writers and I am sure will continue to do so.

I do not know her personally, but as I stated in the article I have been following her site for years and I have always listed it among the best and most respectable online freelance writer resources. I don't know what has caused this sudden and appalling lapse, but I invite Ms. Ng to address these issues directly and give her side of the story.

This is a shame. I can't say Deb is a personal friend, but we have interacted in the past and I have always found her to be honest and up front. I would hazard a guess that like many of us, she is having trouble turning a successful blog into an income stream, and this may have clouded her judgement. She probably never even considered that she was doing what she has been railing against all this time. Good hub.

I really wish you had contacted me prior to writing this piece for a more fair and balanced blog post. This piece was written by someone just looking for a link back for some traffic. I'm now dumber for having read this post. Moving right along. I've been a long time reader and commenter at Deb's site and I can say that Deb's a fair person.
The writers who participate know what they're doing and they know how the content is being used. I don't see this as exploitation of any kind and I'm sorry to see someone does.

You're right, the devil is in the details. Too bad you didn't pay attention to the details or you wouldn't have gotten some of the facts so blatantly wrong. There are not 21 writers writing for FWJ - there were over 20 *applicants*. Of the remaining writers, as they are eliminated, they can have their material removed. No one is forcing them to keep their material online. Sad that you chose to trash such a popular site. Sounds like you've got some sour grapes.

This piece seems to have just been posted for one reason: to draw in traffic. Pathetic. You call yourself a writer and experienced freelancer when you can't even get simple facts straight? I'm sure that's going great for you. Do a little research. It works wonders. For those who've read this post, you should really check out the site for yourself. My only mistake was opening this link from FWJ.

This is a very unfair judgment on a person who has done none of the things you've said. If you think that is what's happening at FWJ, then you have clearly misunderstood the spirit of the whole site. Shame on you. If you have problems with Deb, you should have talked to her first instead of launching an attack on her via this hub. This was a very unprofessional move on your part that will definitely affect your future.

Various points:
- Er... why is no one addressing the issue of the contestants not getting paid?
- 22 initial contestants were on the first FWJ Idol Poll. There are 4 now. The 13 participated in the full-blown contest, but all 22 wrote SOMETHING for free. Do you want a link?
- If you took time to familiarize yourself with my Hub writing you'd realize that whatever "extra traffic" this Hub will pull in should barely equal a few pennies. I don't play the SEO game with Hubs.
- It also seems that some of the contestants themselves may have "misunderstood the spirit of the whole site," judging by the ones who have pulled out for various reasons.
- What sour grapes could I have against a freelance writer job site? As a freelance writing job resource FWJ is one of the best around. It's this FWJ Idol thing that is highly suspect, so let's not start obfuscating what is a very clear focus. And BTW, I have had nothing to do with FWJ Idol as a contestant, etc.
- Are you stating that FWJ Idol is not a contest where contestants write multiple original unpaid content? And exactly how is whistleblowing on an unpaid contest going to "definitely affect my future"? Is jtabergas going to personally appeal to every one of the freelance writing job providers on Earth and inform them not to deal with me because I wrote this Hub and have a vendetta against Deborah Ng, as well as bad breath and an advanced case of body odor? In the famous words of Beavis as The Great Cornholio: "are you threatening me?" :)
- I am surprised that Deborah did not address any issues other than "I should have talked to her." I don't do backroom deals. When something is placed for public consumption it has to be judged on its merits. I very clearly specified "I invite Ms. Ng to address these issues directly and give her side of the story," and that invitation is still open. I will be happy to publish, unedited and in full, any such rebuttal. I trust my readers are looking forward to such clarifications.

As Mark Knowles pointed out, Deborah "probably never even considered that she was doing what she has been railing against all this time." I explained in my article that "I have a great deal of respect for this site, its principals and what they have accomplished," and I do believe that once this FWJ Idol debacle is behind her, Deborah will continue to provide a superlative service to freelance writers everywhere, including me!
:) As one of the original contestants in Deb's competition (eliminated from the first round), I can say that you are wrong to call this a scam. Each contestant knew the rules of the competition and had to weigh the pros and cons of getting involved. If we (the contestants) don't have a problem with the way the competition was organized, then I fail to see why it would be a problem for anyone else. Hal, Perhaps the reason no one is mentioning payment is because the contestants did not make an agreement to receive payment for participating in the competition. The rules and terms of the contest were clear before we started. Second, if you are only receiving pennies for each blog post you make, does that mean you are being scammed or exploited? Of course not, because you already know in advance what to expect and are free to stop posting if you don't find it beneficial. anon, I'm very happy with the way I get compensated for my Hubs directly from HubPages. Although it adds up to a fraction of the income I receive from my other mainstream work, I'm quite content to keep Hubbing because I enjoy providing a service to my loyal readership. I do not engage in black or gray hat SEO ploys to sneak traffic onto my Hubs for the sake of a couple of AdSense dimes. My Hub clearly outlines why I do what I do. I write Hubs that rely on their content to be successful, not byzantine linking strategies. The excuse that contestants were aware of the terms holds water only until you realize that there are hundreds if not thousands of "writing contests" online and all of them are well-populated with naive writers who are giving their work away for the remote chance of "winning." Sure, if that's what a writer wants to do, they're fully within their rights. However, most of these contests are outright scams where no one wins and if a prize is actually awarded it goes to the contest operator's cousin. 
I'm emphatically not stating that IMHO FWJ Idol is rigged, but it can be seen to be structured to take advantage of the naivete of hungry freelancers. After all, by my count, the writer who becomes the runner-up will have contributed over 6,000 words of their content for free. Even at a bare minimum 2 cents per word, they could have made $120 selling that content legitimately. I believe that if most writers who are about to engage in the various "contests" calculated their returns in this manner there would be a whole lot fewer contestants! :)

You evidently do not see the contests being run by the big magazine companies. Are they also scams? I am sure that having your name associated with such an incredibly poor hub page will find clients knocking down your door. You should be ashamed for demeaning a contest that can only benefit writers that might not have other means for potential clients to see their style and dedication to a community. Deb Ng's Freelance Writing Jobs has helped hundreds if not thousands of people find work. The time that goes into running this site and finding good paying jobs for us poor folk to apply for by Deb and Jodee takes time and dedication. Anyone that believes in your horrendous misconception of the contest should check it out and see all the valuable information that writers are contributing. It is a great way for people to show their work. Big magazines do this all the time.

As one of the contestants, I have mixed feelings about your post. You label us as suckers, and yet you seem to have no problem exploiting all the 'free content' your readers provide you on a daily basis by filling up your comments section. (If you've ever posted comments on a blog or written/added to threads on a forum, then you've written 'free content.') I also do not agree with you that one should "never" write for free. Instead, writers should be very judicious about doing so, using the qualifying question of, "How will this advance my career?"
There are many intangible benefits to consider, such as breaking into a new market or gaining recognition on a national level. It's not always about the money, Hal. I have enjoyed participating in the contest and do not feel exploited in the least. I see very little difference between this and the extremely common practice of guest blogging, with the added difference that one of us 'guest bloggers' will walk away with a blogging job on one of the most respected and relevant sites to freelance writers. It's hard to tell whether you really take umbrage with this contest, or are just posturing and using it in the vein of one of the historically best marketing tactics -- create controversy and then fan the flames. As the cops say to looky-loos who block traffic to watch a burning building: "Move along folks -- there's nothing to see here."

"see all the the valuable information that writers are contributing"... yeah. For free! You may have neglected to point out that little fact. :) IMHO, and I DO reserve the right to express "my humble opinion" on my Hubs, the "big magazine" contests are valid ONLY if the contributions are published in a large-circulation targeted print periodical. The FULLY CREDITED work which will be read by hundreds of thousands of readers can be well worth the effort even for the non-winners as great resume-stuffers and massive public exposure. That does not correlate with writing a dozen free articles when you're identified only by a number or a silly nickname. I'm also not aware of a "big magazine" contest that makes its writers submit up to a dozen free articles. If Deborah had structured her contest by having each contestant write one article and having her readership vote on its merits, that would have been one thing. This Idol structure is exploitative, and that's my stand. As for my Hubs receiving financial benefits from comments, not only is that ludicrous, but at least I don't wave the carrot of a job in front of readers for the best comment!
:) And #6, can you tell me how your international exposure as #6 is going to benefit your career? Will you now market your writing under the pen name of #6? :) Various Points: "Er... why is no one addressing the issue of the contestants not getting paid?" --Because they are contestants in a contest for a prize. The prize is the payment. That's how most contests work. Otherwise it's called a job. If I submit a short story to a contest, I don't expect to be paid unless I win that contest. Many of the contests that I enter are judged by popular vote--everyone sees all of the entries. Could that be considered giving my work away free? Sure. Do I see it that way? No. "I don't do backroom deals." --There is a vast difference between getting the facts and "backroom deals." The former is responsible journalism and the latter is a defense against shoddy journalism. Inviting a response in the comments is, frankly, a ridiculous practice. "The devil is in the details," as both you and a commenter stated. Perhaps if Deb Ng had been contacted for her side of the story, rather than being "invited" in the comments section, this article's details would have been much more accurate. The main fault of this article is that out of everyone who reads, comments, and participated in FWJ Idol, only one person could see that this was a scam contest. Everyone else was apparently too dumb to notice that these schmucks were being scammed. A quick read-through of FWJ's comments, though, doesn't support this assumption. Any time a contestant stepped out of bounds or a flaw in the contest was discovered, commenters let everyone know. All of our names will be re-revealed at the end, and I now have another writing credit: Blogger on FWJ (that I will not add until the contest ends.) The brass ring: If I win I will get a byline for an international, well-respected blog. 
I also think this has been a terrific experience overall and I've learned a great deal, including this lesson: The more successful you are, the more people want to tear you down. This defense of your favorite writing site is touching, but it's all a bit lacking in logic and common sense. #6: IF you win. Yeah, IF I won the lottery ($17 million today) I'd be on my own tropical island chasing the grass skirted hula girls with a lawnmower. It's always an IF, isn't it? Writing for $ is not, however. You write. You get paid. I'm sure that you can understand the benefits. Shawn Norris: "If you submit a short story" to a contest you are submitting a short story to a contest. Simple. You have full rights to submitting that story to another market once you lose. FWJ participants can't do anything with those articles now as they will never pass Copyscape. Furthermore, you submit ONE story. Not a dozen. There are NO inaccuracies in the Hub. Prove it or go play somewhere else. What part of "I will be happy to publish, unedited and in full, any such rebuttal" do you not understand? Not in the comments, but in a standalone Hub. And guess what? I'll even pay Deborah Ng for her writing at the same fair basic rate of 2 cents per word that she denies her contestants. I can see the broader point you are trying to make, and I have to say that I would not have participated in a competition of this type if I did not have full confidence in Deb and the purpose of the FWJ blog. For me, it was never about the $100/month that would be offered to the winning blogger. It was about taking a break from what I have been working on and trying something new and challenging. You are correct that there are many scams out there, and it is important for the new writer in particular to avoid being taken advantage of. 
Perhaps what distinguished this competition from others of its type is that all of the participants were already established writers who genuinely wanted to spend time adding something of value to Deb's blog. I did not get the sense that the participants were naive in any way.

"- If you took time to familiarize yourself with my Hub Writing you'd realize that whatever "extra traffic" this Hub will pull in should barely equal a few pennies. I don't play the SEO game with Hubs." I'm confused. You're saying that these writers are being scammed for writing "for free" . . . does this mean that you're getting scammed by HubPages.com for contributing "quality content for your readers" for little/no pay? How is this different? She hasn't led them to believe they'd be doing anything other than competing in a competition for a paid writing gig. So, in a sense, these writers have volunteered their writing as guest posts. I'm sure you've seen guest posts before, right? These guest posts contain quality content without the hope for a paying gig eventually . . . does this mean that every single blog out there is scamming people because they request and encourage guest posting? Back to my original point: it seems you're doing the same thing here on Hubpages as you're trying to protect these writers from doing. Unless, of course, you're not being completely honest about the pennies thing in the quote above.

Ummm, Hal. Not for nothing, but if you were such a loyal reader of Deb's blog you'd know she responded some time ago on a post at FWJ. I wouldn't reprint it without permission though. That would be stealing.

"I'll even pay Deborah Ng for her writing at the same fair basic rate of 2 cents per word that she denies her contestants." 1. I have never heard of being paid for a contest entry - and Deb has never "denied" us payment since that was not the deal. 2. Not to open a can of worms, but is 2 cents a word a fair price for writing?
Most of us who write professionally start at rates 10-20 times that amount and strive for $1 per word.

anon, thank you for agreeing with my basic premise, and of course, you are more than free to engage in any endeavor you want. I had full confidence in FWJ prior to Idol and I am sure that once this whole mess is behind Deborah Ng, I will have full confidence in it again. However, if anything like this is done again, I'll continue to live up to the expectations of my readership and I'm going to blow the whistle again... and again! :) You'll also find that if you average out all of Jodee's Job Postings (at least the legit ones and the ones that don't get flagged at Craigslist) 2 cents is the median wage. Go to GetAFreelancer.com and see how Third World writers struggle to get .3 or .2 cents per word. My ghostwriting is at around 20 cents per word, and my tech writing starts at 50 cents per word. However, I often write for 2 or 3 cents per word for general content. It's worth my while as I type 90wpm and it helps to pay the rent. If you can get $1 a word for general content, then what can I say... I tip my hat! :)

Mike Witt, so are you telling me that I can't inform my readership on my platform of what is going on elsewhere? Did I miss the CNN story on how you were elected as the official arbiter of the First Amendment? I knew that there was a Hayes Code once, but I didn't realize there was a (Nit)-Witt Code! :) (Oh, don't get your nose all out of joint, it was just a pun!)

A scam is defined as "a fraudulent or deceptive act or operation". How can this contest be a "scam" if Deb was completely upfront about all the details of the contest and all the contestants knew exactly what they were getting themselves into (e.g. writing free content, as you have pointed out numerous times)?

anon, I don't appreciate people who insinuate that I wrote something when I did not. Can you please point out on this page where I have called FWJ Idol a SCAM?
Anyway, I'm off to my weekend. Please continue to post your comments and they will all be approved and published upon my return (all except the ones calling me a "jerkoff tard loser with a tiny dick.") :)

You completely missed my point when you responded with the following: "I never claimed to be a top hubber, nor did I mention anything about SEO." I was commenting on your claim to only be bringing in pennies as mentioned in your comment which (along with mine) is now gone. Ironic. Why did you delete them? Is it easier to try to insult me by saying I'm not a top hubber and that I don't understand hubpages than to leave the comments intact and agree that something valid and true was pointed out? Interesting.

Yes, I'm glad I checked my email one last time before heading for the lake. The disappearing comments were an unexpected glitch from when I changed the comments to be moderated (to keep the "jerkoff tard loser" comments off this page). They should all be appearing now. So, Jenn, the bottom line is simple: I only work for a fair and equitable wage and I will CONTINUE to fight to ensure that my fellow writers do as well. FWJ has always done that as well, and as I have stated ad nauseam, when this Idol catastrophe is over, I trust that they will return to their original mission that they have done so well for years. NOT RIPPING OFF WRITERS FOR 6,000+ FREE WORDS! As for my traffic, my Hubs got over a quarter of a million legit page views, and just check Best Hubs to see the rankings of my articles. So anyone who thinks I wrote this to get a couple of dozen hits off FWJ is clueless. Why did I write it? Because Deborah Ng crossed the line and set up a structure that exploited writers. And I will continue to write Hubs like this as long as my fingers can work my keyboard. My readers have the right to be made aware of what I perceive and believe, as they have made me one of the top writers on HubPages. Now, I'm outta here! C ya on the other side of the weekend!
:)

Anyone interested in Deb's response from early this morning can see it here since Hal isn't posting it:

OK, I have been following her blog as well, and frankly, it's better than this one...no offense. The contest is a cool concept. Go figure. Oh wait...this isn't a hit piece, it's only link bait...lol...I get the joke now. Wait, you aren't joking? Surely...ok, maybe you are a dork. I'll have to scour your blog and make that determination, wherein I may just do a story on my blog...*evil grin*

This is the worst case of grandiose thoughts and sweeping assumptions that I've ever seen. What's more, it preys on someone rather innocent who decided to open a voluntary, non-abusive contest for anyone who wanted a shot. You make some pretty big statements without looking at all sides of the coin, and I'm frankly disgusted. As a writer who frequently offers guest posts as a marketing strategy (and it's worked very well for me and for others, thank you very much), I'm rather angry that you feel you should choose what is right and wrong for individuals. Who are you to say what writers should do? Had I been in the Idol contest, is it not MY choice to write for free if I think it's the right thing to do? Deb Ng is someone who has earned my strong respect. She treats people well and makes very considerate choices. She also lets people walk all over her because she tries so hard to maintain top values and show good, desirable qualities. You have not done the same. Like a ravaging wolf, you've chosen to slaughter in bloodshed. What disgusting behavior. Do you need traffic and hits SO much that you would do this to a fine person and their site? FreelanceWritingGigs.com is a wonderful blog, and the ones that post to that blog are wonderful people. The very idea of you sinking THIS low is just sick! Sure, moderate the comments... you will still see this. You claim 30 years of experience... yet this one piece clearly shows that you lack writing and ethical experience.
And IF you had the experience you claim, you certainly would not be writing Hubs nor would these Hubs be your only claim to fame. Yet you have the nerve to slander another who IS experienced and does very well with her work. You... are a shame.

I'm finding it very interesting that you mention you are sure you will be following FWJ after this Idol thing is over and you are confident that it will be useful . . . you just finished turning a ton of people away from the site. People who will never ever look at it again because of this hub. I have been reading Deb's blog for a long time and have found it quite useful. The contest is definitely different, but if you took the time to actually pay attention to it, you would see that she hasn't just offered to remove the content . . . she has offered to add bylines and bios. I don't know about you, but having my byline on a site as big as hers is definitely an advantage. To the best of my knowledge, most of the contestants have opted for this option. Also, it's a contest. People enter contests without expecting to be paid unless they win. It's not a job asking for free samples, it's a contest. You've totally inspired me to sign up to become a hubber and earn money for my non-combo McDonald's meals. ;)

Do you really have nothing better to do with your time? I've learned so much from Deb's freelance writing site over the past year, and it kills me that someone could be so hurtful. Although you may not openly call FWJ or Deborah a scammer, you do imply it with one of the first lines in your post: "In this particular case I'm going to spare the usual biting sarcasm and corrosive vitriol I usually ladle onto online scams."

I'm new to the site, but found the game interesting. If the contestants had only been required to post one article, it would have been difficult to get a feel for their knowledge and experience. Multiple posts give the readers a feel for their range.
Since they are competing for a prized spot on the site (one that is known for helping and guiding freelance writers), it is important to know more.

I will admit that I can almost understand what the point of the above post is all about. The number one rule in freelancing is to never give away your work for free. But all rules are made to be broken. It has to be up to each writer to determine if the free work will be of enough benefit (at some point) to make it worthwhile. There is no way for me to decide this for anyone else. There are people that would say accepting $.02 per word is being taken advantage of, and I would agree. Unless the circumstances for that particular job made it worthwhile. In the end, competitions are always a risk, and pay or no pay, byline or ghostwriting, it will all have to come down to each individual and what is wanted or needed from the experience.

Hal, I think you're completely overlooking the fact that these contestants agreed to do this, and I'm sure are enjoying themselves pretty heartily. How could they be exploited? How could this be unexpected on their part? Additionally, it's a HUGE boon for the writers, because as soon as the contest ends they will be credited if they choose to keep their content up. Personally, this site has made the difference between earning me scraps on places like Hubpages and earning serious money for real jobs. I think that you are taking a cheap shot at a popular page to try and get your adsense revenue up--which, you know what, do what you have to do. But for anyone reading, this is bar none the best site out there to find freelance jobs. Don't let him tell you otherwise. xAC

I am a freelance writer; I am currently the sole income provider for my family of 8. My income pays our mortgage, groceries, medical insurance, car payments, you name it. I use Deb's site regularly to pick up smaller gigs.
I don't think the #1 rule is "never give it away" because giving it away is often a great way to drive people to your site. The #1 rule should probably be "check your facts" and it doesn't look like you did that. I don't think you read Deb's rules thoroughly, because she's not insisting she keep the material on her site forever. She's not exploiting the writers, that's really a bit overboard. The things they write for this contest can be moved to their personal site or used elsewhere. What kind of contest pays their contestants? In some literary circles, writers pay to enter the contest. Congratulations on your success with hubpages. If you don't want to participate, you don't have to. Being ugly might get you hits to your site, but it doesn't make anyone respect you.

desperate for traffic much? The use of proxies was a nice touch. If only the person hiding behind so many anonymous names didn't use the same unique phrases.

You know I'm a big fan of yours but when I first read this I thought that you were being a bit harsh. So I went onto that site and read through the archives for the Idol. You are absolutely right on everything you said. Don't let any comments from the fans of that site dissuade you. There have been plenty of people on that site who have been complaining about it for weeks. Stand your ground as you really saw through it!

I am not a frequent commenter on FWJ but have followed the blog and have great respect for Deb and the service she offers. I have followed the contest and you have some of the details incorrect. It is a contest, hence unpaid. I do not see it as a scam when people willingly agree to participate in a contest. All of the bloggers are not writing content each week as there have been regular eliminations. The contest was meant to be a fun way to allow the community to choose a new blogger.
It would have been nice to have had the issue raised with FWJ and received their feedback in advance of the article, particularly since it's a site that you respect. This post has disparaged them and as noted some commenters will now write off FWJ as a scam site. That doesn't seem quite fair to me.

It is a shame that you are unable to look at the opportunity Deb is presenting. To take a community that helps writers by finding them jobs and badmouth it in order to get traffic to your site is a real shame. You, sir, are not a "writer" but instead someone who uses inflammatory words and lies in order to create discussions. You might feel good about the amount of hits your page has gotten over this, but realize that a large majority of those who have visited this page think YOU are the real scam and because of that, you come across as a real joke.

Ah... A lovely weekend spent at the lake and now back to a fun week of facing death threats and rabid accusations... I LOVE HUBBING! :)

Karl: If you had a clue as to copyright law, you would understand that no one can legally copy and paste content from one site to another without specific permission of the original content owner. I have invited Ms. Ng to write a full rebuttal, and have even offered her fair market compensation per word. She hasn't replied other than to state that I should have talked to her first. The offer is still open, but her reluctance to reply is highly suspect.

Blondie Writes: Where in the 3rd paragraph? Huh? Can you count to three? And kindly do not denigrate HubPages in my presence. How dare you state that "IF you had the experience you claim, you certainly would not be writing Hubs." You are displaying your ignorance and rudeness towards a reputable site that is one of the leading ones of its type and draws 7 million unique visitors a month. Go do us all a favor and go play with your building blocks on the freeway.

Wess Stewart: Thank you for your informed opinion that both my blog and hers suck.
As long as no laws are violated, you're free to write whatever you want about me, Ms. Ng, or Donald Duck on your blog. Just like I'm free to write whatever I want on mine.

James Chartrand: Who am I to say what writers should do? Er... Who are you to tell me what I should write and what I shouldn't? Note that on top of your browser window there's an address bar. You are free to type in any other URL and go enjoy its content. You could argue that over on I should have minded my own business as if total suckers want to spend $47 for "Doc" Cohen / GoneBig's total SCAM secret code for huge AdSense revenue I shouldn't get involved. Instead I shamed the author by offering him $470 in cash if the code worked in a completely unbiased scientific test. His failure to agree to the challenge is just as suspect as Ms. Ng's failure to post a PAID rebuttal here. Where is the slaughter and bloodshed? (A bit dramatic, don't you think, Jamie boy... tsk tsk tsk... poor writing...) I called it as I saw it and I await a rebuttal. I am absolutely adamant that the facts in the Hub are correct, and am still awaiting Ms. Ng to prove otherwise.

I happen to have something that seems to be rather rare online: It's called a conscience. I also have a responsibility to my readership to provide my perspective. That's made me one of the top bloggers on HubPages, and I'm extremely proud of that achievement.

Genesis: I am very happy that you so overestimate my POWER. You place me at the level of CNN or maybe even the White House. As much as that is flattering, I have repeatedly shown my respect and admiration for FWJ's content in the Hub and the comments. Any reasonable reader will understand that it was FWJ Idol which was a "misbegotten and profoundly flawed" contest.

Texas writer: Oh... I'm sooooooo sorry that I hurt mommy. I'll bring her flowers and everything will be ok.
:) Here I am in the middle of an investigation on five separate death threats I've received from lunatic CPU lappers, and I'm supposed to have my heart bleed because Ms. Ng is taking some deserved heat over a contest that was so ineptly conceived and implemented that she was being blasted on her own site LONG before I wrote my Hub? This is the internet, Texy. Everything that can be justified as a valid opinion on facts goes... of course short of death threats and other illegalities. If a web publisher can't stand the heat, they should get out of the kitchen.

Kathryn: Yes, "going to spare..." I believe I was extremely balanced when confronting this issue. You believe upwards of 12 posts of 500 or more words is reasonable. I call it a sucker trap. You say tom-at-oh, I say tom-ah-toe. And I work for general content for around 2 cents a word all the time. That's 2 cents per word more than the payment made for the words squandered by the losers of FWJ Idol.

AC Gaughen: I'm not too sure about the contestants "are enjoying themselves pretty heartily." There is plenty of content on FWJ showing otherwise and a rather telling attrition rate of contestants who obviously got fed up running endlessly on the FWJ treadmill. If you earned scraps on HubPages, that either means that you didn't try hard enough or you're not good enough. Don't blame HubPages for that. ." Exactly why am I telling people otherwise? About what? Make sense AC! Keep writing Hubs like you write comments and you'll have no one to blame but yourself for your lack of success.

Lisa Russell: Prove that my statements in my Hub are factually wrong, or stop making your accusations. Are you serious about the contest entries being able to be "used elsewhere"? Where? In Writer La La Land? Ever heard of Copyscape? If you haven't, your freelance writing income must be feeding your family of 8 nothing but roadkill.

anon: Did your momma have any kids that lived? :) First read: especially this part: !"
Then enjoy as I am this AdSense report for the past three days:
Friday, August 15, 2008: $1.56.
Saturday, August 16, 2008: $3.07.
Sunday, August 17, 2008: $3.94.
That's a grand total of $8.57 in three days ON TWO HUNDRED AND FORTY-SIX HUBS! If it had to rely on AdSense earnings, I'd have to ask Lisa Russell if she had an extra plate of roadkill.

HubPages pays me and other top Hubbers a respectable and fair wage per Hub. That is public knowledge. The great thing about that is that it allows me the total freedom to write whatever I want without chasing traffic. Now, please.... go join Blondie Writes in chasing traffic... on the freeway.

Nevada: I can't say that I've gone through the comments on this Hub for various anonymous posters using unique phrases, but thanks for looking out for me! Hey, as far as I'm concerned, as long as a poster doesn't threaten to kill me, they can use any name and any IP they want.

bean: Thanks! To hear the cries and wails of the FWJ aficionados, you'd think that this was the first accusation against Idol. I'm happy that you went to verify with your own eyes that what I am stating in the Hub is correct.

Karen Swim: I have no details incorrect. Prove otherwise with facts, not innuendo.

Amy: Why don't you join anon in the "people who have no clue about AdSense" queue? :) Now, if you'll excuse me, I have to turn my attention to people who want to kill me. Have a nice day! :)

With all due respect, Hal. You're not being fair. Deborah Ng posted a response at freelancewritinggigs.com on Saturday and you refuse to address this in any way. Even if you can't reprint it word for word you can still quote or direct your readers to her response. It would be the fair thing to do. When Deborah's commenters are saying something she doesn't feel is fair towards you (for instance, that you're working for a non-paying entity) she does respond so they'll have their facts correct.
After such an inflammatory post it would be better for your credibility if you alerted your readers to Deborah's response instead of calling it suspect and making it seem as if she didn't respond at all. Honestly Hal, you're not doing the right thing here.

anon hubber: When I read your comment, I went to FWJ and saw that there were 93 comments on the site about this Hub. I'm just starting out a week where my primary concern is going to be 5 death threats, so I really don't feel like reading 93 people commenting on a page that isn't my own about how much I suck. I have enough respect for the FWJ clan to not have to go through 18 long pages of comments like I had to do on overclock.net to pull out the death threats. I'm sure that no one over there at FWJ is crossing the line into felonies, therefore I'm more than happy to let them vent all they want. If they want to address an issue with me, I'm here and awaiting their comments.

Hal, What many of the posters here are trying to get at is that Deb wrote a response to your allegations. Is it possible for you to go back to the site, read her response (nobody else's, just hers) and comment on what she said about what you posted about FWJ, as it would provide a little more clarity on your position. It seems like (I said seems, no judgement here) you are ignoring her response. By all means, take care of your death threats issue first, totally understandable, just consider addressing her rebuttal when you come back to post again. Again, no disrespect.

I don't know anything about that other site and can't be bothered to go investigate all this rot. However, I will say as an uninvolved observer, it seems the bulk of the opposition for Hal's point comes from the very site he writes about. Starts to look like defenders of the faith with all the testimonials lining up. Hard to tell if Hal is way off base or if he hit the nail on the head.
I will say that all the arguments in favor of this contest because "they read the rules and knew what they were getting into" are the same arguments tobacco companies and rip-off home loan people make when their products are impugned. So, just, if anyone cares, that argument is a horrible one and makes you sound like a scammer to those of us reading this hub and comments just to be amused. I'm not saying you or that contest ARE scammers/scam, just saying when you use that argument, you convey the impression that you have no legitimate answer and are going to rely on technicalities and red tape to keep your reputation clear.

Hal I tried to post this three times with an error so forgive me if you get it three times. I believe what anon hubber up there is saying is that Deb did respond to your post, just not on your exact terms. Anon is not asking you to read the comments she's asking you to acknowledge the post and direct your readers there for Deb's side of the story. Deb posted a full rebuttal at her blog a few days ago. That you are not responding to it or sending anyone's attention to it is even more suspect than what you are accusing her of. Thanks and hopefully this one went through.

David and MW, I have gone onto FWJ and read Ms. Ng's commentary on the front page of freelancewritinggigs.com, although forgive me for not ploughing through 94 comments. I did conduct a foray into another and very interesting part of the site, however, as you will soon see:

- Ms. Ng, please don't sue me for reprinting this excerpt without your express permission from your site but it is necessary to make my point: ."

Er... Ms. Ng. Please excuse me. But I just went on FWJ (not google caches, the actual site) and managed to pull up each and every single entry from the beginning of the contest in full. Yup, two months of content. All there. In its entirety. Every single word. Absolutely nothing has been pulled. It's all on freelancewritinggigs.com under the FWJ Idol tab.
I have copied it all and have it here for safekeeping should it suddenly vanish now, like the entire death threat thread did on overclock.net (which I also copied in full). I find it interesting that I'm being called a liar and a hatchet-jobber who didn't check my facts by the FWJ Faithful on this Hub, when Ms. Ng lays down a whopper like this one. Holy Toledo, Deb. I'm TRYING to give you the best benefit of the doubt, but you're shooting yourself in the foot by outright lying to your readership. You can't get far in this business by trying to obfuscate the truth when anyone who can click a mouse can see you're lying!

- I fully believe that Ms. Ng is wholly ethical and it was her inexperience in running an online contest which is to blame for the problems. Leaving the issue of free content provision aside for a moment, the "gaming" of the polls was a result of the voting being set up so that it could be "gamed." Thousands of polls are run on the internet every day and most of them are set up with security features which Ms. Ng (again I continue to trust through inexperience, not malicious intent) did not implement. As for the outright whopper above... well... er... what can I say... my faith in non-malicious intent is beginning to be chipped away.

- I never stated FWJ was scamming. I stated Idol was an exploitation and I still believe that.

- I don't believe this Hub is a hit piece, neither was it written for traffic as I have made amply clear in my comments. If I calculate the page views for this Hub since its writing, I strongly doubt it would buy me a stick of chewing gum.

- Stating that FWJ does not own the rights to the content and don't expect exclusivity is so foolish as to be downright misleading. Exactly what is ANY writer supposed to do with content that will fail Copyscape? Paper their walls with it? No other web publisher in their right minds will ever give them a dime for it.

- 22 people applied for the job. Yes, they did.
However, they didn't do it through telepathy. Every one of them wrote SOMETHING. 21 of them will have written that SOMETHING for NOTHING. Some may have written "Hi Deb I want in." Others may have written enough words to fill a novella. Excuse me if you can't see the irony of the situation where a site which has been SO STELLAR IN PROTECTING WRITERS AGAINST GIVING AWAY THEIR CONTENT actually turns around and engages in placing tens of thousands of perfectly good, valid, custom-written, unique words into Google caches for eternity!

If I asked people to write an ENTIRE NOVELLA FOR FREE which I would post under A NUMBER on my site so that not only could they never use the content again (ah... that ol' devil Copyscape) but that I would stuff my site jam full with very attractive content, I would be in exactly the same boat as the scammers who prey on naive writers and suck them dry while their traffic soars.

NO, FWJ IS NOT A SCAM SITE. It is a respectable and wonderful operation. But this FWJ Idol is wholly misconceived and implemented with all the deftness of a ball peen hammer. 21 people write for free. 1 wins a job. Hey, there's worse deals out there so in and of itself, I might have let it go uncommented. However, when you consider the reputation and the status of the site that it's on, it becomes inconceivable and inexcusable.

As to how I would proceed if I were in Ms. Ng's place... I offered the two best options I can think of at the end of my Hub. I'd be more than happy to entertain other suggestions.

The bottom line in my reply to Ms. Ng's commentary is as follows:

1) You're still welcome to post a full original rebuttal. I will place it entirely unedited and untouched in a Guest Hub and pay you the rate you well know is a reasonable market value of 2 cents per word.

2) I stand by every word I've written, including "sucker trap." Sorry, Ms.
Ng, but my discovery that your FWJ Idol Archive is still stuffed full of two months of the content you stated you were pulling on a regular basis establishes that you are lying for some reason known only to you (and it may be malicious or it may just be naivete... it's beyond me to be able to make that determination.)

Shadesbreath: Yes, there are some rather disturbing parallels to tobacco companies and home loan schemes here. I wish that wasn't the case, but in light of the FWJ Idol Archive online right now, it very well may seem that it could be true. I appreciate your comment and support.

Oh, BTW, to ape: I don't appreciate the innuendo that I am writing comments on this Hub under another name. Prove your allegation or stay in the CyberTrash where you belong. But, then again, you ARE an ape after all! :)

Deborah Ng said on her blog she will not engage in back and forth with you so I'm going to do it for her. I love playing devil's advocate. Hal, in your passion I think you missed a key detail. Deb <em>offered</em> all conntestants the opportunity to have their content pulled immediately after the voting <em>each</em> week. The contestants chose to keep their content there. Not because Deb exploited them but because the contestants themselves chose to keep the content on FWJ until after the contest when they would each be given bylines, bio and links back to their websites or blogs. And it's not all there in its entirety. One contestant did request to have his or her content removed and Deb and Jodee promptly honored that request. So you didn't pull up each and every entry, only the entries of the contestants who wanted to leave their content up. See Hal? Options. No one is being exploited.

I do appreciate how you have no time to read through 93 comments but have time to troll Deb's blog (as well as various message boards where people don't like you) to find stuff to throw back here. It's tough being so busy and you have my deepest sympathies.
You wanted someone to point out where you were factually incorrect. While several people have done that, apparently you are too busy dealing with death threats to actually comprehend what they've pointed out to you. Here is one glaring error that I will point out specifically so you don't have to read your own work looking for it: "Thousands and thousands of quality, unique words for no pay and not even any credit" (Regarding the credit part, this is factually incorrect. In Deb's rebuttal, she clearly stated that all bloggers who choose to leave their content on the site once the contest has ended will receive a full byline and links to wherever they want.)

While I am not an active member of the FWJ community, I do frequent the blog for its daily job postings. The blog has been instrumental in helping hundreds of writers, and your post bashing Deb for this contest seems highly unfair. Had you spoken to Deb before you wrote this, I believe you could have avoided much of the negative reaction you've received since this piece went live. I'd like to echo what others have previously stated, in that the truly sad thing about this entire article is the number of newbie writers you've undoubtedly turned off to Deb's site. These are writers who, never having visited Deb's site before, would possibly have benefited from the work she and Jodee do on a daily basis. Unfortunately, your negative remarks have convinced some of those writers that Deb's site is worthless. Good job - you must be very proud.

Hal I think if you're going to make this statement: "After years of educating and informing her writing community on how to avoid online writing scams, Deborah Ng set up one of the biggest sucker traps in the freelance writing market." You have to provide proof. Your opinion isn't proof. How do you know for a fact Deborah Ng set up a sucker trap? Did she tell anyone this? Are contestants crying foul? Is anyone but you claiming this is indeed a sucker trap? No.
So what proof, other than your opinion and only those facts you wish to share with your readers, do you have that Deborah Ng knowingly set up a sucker trap? So if you're going to state Deborah Ng set up a "sucker trap" you need better proof than your opinion and a few comments that continue to leave out many details. This is a dangerous, slanderous statement and I recommend you re-word it to present this as being your opinion.

The reason the content is still on the site is because Deb offered to take it down IF the contestants requested that. Alternatively, she offered to add by-lines and bios to the articles - again, IF the contestants wanted that. I fail to see how you conclude that Deb is lying?

Hal -- I believe Deb only pulled content from those contestants who wished to have their content pulled. I know at least some of it has been pulled, because I went through the Idol Archives at one point wishing to reread a post, and it wasn't there.

Hal to be fair Deborah offered to pull content more than once and long before you posted this. All of the contestants but one requested the content stay up at FWJ. They would much rather have their bylines and links at such a prestigious site. One person did seek to have content removed and it happened right away. So Deborah didn't lie, you misrepresented the facts again, and you didn't see all the entrants' content because one person took it down. She didn't lie and your statement that "every single word" is still there is not true. P.S. I don't think it's necessary to save screen caps of all the content since the owners of said content requested it stay up.

Hal Deborah Ng isn't a liar. She told all the contestants she would only keep their content up until the poll but they didn't want her to take it down. One writer did but the others asked for the content to stay. You should clarify this to your readers.

Hi! Thanks to all FWJ fans for your comments on your site.
I really appreciate all the innovative ways you've found to call me a scumbag. Very creative! :)

It's me: "Since most writers contribute towards the end of the week, should that have their content pulled it would only be up at FWJ for a matter of a few days." OK, now, Ms. Ng is a freelance writer of some note and she wrote that abortive pseudo-sentence which can be construed one of a couple of zillion different ways? So what's the story now (as it seems to change a lot), only one contestant pulled out and wanted their stuff off while the other ones that pulled out wanted it left on? Or as you said conntestants (sic) but I won't make the obvious pun about conning... :)

Denise: Ah, negative reactions. They don't faze me much as long as they're not trying to kill me. When you say that "In Deb's rebuttal, she clearly stated" are you referring to that mishmash incomprehensible sentence I just listed? That's clearly stating? Yikes! :) Denise, please review in the Hub and in the comments how many times I displayed my respect and admiration for FWJ minus Idol. If anyone takes that to reflect on the entire site, then they're obviously misreading.

Amy and FWJer: I can't understand clearly what Ms. Ng is "clearly stating" so I await further "clarifications." But the real bottom line is that the content is there, it was not paid for, it is benefiting FWJ and not its writers, and no matter if the contestants went into it with wide open eyes or not, IT IS NOT RIGHT! WRITERS SHOULD BE PAID FOR 6,000+ WORDS! Sheesh! You'd think I was talking to a bunch of overclock/lappers, not professional freelance writers!

anon: Yes, I did take the time to finally plough through 98 comments on that site. I found it very telling that an industry bigwig like John Hewitt from Writer's Resource Center stated that he thought that the "flack was inevitable." The rest was mostly the predictable Hal-bashing. Meh. Sticks and stones. There were some gems: "Did I say that Hal stinks?
He smells, has incurable B.O. and is a poor writer. I bet he doesn't brush his teeth either." Nah. I'm a good writer. But you're right about the other stuff. :)

"I used to write for HubPages, too, as a columnist for one specific niche. I earned $5.00/article + ad revenue…" You got paid that since you didn't draw flies with your Hubs. I no rite no Hubs for no five dollahs, bubba. :)

Those were really good ones. However, there were a couple that need a smackdown: "No response here, Hal? I mean posting as yourself, of course? Telling." What the HELL are you talking about? If you're accusing me like ape of posting anonymously, you can stop the accusations right now. The last time I posted ANYTHING on FWJ was well over a year ago. If you think that I've been playing hide and seek games you are 100% out of your mind.

Back to you, anon, I find it rather funny that "an attorney" would WITHIN ONE HOUR post one comment from Taylorsville, Georgia, another from Amman, Jordan, and another from Novato, California. Dang gum it, I knew that dem dere fancy corporate jets you bigshot lawyers had were fast but I dinna think dat fast! And I'm the one accused of proxy hiding, where here we have the very first proxy hiding "attorney" I've ever heard of. Hmm... the interesting people you meet on the net! :)

However, the funniest thing was that you want to get Deb to sue me. HAHHAHAHHHAAAAAAAaaaaaaaaaaaaaaaaaaaa! Dude, you have given me the best laugh since the bozo death threat punks at overclock.net. Geez, Mr. Attorney... I await your confirmation of my assets. I'll be glad to save you the time:
1) 1985 Chevy Sprint, 200,000+ miles. Value (if you're lucky) $100.
2) PC. Value (if you're real lucky) $500.
3) Two pairs of running shoes, a couple of folding tables, a 20" TV and a whole rented house full of rented furniture. Value $100 on a good day.
Too bad you didn't get me a couple of weeks ago when I sold my Harley.
But it all went towards my credit cards so my current cash on hand is $284.07. You wanna garnishee my wages? No prob. I've been considering that I should stop working and write my Great Canadian Novels living rent free at my nice, elderly mom's home where she has a lovely bedroom waiting and three delicious squares a day. Try garnisheeing her lasagna and she'll bite you. Be forewarned. She still has ALL her own teeth! :) I look forward to your lawsuit. I hope your pro bono work is cheap! :) Oh, and did I tell you that you were (here comes my favorite word) A MORON? :)

Oh, and to save everyone the trouble of coming up with conspiracy theories as to why I'm not going to reply for a day or so, let me save you the energy. I'M BUSY. Keep posting and I'll publish when I return. Keep it friendly or you go into the CyberTrash. :)

Hal since you're quoting John Hewitt why not add his whole entire quote? You seem to just take out the bits and pieces that (self) serve you best.

What, and give Mr. Peek A Boo Pro Bono something more to sue me for, like copyright infringement? Heck that might cost me my favorite Harley T-Shirt and maybe even my Las Vegas Dice Coffee Cup! :P

It's on freelancewritinggigs.com. Go read it.

Now I AM OUTTA HERE! ADIOS! SEE YA IN A COUPLE OF DAYS!

And since you want to keep it friendly you really owe Deborah an apology for calling her a liar.

Ms. Deborah Ng: I wish to inform you that I sincerely, profoundly, and completely, from the bottom of my heart, never wished to hurt you or your operation which I respect completely and still do as I clearly stated in the original article. If it has been insinuated or stated by me in any way that you are a liar, it may have been due to the lack of clarity in your post response on your site which contained a sentence which was difficult to comprehend.
Therefore, if I was in error on that specific fact, you not only have my complete, honest and unwavering apology, but I wish to voluntarily offer that I write a Hub outlining the spectacular, amazing, wonderful and magnificent work you have been doing on FWJ for years in helping hundreds or thousands of freelance writers, including yours truly, in finding good jobs, avoiding the myriad pratfalls of the business, and making good money. Prior to the writing of the Hub on this page, I had NEVER had any occasion to doubt your ethics, honesty, integrity or goodwill. If you operated FWJ Idol in a way that I, through my responsibility to my readers, saw fit to inform them of its failings, I did not EVER wish it to be construed that you were in any way a scammer of any type or form. We are both aware that FWJ Idol had its problems and you have been open with the "gaming" poll allegations to your credit. With the upcoming conclusion of FWJ Idol, I have absolutely no reason to question or doubt your operating policies as FWJ is a 100% class act.

Once again, in the spirit of the brotherhood and sisterhood of freelance writers around the world, of which your FWJ community is one of the bedrock foundations, I extend my hand to you in friendship, respect, and deference in homage to your years of fine work and the many years ahead of you where you will surely continue your superlative service to the freelance writers who need your help. I am proud to be a supporter of FWJ and will continue to be one as long as you are operating it. Again, you have my complete and humble apology and complete retraction if you deem I have misrepresented anything about you or your ethics.

Now, I will see you all in a couple of days! Bye! :)

HI, NICE HUB

The damage is done, son.

OH... I'm trembling in my shoes! Oh well... I hope you like to collect lasagna. DOLT!
:P

You said: "it may have been due to the lack of clarity in your post response on your site which contained a sentence which was difficult to comprehend."

Deborah said: ."

How was this unclear? You can't tell us this isn't a hit piece. You're taking a few words here and there and using them for your benefit without showing the whole picture. Case in point is John Hewitt's quote. You said: "I found it very telling that an industry bigwig like John Hewitt from Writer's Resource Center stated that he thought that the "flack was inevitable."

The full quote actually reads: "Deb, I'm sorry to hear about the hit piece. I have to admit I thought that someone would take offense sooner or later. Contests like this always generate controversy. Your inspiration, American Idol, generates plenty of controversy and conspiracy theories. I still think the contest was a brilliant idea, but I'm afraid some flack was inevitable. In the words of a wise fish… Just Keep Swimming. John"

Takes on a whole different meaning, doesn't it? You know why Deborah Ng is a success? She weighs all sides of the story and presents an unbiased point of view. She doesn't just up and trash people or their hard work. If she disagrees, she does so in a respectful manner without name calling, finger pointing and half truths. Even her comment policy is respectful. You sir, could take a lesson from Deborah Ng.

I have my plate real full for the next couple of days on deadlines, so I'm gonna have to keep this short: "Since most writers contribute towards the end of the week, should that have their content pulled it would only be up at FWJ for a matter of a few days." I'm sorry, but I doubt that anyone can tell me that is a clear sentence. The way I interpreted it was that the content was off. It could be read that way or one of many other ways. I am not blessed with ESP and thus can only tell what a person is trying to say by the words they write.
When it was made amply clear from "whoever" was posting as "Me" that was NOT the intent, then I did the right thing. I apologized because if I implied that Ms. Ng was lying it was based on that muddled sentence, not on any other fact. My apology above explains EXACTLY what I misunderstood and makes eloquent amends. Am I going to reverse my stand and state categorically that FWJ Idol was the best-executed online contest ever run? No. The flaws are evident and even Ms. Ng has acknowledged some of them. Therefore, I'm perfectly happy to live up to the situation that I described in my apology. "Me" stated "And since you want to keep it friendly you really owe Deborah an apology for calling her a liar." I believe my apology was more than satisfactory to clarify my obvious misunderstanding based on the one mangled sentence she wrote. If Ms. Ng wants to sic Mr. Ambulance Chaser Proxy Hiding Moron Esq. on me, then I hope that she will like driving around town in my rusty 1985 Chevrolet Sprint. She's welcome to it since it's raining right now and I don't think it's going to start today. So go for it. I've got nothing to lose and I'll laugh so hard while her so-called attorney tries to sue me that I might crack a rib! :) I've got no dog in this fight. I visited this and Deborah Ng's for the first time today, directed from a third site I frequent. But I just wanted to say how silly I find this controversy. Ms. Ng obviously has a loyal following, and many of her die-hard fans have gone a little overboard in their desire to defend her. It's sort of telling that Mr. Licino continues to thrive on this controversy, while Ms. Ng has addressed the issue and moved on. He can't continue to argue his point if no one will argue with him. Er... has Ms. Ng addressed the issue and accepted my apology?
I haven't seen a clue even from the mysterious "Me" to say "I read your apology and it's accepted/it's not accepted/you suck/whatever" but she does seem to have sic'ed her Proxy Hiding Lawyer on me. I am not surprised by the lawyer, since by his comments he seems like a slimeball. No self-respecting lawyer would ever tip his hand to advise that he is going to launch a suit through an anonymous proxy-hiding posting, unless he got his law degree from the Nigerian Institute of Scam Law. However, I do admit that I am surprised by Ms. Ng's silence. I can understand if she doesn't want to post here, but an acknowledgement to my apology on her site: "I read your apology and "it's accepted/it's not accepted/you suck/whatever", with a proxy hidden "Me" post here to let my readers know would seem to be the proper thing to do. But then again, it's a free country and she can do whatever she wants to. The biggest scam in the FWJ Idol is the illusion that it's a fair and unbiased interview, and not a popularity contest. There have been some issues with schooling the system, as Deb put it, even after she assigned code names to make contestants anonymous. I've heard rumors of a group who is out to prove that one, if not more, of the contestants is padding the votes-- namely one that is consistently in the lead and miraculously jumps in vote numbers any time the contestant's rivals begin to catch up. There has also been speculation that a certain contestant that has been oft lauded has been handpicked by the powers that be. In these same circles, there are rumors of rampant censoring of comments so that there isn't any dissension among the ranks. In other words, keeping everyone in line and making examples out of certain posters. Whether there is any truth to these rumors, I don't know. But if you know where to listen, you'll hear many more such stories. I have no doubt that the FWJ team didn't intend for these things to happen when they first started the interview process.
Maybe things just weren't planned as well as they should have been. Maybe they just got in over their heads. Who knows. No one but the person monitoring, moderating and posting behind the scenes. I am the anon in Amman, Jordan. I did not claim to be an attorney, nor did I write all of the posts authored by anon. More than one person can be "anon," which is short for anonymous. WOW!! This hub certainly hit a nerve! You know.....I wonder who would ever want to be on American Idol and set themselves up to that kind of torture. The same could be said for those poor bloggers. Why would they even get involved in such a scheme to begin with? Helium is another site where you write and write and write and you may never see a dime for all your work. When I found Hub Pages I could see right away that this is a much better system. You are directly rewarded for your efforts. Of course, even making money on the Hub Pages is not easy. Why? Because of the millions of bloggers out there flooding the internet with often free content. If you want to make money writing, I am not sure that blogging is the way to do it, or any other online scheme. I am now looking into sending my stories to magazines, and maybe even start writing that daunting first novel. What I think would be interesting and fun to do would be to create a novel online with a group of writers, and then trying to get it published. This feat would have been just about impossible before the internet, but not anymore. Perhaps the group of writers could agree on an idea and then each writer would create a chapter. The writers would vote on what stays and what goes. Perhaps an editor would volunteer some time for a portion of the profits. Of course the devil would be in the details as you stated so well in your hub. What happens when you get a writer in there who wants to run the entire show, or the writer who simply cannot write very well, but thinks they are the next Stephen King?
All in all, I have found writing online a very unrewarding experience. I have found my blogs used by websites, and I never saw a dime for these stories which I never gave permission for anyone to use. I was at first flattered that my blogs were getting read at all, but as time went by, and I found my blogs being used with no credit to me whatsoever, I became sour on the entire blogging ordeal. Of course, I have no copyright or anything, so I suppose that they did have a right to use what they pleased, however I do not think it is very fair. For now, I am trying my best to write interesting hubs, and I am very pleased to find my hubs listed on the first page of google! Maybe there is a light at the end of the tunnel after all! Hal-WOW, I just got done reading the comments here and they really BLASTED you because you chose to speak out about something very unfair. I give you credit for standing up and saying what you did, and I am behind you 100%. Obviously they would not be this upset if indeed they were on the up and up. If they really want to be as fair as they claim to be those bloggers should get PAID for all their hard work and for all the traffic they drew into the site. Kudos to you Hal for trying to keep these guys honest. Keep up the good work, I am happy I read this hub, and I plan to read your hubs in the future. Thanks! eyes in the back of my ears: Don't your eyeglass frames poke you in the eyes? (Couldn't resist that pun.) :) I have been clear that Ms. Ng herself has acknowledged that there have been some fairly serious issues with the implementation of FWJ Idol. I cannot comment on anything but facts that I can ascertain with my own eyes, which strangely seem to be placed well in front of my ears. (Couldn't resist that pun either... sorry...) 
The best advice I could give is that these sorts of "contests" are run on the internet all the time, whether they involve freelance writers or not: There are many acknowledged standards and "safety nets" that are readily available which could have been integrated into Idol, yet they were not. I don't believe this was done through malice, but perhaps through lack of knowledge of these safeguards. I cannot speak for Ms. Ng, but if I were to do something like that (and don't worry... I have no such intention), I would take exceptional care to ensure that I would build in every possible precaution to keep it from spinning out of control as FWJ Idol clearly has. I appreciate your comments. anon: Ah, to savor a kebab in Shmeisani! I'm drooling just thinking of it. Thanks for the clarification. FYI, judging by his IPs your anon-clone hopped around in several states in his postings, often going coast to coast within minutes, so I'm still impressed by the Mach 20+ speed of his law firm's Lear Jet or SR71 Blackbird or Space Shuttle or whatever he has... :) magnoliazz: Thank you so much for your comments. Not only do I truly and profoundly appreciate them but I am very thankful that you chose to be one of my formal Hub Fans. Yes, it profoundly stinks that your content and mine is being not just posted with links, but outright copy and pasted onto other sites some without even having the consideration to publish our names so that they can claim authorship. It's no more or no less outright theft as showing up at 2 am in a 7-11 waving a Saturday Night Special. Unfortunately the structure of the internet makes it well nigh impossible to squash all of these pathetic copiers, so just consider it a backhanded compliment that your content is strong enough that they deem it will draw in traffic. That means you're a great writer, and they are just thieving lowlife scum. 
To the best of my knowledge I was the first person ever to attempt to build an online community to write a collaborative novel. It may be hard to believe but it occurred several years before the release of the first Mosaic Web Browser at a time when online activities occurred on 2400 baud modems connecting to CompuServe at $12 an hour. I headed an active and participatory group of writers which produced some truly sterling work but it proved unworkable. It is impossible to accurately credit each writer in an activity such as that. If everyone shares equally, then the person who created an entire fleshed-out subplot gets their nose out of joint because they're receiving the same compensation as the writer who plunked in two lines of dialogue. If you try to develop a prorated compensation structure, then every single person involved thinks it's unfair and you make no one happy. I've very carefully sidestepped online collaborations in the more than two decades since. I think I have made it more than amply clear that: 1) I have always respected the fine work of FWJ (outside of the lapse shown by Idol) and still do. 2) I have stated my case and stand by each word. The only retraction I made was when I misunderstood a single sentence (almost incomprehensibly) written by Ms. Ng, and my verbose apology above settles that specific issue to (at least) my satisfaction. 3) I continue to encourage any freelance writer to avail themselves of the superlative resources and friendly (to everyone but me lately... boo hoo...) community at freelancewritinggigs.com. Ms. Ng has stated on her site that she considers the matter closed, and therefore I will too. Good hub! I was asked to write in a similar manner, and I told the folks to go suck an egg :) Yes, as I stated in my Hub, it is a trick that scammers are using more and more and I'm amazed that writers keep falling for it. Please note I am NOT including FWJ in that statement as I said, the matter is closed.
Freelance writers, unite. Stand strong and don't let anyone sell you short. Oops! Sorry to barge into this intellectual discussion, but LOL! I could not help it ... I had to ... I just had to! I was reading every comment on this hub with great interest since I am a newcomer to the game of freelance writing and I needed more information on the benefits and "dangers" that freelance writers have to face. However, when I came to the part where you mentioned chasing grass skirted girls with a lawn mower ... LOL, I just burst out laughing! I scrolled right to the bottom just to leave this comment !!! LOL, Hal, I admire your writing style, your humor, and your knowledge! JVM: 100% agreed!!! quicksand: Thanks! I appreciate the kind words! Others: This may be of interest: Allena, thank you for your kind words. I'll keep callin' 'em as I see 'em with my reader's interests first and foremost in my mind. If some people take exception to my candid honesty, then they can blast me all they want! :) Golem Zero: Yes, the matter is resolved as publicly stated on both this Hub and FWJ. Ms. Ng stated that she was not about to seek legal recourse, I apologized for misreading the intent of that badly phrased sentence (although not for the valid points I made in the original Hub), and the issue is closed completely and forever. FWJ remains IMHO the number one source for up to date job leads in this niche and participation in the community is optional. Thanks for your comments and support! I appreciate it! Although I don’t agree with you totally, I must admit that this is an excellent hub. I love your brutally honest writing style; very comical (it’s been a long time since I laughed so hard). The e-book ministers of propaganda are the worst with their promises of untold riches in exchange for $19.95.
Believe me, I’ve read my share of electronic books and in almost every case, the free e-books were the best. I recently wrote an e-book with the intention of making a few bucks but decided to publish it for free here on Hubpages. So, here I am; an affiliate marketer and SEO junkie giving you praise. Again, while I don’t agree with you entirely, I must say well done! I’m a fan. Thanks for the compliments! Much appreciated! :) You might want to check out: where I Photoshop my own Adsense statement to show that I'm raking in the dough. However, I am honest enough to immediately below that image publish my REAL Adsense statement to show people how easy it is to fake it in order to scam poor innocents into BUY MY GET RICH BY TUESDAY ON HUBPAGES EBOOK. :) talented_ink says: 11 months ago This is the first time I've heard of this site, but it's a shame how a site that is supposed to be so helpful to all freelance writers can end up abusing the talents of those who entered the contest.
import "context"

Package context defines the Context type, which carries deadlines, cancelation signals, and other request-scoped values across API boundaries and between processes.

Canceled is the error returned by Context.Err when the context is canceled.

DeadlineExceeded is the error returned by Context.Err when the context's deadline passes.

A CancelFunc tells an operation to abandon its work. A CancelFunc does not wait for the work to stop. After the first call, subsequent calls to a CancelFunc do nothing.

    type CancelFunc func()

A Context carries a deadline, a cancelation signal, and other values across API boundaries. Context's methods may be called by multiple goroutines simultaneously.

    type Context interface {
        Deadline() (deadline time.Time, ok bool)
        Done() <-chan struct{}
        Err() error
        Value(key interface{}) interface{}
    }

Background returns a non-nil, empty Context. It is never canceled, has no values, and has no deadline. It is typically used by the main function, initialization, and tests, and as the top-level Context for incoming requests.

This example demonstrates the use of a cancelable context to prevent a goroutine leak. By the end of the example function, the goroutine started by gen will return without leaking.

This example passes a context with an arbitrary deadline to tell a blocking function that it should abandon its work as soon as it gets to it.

This example passes a context with a timeout to tell a blocking function that it should abandon its work after the timeout elapses.

Package context imports 5 packages and is imported by 50757 packages. Updated 2018-06-08.
Provided by: mpv_0.27.2-1ubuntu1_amd64

NAME
       mpv - a media player

SYNOPSIS
       mpv [options] [file|URL|PLAYLIST|-]
       mpv [options] files

INTERACTIVE CONTROL
       mpv has a fully configurable, command-driven control layer which allows
       you to control mpv using keyboard, mouse, or remote control (there is no
       LIRC support - configure remotes as input devices instead). See the
       --input- options for ways to customize it.

       The following listings are not necessarily complete. See etc/input.conf
       for a list of default bindings. User input.conf files and Lua scripts
       can define additional key bindings.

       (The following keys are valid only when using a video output that
       supports the corresponding adjustment, or the software equalizer
       (--vf=eq).)

       (The following keys are valid if you have a keyboard with multimedia
       keys.)

       PAUSE  Pause.

       STOP   Stop playing and quit.

       PREVIOUS and NEXT
              Seek backward/forward 1 minute.

       If you miss some older key bindings, look at
       etc/restore-old-bindings.conf in the mpv git repository.

   Mouse Control
       button 3 and button 4
              Seek backward/forward 1 minute.

       button 5 and button 6
              Decrease/increase volume.

USAGE
       Command line arguments starting with - are interpreted as options,
       everything else as filenames or URLs. All options except flag options
       (or choice options which include yes) require a parameter in the form
       --option=value.

       One exception is the lone - (without anything else), which means media
       data will be read from stdin. Also, -- (without anything else) will make
       the player interpret all following arguments as filenames, even if they
       start with -. (To play a file named -, you need to use ./-.)

   Legacy option syntax
       The --option=value syntax is not strictly enforced, and the alternative
       legacy syntax -option value and --option value will also work. This is
       mostly for compatibility with MPlayer. Using these should be avoided.
       Their semantics can change any time in the future.
       For example, the alternative syntax will consider an argument following
       the option a filename. mpv -fs no will attempt to play a file named no,
       because --fs is a flag option that requires no parameter.

       If an option changes and its parameter becomes optional, then a command
       line using the alternative syntax will break.

       Currently, the parser makes no difference whether an option starts with
       -- or a single -. This might also change in the future, and
       --option value might always interpret value as filename in order to
       reduce ambiguities.

   Escaping spaces and other special characters
       Keep in mind that the shell will partially parse and mangle the
       arguments you pass to mpv. For example, you might need to quote or
       escape options and filenames:

          mpv "filename with spaces.mkv" --title="window title"

       It gets more complicated if the suboption parser is involved. The
       suboption parser puts several options into a single string, and passes
       them to a component at once, instead of using multiple options on the
       level of the command line.

       The suboption parser can quote strings with " and [...]. Additionally,
       there is a special form of quoting with %n% described below.

       For example, assume the hypothetical foo filter can take multiple
       options:

          mpv '--vf=foo:option1="option value with spaces",option2=value2' test.avi

       Shells may actually strip some quotes from the string passed to the
       commandline, so the example quotes the string twice, ensuring that mpv
       receives the " quotes.

       The [...] form of quotes wraps everything between [ and ]. It's useful
       with shells that don't interpret these characters in the middle of an
       argument (like bash). These quotes are balanced (since mpv 0.9.0): the
       [ and ] nest, and the quote terminates on the last ] that has no
       matching [ within the string. (For example, [a[b]c] results in a[b]c.)

       The fixed-length quoting syntax is intended for use with external
       scripts and programs.
       It is started with % and has the following format:

          %n%string_of_length_n

       Examples

          mpv '--vf=foo:option1=%11%quoted text' test.avi

       Or in a script:

          mpv --vf=foo:option1=%`expr length "$NAME"`%"$NAME" test.avi

       Suboptions passed to the client API are also subject to escaping. Using
       mpv_set_option_string() is exactly like passing --name=data to the
       command line (but without shell processing of the string). Some options
       support passing values in a more structured way instead of flat
       strings, and can avoid the suboption parsing mess. For example, --vf
       supports MPV_FORMAT_NODE, which lets you pass suboptions as a nested
       data structure of maps and arrays.

   Paths
       Using the file:// pseudo-protocol is discouraged, because it involves
       strange URL unescaping rules.

       The name - itself is interpreted as stdin, and will cause mpv to
       disable console controls. (Which makes it suitable for playing data
       piped to stdin.)

       The special argument -- can be used to stop mpv from interpreting the
       following arguments as options.

       When using the client API, you should strictly avoid using
       mpv_command_string for invoking the loadfile command, and instead
       prefer e.g. mpv_command to avoid the need for filename escaping.

       For paths passed to suboptions, the situation is further complicated by
       the need to escape special characters. To work this around, the path
       can be additionally wrapped in the fixed-length syntax, e.g.
       %n%string_of_length_n (see above).

       Some mpv options interpret paths starting with ~. Currently, the prefix
       ~~/ expands to the mpv configuration directory (usually ~/.config/mpv/).
       ~/ expands to the user's home directory. (The trailing / is always
       required.)
       There are the following paths as well:

       ┌──────────────┬──────────────────────────────────────────────┐
       │ Name         │ Meaning                                      │
       ├──────────────┼──────────────────────────────────────────────┤
       │ ~~home/      │ same as ~~/                                  │
       │ ~~global/    │ the global config path, if available         │
       │              │ (not on win32)                               │
       │ ~~osxbundle/ │ the OSX bundle resource path (OSX only)      │
       │ ~~desktop/   │ the path to the desktop (win32, OSX)         │
       └──────────────┴──────────────────────────────────────────────┘

   Per-File Options
       When playing multiple files, any option given on the command line
       usually affects all files. Example:

          mpv --a file1.mkv --b file2.mkv --c

       ┌───────────┬─────────────────┐
       │ File      │ Active options  │
       ├───────────┼─────────────────┤
       │ file1.mkv │ --a --b --c     │
       │ file2.mkv │ --a --b --c     │
       └───────────┴─────────────────┘

       (This is different from MPlayer and mplayer2.)

       Also, if any option is changed at runtime (via input commands), they
       are not reset when a new file is played.

       Sometimes, it is useful to change options per-file. This can be
       achieved by adding the special per-file markers --{ and --}. (Note that
       you must escape these on some shells.) Example:

          mpv --a file1.mkv --b --\{ --c file2.mkv --d file3.mkv --e --\} file4.mkv --f

       ┌───────────┬──────────────────────────┐
       │ File      │ Active options           │
       ├───────────┼──────────────────────────┤
       │ file1.mkv │ --a --b --f              │
       │ file2.mkv │ --a --b --f --c --d --e  │
       │ file3.mkv │ --a --b --f --c --d --e  │
       │ file4.mkv │ --a --b --f              │
       └───────────┴──────────────────────────┘

       Additionally, any file-local option changed at runtime is reset when
       the current file stops playing. If option --c is changed during
       playback of file2.mkv, it is reset when advancing to file3.mkv. This
       only affects file-local options.
       The option --a is never reset here.

   List Options
       ┌─────────┬───────────────────────────────────────────────────────┐
       │ Suffix  │ Meaning                                               │
       ├─────────┼───────────────────────────────────────────────────────┤
       │ -add    │ Append 1 or more items (may become alias for -append) │
       │ -append │ Append single item (avoids need for escaping)         │
       │ -clr    │ Clear the option                                      │
       │ -del    │ Delete an existing item by integer index              │
       │ -pre    │ Prepend 1 or more items                               │
       │ -set    │ Set a list of items                                   │
       └─────────┴───────────────────────────────────────────────────────┘

       Although some operations allow specifying multiple ,-separated items,
       using this is strongly discouraged and deprecated, except for -set.

       Without suffix, the action taken is normally -set.

       Some options (like --sub-file, --audio-file, --opengl-shader) are
       aliases for the proper option with -append action. For example,
       --sub-file is an alias for --sub-files-append.

   Playing DVDs

CONFIGURATION FILES
       User-specific options override system-wide options and options given on
       the command line override either. The syntax of the configuration files
       is option=value. Everything after a # is considered a comment. Options
       that work without values can be enabled by setting them to yes and
       disabled by setting them to no. Even suboptions can be specified in
       this way.

       Example configuration file

          # Use opengl video output by default.
          vo=opengl
          # Use quotes for text that can contain spaces:
          status-msg="Time: ${time-pos}"

   Escaping spaces and special characters
       Almost all command line options can be put into the configuration file.
       Here is a small guide:

       ┌───────────────────┬──────────────────────────┐
       │ Option            │ Configuration file entry │
       ├───────────────────┼──────────────────────────┤
       │ --flag            │ flag                     │
       │ -opt val          │ opt=val                  │
       │ --opt=val         │ opt=val                  │
       │ -opt "has spaces" │ opt="has spaces"         │
       └───────────────────┴──────────────────────────┘

   File-specific Configuration Files

   Profiles

          [slow]
          profile=opengl-hq

          [fast]
          vo=vdpau

          # using a profile again extends it
          [slow]
          framedrop=no
          # you can also include other profiles
          profile=big-cache

   Auto profiles
       Some profiles are loaded automatically. The following example
       demonstrates this:

       Auto profile loading

          [protocol.dvd]
          profile-desc="profile for dvd:// streams"
          alang=en

          [extension.flv]
          profile-desc="profile for .flv files"
          vf=flip

       The profile name follows the schema type.name, where type can be
       protocol for the input/output protocol in use (see --list-protocols),
       and extension for the extension of the path of the currently played
       file (not the file format).

       This feature is very limited, and there are no other auto profiles.

TAKING SCREENSHOTS
       A screenshot will usually contain the unscaled video contents at the
       end of the video filter chain and subtitles. By default, S takes
       screenshots without subtitles, while s includes subtitles.

       Unlike with MPlayer, the screenshot video filter is not required. This
       filter was never required in mpv, and has been removed.

TERMINAL STATUS LINE
       · AV: or V: (video only) or A: (audio only)

       · The current time position in HH:MM:SS format (playback-time property)

       · The total file duration (absent if unknown) (length property)

       · Playback speed, e.g. x2.0. Only visible if the speed is not normal.
         This is the user-requested speed, and not the actual speed (usually
         they should be the same, unless playback is too slow). (speed
         property.)

       · Playback percentage, e.g. (13%).
         How much of the file has been played. Normally calculated out of
         playback position and duration, but can fall back to other methods
         (like byte position) if these are not available. (percent-pos
         property.)

       · The audio/video sync as A-V: 0.000. This is the difference between
         audio and video time. Normally it should be 0 or close to 0. If it's
         growing, it might indicate a playback problem. (avsync property.)

       · Total A/V sync change, e.g. ct: -0.417. Normally invisible. Can show
         up if there is audio "missing", or not enough frames can be dropped.
         Usually this will indicate a problem. (total-avsync-change property.)

       · Encoding state in {...}, only shown in encoding mode.

       · Cache state, e.g. Cache: 2s+134KB. Visible if the stream cache is
         enabled. The first value shows the amount of video buffered in the
         demuxer in seconds, the second value shows additional data buffered
         in the stream cache in kilobytes. (demuxer-cache-duration and
         cache-used properties.)

PROTOCOLS
       http://..., https://...

       ytdl://...

       fd://123
              Read data from the given file descriptor (for example 123).
              This is similar to piping data to stdin via -, but can use an
              arbitrary file descriptor.

PSEUDO GUI MODE
       This happens only in the following cases:

       · if started using the mpv.desktop file on Linux (e.g. started from
         menus or file associations provided by desktop environments)

       · if started from explorer.exe on Windows (technically, if it was
         started on Windows, and all of the stdout/stderr/stdin handles are
         unset)

       · started out of the bundle on OSX

       · if you manually use --player-operation-mode=pseudo-gui on the
         command line

       This mode applies options from the builtin profile builtin-pseudo-gui,
       but only if these haven't been set in the user's config file or on the
       command line. Also, for compatibility with the old pseudo-gui behavior,
       the options in the pseudo-gui profile are applied unconditionally.
       In addition, the profile makes sure to enable the pseudo-GUI mode, so
       that --profile=pseudo-gui works like in older mpv releases. The
       profiles are currently defined as follows:

          [builtin-pseudo-gui]
          terminal=no
          force-window=yes
          idle=once
          screenshot-directory=~~desktop/

          [pseudo-gui]
          player-operation-mode=pseudo-gui

OPTIONS
   Track Selection
       --alang=<languagecode[,languagecode,...]>
              Examples:

              · mpv dvd://1 --alang=hu,en chooses the Hungarian language track
                on a DVD and falls back on English if Hungarian is not
                available.

       --slang=<languagecode[,languagecode,...]>
              Examples:

              · mpv dvd://1 --slang=hu,en chooses the Hungarian subtitle track
                on a DVD and falls back on English if Hungarian is not
                available.

              · mpv --slang=jpn example.mkv plays a Matroska file with
                Japanese subtitles.

       --aid=<ID|auto|no>
              Select audio track. auto selects the default, no disables audio.
              See also --alang. mpv normally prints available audio tracks on
              the terminal when starting playback of a file.

              --audio is an alias for --aid.

              --aid=no or --audio=no or --no-audio disables audio playback.
              (The latter variant does not work with the client API.)

       --sid=<ID|auto|no>
              Display the subtitle stream specified by <ID>. auto selects the
              default, no disables subtitles.

              --sub is an alias for --sid.

              --sid=no or --sub=no or --no-sub disables subtitle decoding.
              (The latter variant does not work with the client API.)

       --vid=<ID|auto|no>
              Select video channel. auto selects the default, no disables
              video.

              --video is an alias for --vid.

              --vid=no or --video=no or --no-video disables video playback.
              (The latter variant does not work with the client API.)

              If video is disabled, mpv will try to download the audio only if
              media is streamed with youtube-dl, because it saves bandwidth.
              This is done by setting the ytdl_format to "bestaudio/best" in
              the ytdl_hook.lua script.

       --ff-aid=<ID|auto|no>, --ff-sid=<ID|auto|no>, --ff-vid=<ID|auto|no>
              Select audio/subtitle/video streams by the FFmpeg stream index.
              The FFmpeg stream index is relatively arbitrary, but useful when
              interacting with other software using FFmpeg (consider ffprobe).
              Note that with external tracks (added with --sub-files and
              similar options), there will be streams with duplicate IDs. In
              this case, the first stream in order is selected.

       --lavfi-complex=<string>
              You can start playback in this mode, and then select tracks at
              runtime by setting the filter graph. Note that if
              --lavfi-complex is set before playback is started, the
              referenced tracks are always selected.

   Playback Control
       --start=<relative time>
              Seek to given time position. The general format for absolute
              times is [[hh:]mm:]ss[.ms].

       --speed=<0.01-100>
              If --audio-pitch-correction (on by default) is used, playing
              with a speed higher than normal automatically inserts the
              scaletempo audio filter.

       --pause
              Start the player in paused state.

       --shuffle
              Play files in random order.

       --chapter=<start[-end]>
              Specify which chapter to start playing at. Optionally specify
              which chapter to end playing at. See also: --start.

       --playlist-start=<auto|index>
              The value no is a deprecated alias for auto.

       --playlist=<filename>
              Play files according to a playlist file (Supports some common
              formats. If no format is detected, it will be treated as list
              of files, separated by newline characters. Note that XML
              playlist formats are not supported.)

              You can play playlists directly and without this option,
              however, this option disables any security mechanisms that
              might be in place. You may also need this option to load
              plaintext files as playlist.

              WARNING:

       Default: yes

              NOTE:
The force mode is like inf, but does not skip playlist entries which have been marked as failing. This means the player might waste CPU time trying to loop a file that doesn't exist. But it might be useful for playing webradios under very bad network conditions.

--loop-file=<N|inf|no>, --loop=<N|inf|no>
Loop a single file N times. inf means forever, no means normal playback. For compatibility, --loop-file and --loop-file=yes are also accepted, and are the same as --loop-file=inf.
The difference to --loop-playlist is that this doesn't loop the playlist, just the file itself. If the playlist contains only a single file, the difference between the two options is that this option performs a seek on loop, instead of reloading the file. --loop is an alias for this option.

--ab-loop-a=<time>, --ab-loop-b=<time>
Set loop points. If playback passes the b timestamp, it will seek to the a timestamp. Seeking past the b point doesn't loop (this is intentional). If both options are set to no, looping is disabled. Otherwise, the start/end of the file is used if one of the options is set to no. The loop-points can be adjusted at runtime with the corresponding properties. See also ab-loop command.

-. Useful for loading ordered chapter files that are not located on the local filesystem, or if the referenced files are in different directories.
Note: a playlist can be as simple as a text file containing filenames separated by newlines.

--chapters-file=<filename>
Load chapters from this file, instead of using the chapter metadata found in the main file. This accepts a media file (like mkv) or even a pseudo-format like ffmetadata and uses its chapters to replace the current file's chapters. This doesn't work with OGM or XML chapters directly.

--sstep=<sec>
Skip <sec> seconds after every frame.
NOTE: Without --hr-seek, skipping will snap to keyframes.

--stop-playback-on-init-failure=<yes|no>
Stop playback if either audio or video fails to initialize.
Currently, the default behavior is no for the command line player, but yes for libmpv. With no, playback will continue in video-only or audio-only mode if one of them fails. This doesn't affect playback of audio-only or video-only files.

Program Behavior

--help, --h
Show short summary of options. You can also pass a string to this option, which will list all top-level options which contain the string in the name, e.g. --h=scale for all options that contain the word scale. The special string * lists all top-level options.
NOTE: Files explicitly requested by command line options, like --include or --use-filedir-conf, will still be loaded. See also: --config-dir.

--list-options
Prints all available options.

--list-properties
Print a list of the available properties.

--list-protocols
Print a list of the supported protocols.

--log-file=<path>
Opens the given path for writing, and prints log messages to it. Existing files will be truncated. The log level always corresponds to -v, regardless of terminal verbosity levels.

-. Note that the --no-config option takes precedence over this option.

-. This behavior is disabled by default, but is always available when quitting the player with Shift+Q.

--watch-later-directory=<path>). This option is useful for debugging only.

--idle=<no|yes|once>
Makes mpv wait idly instead of quitting when there is no file to play. Mostly useful in input mode, where mpv can be controlled through input commands. once will only idle at start and let the player close once the first playlist has finished playing.)

Default: Do not reset anything. This can be changed with this option. It accepts a list of options, and mpv will reset the value of these options on playback start to the initial value. The initial value is either the default value, or as set by the config file or command line. In some cases, this might not work as expected. For example, --volume will only be reset if it is explicitly set in the config file or the command line.
The special name all resets as many options as possible. Examples · --reset-on-next-file=pause Reset pause mode when switching to the next file. · --reset-on-next-file=fullscreen,speed Reset fullscreen and playback speed settings if they were changed during playback. · --reset-on-next-file=all Try to reset all settings that were changed during playback. --write-filename-in-watch-later-config Prepend the watch later config files with the name of the file they refer to. This is simply written as comment on the top of the file. WARNING: This option may expose privacy-sensitive information and is thus disabled by default. --ignore-path-in-watch-later-config Ignore path (i.e. use filename only) when using watch later feature. --show-profile=<profile> Show the description and content of a profile. --use-filedir-conf Look for a file-specific configuration file in the same directory as the file that is being played. See File-specific Configuration Files. WARNING:.) If the script can't do anything with an URL, it will do nothing. The exclude script option accepts a |-separated list of URL patterns which mpv should not use with youtube-dl. The patterns are matched after the http(s):// part of the URL. ^ matches the beginning of the URL, $ matches its end, and you should use % before any of the characters ^$()%|,.[]*+-? to match that character. Examples · --script-opts=ytdl_hook-exclude='^youtube%.com' will exclude any URL that starts with or. · --script-opts=ytdl_hook-exclude='%.mkv$|%.mp4$' will exclude any URL that ends with .mkv or .mp4. See more lua patterns here: - =. There is no sanity checking so it's possible to break things (i.e. passing invalid parameters to youtube-dl). Example · --ytdl-raw-options=username=user,password=pass · --ytdl-raw-options=force-ipv6= -. NOTE:. The argument selects the drop methods, and can be one of the following: . NOTE: -. Set this option only if you have reason to believe the automatically determined value is wrong. 
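The per-file reset behavior and the youtube-dl exclusion patterns described above can also be set persistently in the configuration file. A minimal mpv.conf sketch, reusing the patterns shown in the examples above (the option values are illustrative, not defaults):

```
# mpv.conf sketch -- values are illustrative, not defaults

# Restore pause state and playback speed whenever the next
# playlist entry starts:
reset-on-next-file=pause,speed

# Keep the youtube-dl hook away from URLs ending in .mkv or .mp4
# (the pattern is matched after the http(s):// part of the URL):
script-opts=ytdl_hook-exclude='%.mkv$|%.mp4$'
```

Options given on the command line override these config file settings as usual.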
--hwdec=<api> Specify the hardware video decoding API that should be used if possible. Whether hardware decoding is actually done depends on the video codec. If hardware decoding is not possible, mpv will fall back on software decoding. <api> can be one of the following: no always use software decoding (default) auto enable best hw decoder (see below) yes exactly the same as auto auto-copy enable best hw decoder with copy-back (see below) vdpau requires --vo=vdpau or --vo=opengl (Linux only) vdpau-copy copies video back into system RAM (Linux with some GPUs only) vaapi requires --vo=opengl or --vo=vaapi (Linux only) vaapi-copy copies video back into system RAM (Linux with Intel GPUs only) videotoolbox requires --vo=opengl (OS X 10.8 and up), or --vo=opengl-cb (iOS 9.0 and up) videotoolbox-copy copies video back into system RAM (OS X 10.8 or iOS 9.0 and up) dxva2 requires --vo=opengl with --opengl-backend=angle or --opengl-backend=dxinterop (Windows only) dxva2-copy copies video back to system RAM (Windows only) d3d11va requires --vo=opengl with --opengl-backend=angle (Windows 8+ only) d3d11va-copy copies video back to system RAM (Windows 8+ only) mediacodec copies video back to system RAM (Android only) rpi requires --vo=opengl (Raspberry Pi only - default if available) rpi-copy copies video back to system RAM (Raspberry Pi only) cuda requires --vo=opengl (Any platform CUDA is available) cuda-copy copies video back to system RAM (Any platform CUDA is available) crystalhd copies video back to system RAM (Any platform supported by hardware) auto tries to automatically enable hardware decoding using the first available method. This still depends what VO you are using. For example, if you are not using --vo=vdpau or --vo=opengl, vdpau decoding will never be enabled. Also note that if the first found method doesn't actually work, it will always fall back to software decoding, instead of trying the next method (might matter on some Linux systems). 
auto-copy selects only modes that copy the video data back to system memory after decoding. Currently, this selects only one of the following modes: vaapi-copy, dxva2-copy, d3d11va-copy, mediacodec. If none of these work, hardware decoding is disabled. This mode is always guaranteed to incur no additional loss compared to software decoding, and will allow CPU processing with video filters.

The vaapi mode, if used with --vo=opengl, requires Mesa 11 and most likely works with Intel GPUs only. It also requires the opengl EGL backend (automatically used if available). You can also try the old GLX backend by forcing it with --opengl-backend=x11, but the vaapi/GLX interop is said to be slower than vaapi-copy. Use the vaapi-copy mode if the opengl VO is not being used or filters are required.

Quality reduction with hardware decoding

In theory, hardware decoding does not reduce video quality (at least for the codecs h264 and HEVC). However, due to restrictions in video output APIs, as well as bugs in the actual hardware decoders, there can be some loss, or even blatantly incorrect results.

In some cases, RGB conversion is forced, which means the RGB conversion is performed by the hardware decoding API, instead of the OpenGL code used by --vo=opengl. This means certain colorspaces may not display correctly, and certain filtering (such as debanding) cannot be applied in an ideal way. This will also usually force the use of low quality chroma scalers instead of the one specified by --cscale. In other cases, hardware decoding can also reduce the bit depth of the decoded image, which can introduce banding or precision loss for 10-bit files.

d3d11va is usually safe (if used with ANGLE builds that support the EGL_KHR_stream path - otherwise, it converts to RGB), except that 10 bit input (HEVC main 10 profiles) will be rounded down to 8 bits, which results in reduced quality.

dxva2 is not safe. It appears to always use BT.601 for forced RGB conversion, but actual behavior depends on the GPU drivers.
Some drivers appear to convert to limited range RGB, which gives a faded appearance. In addition to driver-specific behavior, global system settings might affect this additionally. This can give incorrect results even with completely ordinary video sources.

rpi always uses the hardware overlay renderer, even with --vo=opengl.

cuda should be safe, but it has been reported to corrupt the timestamps, causing glitched, flashing frames on some files. It can also sometimes cause massive framedrops for unknown reasons. Caution is advised.

crystalhd is not safe. It always converts to 4:2:2 YUV, which may be lossy, depending on how chroma sub-sampling is done during conversion. It also discards the top left pixel of each frame for some reason.

All other methods, in particular the copy-back methods (like dxva2-copy etc.) should hopefully be safe, although they can still cause random decoding issues. At the very least, they shouldn't affect the colors of the image. In particular, auto-copy will only select "safe" modes (although potentially slower than other methods), but there's still no guarantee the chosen hardware decoder will actually work correctly.

In general, it's very strongly advised to avoid hardware decoding unless absolutely necessary, i.e. if your CPU is insufficient to decode the file in question. If you run into any weird decoding issues, frame glitches or discoloration, and you have --hwdec turned on, the first thing you should try is disabling it.

--opengl-hwdec-interop=<name>
This is useful for the opengl and opengl-cb VOs for creating the hardware decoding OpenGL interop context, but without actually enabling hardware decoding itself (like --hwdec does). If set to an empty string (default), the --hwdec option is used.
For opengl, if set, do not create the interop context on demand, but when the VO is created.
For opengl-cb, if set, load the interop context as soon as the OpenGL context is created.
Since opengl-cb has no on-demand loading, this allows enabling hardware decoding at runtime at all, without having to temporarily set the hwdec option just during OpenGL context initialization with mpv_opengl_cb_init_gl(). See --opengl-hwdec-interop=help for accepted values. This lists the interop backend, with the --hwdec alias after it in [...]. Consider all values except the proper interop backend name, auto, and no as silently deprecated and subject to change. Also, if you use this in application code (e.g. via libmpv), any value other than auto and no should be avoided, as backends can change. Currently the option sets a single value. It is possible that the option type changes to a list in the future. The old alias --hwdec-preload has different behavior if the option value is no. -. This option has no effect if --video-unscaled option is used. --video-aspect=<ratio|no> Override video aspect ratio, in case aspect information is incorrect or missing in the file being played. See also --no-video-aspect. These values have special meaning: 0 disable aspect ratio handling, pretend the video has square pixels no same as 0 -1 use the video stream or container aspect (default) But note that handling of these special values might change in the future. Examples · --video-aspect=4:3 or --video-aspect=1.3333 · --video-aspect=16:9 or --video-aspect=1.7777 · -. The current default for mpv is container. Normally you should not set this. Try the various choices if you encounter video that has the wrong aspect ratio in mpv, but seems to be correct in other players. -. Note that the scaler algorithm may still be used, even if the video isn't scaled. For example, this can influence chroma conversion. The video will also still be scaled in one dimension if the source uses non-square pixels (e.g. anamorphic widescreen DVDs). This option is disabled if the --no-keepaspect option is used. -). 
For example, displaying a 1280x720 video fullscreen on a 1680x1050 screen with --video-pan-x=-0.1 would move the video 168 pixels to the left (making 128 pixels of the source video invisible). This option is disabled if the --no-keepaspect option is used. - done by inserting the stereo3d conversion filter.. --video-zoom=<value> Adjust the video display scale factor by the given value. The parameter is given log 2. For example, --video-zoom=0 is unscaled, --video-zoom=1 is twice the size, --video-zoom=-2 is one fourth of the size, and so on. This option is disabled if the --no-keepaspect option is used. -. If video and screen aspect match perfectly, these options do nothing. This option is disabled if the --no-keepaspect option is used. -. NOTE:. This behaves exactly like the deinterlace input property (usually mapped to d). Keep in mind that this will conflict with manually inserted deinterlacing filters, unless you take care. (Since mpv 0.27.0, even the hardware deinterlace filters will conflict. Also since that version, --deinterlace=auto was removed, which used to mean that the default interlacing option of possibly inserted video filters was used.) --frames=<number> Play/convert only first <number> video frames, then quit. --frames=0 loads the file, but immediately quits before initializing playback. (Might be useful for scripts which just want to determine some file properties.) For audio-only playback, any value greater than 0 will quit playback immediately after initialization. The value 0 works as with video. -. Not all VOs support this option. Some will silently ignore it. Available color ranges are: auto automatic selection (equals to full range) (default) limited limited range (16-235 per component), studio levels full full range (0-255 per component), PC levels NOTE: It is advisable to use your graphics driver's color range option instead, if available. --hwdec-codecs=<codec1,codec2,...|all> Allow hardware decoding for a given list of codecs only. 
The special value all always allows all codecs.

Note that the hardware acceleration special codecs like h264_vdpau are not relevant anymore, and in fact have been removed from Libav in this form.

This is usually only needed with broken GPUs, where a codec is reported as supported, but decoding causes more problems than it solves.

· opengl: requires at least OpenGL 4.4. (In particular, this can't be made to work with opengl-cb.)

Using video filters of any kind that write to the image data (or output newly allocated frames) will silently disable the DR code path. There are some corner cases that will result in undefined behavior (crashes and other strange behavior) if this option is enabled. These are pending towards being fixed properly at a later point.

-. Some options which used to be direct options can be set with this mechanism, like bug, gray, idct, ec, vismv, skip_top (was st), skip_bottom (was sb), debug.
Example
--vd-lavc-o=debug=pict

-. <skipvalue> can be.

-. You can list audio devices with --audio-device=help. This outputs the device name in quotes, followed by a description. The device name is what you have to pass to the --audio-device option. The list of audio devices can be retrieved by API by using the audio-device-list property.
While the option normally takes one of the strings as indicated by the methods above, you can also force the device for most AOs by building it manually. For example name/foobar forces the AO name to use the device foobar.

Example for ALSA
MPlayer and mplayer2 required you to replace any ',' with '.' and any ':' with '=' in the ALSA device name. For example, to use the device named dmix:default, you had to do:
-ao alsa:device=dmix=default
In mpv you could instead use:
--audio-device=alsa/dmix:default

--audio-exclusive=<yes|no>
Enable exclusive output mode. In this mode, the system is usually locked out, and only mpv will be able to output audio. This only works for some audio outputs, such as wasapi and coreaudio.
Other audio outputs silently ignore this option. They either have no concept of exclusive mode, or the mpv side of the implementation is missing.

-. Possible codecs are ac3, dts, dts-hd. Multiple codecs can be specified by separating them with ,. dts refers to low bitrate DTS core, while dts-hd refers to DTS MA (receiver and OS support varies). If both dts and dts-hd are specified, it behaves equivalently to specifying dts-hd only.
In earlier mpv versions you could use --ad to force the spdif wrapper. This does not work anymore.
Warning
There is not much reason to use this. HDMI supports uncompressed multichannel PCM, and mpv supports lossless DTS-HD decoding via FFmpeg's new DCA decoder (based on libdcadec).

-. - at the end of the list suppresses fallback on other available decoders not on the --ad list. + in front of an entry forces the decoder. Both of these should not normally be used, because they break normal decoder auto-selection! Both of these methods are deprecated.
Examples
--ad=mp3float
Prefer the FFmpeg/Libav mp3float decoder over all other MP3 decoders.
--ad=help
List all available decoders.
Warning
Enabling compressed audio passthrough (AC3 and DTS via SPDIF/HDMI) with this option is not possible. Use --audio-spdif instead.

--volume=<value>
Set the startup volume. 0 means silence, 100 means no volume reduction or amplification. Negative values can be passed for compatibility, but are treated as 0. Since mpv 0.18.1, this always controls the internal mixer (aka "softvol").

-.

--balance=<value>
How much left/right channels contribute to the audio. (The implementation of this feature is rather odd. It doesn't change the volumes of each channel, but instead sets up a pan matrix to mix the left and right channels.) Deprecated.

--audio-delay=<sec>
Audio delay in seconds (positive or negative float value). Positive values delay the audio, and negative values delay the video.

--mute=<yes|no|auto>
Set startup audio mute status (default: no).
auto is a deprecated possible value that is equivalent to no. See also: --volume. --softvol=<no|yes|auto> Deprecated/unfunctional. Before mpv 0.18.1, this used to control whether to use the volume controls of the audio output driver or the internal mpv volume filter. The current behavior is that softvol is always enabled, i.e. as if this option is set to yes. The other behaviors are not available anymore, although auto almost matches current behavior in most cases. The no behavior is still partially available through the ao-volume and ao-mute properties. But there are no options to reset these. -. The standard mandates that DRC is enabled by default, but mpv (and some other players) ignore this for the sake of better audio quality. - This and enabling passthrough via --ad are deprecated in favor of using --audio-spdif=dts-hd. --audio-channels=<auto-safe|auto|layouts> Control which audio channels are output (e.g. surround vs. stereo). There are the following possibilities: · --audio-channels=auto-safe Use the system's preferred channel layout. If there is none (such as when accessing a hardware device instead of the system mixer), force stereo. Some audio outputs might simply accept any layout and do downmixing on their own. This is the default. · --audio-channels=auto Send the audio device whatever it accepts, preferring the audio's original channel layout. Can cause issues with HDMI (see the warning below). · -. Using this mode is recommended for direct hardware output, especially over HDMI (see HDMI warning below). · --audio-channels=stereo Force a plain stereo downmix. This is a special-case of the previous item. (See paragraphs below for implications.) If a list of layouts is given, each item can be either an explicit channel layout name (like 5.1), or a channel number. Channel numbers refer to default layouts, e.g. 2 channels refer to stereo, 6 refers to 5.1. See --audio-channels=help output for defined default layouts. 
This also lists speaker names, which can be used to express arbitrary channel layouts (e.g. fl-fr-lfe is 2.1). If the list of channel layouts has only 1 item, the decoder is asked to produce according output. This sometimes triggers decoder-downmix, which might be different from the normal mpv downmix. (Only some decoders support remixing audio, like AC-3, AAC or DTS. You can use --ad-lavc-downmix=no to make the decoder always output its native layout.) One consequence is that --audio-channels=stereo triggers decoder downmix, while auto or auto-safe never will, even if they end up selecting stereo. This happens because the decision whether to use decoder downmix happens long before the audio device is opened. If the channel layout of the media file (i.e. the decoder) and the AO's channel layout don't match, mpv will attempt to insert a conversion filter. Warning Using auto can cause issues when using audio over HDMI. The OS will typically report all channel layouts that _can_ go over HDMI, even if the receiver does not support them. If a receiver gets an unsupported channel layout, random things can happen, such as dropping the additional channels, or adding noise. You are recommended to set an explicit whitelist of the layouts you want. For example, most A/V receivers connected via HDMI and that can do 7.1 would be served by: --audio-channels=7.1,5.1,stereo --audio-normalize-downmix=<yes|no> Enable/disable normalization if surround audio is downmixed to stereo (default: no). If this is disabled, downmix can cause clipping. If it's enabled, the output might be too silent. It depends on the source audio. Technically, this changes the normalize suboption of the lavrresample audio filter, which performs the downmixing. If downmix happens outside of mpv for some reason, this has no effect. --audio-display=<no|attachment> Setting this option to attachment (default) will display image attachments (e.g. album cover art) when playing audio files. 
It will display the first image found, and additional images are available as video tracks. Setting this option to no disables display of video entirely when playing audio files. This option has no influence on files with normal video tracks. --audio-files=<files> Play audio from an external file while viewing a video. This is a list option. See List Options for details. -. --softvol-max is a deprecated alias and should not be used. -. Making this larger will make soft-volume and other filters react slower, introduce additional issues on playback speed change, and block the player on audio format changes. A smaller buffer might lead to audio dropouts. This option should be used for testing only. If a non-default value helps significantly, the mpv developers should be contacted. Default: 0.2 (200 ms). -. Not all AOs support this. - NOTE:> Add a subtitle file to the list of external subtitles. If you use --sub-file only once, this subtitle file is displayed by default. If --sub-file is used multiple times, the subtitle to use can be switched at runtime by cycling subtitle tracks. It's possible to show two subtitles at once: use --sid to select the first subtitle index, and --secondary-sid to select the second index. (The index is printed on the terminal output after the --sid= in the list of streams.) This is a list option. See List Options for details. -. There are some caveats associated with this feature. For example, bitmap subtitles will always be rendered in their usual position, so selecting a bitmap subtitle as secondary subtitle will result in overlapping subtitles. Secondary subtitles are never shown on the terminal if video is disabled. NOTE:). NOTE:. Like --sub-scale, this can break ASS subtitles. -). Default: yes. This option is misnamed. The difference to the confusingly similar sounding option --sub-scale-by-window is that --sub-scale-with-window still scales with the approximate window size, while the other option disables this scaling. 
Affects plain text subtitles only (or ASS if --sub-ass-override is set high enough).

--sub-ass-scale-with-window=<yes|no>
Like --sub-scale-with-window, but affects subtitles in ASS format only. Like --sub-scale, this can break ASS subtitles. Default: no.

-.
NOTE: --sub-speed=25/23.976 plays frame based subtitles which have been loaded assuming a framerate of 23.976 at 25 FPS.

--sub-ass-force-style=<[Style.]Param=Value[,...]>
Override some style or script info parameters.
Examples
· --sub-ass-force-style=FontName=Arial,Default.Bold=1
· --sub-ass-force-style=PlayResY=768
NOTE: Enabling hinting can lead to mispositioned text (in situations where it's supposed to match up with the video background), or reduce the smoothness of animations with some badly authored ASS scripts. It is recommended to not use this option, unless really needed.

- complex is the default. If libass hasn't been compiled against HarfBuzz, libass silently reverts to simple.

--sub-ass-styles=<filename>
Load all SSA/ASS styles found in the specified file and use them for rendering text subtitles. The syntax of the file is exactly like the [V4 Styles] / [V4+ Styles] section of SSA/ASS.
NOTE:. Default: no.

--sub-use-margins
Enables placing toptitles and subtitles in black borders when they are available, if the subtitles are in a plain text format (or ASS if --sub-ass-override is set high enough). Default: yes.
Renamed from --sub-ass-use-margins. To place ASS subtitles in the borders too (like the old option did), also add --sub-ass-force-margins.

--sub-ass-vsfilter-aspect-compat=<yes|no>.

--sub-ass-vsfilter-blur-compat=<yes|no>
Scale \blur tags by video resolution instead of script resolution (enabled by default). This is a bug in VSFilter, which according to some, can't be fixed anymore in the name of compatibility. Note that this uses the actual video resolution for calculating the offset scale factor, not what the video filter chain or the video output use.

-.
Choosing anything other than no will make the subtitle color depend on the video color space, and it's for example in theory not possible to reuse a subtitle script with another video file. The --sub-ass-override option doesn't affect how this option is interpreted. --stretch-dvd-subs=<yes|no> Stretch DVD subtitles when playing anamorphic videos for better looking fonts on badly mastered DVDs. This switch has no effect when the video is stored with square pixels - which for DVD input cannot be the case though. Many studios tend to use bitmap fonts designed for square pixels when authoring DVDs, causing the fonts to look stretched on playback on DVD players. This option fixes them, however at the price of possibly misaligning some subtitles (e.g. sign translations). Disabled by default. -.) This option does not display subtitles correctly. Use with care. Disabled by default. -). NOTE:.) The default value for this option is auto, which enables autodetection. The following steps are taken to determine the final codepage, in order: · if the specific codepage has a +, use that codepage · if the data looks like UTF-8, assume it is UTF-8 · if --sub-codepage is set to a specific codepage, use that · run uchardet, and if successful, use that · otherwise, use UTF-8-BROKEN Examples · --sub-codepage=latin2 Use Latin 2 if input is not UTF-8. · --sub-codepage=+cp1250 Always force recoding to cp1250. The pseudo codepage UTF-8-BROKEN is used internally. If it's set, subtitles are interpreted as UTF-8 with "Latin 1" as fallback for bytes which are not valid UTF-8 sequences. iconv is never involved in this mode. This option changed in mpv 0.23.0. Support for the old syntax was fully removed in mpv 0.24.0. -. NOTE: <rate> > video fps speeds the subtitles up for frame-based subtitle files and slows them down for time-based ones. See also: --sub-speed. --sub-gauss=<0.0-3.0> Apply Gaussian blur to image subtitles (default: 0). This can help to make pixelated DVD/Vobsubs look nicer. 
A value other than 0 also switches to software subtitle scaling. Might be slow. NOTE: Never applied to text subtitles. --sub-gray Convert image subtitles to grayscale. Can help to make yellow DVD/Vobsubs look nicer. NOTE: Assuming that /path/to/video/video.avi is played and --sub-file-paths=sub:subtitles is specified, mpv searches for subtitle files in these directories: · /path/to/video/ · /path/to/video/sub/ · /path/to/video/subtitles/ · the sub configuration subdirectory (usually ~/.config/mpv/sub/) This is a list option. See List Options for details. - · --sub-font='Bitstream Vera Sans' · --sub-font='Comic Sans MS' NOTE:. Default: 55. -. NOTE: ignored when --sub-back-color is specified (or more exactly: when that option is not set to completely transparent). --sub-border-size=<size> Size of the sub font border in scaled pixels (see --sub-font-size for details). A value of 0 disables borders. Default: 3. --sub-color=<color> Specify the color used for unstyled text subtitles. The color is specified in the form r/g/b, where each color component is specified as number in the range 0.0 to 1.0. It's also possible to specify the transparency by using r/g/b/a, where the alpha value 0 means fully transparent, and 1.0 means opaque. If the alpha component is not given, the color is 100% opaque. Passing a single number to the option sets the sub to gray, and the form gray/a lets you specify alpha additionally. Examples · --sub-color=1.0/0.0/0.0 set sub to opaque red · --sub-color=1.0/0.0/0.0/0.75 set sub to opaque red with 75% alpha · --sub-color=0.5/0.75 set sub to 50% gray with 75% alpha Alternatively, the color can be specified as a RGB hex triplet in the form #RRGGBB, where each 2-digit group expresses a color value in the range 0 (00) to 255 (FF). For example, #FF0000 is red. This is similar to web colors. Alpha is given with #AARRGGBB. 
Examples
· --sub-color='#FF0000' set sub to opaque red
· --sub-color='#C0808080' set sub to 50% gray with 75% alpha

--sub-margin-x=<size>
Left and right screen margin for the subs in scaled pixels (see --sub-font-size for details). This option specifies the distance of the sub to the left, as well as at which distance from the right border long sub text will be broken. Default: 25.

--sub-margin-y=<size>
Top and bottom screen margin for the subs in scaled pixels (see --sub-font-size for details). This option specifies the vertical margins of unstyled text subtitles. If you just want to raise the vertical subtitle position, use --sub-pos. Default: 22.

--sub-align-x=<left|center|right>
Control to which corner of the screen text subtitles should be aligned (default: center). Never applied to ASS subtitles, except in --no-sub-ass mode. Likewise, this does not apply to image subtitles.

-. Default: 0.

--sub-spacing=<size>
Horizontal sub font spacing in scaled pixels (see --sub-font-size for details). This value is added to the normal letter spacing. Negative values are allowed. Default: 0.

-. Default: no.

--sub-filter-sdh-harder=<yes|no>
Do harder SDH filtering (if enabled by --sub-filter-sdh). Will also remove speaker labels and text within parentheses using both lower and upper case letters. Default: no.

Window

--title=<string>
Set the window title. This is used for the video window, and if possible, also sets the audio stream title. Properties are expanded. (See Property Expansion.)
WARNING:) This option does not work properly with all window managers. In these cases, you can try to use --geometry to position the window explicitly. It's also possible that the window manager provides native features to control which screens application windows should use. See also --fs-screen.

-) This option works properly only with window managers which understand the EWMH _NET_WM_FULLSCREEN_MONITORS hint.
Note (OS X) all does not work on OS X and will behave like current.
See also --screen. -. Normally, this will act like set pause yes on EOF, unless the --keep-open-pause=no option is set. The following arguments can be given:. NOTE: This option is not respected when using --frames. Explicitly skipping to the next file if the binding uses force will terminate playback as well. Also, if errors or unusual circumstances happen, the player can quit anyway. Since mpv 0.6.0, this doesn't pause if there is a next file in the playlist, or the playlist is looped. Approximately, this will pause when the player would normally exit, but in practice there are corner cases in which this is not the case (e.g. mpv --keep-open file.mkv /dev/null will play file.mkv normally, then fail to open /dev/null, then exit). (In mpv 0.8.0, always was introduced, which restores the old behavior.) -). Unlike --keep-open, the player is not paused, but simply continues playback until the time has elapsed. (It should not use any resources during "playback".) This affects image files, which are defined as having only 1 video frame and no audio. The player may recognize certain non-images as images, for example if --length is used to reduce the length to 1 frame, or if you seek to the last frame. This option does not affect the framerate used for mf:// or --merge-files. For that, use --mf-fps instead. -. WARNING:). Enabled by default. --snap-window (Windows only) Snap the player window to screen edges. --ontop Makes the player window stay on top of other windows. On Windows, if combined with fullscreen mode, this causes mpv to be treated as exclusive fullscreen window that bypasses the Desktop Window Manager. -, this option is ignored. The coordinates are relative to the screen given with --screen for the video output drivers that fully support --screen. NOTE: Generally only supported by GUI VOs. Ignored for encoding. Note (X11) This option does not work properly with all window managers.. 
See also --autofit and --autofit-larger for fitting the window into a given size without changing aspect ratio. -. This option never changes the aspect ratio of the window. If the aspect ratio mismatches, the window's size is reduced until it fits into the specified size. Window position is not taken into account, nor is it modified by this option (the window manager still may place the window differently depending on size). Use --geometry to change the window position. Its effects are applied after this option. See --geometry for details how this is handled with multi-monitor setups. Use --autofit-larger instead if you just want to limit the maximum size of the window, rather than always forcing a window size. Use --geometry if you want to force both window width and height to a specific size. NOTE:). For example, --window-scale=0.5 would show the window at half the video size. - vdpau, opengl,. --heartbeat-cmd=<command> WARNING: This option is redundant with Lua scripting. Further, it shouldn't be needed for disabling screensaver anyway, since mpv will call xdg-screensaver when using X11 backend. As a consequence this option has been deprecated with no direct replacement. Command that is executed every 30 seconds during playback via system() - i.e. using the shell. The time between the commands can be customized with the --heartbeat-interval option. The command is not run while playback is paused. NOTE: mpv uses this command without any checking. It is your responsibility to ensure it does not cause security problems (e.g. make sure to use full paths if "." is in your path like on Windows). It also only works when playing video (i.e. not with --no-video but works with --vo=null). This can be "misused" to disable screensavers that do not support the proper X API (see also --stop-screensaver). If you think this is too complicated, ask the author of the screensaver program to support the proper X APIs. 
Note that the --stop-screensaver does not influence the heartbeat code at all. Example for xscreensaver mpv -- mp_pipe and the pipe will stay valid. --input-terminal, --no-input-terminal --no-input-terminal prevents the player input commands. --input-ipc-server=<filename> Enable the IPC support and create the listening socket at the given path. On Linux and Unix, the given path is a regular filesystem path. On Windows, named pipes are used, so the path refers to the pipe namespace (\\.\pipe\<name>). If the \\.\pipe\ prefix is missing, mpv will add it automatically before creating the pipe, so --input-ipc-server=/tmp/mpv-socket and --input-ipc-server=\\.\pipe\tmp\mpv-socket are equivalent for IPC on Windows. See JSON IPC for. On X11, a sub-window with input enabled grabs all keyboard input as long as it is 1. a child of a focused window, and 2. the mouse is inside of the sub-window. It can steal away all keyboard input from the application embedding the mpv window, and on the other hand, the mpv window will receive no input if the mouse is outside of the mpv window, even though mpv has focus. Modern toolkits work around this weird X11 behavior, but naively embedding foreign windows breaks it. The only way to handle this reasonably is using the XEmbed protocol, which was designed to solve these problems. GTK provides GtkSocket, which supports XEmbed. Qt doesn't seem to provide anything working in newer versions. If the embedder supports XEmbed, input should work with default settings and with this option disabled. Note that input-default-bindings is disabled by default in libmpv as well - it should be enabled if you want the mpv default key bindings. (This option was renamed from --input-x11-keyboard.) OSD --osc, --no-osc Whether to load the on-screen-controller (default: yes). --no-osd-bar, --osd-bar Disable display of the OSD bar. This will make some things (like seeking) use OSD text messages instead of the bar. 
You can configure this on a per-command basis in input.conf using osd- prefixes, see Input command prefixes. If you want to disable the OSD completely, use --osd-level=0. --osd-duration=<time> Set the duration of the OSD messages in ms (default: 1000). --osd-font=<name> Specify font to use for OSD. The default is sans-serif. Examples · --osd-font='Bitstream Vera Sans' · --osd-font='Comic Sans MS' --osd-font-size=<size> Specify the OSD font size. See --sub-font-size for details. Default: 55. -. This is also used for the show-progress command (by default mapped to P), or in some non-default cases when seeking. --osd-status-msg is a legacy equivalent (but with a minor. This option has been replaced with --osd-msg3. The only difference is that this option implicitly includes ${osd-sym-cc}. This option is ignored if --osd-msg3 is not empty. --osd-playing-msg=<string> Show a message on OSD when playback starts. The string is expanded for properties, e.g. --osd-playing-msg='file: ${filename}' will show the message file: followed by a space and the currently played filename. See Property Expansion. -. NOTE: ignored when --osd-back-color is specified (or more exactly: when that option is not set to completely transparent). --osd-border-size=<size> Size of the OSD font border in scaled pixels (see --sub-font-size for details). A value of 0 disables borders. Default: 3. -). This option specifies the distance of the OSD to the left, as well as at which distance from the right border long OSD text will be broken. Default: 25. --osd-margin-y=<size> Top and bottom screen margin for the OSD in scaled pixels (see --sub-font-size for details). This option specifies the vertical margins of the OSD. Default: 22. -. Default: 0. --osd-spacing=<size> Horizontal OSD/sub font spacing in scaled pixels (see --sub-font-size for details). This value is added to the normal letter spacing. Negative values are allowed. Default: 0.
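Taken together, the OSD options above can be collected in a config file. The following is only an illustrative sketch (the font name and all values are examples, not defaults; note that in mpv config files comments must be on their own lines):

```
# mpv.conf sketch: OSD styling and a property-expanded startup message
osd-font=Bitstream Vera Sans
osd-font-size=40
osd-duration=2000
osd-playing-msg=file: ${filename}
```

The last line uses Property Expansion as described above, so the message shows the currently played filename when playback starts.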
--video-osd=<yes|no> Enabled OSD rendering on the video window (default: yes). This can be used in situations where terminal OSD is preferred. If you just want to disable all OSD rendering, use --osd-level=0. It does not affect subtitles or overlays created by scripts (in particular, the OSC needs to be disabled with --no-osc). This option is somewhat experimental and could be replaced by another mechanism in the future. Screenshot --screenshot-format=<type> Set the image file type used for saving screenshots. Available choices: png PNG jpg JPEG (default) jpeg JPEG (alias for jpg) --screenshot-tag-colorspace=<yes|no> Tag screenshots with the appropriate colorspace. Note that not all formats are supported. Default: no. -. The template can start with a relative or absolute path, in order to specify a directory location where screenshots should be saved. If the final screenshot filename points to an already existing file, the file will not be overwritten. The screenshot will either not be saved, or if the template contains %n, saved using different, newly generated filename. Allowed format specifiers: %[#]. NOTE:. If the directory does not exist, it is created on the first screenshot. If it is not a directory, an error is generated when trying to write a screenshot. This option is not set by default, and thus will write screenshots to the directory from which mpv was started. In pseudo-gui mode (see PSEUDO GUI MODE), this is set to the desktop. -. To get a list of available scalers, run --sws-scaler=help. Default: bicubic. -). See also: --really-quiet and --msg-level. --really-quiet Display even less output and status messages than with --quiet. --no-terminal, --terminal Disable any use of the terminal and stdin/stdout/stderr. This completely silences any message output. Unlike --really-quiet, this disables input and terminal initialization as well. --no-msg-color Disable colorful console output on terminals. 
--msg-level=<module1=level1,module2=level2,...> Control verbosity directly for each module. The all module changes the verbosity of all the modules not explicitly specified on the command line. Run mpv with --msg-level=all=trace to see all messages mpv outputs. You can use the module names printed in the output (prefixed to each line in [...]) to limit the output to interesting modules. NOTE: Some messages are printed before the command line is parsed and are therefore not affected by --msg-level. To control these messages, you have to use the MPV_VERBOSE environment variable; see ENVIRONMENT VARIABLES for details. Available levels: no complete silence fatal fatal messages only error error messages warn warning messages info informational messages status status messages (default) v verbose messages debug debug messages trace very noisy debug messages Example mpv --msg-level=ao/sndio=no Completely silences the output of ao_sndio, which uses the log prefix [ao/sndio]. mpv --msg-level=all=warn,ao/alsa=error Only show warnings or worse, and let the ao_alsa output show errors only. --term-osd=<auto|no|force> Control whether OSD messages are shown on the console when no video output is available (default: auto). auto use terminal OSD if no video output active no disable terminal OSD force use terminal OSD even if video output active The auto mode also enables terminal OSD if --video-osd=no was set. -. Default: [-+-]. --term-playing-msg=<string> Print out a string after starting playback. The string is expanded for properties, e.g. --term-playing-msg='file: ${filename}' will print the string file: followed by a space and the currently played filename. See Property Expansion. -. See also: --tv-normid. --tv-normid=<value> (v4l2 only) Sets the TV norm to the given numeric ID. The TV norm depends on the capture card. See the console output for a list of available TV norms.
--tv --tv input commands tv_step_channel, tv_set_channel and tv_last_channel will be usable for a remote control. Not compatible with the frequency parameter. NOTE: The channel number will then be the position in the 'channels' list, beginning with 1. Examples tv://1, tv://TV1, tv_set_channel 1, tv_set_channel TV1 - · 704x576 PAL · 704x480 NTSC 2 medium size · 352x288 PAL · 352x240 NTSC 4 small size · 176x144 PAL ·. --cache-default=<kBytes|no> Set the size of the cache in kilobytes (default: 75000 KB). --cache-backbuffer=<kBytes> Size of the cache back buffer (default: 75000 KB). This will add to the total cache size, and reserves the amount for seeking back. The reserved amount will not be used for readahead, and instead preserves already read data to enable fast seeking back. --cache-file=<TMP|path> Create a cache file on the filesystem. There are two ways of using this: 1. Passing a path (a filename). The file will always be overwritten. When the general cache is enabled, this file cache will be used to store whatever is read from the source stream.. If you want to use a file cache, this mode is recommended, because it doesn't break ordered chapters or --audio-file. These modes open multiple cache streams, and using the same file for them obviously clashes. See also: --cache-file-size. --cache-file-size=<kBytes> Maximum size of the file created with --cache-file. For read accesses above this size, the cache is simply not used. Keep in mind that some use-cases, like playing ordered chapters with cache enabled, will actually create multiple cache files, each of which will use up to this much disk space. (Default: 1048576, 1 GB.) -. (Default: 10.) -' \ Will generate HTTP request:.) Additionally, if the option is a number, the stream with the highest rate equal or below the option value is selected. The bitrate as used is sent by the server, and there's no guarantee it's actually meaningful. Default: no).
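Since the cache options above take sizes in kilobytes, it can help to relate a cache size to seconds of readahead for a network stream. A rough back-of-the-envelope sketch (the bitrate value is illustrative; real streams have variable bitrates):

```python
# Rough sketch: seconds of stream covered by an mpv cache of a given size.
# cache_kbytes is what --cache-default / --cache-backbuffer take (kilobytes);
# bitrate_kbps is the stream bitrate in kilobits per second (assumed known).
def cache_seconds(cache_kbytes: int, bitrate_kbps: int) -> float:
    # kilobytes -> kilobits (x8), then divide by kilobits per second
    return cache_kbytes * 8 / bitrate_kbps

# The default 75000 KB cache on a hypothetical 5000 kbps stream:
print(cache_seconds(75000, 5000))  # 120.0 seconds of readahead
```

This is only an estimate; actual readahead also depends on demuxer behavior and the back buffer reservation described above.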
OpenGL renderer options The following video options are currently all specific to --vo=opengl opengl. (This filter is an alias for sinc-windowed sinc) ewa_lanczos Elliptic weighted average Lanczos scaling. Also known as Jinc. Relatively slow, but very good quality. The radius can be controlled with scale-radius. Increasing the radius makes the filter sharper but adds more ringing. (This filter is an alias for jinc-windowed jinc). There are some more filters, but most are not as useful. For a complete list, pass help as value, e.g.:. Note that the maximum supported filter radius is currently 3, due to limitations in the number of video textures that can be loaded simultaneously. -. Note that depending on filter implementation details and video scaling ratio, the radius that actually being used might be different (most likely being increased a bit). -. Note that this doesn't affect the special filters bilinear and bicubic_fast, nor does it affect any polar (EWA) scalers. -. All weights are linearly interpolated from those samples, so increasing the size of lookup table might improve the accuracy of scaler. - --opengl-fbo-format that has at least 16 bit precision. This option has no effect on HDR content. --correct-downscaling When using convolution based filters, extend the filter size when downscaling. Increases quality, but reduces performance while downscaling. This will perform slightly sub-optimally for anamorphic video (but still better than without it) since it will extend the size to match only the milder of the scale factors between the axes. --interpolation Reduce stuttering caused by mismatches in the video fps and display refresh rate (also known as judder). WARNING: This requires setting the --video-sync option to one of the display- modes, or it will be silently disabled. This was not required before mpv 0.14.0. This essentially attempts to interpolate the missing frames by convoluting the video along the temporal axis. 
The filter used can be controlled using the --tscale setting. Note that this relies on vsync to work, see --opengl-swapinterval for more information. It should also only be used with an --opengl-fbo-format that has at least 16 bit precision. -.) The default is intended to almost always enable interpolation if the playback rate is even slightly different from the display refresh rate. But note that if you use e.g. --video-sync=display-vdrop, small deviations in the rate can disable interpolation and introduce a discontinuity every other minute. Set this to -1 to disable this logic. -. Note that the depth of the connected video display device cannot be detected. Often, LCD panels will do dithering on their own, which conflicts with this option and leads to ugly). Used in --dither=fruit mode only. -. --opengl-debug Check for OpenGL errors, i.e. call glGetError(). Also, request a debug OpenGL context (which does nothing with current graphics drivers as of this writing). --opengl-swapinterval=<n> Interval in displayed frames between two buffer swaps. 1 is equivalent to enable VSYNC, 0 to disable VSYNC. Defaults to 1 if not specified. Note that this depends on proper OpenGL vsync support. On some platforms and drivers, this only works reliably when in fullscreen mode. It may also require driver-specific hacks if using multiple monitors, to ensure mpv syncs to the right one. Compositing window managers can also lead to bad results, as can missing or incorrect display FPS information (see --display-fps). --opengl The syntax is not stable yet and may change any time. The general syntax of a user shader looks like this: //!METADATA ARGS... //!METADATA ARGS... vec4 hook() { ... return something; } //!METADATA ARGS... //!METADATA ARGS... ... Each section of metadata, along with the non-metadata lines after it, defines a single block. There are currently two types of blocks, HOOKs and TEXTUREs. A TEXTURE block can set the following options: opengl. 
Although format names follow a common naming convention, not all of them are available on all hardware, drivers, GL versions, and so on. FILTER <LINEAR|NEAREST> The min/magnification filter used when sampling from this texture. BORDER <CLAMP|REPEAT|MIRROR> The border wrapping mode used when sampling from this texture. Following the metadata is a string of bytes in hexadecimal notation that define the raw texture data, corresponding to the format specified by FORMAT, on a single line with no extra whitespace. A HOOK block can set the following options:. Compute shaders in mpv are treated a bit different from fragment shaders. Instead of defining a vec4 hook that produces an output sample, you directly define void hook which writes to a fixed writeonly image unit named out_image (this is bound by mpv) using imageStore. To help translate texture coordinates in the absence of vertices, mpv provides a special function NAME_map(id) to map from the texel space of the output image to the texture coordinates for all bound textures. In particular, NAME_pos is equivalent to NAME_map(gl_GlobalInvocationID), although using this only really makes sense if (tw,th) == (bw,bh). Each bound mpv texture (via BIND) will make available the following definitions to that shader pass, where NAME is the name of the bound texture:. Normally, users should use either NAME_tex or NAME_texOff to read from the texture. For some shaders however , it can be better for performance to do custom sampling from NAME_raw, in which case care needs to be taken to respect NAME_mul and NAME_rot. In addition to these parameters, the following uniforms are also globally available:. Internally, vo_opengl may generate any number of the following textures. Whenever a texture is rendered and saved by vo_opengl, all of the passes that have hooked into it will run, in the order they were added by the user. This is a list of the legal hook points:. 
Only the textures labelled with resizable may be transformed by the pass. When overwriting a texture marked fixed, the WIDTH, HEIGHT and OFFSET must be left at their default values. --opengl-shader=<file> CLI/config file only alias for --opengl) If you increase the --deband-iterations, you should probably decrease this to compensate. - and after. X11/GLX only. --opengl-vsync-fences=<N> Synchronize the CPU to the Nth past frame using the GL_ARB_sync extension. A value of 0 disables this behavior (default). A value of 1 means it will synchronize to the current frame after rendering it. Like --glfinish and --waitvsync, this can lower or ruin performance. Its advantage is that it can span multiple frames, and effectively limit the number of frames the GPU queues ahead (which also has an influence on vsync). -). The value auto will try to determine whether the compositor is active, and calls DwmFlush only if it seems to be. This may help to get more consistent frame intervals, especially with high-fps clips - which might also reduce dropped frames. Typically, a value of windowed should be enough, since full screen may bypass the DWM. Windows only. - --opengl-dumb-mode). Windows with ANGLE only. -. Windows with ANGLE only. -. If set to yes, the --angle-max-frame-latency, --angle-swapchain-length and --angle-flip options will have no effect. Windows with ANGLE only. --angle-flip=<yes|no> Enable flip-model presentation, which avoids unnecessarily copying the backbuffer by sharing surfaces with the DWM (default: yes). This may cause performance issues with older drivers. If flip-model presentation is not supported (for example, on Windows 7 without the platform update), mpv will automatically fall back to the older bitblt presentation model. If set to no, the --angle-swapchain-length option will have no effect. Windows with ANGLE only. 
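The user shader block syntax described earlier (//!METADATA lines followed by a hook function) can be made concrete with a minimal pass-through shader. This is a sketch only: the MAINPRESUB hook point is an assumption for illustration (substitute any legal hook point from the list), and the DESC line is optional metadata:

```glsl
//!HOOK MAINPRESUB
//!BIND HOOKED
//!DESC pass-through example

// Sample the bound (hooked) texture at the current position and return
// it unchanged. Replace the body with an actual filtering operation.
vec4 hook() {
    return HOOKED_tex(HOOKED_pos);
}
```

Here HOOKED_tex and HOOKED_pos are the per-texture definitions mpv makes available for each BIND, as described above; saving this to a file and loading it with --opengl-shader should leave the image unmodified.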
--angle-max-frame-latency=<1-16> Sets the maximum number of frames that the system is allowed to queue for rendering with the ANGLE backend (default: 3). Lower values should make VSync timing more accurate, but a value of 1 requires powerful hardware, since the CPU will not be able to "render ahead" of the GPU. Windows with ANGLE only. - --opengl-dumb-mode). Windows with ANGLE only. --angle-swapchain-length=<2-16> Sets the number of buffers in the D3D11 presentation queue when using the ANGLE backend (default: 6). At least 2 are required, since one is the back buffer that mpv renders to and the other is the front buffer that is presented by the DWM. Additional buffers can improve performance, because for example, mpv will not have to wait on the DWM to release the front buffer before rendering a new frame to it. For this reason, Microsoft recommends at least 4. Windows with ANGLE only. --cocoa-force-dedicated-gpu=<yes|no> Deactivates the automatic graphics switching and forces the dedicated GPU. (default: no) OS X only. --opengl-sw Continue even if a software renderer is detected. --opengl-backend=<sys> The value auto (the default) selects the windowing backend. You can also pass help to get a complete list of compiled in backends (sorted by autoprobe order). auto auto-select (default) cocoa Cocoa/OS X win Win32/WGL. x11 X11/GLX x11probe For internal autoprobing, equivalent to x11 otherwise. Don't use directly, it could be removed without warning as autoprobing is changed. wayland Wayland/EGL drm DRM/EGL (drm-egl is a deprecated alias) x11egl X11/EGL. --opengl-es=<mode> Select whether to use GLES: yes Try to prefer ES over Desktop GL force2 Try to request a ES 2.0 context (the driver might ignore this) no Try to prefer desktop GL over ES auto Use the default for each backend (default) --opengl32f. Default: auto, which maps to rgba16 on desktop GL, and rgba16f or rgb10_a2 on GLES (e.g. ANGLE), unless GL_EXT_texture_norm16 is available. 
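The backend and GLES options above can be combined in a config file. A hypothetical sketch (whether x11egl and GLES are usable depends on how mpv was compiled and on the driver; --opengl-backend=help lists what is actually available):

```
# mpv.conf sketch: force the X11/EGL backend and prefer GLES
opengl-backend=x11egl
opengl-es=yes
```

If the requested backend is not compiled in, mpv will fail to initialize the VO rather than silently falling back, so auto remains the safer default.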
--opengl-gamma=<0.1..2.0> Set a gamma value (default: 1.0). If gamma is adjusted in other ways (like with the --gamma option or key bindings and the gamma property), the value is multiplied with the other gamma value. Recommended values based on the environmental brightness: 1.0 Brightly illuminated (default) 0.9 Slightly dim 0.8 Pitch black room NOTE: Typical movie content (Blu-ray etc.) already contains a gamma drop of about 0.8, so specifying it here as well will result in even darker image than intended! --gamma-auto Automatically corrects the gamma value depending on ambient lighting conditions (adding a gamma boost for dark rooms). With ambient illuminance of 64lux, mpv will pick the 1.0 gamma value (no boost), and slightly increase the boost up until 0.8 for 16lux. NOTE: Only implemented on OS X. - NOTE: of 2.0 is somewhat conservative and will mostly just apply to skies or directly sunlit surfaces. A setting of 0.0 disables this option. --gamut to set the gamut to simulate. For example, --target-gamut. NOTE: On Windows, the default profile must be an ICC profile. WCS profiles are not supported. -. NOTE: This is not cleaned automatically, so old, unused cache files may stick around indefinitely. -, --opengl-gamma and --post-shader. It also increases subtitle performance when using --interpolation. The downside of enabling this is that it restricts subtitles to the visible portion of the video, so you can't have subtitles exist in the black margins below a video (for example). If video is selected, the behavior is similar to yes, but subs are drawn at the video's native resolution, and scaled along with the video. WARNING: --opengl. --opengl-tex-pad-x, --opengl). --opengl-dumb-mode=<yes|no|auto> This mode is extremely restricted, and will disable most extended OpenGL features. That includes high quality scalers and custom shaders! 
It is intended for hardware that does not support FBOs (including GLES, which supports it insufficiently), or to get some more performance out of bad or old hardware. This mode is forced automatically if needed, and this option is mostly useful for debugging. The default of auto will enable it automatically if nothing uses features which require FBOs. This option might be silently removed in the future. --opengl-shader-cache-dir=<dirname> Store and load compiled GL shaders in this directory. Normally, shader compilation is very fast, so this is usually not needed. But some GL implementations (notably ANGLE, the default on Windows) have relatively slow shader compilation, and can cause startup delays. NOTE: This is not cleaned automatically, so old, unused cache files may stick around indefinitely. This option might be silently removed in the future, if ANGLE fixes shader compilation speed. --cuda-decode-device=<auto|0..> Choose the GPU device used for decoding when using the cuda hwdec.. The default includes a common list of tags, call mpv with --list-options to see it. -. If you use this option, you usually want to set it to display-resample to enable a timing mode that tries to not skip or repeat frames when for example playing 24fps video on a 24Hz screen. The e.g. mkv). If the sync code detects severe A/V desync, or the framerate cannot be detected, the player automatically reverts to audio mode for some time or permanently. The modes with desync in their names do not attempt to keep audio/video in sync. They will slowly (or quickly) desync, until e.g. the next seek happens. These modes are meant for testing, not serious use.. The default settings are not loose enough to speed up 23.976 fps video to 25 fps. We consider the pitch change too extreme to allow this behavior by default. Set this option to a value of 5 to enable it. 
Note that in the --video-sync=display-resample mode, audio speed will additionally be changed by a small amount if necessary for A/V sync. See --video-sync-max-audio-change. -. Possible values of <prio>: idle|belownormal|normal|abovenormal|high|realtime WARNING:). Unlike --sub-files and --audio-files, this includes all tracks, and does not cause default stream selection over the "proper" file. This makes it slightly less intrusive. This is a list option. See List Options for details. --external-file=<file> CLI/config file only alias for --external-files-append. Each use of this option will add a new external files. --autoload-files=<yes|no> Automatically load/select external files (default: yes). If set to no, then do not automatically load external files as specified by --sub-auto and --audio-file-auto. If external files are forcibly added (like with --sub-files), they will not be auto-selected. This does not affect playlist expansion, redirection, or other loading of referenced files like with ordered chapters. --record-file=<file> Record the current stream to the given target file. The target file will always be overwritten without asking.: · A label of the form aidN selects audio track N as input (e.g. aid1). · A label of the form vidN selects video track N as input. · A label named ao will be connected to the audio output. ·. Examples · --lavfi-complex='[aid1] asplit [ao] [t] ; [t] aphasemeter [vo]' Play audio track 1, and visualize it as video using the aphasemeter filter. · --lavfi-complex='[aid1] [aid2] amix [ao]' Play audio track 1 and 2 at the same time. · -). · --lavfi-complex='[aid1] asplit [ao] [t] ; [t] aphasemeter [t2] ; [vid1] [t2] overlay [vo]' Play audio track 1, and overlay its visualization over video track 1. · --lavfi-complex='[aid1] asplit [t1] [ao] ; [t1] showvolume [t2] ; [vid1] [t2] overlay [vo]' Play audio track 1, and overlay the measured volume for each speaker over video track 1. 
· null:// --lavfi-complex='life [vo]' Conways' Life Game. See the FFmpeg libavfilter documentation for details on the available filters. AUDIO OUTPUT DRIVERS Audio output drivers are interfaces to different audio output facilities. The syntax is: --ao=<driver1,driver2,...[,]> Specify a priority list of audio output drivers to be used.: alsa (Linux only) ALSA audio output driver See ALSA audio output options for options specific to this AO. WARNING: The following global options are supported by this audio output: -. The following global options are supported by this audio output: -. Automatically redirects to coreaudio_exclusive when playing compressed formats. The following global options are supported by this audio output: - NOTE: This driver is not very useful. Playing multi-channel audio with it is slow. pulse PulseAudio audio output driver The following global options are supported by this audio output: -). If you have stuttering video when using pulse, try to enable this option. (Or try to update PulseAudio.) sdl SDL 1.2+ audio output driver. Should work on any platform supported by SDL 1.2, but may require the SDL_AUDIODRIVER environment variable to be set appropriately for your system. NOTE: This driver is for compatibility with extremely foreign environments, such as systems where none of the other drivers are available. The following global options are supported by this audio output: -. The following global options are supported by this audio output: - The following global options are supported by this < >). NOTE: Completely useless, unless you intend to run RSound. Not to be confused with RoarAudio, which is something completely different. sndio Audio output to the OpenBSD sndio sound system NOTE: Experimental. There are known bugs and issues. (Note: only supports mono, stereo, 4.0, 5.1 and 7.1 channel layouts.) wasapi Audio output to the Windows Audio Session API. 
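The --ao priority-list syntax described at the top of this section can also be set persistently in a config file. A small sketch (the driver names are examples and must exist on your platform; the trailing comma keeps the fall-back-to-remaining-drivers behavior):

```
# mpv.conf sketch: try PulseAudio first, then ALSA,
# then fall back to any other compiled-in driver (trailing comma).
ao=pulse,alsa,
```

Without the trailing comma, mpv would only try the listed drivers and fail if neither works.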
VIDEO OUTPUT DRIVERS Video output drivers are interfaces to different video output facilities. The syntax is: --vo=<driver1,driver2,...[,]> Specify a priority list of video output drivers to be used. If the list has a trailing ,, mpv will fall back on drivers not contained in the list. NOTE: See --vo=help for a list of compiled-in video output drivers. The recommended output driver is --vo=opengl, which is the default. All other drivers are for compatibility or special purposes. If the default does not work, it will fallback to other drivers (in the same order as listed by --vo=help). Available video output drivers are: xv (X11 only) Uses the XVideo extension to enable hardware-accelerated display. This is the most compatible VO on X, but may be low-quality, and has issues with OSD and subtitle display. NOTE: This driver is for compatibility with old systems. The following global options are supported by this video output: -. NOTE: This is a fallback only, and should not be normally used. vdpau (X11 only) Uses the VDPAU interface to display and optionally also decode video. Hardware decoding is used with --hwdec=vdpau.. The following global options are supported by this video output: --vo-vdpau-sharpen=<-1-1> (Deprecated. See note about vdpaupp.) For positive values, apply a sharpening algorithm to the video, for negative values a blurring algorithm (default: 0). --vo-vdpau-denoise=<0-1> (Deprecated. See note about vdpaupp.) Apply a noise reduction algorithm to the video (default: 0; no noise reduction). --vo-vdpau-deint=<-4-4> (Deprecated. See note about vdpaupp.) Select deinterlacing mode (default: 0). In older versions (as well as MPlayer/mplayer2) you could use this option to enable deinterlacing. This doesn't work anymore, and deinterlacing is enabled with either the d key (by default mapped to the command cycle deinterlace), or the --deinterlace option. 
Also, to select the default deint mode, you should use something like --vf-defaults=vdpaupp:deint-mode=temporal instead of this sub-option..) Makes temporal deinterlacers operate both on luma and chroma (default). Use no-chroma-deint to solely use luma and speed up advanced deinterlacing. Useful with slow video memory. --vo-vdpau-pullup (Deprecated. See note about vdpaupp.) Try to apply inverse telecine, needs motion adaptive temporal deinterlacing. -. Using the VDPAU frame queuing functionality controlled by the queuetime options makes mpv's frame flip timing less sensitive to system CPU load and allows mpv no reason to disable this for fullscreen mode (as the driver issue should. direct3d (Windows only) Video output driver that uses the Direct3D interface. NOTE:. The following global options are supported by this video output: -. Debug options. These might be incorrect, might be removed in the future, might crash, might cause slow downs, etc. Contact the developers if you actually need any of these for performance or proper operation. -. opengl OpenGL video output driver. It supports extended scaling methods, dithering and color management. See OpenGL renderer options for options specific to this VO. By default, it tries to use fast and fail-safe settings. Use the opengl-hq profile to use this driver with defaults set to high quality rendering. (This profile is also the replacement for --vo=opengl-hq.) The profile can be applied with --profile=opengl-hq and its contents can be viewed with --show-profile=opengl-hq. Requires at least OpenGL 2.1. Some features are available with OpenGL 3 capable graphics drivers only (or if the necessary extensions are available). OpenGL ES 2.0 and 3.0 are supported as well. Hardware decoding over OpenGL-interop is supported to some degree. Note that in this mode, some corner case might not be gracefully handled, and color space conversion and chroma upsampling is generally in the hand of the hardware decoder APIs. 
opengl makes use of FBOs by default. Sometimes you can achieve better quality or performance by changing the --opengl-fbo-format option to rgb16f, rgb32f or rgb. Known problems include Mesa/Intel not accepting rgb16, Mesa sometimes not being compiled with float texture support, and some OS X setups being very slow with rgb16 but fast with rgb32f. If you have problems, you can also try enabling the --opengl-dumb-mode=yes option. sdl SDL 2.0+ Render video output driver, depending on system with or without hardware acceleration. Should work on all platforms supported by SDL 2.0. For tuning, refer to your copy of the file SDL_hints.h. NOTE: This driver is for compatibility with systems that don't provide proper graphics drivers, or which support GLES only. The following global options are supported by this video output: -. NOTE: This driver is for compatibility with crappy systems. You can use vaapi hardware decoding with --vo=opengl too. The following global options are supported by this video output: -). This option doesn't apply if libva supports video post processing (vpp). In this case, the default for deint-mode is no, and enabling deinterlacing via user interaction using the methods mentioned above actually inserts the vavpp video filter. If vpp is not actually supported with the libva backend in use, you can use this option to forcibly enable VO based deinterlacing.. Usually, it's better to disable video with --no-video instead. The following global options are supported by this video output: --vo-null-fps=<value> Simulate display FPS. This artificially limits how many frames the VO accepts per second. caca Color ASCII art video output driver that works on a text console. NOTE:. The following global options are supported by this video output: -. NOTE: This driver is for compatibility with systems that don't provide working OpenGL drivers. The following global options are supported by this video output: ->.) 
This also supports many of the options the opengl VO has. rpi (Raspberry Pi) Native video output on the Raspberry Pi using the MMAL API. This is deprecated. Use --vo=opengl instead, which is the default and provides the same functionality. The rpi VO will be removed in mpv 0.23.0. Its functionality was folded into --vo=opengl, which now uses RPI hardware decoding by treating it as a hardware overlay (without applying GL filtering). Also to be changed in 0.23.0: the --fs flag will be reset to "no" by default (like on the other platforms). The following deprecated global options are supported by this video output: -). The following global options are supported by this video output: -) AUDIO FILTERS Audio filters allow you to modify the audio stream and its properties. The syntax is: --af=... Setup a chain of audio filters. See --vf for the syntax. NOTE:. See --vf group of options for info on how --af-defaults, --af-add, --af-pre, --af-del, --af-clr, and possibly others work. Available filters are:. It supports only the following sample formats: u8, s16, s32, float.. The default is 640. Some receivers might not be able to handle this. Valid values: 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384, 448, 512, 576, 640. The special value auto selects a default bitrate based on the input channel number:. equalizer=g1:g2:g3:...:g10 10 octave band graphic equalizer, implemented using 10 IIR band-pass filters. This means that it works regardless of what type of audio is being played back. The center frequencies for the 10 bands are:
┌─────┬───────────┐
│ No. │ frequency │
├─────┼───────────┤
│ 0   │ 31.25 Hz  │
│ 1   │ 62.50 Hz  │
│ 2   │ 125.00 Hz │
│ 3   │ 250.00 Hz │
│ 4   │ 500.00 Hz │
│ 5   │ 1.00 kHz  │
│ 6   │ 2.00 kHz  │
│ 7   │ 4.00 kHz  │
│ 8   │ 8.00 kHz  │
│ 9   │ 16.00 kHz │
└─────┴───────────┘
a resampling filter before it reaches this filter. <g1>:<g2>:<g3>:...:<g10> floating point numbers representing the gain in dB for each frequency band (-12-12) Example mpv --af=equalizer=11:11:10:5:0:-12:0:5:12:12 media.avi Would amplify the sound in the upper and lower frequency region while canceling it almost completely around 1 kHz. channels=nch[:routes] Can be used for adding, removing, routing and copying audio channels. If only <nch> is given, the default routing is used. It works as follows: If the number of output channels is greater than the number of input channels, empty channels are inserted (except when mixing from mono to stereo; then the mono channel is duplicated). If the number of output channels is less than the number of input channels, the exceeding channels are truncated. <nch> number of output channels (1-8) <routes> List of , separated routes, in the form from1-to1,from2-to2,.... Each pair defines where to route each channel. There can be at most 8 routes. Without this argument, the default routing is used. Since , is also used to separate filters, you must quote this argument with [...] or similar. Examples mpv --af=channels=4:[0-1,1-0,2-2,3-3] media.avi Would change the number of channels to 4 and set up 4 routes that swap channel 0 and channel 1 and leave channel 2 and 3 intact. Observe that if media containing two channels were played back, channels 2 and 3 would contain silence but 0 and 1 would still be swapped.
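The route semantics in the example above can be modeled in a few lines of Python. This is an illustrative sketch, not mpv's implementation; unrouted output channels are simply left silent:

```python
# Illustrative model of the channels filter's route list:
# "0-1,1-0,2-2,3-3" routes input channel 0 to output 1, 1 to 0, etc.
def apply_routes(frame, nch, routes):
    out = [0.0] * nch                       # unrouted outputs stay silent
    for pair in routes.split(","):
        src, dst = (int(x) for x in pair.split("-"))
        if src < len(frame):                # routes from missing inputs do nothing
            out[dst] = frame[src]
    return out

# One sample frame of 2-channel input, routed as in the example above:
print(apply_routes([0.25, -0.5], 4, "0-1,1-0,2-2,3-3"))
# -> [-0.5, 0.25, 0.0, 0.0]
```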
mpv --af=channels=6:[0-0,0-1,0-2,0-3] media.avi Would change the number of channels to 6 and set up 4 routes that copy channel 0 to channels 0 to 3. Channel 4 and 5 will contain silence. NOTE: You should probably not use this filter. If you want to change the output channel layout, try the format filter, which can make mpv automatically up- and downmix standard channel layouts.. All parameters are optional. The first 3 parameters restrict what the filter accepts as input. They will therefore cause conversion filters to be inserted before this one. The out- parameters tell the filters or audio outputs following this filter how to interpret the data without actually doing a conversion. Setting these will probably just break things unless you really know you want this for some reason, such as testing or dealing with broken media. . <out-format> <out-srate> <out-channels> NOTE: this filter used to be named force. The old format filter used to do conversion itself, unlike this one which lets the filter system handle the conversion. volume[=<volumedb>[:...]] Implements software volume control. Use this filter with caution since it can reduce the signal to noise ratio of the sound. In most cases it is best to use the Master volume control of your sound card or the volume knob on your amplifier. WARNING: This filter is deprecated. Use the top-level options like --volume and --replaygain... instead. NOTE: This filter is not reentrant and can therefore only be enabled once for every audio stream. <volumedb> Sets the desired gain in dB for all channels in the stream from -200 dB to +60 dB, where -200 dB mutes the sound completely and +60 dB equals a gain of 1000 (default: 0). replaygain-track Adjust volume gain according to the track-gain replaygain value stored in the file metadata. replaygain-album Like replaygain-track, but using the album-gain value instead. replaygain-preamp Pre-amplification gain in dB to apply to the selected replaygain gain (default: 0). 
replaygain-clip=yes|no Prevent clipping caused by replaygain by automatically lowering the gain (default). Use replaygain-clip=no to disable this. replaygain-fallback Gain in dB to apply if the file has no replay gain tags. This option is always applied if the replaygain logic is somehow inactive. If this is applied, no other replaygain options are applied. softclip Turns soft clipping on. Soft-clipping can make the sound more smooth if very high volume levels are used. Enable this option if the dynamic range of the loudspeakers is very low. WARNING: This feature creates distortion and should be considered a last resort. s16 Force S16 sample format if set. Lower quality, but might be faster in some situations. detach Remove the filter if the volume is not changed at audio filter config time. Useful with replaygain: if the current file has no replaygain tags, then the filter will be removed if this option is enabled. (If --softvol=yes is used and the player volume controls are used during playback, a different volume filter will be inserted.) Example mpv --af=volume=10.1 media.avi Would amplify the sound by 10.1 dB and hard-clip if the sound level is too high. pan=n:[<matrix>]). <matrix> A list of values [L00,L01,L02,...,L10,L11,L12,...,Ln0,Ln1,Ln2,...], where each element Lij means how much of input channel i is mixed into output channel j (range 0-1). So in principle you first have n numbers saying what to do with the first input channel, then n numbers that act on the second input channel etc. If you do not specify any numbers for some input channels, 0 is assumed. Note that the values are separated by ,, which is already used by the option parser to separate filters. This is why you must quote the value list with [...] or similar. Examples mpv --af=pan=1:[0.5,0.5] media.avi Would downmix from stereo to mono. 
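The downmix above follows pan's mixing rule: output channel j is the sum over input channels i of input[i] · L[i][j], with the matrix given as the flat list [L00, L01, ..., L10, L11, ...]. A hypothetical sketch (not mpv's code):

```python
# Illustrative model of the pan filter's matrix mixing.
def pan(frame, n_out, matrix):
    # matrix is the flat list [L00, L01, ..., L10, L11, ...] from the option;
    # L[i][j] sits at index i * n_out + j
    return [
        sum(frame[i] * matrix[i * n_out + j] for i in range(len(frame)))
        for j in range(n_out)
    ]

# pan=1:[0.5,0.5] -- downmix one stereo sample frame to mono (~0.6 here):
mono = pan([0.8, 0.4], 1, [0.5, 0.5])

# pan=3:[1,0,0.5,0,1,0.5] -- pass 2 channels through, mix both into a 3rd:
print(pan([1.0, 0.0], 3, [1, 0, 0.5, 0, 1, 0.5]))
# -> [1.0, 0.0, 0.5]
```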
mpv --af=pan=3:[1,0,0.5,0,1,0.5] media.avi Would give 3 channel output leaving channels 0 and 1 intact, and mix channels 0 and 1 into output channel 2 (which could be sent to a subwoofer for example). NOTE: If you just want to force remixing to a certain output channel layout, it is easier to use the format filter. For example, mpv '--af=format=channels=5.1' '--audio-channels=5.1' would always force remixing audio to 5.1 and output it like this. This filter supports the following af-command commands: set-matrix Set the <matrix> argument dynamically. This can be used to change the mixing matrix at runtime, without reinitializing the entire filter chain. Too high of a value will cause noticeable artifacts. Add this to your input.conf to step by musical semi-tones: [ multiply speed 0.9438743126816935 ] multiply speed 1.059463094352953 WARNING:. This filter has a number of additional sub-options. You can list them with mpv --af=rubberband=help. This will also show the default values for each option. The options are not documented here, because they are merely passed to librubberband. Look at the librubberband documentation to learn what each option does: (The mapping of the mpv rubberband filter sub-option names and values to those of librubberband follows a simple pattern: "Option" + Name + Value.) This filter supports the following af-command commands:. WARNING: Don't forget to quote libavfilter graphs as described in the lavfi video filter section. o=<string> AVOptions. VIDEO FILTERS Video filters allow you to modify the video stream and its properties. The syntax is: --vf=<filter1[=parameter1:parameter2:...],filter2,...> Setup a chain of video filters. Before the filter name, a label can be specified with @name:, where name is an arbitrary user-given name, which identifies the filter. This is only needed if you want to toggle the filter at runtime. A ! before the filter name means the filter is disabled by default. It will be skipped on filter creation. This is also useful for runtime filter toggling. See the vf command (and toggle sub-command) for further explanations and examples.
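The musical semi-tone step values in the input.conf snippet above are simply 2^(-1/12) and 2^(1/12): multiplying the playback speed by them shifts the (uncorrected) pitch down or up by one semi-tone, and twelve steps make an octave. A quick check:

```python
# The musical semi-tone ratio is the twelfth root of 2.
semitone_up = 2 ** (1 / 12)      # ≈ 1.0594630943592953
semitone_down = 2 ** (-1 / 12)   # ≈ 0.9438743126816935 (the value used above)

assert abs(semitone_down - 0.9438743126816935) < 1e-12
assert abs(semitone_up * semitone_down - 1.0) < 1e-12   # up and down cancel out
assert abs(semitone_up ** 12 - 2.0) < 1e-9              # 12 semi-tones = 1 octave
```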
The general filter entry syntax is: ["@"<label-name>":"] ["!"] <filter-name> [ "=" <filter-parameter-list> ] or for the special "toggle" syntax (see vf command): "@"<label-name> and the filter-parameter-list: <filter-parameter> | <filter-parameter> "," <filter-parameter-list> and filter-parameter: ( <param-name> "=" <param-value> ) | <param-value> param-value can further be quoted in [ / ] in case the value contains characters like , or =. This is used in particular with the lavfi filter, which uses a very similar syntax as mpv (MPlayer historically) to specify filters and their parameters. You can also set defaults for each filter. The defaults are applied before the normal filter parameters. --vf-defaults=<filter1[=parameter1:parameter2:...],filter2,...> Set defaults for each filter. NOTE:. mpv-only filters are: crop[=w:h:x:y] Crops the given part of the image and discards the rest. Useful to remove black bands from widescreen videos. <w>,<h> Cropped width and height, defaults to original width and height. <x>,<y> Position of the cropped picture, defaults to center. expand[=w:h:x:y:aspect:round] Expands (not scales) video resolution to the given value and places the unscaled original at coordinates x, y. ) <aspect> Expands to fit an aspect instead of a resolution (default: 0). Example expand=800::::4/3 Expands to 800x600, unless the source is higher resolution, in which case it expands to fill a 4/3 aspect. <round> Rounds up to make both width and height divisible by <r> (default: 1). flip Flips the image upside down. mirror Mirrors the image on the Y axis. rotate[=0|90|180|270] Rotates the image by a multiple of 90 degrees clockwise. scale[=w:h:param:param2:chr-drop:noup:arnd] Scales the image with the software scaler (slow) and performs a YUV<->RGB color space conversion (see also --sws). All parameters are optional. <w>:<h> scaled width/height (default: original width/height).
<param>[:<param2>] (see) <chr-drop> chroma skipping 0 Use all available input lines for chroma (default). 1 Use only every 2. input line for chroma. 2 Use only every 4. input line for chroma. 3 Use only every 8. input line for chroma. . no Disable accurate rounding (default). yes Enable accurate rounding. dsize[=w:h:aspect-method:r:aspect] Changes the intended display aspect at an arbitrary point in the filter chain. Aspect can be given as a fraction (4/3) or floating point number (1.33). Note that this filter does not do any scaling itself; it just affects what later scalers (software or hardware) will do when auto-scaling to the correct aspect. <w>,<h> New aspect ratio given by a display width and height. Unlike older mpv versions or MPlayer, this does not set the display size.). <aspect> Force an aspect ratio. format=fmt=<value>:colormatrix=<value>:... Restricts the color space for the next filter without doing any conversion. Use together with the scale filter for a real conversion. NOTE:. These options are not always supported. Different video outputs provide varying degrees of support. The opengl and vdpau video output drivers usually offer full support. The xv output can set the color space if the system video driver supports it, but not input and output levels. The scale video filter can configure color space and input levels, but only if the output format is RGB (if the video output driver supports RGB output, you can force this with -vf scale,format=rgba). If this option is set to auto (which is the default), the video's color space flag will be used. If that flag is unset, the color space will be selected automatically. This is done using a simple heuristic that attempts to distinguish SD and HD video. If the video is larger than 1279x576 pixels, BT.709 (HD) will be used; otherwise BT.601 (SD) is selected. Available color spaces are:. The same limitations as with <colormatrix> apply. Available color ranges are:. 
This option only affects video output drivers that perform color management, for example opengl with the target-prim or icc-profile suboptions set. If this option is set to auto (which is the default), the video's primaries flag will be used. If that flag is unset, the color space will be selected automatically, using the following heuristics: If the <colormatrix> is set or determined as BT.2020 or BT.709, the corresponding primaries are used. Otherwise, if the video height is exactly 576 (PAL), BT.601-625 is used. If it's exactly 480 or 486 (NTSC), BT.601-525 is used. If the video resolution is anything else, BT.709 is used. Available primaries are:. This option only affects video output drivers that perform color management. If this option is set to auto (which is the default), the gamma will be set to BT.1886 for YCbCr content, sRGB for RGB content and Linear for XYZ content. Available gamma functions are:. The default of 0.0 will default to the source's nominal peak luminance. . noformat[=fmt] Restricts the color space for the next filter without doing any conversion. Unlike the format filter, this will allow any color space except the one you specify. NOTE: For a list of available formats, see noformat=fmt=help. <fmt> Format name, e.g. rgb15, bgr24, 420p, etc. (default: 420p). lavfi=graph[:sws-flags[:o=opts]] Filter video using FFmpeg's libavfilter. <graph> The libavfilter graph string. The filter must have a single video input pad and a single video output pad. See for syntax and available filters. WARNING: If you want to use the full filter syntax with this option, you have to quote the filter graph in order to prevent mpv's syntax and the filter graph syntax from clashing.. See;a=blob;f=libswscale/swscale.h. <o> Set AVFilterGraph options. These should be documented by FFmpeg. Example '--vf=lavfi=yadif:o="threads=2,thread_type=slice"' forces a specific threading configuration. 
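The auto-selection heuristics for <colormatrix> and the primaries described above can be sketched as follows. This is an illustrative model, not mpv's actual code (in particular, "larger than 1279x576" is interpreted here as exceeding either dimension):

```python
# Illustrative sketch of the auto-selection heuristics described above.
def guess_colormatrix(width, height):
    # "larger than 1279x576 pixels" -> BT.709 (HD), otherwise BT.601 (SD)
    return "bt.709" if width > 1279 or height > 576 else "bt.601"

def guess_primaries(width, height, colormatrix=None):
    if colormatrix in ("bt.709", "bt.2020"):
        return colormatrix          # the corresponding primaries are used
    if height == 576:
        return "bt.601-625"         # PAL
    if height in (480, 486):
        return "bt.601-525"         # NTSC
    return "bt.709"                 # anything else

print(guess_colormatrix(1920, 1080))  # -> bt.709
print(guess_primaries(720, 576))      # -> bt.601-625
```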
pullup[=jl:jr:jt:jb:sb:mp] Pulldown reversal (inverse telecine) filter, capable of handling mixed hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive content. The pullup filter makes use of future context in making its decisions. It is stateless in the sense that it does not lock onto a pattern to follow, but it instead looks forward to the following fields in order to identify matches and rebuild progressive frames. process video with slight blurring between the fields, but may also cause interlaced frames in the output. mp (metric plane) This option may be set to u or. yadif=[mode:interlaced-only] Yet another deinterlacing filter <mode> frame Output 1 frame for each frame. field Output 1 frame for each field (default). frame-nospatial Like frame but skips spatial interlacing check. field-nospatial Like field but skips spatial interlacing check. <interlaced-only> no Deinterlace all frames. yes Only deinterlace frames marked as interlaced (default). This filter is automatically inserted when using the d key (or any other key that toggles the deinterlace property or when using the --deinterlace switch), assuming the video output does not have native deinterlacing support. If you just want to set the default mode, put this filter and its options into --vf-defaults instead, and enable deinterlacing with d or --deinterlace. Also, note that the d key is stupid enough to insert a deinterlacer twice when inserting yadif with --vf, so using the above methods is recommended..) abl or above_below_left_first above-below (left eye above, right eye below) abr) gradfun[=strength[:radius|:size=<size>]] Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth. Interpolates the gradients that should go where the bands are, and dithers them. <strength> Maximum amount by which the filter will change any one pixel. Also the threshold for detecting nearly flat regions (default: 1.5). 
<radius> Neighborhood to fit the gradient to. Larger radius makes for smoother gradients, but also prevents the filter from modifying pixels near detailed regions (default: disabled). <size> size of the filter in percent of the image diagonal size. This is used to calculate the final radius size (default: 1).() WARNING:. By default, this uses the special value auto, which sets the option to the number of detected logical CPU cores. The following variables are defined by mpv:. Useful for some filters which insist on having a FPS.). Note that there's currently a mechanism that allows the vdpau VO to change the deint-mode of auto-inserted vdpaupp filters. To avoid confusion, it's recommended not to use the --vo=vdpau suboptions related to filtering.. buffer=<num> Buffer <num> frames in the filter chain. This filter is probably pretty useless, except for debugging. (Note that this won't help to smooth out latencies with decoding, because the filter will never output a frame if the buffer isn't full, except on EOF.) ENCODING You can encode files from one format/codec to another using this facility. Options are managed in lists. There are a few commands to manage the options list. -. Options are managed in lists. There are a few commands to manage the options list. -. Options are managed in lists. There are a few commands to manage the options list. - INTERFACE The mpv core can be controlled with commands and properties. A number of ways to interact with the player use them: key bindings (input.conf), OSD (showing information with properties), JSON IPC, the client API (libmpv), and the classic slave mode. input.conf The input.conf file consists of a list of key bindings, for example: s screenshot # take a screenshot with the s key LEFT seek 15 # map the left-arrow key to seeking forward by 15 seconds Each line maps a key to an input command. Keys are specified with their literal value (upper case if combined with Shift), or a name for special keys. For example, a maps to the a key without shift, and A maps to a with shift.
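The line format just described — key, command, optional # comment — can be read with a few lines of Python. This is a simplified illustration, not mpv's parser (real parsing also handles quoted strings with C-style escapes, {sections} and command prefixes):

```python
# Simplified sketch of reading input.conf-style binding lines.
def parse_binding(line):
    # naive comment stripping: assumes no '#' inside quoted arguments
    line = line.split("#", 1)[0].strip()
    if not line:
        return None                        # blank line or pure comment
    key, _, command = line.partition(" ")
    return key, command.strip()

print(parse_binding("s screenshot  # take a screenshot with the s key"))
# -> ('s', 'screenshot')
print(parse_binding("LEFT seek 15"))
# -> ('LEFT', 'seek 15')
```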
The file is located in the mpv configuration directory (normally at ~/.config/mpv/input.conf depending on platform). The default bindings are defined here: A list of special keys can be obtained with mpv --input-keylist In general, keys can be combined with Shift, Ctrl and Alt: ctrl+q quit mpv can be started in input test mode, which displays key bindings and the commands they're bound to on the OSD, instead of executing the commands: mpv --input-test --force-window --idle (Only closing the window will make mpv exit, pressing normal keys will merely display the binding, even if mapped to quit.) General Input Command Syntax [Shift+][Ctrl+][Alt+][Meta+]<key> [{<section>}] [<prefixes>] <command> (<argument>)* [; <command>] Note that by default, the right Alt key can be used to create special characters, and thus does not register as a modifier. The option --no-input-right-alt-gr changes this behavior. Newlines always start a new binding. # starts a comment (outside of quoted string arguments). To bind commands to the # key, SHARP can be used. <key> is either the literal character the key produces (ASCII or Unicode character), or a symbolic name (as printed by --input-keylist). <section> (braced with { and }) is the input section for this command. Arguments are separated by whitespace. This applies even to string arguments. For this reason, string arguments should be quoted with ". Inside quotes, C-style escaping can be used. You can bind multiple commands to one key. For example: a show-text "command 1" ; show-text "command 2" It's also possible to bind a command to a sequence of keys: a-b-c show-text "command run after a, b, c have been pressed" (This is not shown in the general command syntax.) If a or a-b or b are already bound, this will run the first command that matches, and the multi-key command will never be called. Intermediate keys can be remapped to ignore in order to avoid this issue. 
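The shadowing rule just described — a bound a (or a-b, or b) prevents a-b-c from ever firing — amounts to matching greedily on the first complete binding. A toy model (illustrative, not mpv code):

```python
# Toy model of multi-key sequence dispatch: the first prefix that is
# itself a complete binding wins, so longer sequences get shadowed.
def dispatch(bindings, keys):
    seq = []
    for k in keys:
        seq.append(k)
        name = "-".join(seq)
        if name in bindings:
            return bindings[name]          # first match runs immediately
    return None

cmds = {"a": "command 1", "a-b-c": "long command"}
print(dispatch(cmds, ["a", "b", "c"]))
# -> 'command 1'  ("a-b-c" is never reached)
```

Remapping a to ignore removes the short binding from the table, which is exactly why that workaround helps.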
The maximum number of (non-modifier) keys for combinations is currently 4.. The second argument consists of flags controlling the seek). Multiple flags can be combined, e.g.: absolute+keyframes. By default, keyframes is used for relative seeks, and exact is used for absolute seeks. Before mpv 0.9, the keyframes and exact flags had to be passed as 3rd parameter (essentially using a space instead of +). The 3rd parameter is still parsed, but is considered deprecated.. The first argument is optional, and can change the behavior: mark Mark the current time position. The next normal revert-seek command will seek back to this point, no matter how many seeks happened since last time. Using it without any arguments gives you the default behavior.. This does not work with audio-only playback.. Multiple flags are available (some can be combined with +): . Older mpv versions required passing single and each-frame as second argument (and did not have flags). This syntax is still understood, but deprecated and might be removed in the future.. screenshot-to-file <filename> [subtitles|video|window] Take a screenshot and save it to a given file. The format of the file will be guessed by the extension (and --screenshot-format is ignored - the behavior when the extension is missing or unknown is arbitrary). The second argument is like the first argument to screenshot. If the file already exists, it's overwritten. Like all input command parameters, the filename is subject to property expansion as described in Property Expansion. The async flag has an effect on this command (see screenshot command).. Second argument: .) The third argument is a list of options and values which should be set while the file is playing. It is of the form opt1=value1,opt2=value2,... Not all options can be changed this way. Some options require a restart of the player.). The program is run in a detached way. mpv doesn't wait until the command is completed, but continues playback right after spawning it. 
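The detached behavior of run — spawn the process and continue immediately, without waiting — is analogous to starting a subprocess and not waiting on it. A sketch (illustrative, not mpv's implementation):

```python
# Detached spawn: control returns as soon as the child is started.
import subprocess
import sys

proc = subprocess.Popen([sys.executable, "-c", "print('spawned')"])
# Execution continues here right away; mpv likewise keeps playing.
# (mpv never waits for the child; this demo waits only to reap it.)
proc.wait()
```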
To get the old behavior, use /bin/sh and -c as the first two arguments. Example run "/bin/sh" "-c" "echo ${title} > /tmp/playing" This is not a particularly good example, because it doesn't handle escaping, and a specially prepared file might allow an attacker to execute arbitrary shell commands. It is recommended to write a small shell script, and call that with run..) The title argument sets the track title in the UI. The lang argument sets the track language, and can also influence stream selection with flags set to auto. sub-remove [<id>] Remove the given subtitle track. If the id argument is missing, remove the current track. (Works on external subtitle files only.) sub-reload [<id>] Reload the given subtitle tracks. If the id argument is missing, reload the current track. (Works on external subtitle files only.) This works by unloading and re-adding the subtitle track.. For embedded subtitles (like with Matroska), this works only with subtitle events that have already been displayed, or are within a short prefetch range.>). Second argument: <button> The button number of clicked mouse button. This should be one of 0-19. If <button> is omitted, only the position will be updated. Third argument: . The mode argument is one of the following: . The first argument decides what happens:.) A special variant is combining this with labels, and using @name without filter name and parameters as filter entry. This toggles the enable/disable flag.. The argument is always needed. E.g. in case of clr use vf clr "". You can assign labels to filter by prefixing them with @name: (where name is a user-chosen arbitrary identifier). Labels can be used to refer to filters by name in all of the filter chain modification commands. For add, using an already used label will replace the existing filter. The vf command shows the list of requested filters on the OSD after changing the filter chain. This is roughly equivalent to show-text ${vf}. 
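The label semantics described above can be modeled as a small list of labeled entries: add with an already-used label replaces the filter, and the bare @label toggle form flips an enabled flag. An illustrative toy model (not mpv's filter chain code):

```python
# Toy model of a labeled filter list, as manipulated by the vf command.
chain = []  # list of [label, filter_string, enabled]

def vf_add(label, filt):
    for entry in chain:
        if entry[0] == label:
            entry[1] = filt                # reusing a label replaces the filter
            return
    chain.append([label, filt, True])

def vf_toggle(label):
    for entry in chain:
        if entry[0] == label:
            entry[2] = not entry[2]        # bare "@label" flips the enable flag

vf_add("deband", "gradfun=strength=2")
vf_toggle("deband")
print(chain)
# -> [['deband', 'gradfun=strength=2', False]]
```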
Note that auto-inserted filters for format conversion are not shown on the list, only what was requested by the user. Normally, the commands will check whether the video chain is recreated successfully, and will undo the operation on failure. If the command is run before video is configured (can happen if the command is run immediately after opening a file and before a video frame is decoded), this check can't be run. Then it can happen that creating the video chain fails. Example for input.conf · a vf set flip turn video upside-down on the a key · b vf set "" remove all video filters on b · c vf toggle lavfi=gradfun toggle debanding on c Example how to toggle disabled filters at runtime ·. ·. The internal counter is associated using the property name and the value list. If multiple commands (bound to different keys) use the same name and value list, they will share the internal counter. The special argument !reverse can be used to cycle the value list in reverse. Compared with a command that just lists the value in reverse, this command will actually share the internal counter with the forward-cycling key binding (as long as the rest of the arguments are the same). Note that there is a static limit of (as of this writing) 10 arguments (this limit could be raised on demand). enable-section <section> [flags] Enable all key bindings in the named input section. The enabled input sections form a stack. Bindings in sections on the top of the stack are preferred to lower sections. This command puts the section on top of the stack. If the section was already on the stack, it is implicitly removed beforehand. (A section cannot be on the stack more than once.) The flags parameter can be a combination (separated by +) of the following flags: . If the contents parameter is an empty string, the section is removed. The section with the name default is the normal input section. In general, input sections have to be enabled with the enable-section command, or they are ignored. 
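The stack semantics of enable-section can be sketched directly from the description above: enabling a section removes any existing occurrence and pushes it on top, and lookups search from the top down. An illustrative model (not mpv code):

```python
# Illustrative model of the enabled-input-section stack.
stack = []

def enable_section(name):
    if name in stack:
        stack.remove(name)    # a section cannot be on the stack more than once
    stack.append(name)        # newly enabled section goes on top

def lookup(bindings_by_section, key):
    for section in reversed(stack):        # top of the stack is preferred
        cmd = bindings_by_section.get(section, {}).get(key)
        if cmd is not None:
            return cmd
    return None

enable_section("default")
enable_section("menu")
enable_section("default")                  # re-enabling moves it to the top
print(stack)
# -> ['menu', 'default']
print(lookup({"default": {"q": "quit"}, "menu": {"q": "menu-close"}}, "q"))
# -> 'quit'
```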
The last parameter has the following meaning: <default> (also used if parameter omitted) Use a key binding defined by this section only if the user hasn't already bound this key to a command. <force> Always bind a key. (The input section that was made active most recently wins if there are ambiguities.) This command can be used to dispatch arbitrary keys to a script or a client API user. If the input section defines script-binding commands, it is also possible to get separate events on key up/down, and relatively detailed information about the key state. The special key name unmapped can be used to match any unmapped key. overlay-add <id> <x> <y> <file> <offset> <fmt> <w> <h> <stride> Add an OSD overlay sourced from raw data. This might be useful for scripts and applications controlling mpv, and which want to display things on top of the video window. Overlays are usually displayed in screen resolution, but with some VOs, the resolution is reduced to that of the video's. You can read the osd-width and osd-height properties. At least with --vo-xv and anamorphic video (such as DVD), osd-par should be read as well, and the overlay should be aspect-compensated. id is an integer between 0 and 63 identifying the overlay element. The ID can be used to add multiple overlay parts, update a part by using this command with an already existing ID, or to remove a part with overlay-remove. Using a previously unused ID will add a new overlay, while reusing an ID will update it. x and y specify the position where the OSD should be displayed. file specifies the file the raw image data is read from. It can be either a numeric UNIX file descriptor prefixed with @ (e.g. @4), or a filename. The file will be mapped into memory with mmap(), copied, and unmapped before the command returns (changed in mpv 0.18.1). It is also possible to pass a raw memory address for use as bitmap memory by passing a memory address as integer prefixed with an & character. 
Passing the wrong thing here will crash the player. This mode might be useful for use with libmpv. The offset parameter is simply added to the memory address (since mpv 0.8.0, ignored before).

offset is the byte offset of the first pixel in the source file. (The current implementation always mmap's the whole file from position 0 to the end of the image, so large offsets should be avoided. Before mpv 0.8.0, the offset was actually passed directly to mmap, but it was changed to make using it easier.)

fmt is a string identifying the image format. Currently, only bgra is defined. This format has 4 bytes per pixel, with 8 bits per component. The least significant 8 bits are blue, and the most significant 8 bits are alpha (in little endian, the components are B-G-R-A, with B as first byte). This uses premultiplied alpha: every color component is already multiplied with the alpha component. This means the numeric value of each component is equal to or smaller than the alpha component. (Violating this rule will lead to different results with different VOs: numeric overflows resulting from blending broken alpha values is considered something that shouldn't happen, and consequently implementations don't ensure that you get predictable behavior in this case.)

w, h, and stride specify the size of the overlay. w is the visible width of the overlay, while stride gives the width in bytes in memory. In the simple case, and with the bgra format, stride==4*w. In general, the total amount of memory accessed is stride * h. (Technically, the minimum size would be stride * (h - 1) + w * 4, but for simplicity, the player will access all stride * h bytes.)

NOTE:. The argument is the name of the binding. It can optionally be prefixed with the name of the script, using / as separator, e.g. script-binding scriptname/bindingname.

For completeness, here is how this command works internally. The details could change any time.
On any matching key event, script-message-to or script-message is called (depending on whether the script name is included), with the following arguments:

1. The string key-binding.
2. The name of the binding (as established above).
3. The key state as string (see below).
4. The key name (since mpv 0.15.0).

The key state consists of 2 letters:.

Note that the <label> is a mpv filter label, not a libavfilter filter name.

af-command <label> <cmd> <args>
Same as vf-command, but for audio filters.

apply-profile <name>
Apply the contents of a named profile. This is like using profile=name in a config file, except you can map it to a key binding to change it at runtime. There is no such thing as "unapplying" a profile - applying a profile merely sets all option values listed within the profile.

load-script <path>
Load a script, similar to the --script option.

Undocumented commands: tv-last-channel (TV/DVB only), ao-reload (experimental/internal).

Hooks

There are two special commands involved. Also, the client must listen for client messages (MPV_EVENT_CLIENT_MESSAGE in the C API). When a specific event happens, all registered handlers are run serially. This uses a protocol every client has to follow explicitly. When a hook handler is run, a client message (MPV_EVENT_CLIENT_MESSAGE) is sent to the client which registered the hook. This message has the following arguments: .

Upon receiving this message, the client can handle the event. While doing this, the player core will still react to requests, but playback will typically be stopped. When the client is done, it must continue the core's hook execution by running the hook-ack command.

hook-ack <string>
Run the next hook in the global chain of hooks. The argument is the 3rd argument of the client message that starts hook execution for the current client.

The following hooks are currently defined: . Note that this does not yet apply default track selection.
Which operations exactly can be done and not be done, and what information is available and what is not available yet, is all subject to change.

on_unload
Run before closing a file, and before actually uninitializing everything. It's not possible to resume playback in this state.

Input Command Prefixes

All of the osd prefixes are still overridden by the global --osd-level settings.

Input Sections

Input sections group a set of bindings, and enable or disable them at once. In input.conf, each key binding is assigned to an input section, rather than actually having explicit text sections.

See also: enable-section and disable-section commands.

Predefined bindings:

default
Bindings without input section are implicitly assigned to this section. It is enabled by default during normal playback.

encode
Section which is active in encoding mode. It is enabled exclusively, so that bindings in the default sections are ignored.

Properties

Properties are used to set mpv options during runtime, or to query arbitrary information. They can be manipulated with the set/add/cycle commands, and retrieved with show-text, or anything else that uses property expansion. (See Property Expansion.)

The property name is annotated with RW to indicate whether the property is generally writable.

If an option is referenced, the property will normally take/return exactly the same values as the option. In these cases, properties are merely a way to change an option at runtime.

Property list

NOTE:.) OSD formatting will display it in the form of +1.23456%, with the number being (raw - 1) * 100 for the given raw property value..)

This has a sub-property:

filename/no-ext
Like the filename property, but if the text contains a ., strip all text after the last .. Usually this removes the file extension.

file-size
Length in bytes of the source file/stream. (This is the same as ${stream-end}. For ordered chapters and such, the size of the currently played segment is returned.)
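Properties such as these can be read directly from a Lua script. A minimal sketch; the event chosen and the output formatting are our own choices:

```lua
-- Sketch: once a file is loaded, print its title and size using two of
-- the properties described in this section.
mp.register_event("file-loaded", function()
    local title = mp.get_property("media-title", "(unknown)")
    -- raw numeric value; the formatted value would be rounded for OSD use
    local size = mp.get_property_number("file-size")
    if size then
        print(string.format("%s: %.1f MiB", title, size / (1024 * 1024)))
    else
        print(title)
    end
end)
```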
estimated-frame-count Total number of frames in current file. NOTE: This is only an estimate. (It's computed from two unreliable quantities: fps and stream length.) estimated-frame-number Number of current frame in current stream. NOTE:. Otherwise, if the media type is DVD, return the volume ID of DVD. Otherwise, return the filename property..) (Renamed from demuxer.) stream-path Filename (full path) of the stream layer filename. (This is probably useless. It looks like this can be different from path only when using e.g. ordered chapters.). This replaces the length property, which was deprecated after the mpv 0.9 release. (The semantics are the same.). drop-frame-count is a deprecated alias. frame-drop-count Frames dropped by VO (when using --framedrop=vo). vo-drop-frame-count is a deprecated alias.. This has a number of sub-properties. Replace N with the 0-based edition index. disc-titles/count Number of titles. disc-titles/id Title ID as integer. Currently, this is the same as the title index. disc-titles/length Length in seconds. Can be unavailable in a number of cases (currently it works for libdvdnav only). . This has a number of sub-properties. Replace N with the 0-based edition index.. "title" MPV_FORMAT_STRING "default" MPV_FORMAT_FLAG angle (RW) Current DVD angle. metadata Metadata key/value pairs. If the property is accessed with Lua's mp.get_property_native, this returns a table with metadata keys mapping to metadata values. If it is accessed with the client API, this returns a MPV_FORMAT_NODE_MAP, with tag keys mapping to tag values. For OSD, it returns a formatted list. Trying to retrieve this property as a raw string doesn't work. This has a number of sub-properties:. The layout of this property might be subject to change. Suggestions are welcome how exactly this property should work. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:). 
Per-chapter metadata is very rare. Usually, only the chapter name (title) is set. For accessing other information, like chapter start, see the chapter-list property.

vf-metadata/<filter-label>
Metadata added by video filters. Accessed by the filter label, which, if not explicitly specified using the @filter-label: syntax, will be <filter-name>NN. Works similar to metadata property. It allows the same access methods (using sub-properties). An example of this kind of metadata are the cropping parameters added by --vf=lavfi=cropdetect.

af-metadata/<filter-label>
Equivalent to vf-metadata/<filter-label>, but for audio filters.

idle-active
Return yes if no file is loaded, but the player is staying around because of the --idle option. (Renamed from idle.)

core-idle
Return yes if the playback core is paused, otherwise no. This can be different from pause in special situations, such as when the player pauses itself due to low network cache. This also returns yes if playback is restarting or if nothing is playing at all. In other words, it's only no if there's actually video playing. (Behavior since mpv 0.7.0.), or when switching ordered chapter segments. This is because the same underlying code is used for seeking and resyncing.)

mixer-active
Return yes if the audio mixer is active, no otherwise. This option is relatively useless. Before mpv 0.18.1, it could be used to infer behavior of the volume property.

When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:. Writing to it may change the currently used hardware decoder, if possible. (Internally, the player may reinitialize the decoder, and will perform a seek to refresh the video properly.) You can watch the other hwdec properties to see whether this was successful.

Unlike in mpv 0.9.x and before, this does not return the currently active hardware decoder. Since mpv 0.18.0, hwdec-current is available for this purpose.
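Watching the related hwdec properties, as suggested above, can be sketched from Lua; the "auto" value and the log text here are our own choices:

```lua
-- Sketch: request hardware decoding, then watch which decoder (if any)
-- is actually in use after the player reinitializes the decoder.
mp.set_property("hwdec", "auto")
mp.observe_property("hwdec-current", "string", function(_, value)
    print("active hardware decoder: " .. (value or "none"))
end)
```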
--opengl-hwdec-interop can load it eagerly.) If there are multiple drivers loaded, they will be separated by ,. If no VO is active or no interop driver is known, this property is unavailable. This does not necessarily use the same values as hwdec. There can be multiple interop drivers for the same hardware decoder, depending on platform and VO. This is somewhat similar to the --opengl-hwdec-interop option, but it returns the actually loaded backend, not the value of this option..) When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:. These have the same values as video-out-params/dw and video-out-params/dh.. Has the same sub-properties as video-params.. Sub-properties: video-frame-info/picture-type video-frame-info/interlaced video-frame-info/tff video-frame-info/repeat container-fps Container FPS. This can easily contain bogus values. For videos that use modern container formats or video codecs, this will often be incorrect. (Renamed from fps.). If video is active, this reports the effective aspect value, instead of the value of the --video-aspect option.. This property is experimental and might be removed in the future.. This has a number of sub-properties. Replace N with the 0-based playlist entry index.. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:. This has a number of sub-properties. Replace N with the 0-based track index.. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:. This has a number of sub-properties. Replace N with the 0-based chapter index. chapter-list/count Number of chapters. chapter-list/N/title Chapter title as stored in the file. Not always available. 
chapter-list/N/time Chapter start time in seconds as float. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents: MPV_FORMAT_NODE_ARRAY MPV_FORMAT_NODE_MAP (for each chapter) "title" MPV_FORMAT_STRING "time" MPV_FORMAT_DOUBLE af, vf (RW) See --vf/--af and the vf/af command. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents: MPV_FORMAT_NODE_ARRAY MPV_FORMAT_NODE_MAP (for each filter entry) "name" MPV_FORMAT_STRING "label" MPV_FORMAT_STRING [optional] "enabled" MPV_FORMAT_FLAG [optional] "params" MPV_FORMAT_NODE_MAP [optional] "key" MPV_FORMAT_STRING "value" MPV_FORMAT_STRING It's also possible to write the property using this format.. If this property returns true, seekable will also return true. · --osd-status-msg='This is ${osd-ass-cc/0}{\\b1}bold text' · show-text "This is ${osd-ass-cc/0}{\b1}bold text" Any ASS override tags as understood by libass can be used. Note that you need to escape the \ character, because the string is processed for C escape sequences before passing it to the OSD code. A list of tags can be found here:. This is further subdivided into two frame types, vo-passes/fresh for fresh frames (which have to be uploaded, scaled, etc.) and vo-passes/redraw for redrawn frames (which only have to be re-painted). The number of passes for any given subtype can change from frame to frame, and should not be relied upon. Each frame type has a number of further sub-properties. Replace TYPE with the frame type, N with the 0-based pass index, and M with the 0-based sample index.. 
When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents: Note that directly accessing this structure via subkeys is not supported, the only access is through aforementioned MPV_FORMAT_NODE.. The unit is bits per second. OSD formatting turns these values in kilobits (or megabits, if appropriate), which can be prevented by using the raw property value, e.g. with ${=video-bitrate}. Note that the accuracy of these properties is influenced by a few factors. If the underlying demuxer rewrites the packets on demuxing (done for some file formats), the bitrate might be slightly off. If timestamps are bad or jittery (like in Matroska), even constant bitrate streams might show fluctuating bitrate. How exactly these values are calculated might change in the future. In earlier versions of mpv, these properties returned a static (but bad) guess using a completely different method.}. These properties shouldn't be used anymore. audio-device-list Return the list of discovered audio devices. This is mostly for use with the client API, and reflects what --audio-device=help with the command line player returns. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents: MPV_FORMAT_NODE_ARRAY MPV_FORMAT_NODE_MAP (for each device entry) "name" MPV_FORMAT_STRING "description" MPV_FORMAT_STRING The name is what is to be passed to the --audio-device option (and often a rather cryptic audio API-specific ID), while description is human readable free form text. The description is set to the device name (minus mpv-specific <driver>/ prefix) if no description is available or the description would have been an empty string. The special entry with the name set to auto selects the default audio output driver and the default device. 
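From a Lua script, the list can be read natively using the name/description layout described above; the output formatting is our own:

```lua
-- Sketch: print all discovered audio devices, one "name: description"
-- line per device.
local devices = mp.get_property_native("audio-device-list") or {}
for _, dev in ipairs(devices) do
    print(dev.name .. ": " .. dev.description)
end
```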
The property can be watched with the property observation mechanism in the client API and in Lua scripts. (Technically, change notification is enabled the first time this property is read.) audio-device (RW) Set the audio device. This directly reads/writes the --audio-device option, but on write accesses, the audio output will be scheduled for reloading. Writing this property while no audio output is active will not automatically enable audio. (This is also true in the case when audio was disabled due to reinitialization failure after a previous write access to audio-device.) This property also doesn't tell you which audio device is actually in use. How these details are handled may change in the future. current-vo Current video output driver (name as used with --vo). current-ao Current audio output driver (name as used with --ao). audio-out-detected-device Return the audio device selected by the AO driver (only implemented for some drivers: currently only coreaudio).. When querying the property with the client API using MPV_FORMAT_NODE, or with Lua mp.get_property_native, this will return a mpv_node with the following contents:. There shouldn't be any reason to access options/<name> instead of <name>, except in situations in which the properties have different behavior or conflicting semantics. file-local-options/<name> Similar to options/<name>, but when setting an option through this property, the option is reset to its old value once the current file has stopped playing. Trying to write an option while no file is playing (or is being loaded) results in an error. (Note that if an option is marked as file-local, even options/ will access the local value, and the old value, which will be restored on end of playback, cannot be read or written until end of playback.) option-info/<name> Additional per-option information. This has a number of sub-properties. Replace <name> with the name of a top-level option. 
No guarantee of stability is given to any of these sub-properties - they may change radically in the future.

Option changes at runtime are affected by this as well. Strictly speaking, option access via API (e.g. mpv_set_option_string()) has the same problem, and it's only a difference between CLI/API.

Within input.conf, property expansion can be inhibited by putting the raw prefix in front of commands.

The following expansions are supported: $.

In places where property expansion is allowed, C-style escapes are often accepted as well. Example:

· \n becomes a newline character
· \\ expands to \

Raw and Formatted Properties

· ${time-pos} expands to 00:14:23 (if playback position is at 14 minutes 23 seconds)
· ${=time-pos} expands to 863.4 (same time, plus 400 milliseconds - milliseconds are normally not shown in the formatted case)

Sometimes, the difference in amount of information carried by raw and formatted property values can be rather big. In some cases, raw values have more information, like higher precision than seconds with time-pos. Sometimes it is the other way around, e.g. aid shows track title and language in the formatted case, but only the track number if it is raw.

ON SCREEN CONTROLLER
Using the OSC By ┌──────────────┬────────────────────────────────┐ │left-click │ play previous file in playlist │ ├──────────────┼────────────────────────────────┤ │right-click │ show playlist │ ├──────────────┼────────────────────────────────┤ │shift+L-click │ show playlist │ └──────────────┴────────────────────────────────┘ pl next ┌──────────────┬────────────────────────────┐ │left-click │ play next file in playlist │ ├──────────────┼────────────────────────────┤ │right-click │ show playlist │ ├──────────────┼────────────────────────────┤ │shift+L-click │ show playlist │ └──────────────┴────────────────────────────┘ title Displays current media-title, filename, or custom title ┌────────────┬──────────────────────────────────┐ │left-click │ show playlist position and │ │ │ length and full title │ ├────────────┼──────────────────────────────────┤ │right-click │ show filename │ └────────────┴──────────────────────────────────┘ cache Shows current cache fill status play ┌───────────┬───────────────────┐ │left-click │ toggle play/pause │ └───────────┴───────────────────┘ skip back ┌──────────────┬──────────────────────────────────┐ │left-click │ go to beginning of chapter / │ │ │ previous chapter │ ├──────────────┼──────────────────────────────────┤ │right-click │ show chapters │ ├──────────────┼──────────────────────────────────┤ │shift+L-click │ show chapters │ └──────────────┴──────────────────────────────────┘ skip frwd ┌──────────────┬────────────────────┐ │left-click │ go to next chapter │ ├──────────────┼────────────────────┤ │right-click │ show chapters │ ├──────────────┼────────────────────┤ │shift+L-click │ show chapters │ └──────────────┴────────────────────┘ time elapsed Shows current playback position timestamp ┌───────────┬──────────────────────────────────┐ │left-click │ toggle displaying timecodes with │ │ │ milliseconds │ └───────────┴──────────────────────────────────┘ seekbar Indicates current playback position and position of chapters 
┌───────────┬──────────────────┐ │left-click │ seek to position │ └───────────┴──────────────────┘ time left Shows remaining playback time timestamp ┌───────────┬──────────────────────────────────┐ │left-click │ toggle between total and │ │ │ remaining time │ └───────────┴──────────────────────────────────┘ audio and sub Displays selected track and amount of available tracks ┌──────────────┬──────────────────────────────────┐ │left-click │ cycle audio/sub tracks forward │ ├──────────────┼──────────────────────────────────┤ │right-click │ cycle audio/sub tracks backwards │ ├──────────────┼──────────────────────────────────┤ │shift+L-click │ show available audio/sub tracks │ └──────────────┴──────────────────────────────────┘ vol ┌────────────┬────────────────┐ │left-click │ toggle mute │ ├────────────┼────────────────┤ │mouse wheel │ volume up/down │ └────────────┴────────────────┘ fs ┌───────────┬───────────────────┐ │left-click │ toggle fullscreen │ └───────────┴───────────────────┘ Key Bindings These key bindings are active by default if nothing else is already bound to these keys. In case of collision, the function needs to be bound to a different key. See the Script Commands section. ┌────┬──────────────────────────────────┐ │del │ Cycles visibility between never │ │ │ / auto (mouse-move) / always │ └────┴──────────────────────────────────┘ Configuration The OSC offers limited configuration through a config file lua-settings/osc.conf placed in mpv's user dir and through the --script-opts command-line option. Options provided through the command-line will override those from the config file. Config Syntax The config file must exactly follow the following syntax: # this is a comment optionA=value1 optionB=value2 # can only be used at the beginning of a line and there may be no spaces around the = or anywhere else. Command-line Syntax To avoid collisions with other scripts, all options need to be prefixed with osc-. 
Example: --script-opts=osc-optionA=value1,osc-optionB=value2 Configurable Options layout Default: bottombar The layout for the OSC. Currently available are: box, slimbox, bottombar and topbar. Default pre-0.21.0 was 'box'. seekbarstyle Default: bar Sets the style of the seekbar, slider (diamond marker), knob (circle marker with guide), or bar (fill). Default pre-0.21.0 was 'slider'. deadzonesize Default: 0.5 Size of the deadzone. The deadzone is an area that makes the mouse act like leaving the window. Movement there won't make the OSC show up and it will hide immediately if the mouse enters it. The deadzone starts at the window border opposite to the OSC and the size controls how much of the window it will span. Values between 0.0 and 1.0, where 0 means the OSC will always popup with mouse movement in the window, and 1 means the OSC will only show up when the mouse hovers it. Default pre-0.21.0 was 0. minmousemove Default: 0 Minimum amount of pixels the mouse has to move between ticks to make the OSC show up. Default pre-0.21.0 was 3. showwindowed Default: yes Enable the OSC when windowed showfullscreen Default: yes Enable the OSC when fullscreen scalewindowed Default: 1.0 Scale factor of the OSC when windowed. 
scalefullscreen Default: 1.0 Scale factor of the OSC when fullscreen scaleforcedwindow Default: 2.0 Scale factor of the OSC when rendered on a forced (dummy) window vidscale Default: yes Scale the OSC with the video no tries to keep the OSC size constant as much as the window size allows valign Default: 0.8 Vertical alignment, -1 (top) to 1 (bottom) halign Default: 0.0 Horizontal alignment, -1 (left) to 1 (right) barmargin Default: 0 Margin from bottom (bottombar) or top (topbar), in pixels boxalpha Default: 80 Alpha of the background box, 0 (opaque) to 255 (fully transparent) hidetimeout Default: 500 Duration in ms until the OSC hides if no mouse movement, must not be negative fadeduration Default: 200 Duration of fade out in ms, 0 = no fade title Default: ${media-title} String that supports property expansion that will be displayed as OSC title. ASS tags are escaped, and newlines and trailing slashes are stripped. tooltipborder Default: 1 Size of the tooltip outline when using bottombar or topbar layouts timetotal Default: no Show total time instead of time remaining timems Default: no Display timecodes with milliseconds visibility Default: auto (auto hide/show on mouse move) Also supports never and always boxmaxchars Default: 80 Max chars for the osc title at the box layout. mpv does not measure the text width on screen and so it needs to limit it by number of chars. The default is conservative to allow wide fonts to be used without overflow. However, with many common fonts a bigger number can be used. YMMV. Script Commands Example You could put this into input.conf to hide the OSC with the a key and to set auto mode (the default) with b: a script-message osc-visibility never b script-message osc-visibility auto osc-playlist, osc-chapterlist, osc-tracklist Shows a limited view of the respective type of list using the OSC. First argument is duration in seconds. LUA SCRIPTING. 
mpv provides the built-in module mp, which contains functions to send commands to the mpv core and to retrieve information about playback state, user settings, file information, and so on.

These scripts can be used to control mpv in a similar way to slave mode. Technically, the Lua code uses the client API internally.

Example

A script which leaves fullscreen mode when the player is paused:

function on_pause_change(name, value)
    if value == true then
        mp.set_property("fullscreen", "no")
    end
end
mp.observe_property("pause", "bool", on_pause_change)

Details on the script initialization and lifecycle.

When the player quits, all scripts will be asked to terminate. This happens via a shutdown event, which by default will make the event loop return. If your script got into an endless loop, mpv will probably behave fine during playback, but it won't terminate when quitting, because it's waiting on your script.

Internally, the C code will call the Lua function mp_event_loop after loading a Lua script. This function is normally defined by the default prelude loaded before your script (see player/lua/defaults.lua in the mpv sources). The event loop will wait for events and dispatch events registered with mp.register_event. It will also handle timers added with mp.add_timeout and similar (by waiting with a timeout).

Since mpv 0.6.0, the player will wait until the script is fully loaded before continuing normal operation. The player considers a script as fully loaded as soon as it starts waiting for mpv events (or it exits). In practice this means the player will more or less hang until the script returns from the main chunk (and mp_event_loop is called), or the script calls mp_event_loop or mp.dispatch_events directly. This is done to make it possible for a script to fully setup event handlers etc. before playback actually starts. In older mpv versions, this happened asynchronously.

mp functions

The mp module is preloaded, although it can be loaded manually with require 'mp'.
It provides the core client API. mp.command(string) Run the given command. This is similar to the commands used in input.conf. See List of Input Commands. By default, this will show something on the OSD (depending on the command), as if it was used in input.conf. See Input Command Prefixes how to influence OSD usage per command. Returns true on success, or nil, error on error. mp.commandv(arg1, arg2, ...) Similar to mp.command, but pass each command argument as separate parameter. This has the advantage that you don't have to care about quoting and escaping in some cases. Example: mp.command("loadfile " .. filename .. " append") mp.commandv("loadfile", filename, "append") These two commands are equivalent, except that the first version breaks if the filename contains spaces or certain special characters. Note that properties are not expanded. You can use either mp.command, the expand-properties prefix, or the mp.get_property family of functions. Unlike mp.command, this will not use OSD by default either (except for some OSD-specific commands). mp.command_native(table [,def]) Similar to mp.commandv, but pass the argument list as table. This has the advantage that in at least some cases, arguments can be passed as native types. Returns a result table on success (usually empty), or def, error on error. def is the second parameter provided to the function, and is nil if it's missing. mp.get_property(name [,def]) Return the value of the given property as string. These are the same properties as used in input.conf. See Properties for a list of properties. The returned string is formatted similar to ${=name} (see Property Expansion). Returns the string on success, or def, error on error. def is the second parameter provided to the function, and is nil if it's missing. mp.get_property_osd(name [,def]) Similar to mp.get_property, but return the property value formatted for OSD. This is the same string as printed with ${name} when used in input.conf. 
Returns the string on success, or def, error on error. def is the second parameter provided to the function, and is an empty string if it's missing. Unlike get_property(), assigning the return value to a variable will always result in a string.

mp.get_property_bool(name [,def])
Similar to mp.get_property, but return the property value as Boolean.

Returns a Boolean on success, or def, error on error.

mp.get_property_number(name [,def])
Similar to mp.get_property, but return the property value as number.

Note that while Lua does not distinguish between integers and floats, mpv internals do. This function simply requests a double float from mpv, and mpv will usually convert integer property values to float.

Returns a number on success, or def, error on error.

mp.get_property_native(name [,def])
Similar to mp.get_property, but return the property value using the best Lua type for the property. Most of the time, this will return a string, Boolean, or number. Some properties (for example chapter-list) are returned as tables.

Returns a value on success, or def, error on error. Note that nil might be a possible, valid value too in some corner cases.

mp.set_property(name, value)
Set the given property to the given string value. See mp.get_property and Properties for more information about properties.

Returns true on success, or nil, error on error.

mp.set_property_bool(name, value)
Similar to mp.set_property, but set the given property to the given Boolean value.

mp.set_property_number(name, value)
Similar to mp.set_property, but set the given property to the given numeric value.

Note that while Lua does not distinguish between integers and floats, mpv internals do. This function will test whether the number can be represented as integer, and if so, it will pass an integer value to mpv, otherwise a double float.

mp.set_property_native(name, value)
Similar to mp.set_property, but set the given property using its native type.
Since there are several data types which cannot be represented natively in Lua, this might not always work as expected. For example, while the Lua wrapper can do some guesswork to decide whether a Lua table is an array or a map, this would fail with empty tables. Also, there are not many properties for which it makes sense to use this, instead of set_property, set_property_bool, set_property_number. For these reasons, this function should probably be avoided for now, except for properties that use tables natively.).

After calling this function, key presses will cause the function fn to be called (unless the user remapped the key with another binding).

The name argument should be a short symbolic string. It allows the user to remap the key binding via input.conf using the script-message command, and the name of the key binding (see below for an example). The name should be unique across other bindings in the same script - if not, the previous binding with the same name will be overwritten. You can omit the name, in which case a random name is generated internally.

The last argument is used for optional flags. This is a table, which can have the following entries:.

Internally, key bindings are dispatched via the script-message-to or script-binding input commands and mp.register_script_message.

Trying to map multiple commands to a key will essentially prefer a random binding, while the other bindings are not called. It is guaranteed that user defined bindings in the central input.conf are preferred over bindings added with this function (but see mp.add_forced_key_binding).

Example:

function something_handler()
    print("the key was pressed")
end
mp.add_key_binding("x", "something", something_handler)

This will print the message the key was pressed when x was pressed. The user can remap these key bindings.
Then the user has to put the following into their input.conf to remap the command to the y key:

    y script-binding something

This will print the message when the key y is pressed. (x will still work, unless the user remaps it.) You can also explicitly send a message to a named script only. Assume the above script was using the filename fooscript.lua.

Some events have associated data. This is put into a Lua table and passed as argument to fn. The Lua table by default contains a name field, which is a string containing the event name. If the event has an error associated, the error field is set to a string describing the error; on success it's not set. If multiple functions are registered for the same event, they are run in registration order, with the first registered function running before all the other ones. Returns true if such an event exists, false otherwise. See Events and List of events for details.

If possible, change events are coalesced. If a property is changed a bunch of times in a row, only the last change triggers the change function. (The exact behavior depends on timing and other things.)

This is a one-shot timer: it will be removed when it's fired. Returns a timer object. See mp.add_periodic_timer for details.

If you write this, you can call t:kill(); t:resume() to reset the current timeout to the new one. (t:stop() won't use the new timeout.)

oneshot (RW)
Whether the timer is periodic (false) or fires just once (true). This value is used when the timer expires (but before the timer callback function fn is run).

Note that these are methods, and you have to call them using : instead of .

Example: The script /path/to/fooscript.lua becomes fooscript.

mp.osd_message(text [,duration])
Show an OSD message on the screen. duration is in seconds, and is optional (uses --osd-duration by default).

Advanced mp functions
These also live in the mp module.
If the allow_wait parameter is set to true, the function will block until the next event is received or the next timer expires. Otherwise (and this is the default behavior), it returns as soon as the event loop is emptied. It's strongly recommended to use mp.get_next_timeout() and mp.get_wakeup_pipe() if you're interested in properly working notification of new events and working timers.

Used by mp.add_key_binding, so be careful about name collisions.

mp.unregister_script_message(name)
Undo a previous registration with mp.register_script_message. Does nothing if the name wasn't registered.

mp.msg functions
This module allows outputting messages to the terminal, and can be loaded with require 'mp.msg'.

msg.log(level, ...)
The level parameter is the message priority. It's a string and one of fatal, error, warn, info, v, debug. The user's settings will determine which of these messages will be visible. Normally, all messages are visible, except v and debug. The parameters after that are all converted to strings. Spaces are inserted to separate multiple parameters. You don't need to add newlines.

msg.fatal(...), msg.error(...), msg.warn(...), msg.info(...), msg.verbose(...), msg.debug(...)
All of these are shortcuts and equivalent to the corresponding msg.log(level, ...) call.

mp.options functions
The identifier is used to identify the config-file and the command-line options. These need to be unique to avoid collisions with other scripts. Defaults to mp.get_script_name(). Example implementation:

    require 'mp.options'
    local options = {
        optionA = "defaultvalueA",
        optionB = -0.5,
        optionC = true,
    }
    read_options(options, "myscript")
    print(options.optionA)

The config file will be stored in lua-settings/identifier.conf in mpv's user folder. Comment lines can be started with # and stray spaces are not removed. Boolean values will be represented with yes/no.
Example config:

    # comment
    optionA=Hello World
    optionB=9999
    optionC=no

Command-line options are read from the --script-opts parameter. To avoid collisions, all keys have to be prefixed with identifier-. Example command-line:

    --script-opts=myscript-optionA=TEST,myscript-optionB=0,myscript-optionC=yes

mp.utils functions
This built-in module provides generic helper functions for Lua, and has, strictly speaking, nothing to do with mpv or video/audio playback. They are provided for convenience. Most compensate for Lua's scarce standard library. Be warned that any of these functions might disappear any time. They are not strictly part of the guaranteed API.

If the filter argument is given, it must be one of the following strings:

On error, nil, error is returned.

The parameter t is a table. The function reads the following entries:

The function returns a table as result with the following entries:

On Windows, killed is only returned when the process has been killed by mpv as a result of cancellable being set to true.

killed_by_us
Set to true if the process has been killed by mpv as a result of cancellable being set to true.

utils.subprocess_detached(t)
Runs an external process and detaches it from mpv's control. The parameter t is a table. The function reads the following entries:

args
Array of strings of the same semantics as the args used in the subprocess function.

The function returns nil.

utils.parse_json(str [, trail])
Parses the given string argument as JSON, and returns it as a Lua table. On error, returns nil, error. (Currently, error is just a string reading error, because there is no fine-grained error reporting of any kind.) The returned value uses similar conventions as mp.get_property_native() to distinguish empty objects and arrays. If the trail parameter is true (or any value equal to true), then trailing non-whitespace text is tolerated by the function, and the trailing text is returned as 3rd return value.
(The 3rd return value is always there, but with trail set, no error is raised.) utils.format_json(v) Format the given Lua table (or value) as a JSON string and return it. On error, returns nil, error. (Errors usually only happen on value types incompatible with JSON.) The argument value uses similar conventions as mp.set_property_native() to distinguish empty objects and arrays. utils.to_string(v) Turn the given value into a string. Formats tables and their contents. This doesn't do anything special; it is only needed because Lua is terrible. Events Events are notifications from player core to scripts. You can register an event handler with mp.register_event. Note that all scripts (and other parts of the player) receive events equally, and there's no such thing as blocking other scripts from receiving events.. The event has the reason field, which takes one of these values:. Keep in mind that these messages are meant to be hints for humans. You should not parse them, and prefix/level/text of messages might change any time.. The following events also happen, but are deprecated: tracks-changed, track-switched, pause, unpause, metadata-update, chapter-change. Use mp.observe_property() instead. Extras This. See Hooks for currently existing hooks and what they do - only the hook list is interesting; handling hook execution is done by the Lua script function automatically. JAVASCRIPT JavaScript support in mpv is near identical to its Lua support. Use this section as reference on differences and availability of APIs, but otherwise you should refer to the Lua documentation for API details and general scripting in mpv. 
Example JavaScript code which leaves fullscreen mode when the player is paused:

    function on_pause_change(name, value) {
        if (value == true)
            mp.set_property("fullscreen", "no");
    }
    mp.observe_property("pause", "bool", on_pause_change);

Similarities with Lua
mpv tries to load a script file as JavaScript if it has a .js extension, but otherwise, the documented Lua options, script directories, loading, etc. apply to JavaScript files too. Script initialization and lifecycle are the same as with Lua, and most of the Lua functions in the modules mp, mp.utils and mp.msg are available to JavaScript with identical APIs - including running commands, getting/setting properties, registering events/key-bindings/property-changes/hooks, etc.

Differences from Lua
No need to load modules. mp, mp.utils and mp.msg are preloaded, and you can use e.g. var cwd = mp.utils.getcwd(); without prior setup. mp.options is currently not implemented, but mp.get_opt(...) is.

Errors are slightly different. Where the Lua APIs return nil for error, the JavaScript ones return undefined. Where Lua returns something, error, JavaScript returns only something - and makes error available via mp.last_error(). Note that only some of the functions have this additional error value - typically the same ones which have it in Lua.

Standard APIs are preferred. For instance setTimeout and JSON.stringify are available, but mp.add_timeout and mp.utils.format_json are not.

No standard library. This means that interaction with anything outside of mpv is limited to the available APIs, typically via mp.utils. However, some file functions were added, and CommonJS require is available too - where the loaded modules have the same privileges as normal scripts.
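To make the "standard APIs are preferred" point concrete, here is a sketch of a JSON round-trip that a Lua script would route through mp.utils; in JavaScript it needs nothing mpv-specific, so the same code runs in any ES5 engine (the state object is just made-up example data):

```javascript
// Where Lua uses mp.utils.format_json / mp.utils.parse_json,
// mpv's JavaScript environment uses the standard JSON object.
var state = { file: "test.mkv", pos: 12.5, paused: false };

// Serialize to a JSON string (instead of mp.utils.format_json):
var text = JSON.stringify(state);
// text === '{"file":"test.mkv","pos":12.5,"paused":false}'

// Parse it back (instead of mp.utils.parse_json):
var back = JSON.parse(text);
```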
Language features - ECMAScript 5 The mp.add_timeout(seconds, fn) JS: id = setTimeout(fn, ms) mp.add_periodic_timer(seconds, fn) JS: id = setInterval(fn, ms) mp.register_idle(fn) JS: id = setTimeout(fn) mp.unregister_idle(fn) JS: clearTimeout(id) utils.parse_json(str [, trail]) JS: JSON.parse(str) utils.format_json(v) JS: JSON.stringify(v) utils.to_string(v) see dump below. mp.suspend() JS: none (deprecated). mp.resume() JS: none (deprecated). mp.resume_all() JS: none (deprecated). mp.get_next_timeout() see event loop below. mp.dispatch_events([allow_wait]) see event loop below. mp.options module is not implemented currently for JS. Scripting APIs - identical to Lua ) mp.get_property(name [,def]) (LE) mp.get_property_osd(name [,def]) (LE) mp.get_property_bool(name [,def]) (LE) mp.get_property_number(name [,def]) (LE) mp.get_property_native(name [,def]) (LE) mp.set_property(name, value) (LE) mp.set_property_bool(name, value) (LE) mp.set_property_number(name, value) (LE) mp.set_property_native(name, value) (LE) mp.get_time() mp.add_key_binding(key, name|fn [,fn [,flags]]) mp.add_forced_key_binding(...) mp.remove_key_binding(name) mp.register_event(name, fn) mp.unregister_event(fn) mp.observe_property(name, type, fn) mp.unobserve_property(fn) mp.get_opt(key) mp.get_script_name() mp.osd_message(text [,duration]) mp.get_wakeup_pipe() mp.enable_messages(level) mp.register_script_message(name, fn) mp.unregister_script_message(name) mp.msg.log(level, ...) mp.msg.fatal(...) mp.msg.error(...) mp.msg.warn(...) mp.msg.info(...) mp.msg.verbose(...) mp.msg.debug(...) mp.utils.getcwd() (LE) mp.utils.readdir(path [, filter]) (LE) mp.utils.split_path(path) mp.utils.join_path(p1, p2) mp.utils.subprocess(t) mp.utils.subprocess_detached(t) mp.add_hook(type, priority, fn)"). Note: read_file and write_file throw on errors, allow text content only. mp.get_time_ms() Same as mp.get_time() but in ms instead of seconds. mp.get_script_file() Returns the file name of the current script. 
exit() (global)
Make the script exit at the end of the current event loop iteration. Note: please remove added key bindings before exiting.

    id = setTimeout(fn [,duration [,arg1 [,arg2...]]])
    id = setTimeout(code_string [,duration])
    clearTimeout(id)
    id = setInterval(fn [,duration [,arg1 [,arg2...]]])
    id = setInterval(code_string [,duration])
    clearInterval(id)

setTimeout and setInterval return id, and later call fn (or execute code_string) after duration ms. Intervals also repeat every duration. duration has a minimum and default value of 0, code_string is a plain string which is evaluated as JS code, and [,arg1 [,arg2..]] are used as arguments (if provided) when calling back fn. The clear...(id) functions cancel timer id, and are irreversible.

Note: timers always call back asynchronously, e.g. setTimeout(fn) will never call fn before returning. fn will be called either at the end of this event loop iteration or at a later event loop iteration. This is true also for intervals - which also never call back twice at the same event loop iteration. Additionally, timers are processed after the event queue is empty, so it's valid to use setTimeout(fn) instead of Lua's mp.register_idle(fn).

Modules and require are supported, standard compliant, and generally similar to node.js. However, most node.js modules won't run due to missing modules such as fs, process, etc., but some node.js modules with minimal dependencies do work. In general, this is for mpv modules and not a node.js replacement.

A .js file extension is always added to id, e.g. require("./foo") will load the file ./foo.js and return its exports object. An id is relative (to the script which require'd it) if it starts with ./ or ../. Otherwise, it's considered a "top-level id" (CommonJS term). A top-level id is evaluated as an absolute filesystem path if possible (e.g. /x/y or ~/x). Otherwise, it's searched at scripts/modules.js/ in mpv config dirs - in normal config search order. E.g.
require("x") is searched as file x.js at those dirs, and id foo/x is searched as file foo/x.js.

No global variable, but a module's this at its top lexical scope is the global object - also in strict mode. If you have a module which needs global as the global object, you could do this.global = this; before require. Functions and variables declared in a module don't pollute the global object.

The event loop
The event loop polls/dispatches mpv events as long as the queue is not empty, then processes the timers, then waits for the next event, and repeats this forever. You could put this code in your script to replace the built-in event loop, and also print every event which mpv sends to your script:

    function mp_event_loop() {
        var wait = 0;
        do {
            var e = mp.wait_event(wait);
            dump(e);  // there could be a lot of prints...
            if (e.event != "none") {
                mp.dispatch_event(e);
                wait = 0;
            } else {
                wait = mp.process_timers() / 1000;
            }
        } while (mp.keep_running);
    }

mp_event_loop is a name which mpv tries to call after the script loads. The internal implementation is similar to this (without dump though..).

e = mp.wait_event(wait) returns when the next mpv event arrives, or after wait seconds if positive and no mpv events arrived. A wait value of 0 returns immediately (with e.event == "none" if the queue is empty).

mp.dispatch_event(e) calls back the handlers registered for e.event, if there are such (event handlers, property observers, script messages, etc).

mp.process_timers() calls back the already-added, non-canceled due timers, and returns the duration in ms till the next due timer (possibly 0), or -1 if there are no pending timers. Must not be called recursively.

Note: exit() is also registered for the shutdown event, and its implementation is a simple mp.keep_running = false.
Assuming mpv was started with: mpv file.mkv --input-ipc-server=/tmp/mpvsocket Then you can control it using socat: > echo '{ "command": ["get_property", "playback-time"] }' | socat - /tmp/mpvsocket {"data":190.482000,"error":"success"} In this case, socat copies data between stdin/stdout and the mpv socket connection. See the --idle option how to make mpv start without exiting immediately or playing a file. It's also possible to send input.conf style text-only commands: > echo 'show-text ${playback-time}' | socat - /tmp/mpvsocket But you won't get a reply over the socket. (This particular command shows the playback time on the player's OSD.) Command Prompt example. Assuming mpv was started with: mpv file.mkv --input-ipc-server=\\.\pipe\mpvsocket You can send commands from a command prompt: echo show-text ${playback-time} >\\.\pipe\mpvsocket To be able to simultaneously read and write from the IPC pipe, like on Linux, it's necessary to write an external program that uses overlapped file I/O (or some wrapper like .NET's NamedPipeClientStream.) Protocol Clients can execute commands on the player by sending JSON messages of the following form: { "command": ["command_name", "param1", "param2", ...] } where command_name is the name of the command to be executed, followed by a list of parameters. Parameters must be formatted as native JSON values (integers, strings, booleans, ...). Every message must be terminated with \n. Additionally, \n must not appear anywhere inside the message. In practice this means that messages should be minified before being sent to mpv. mpv will then send back a reply indicating whether the command was run correctly, and an additional field holding the command-specific return data (it can also be null). { "error": "success", "data": null } mpv will also send events to clients with JSON messages of the following form: { "event": "event_name" } where event_name is the name of the event. Additional event-specific fields can also be present. 
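The message format above is simple enough to build by hand from any language; the following JavaScript sketch shows one way to frame a command and check a reply (makeCommand is an illustrative helper name, not part of mpv, and no socket code is shown):

```javascript
// Build one wire-ready IPC line from a command name and its parameters.
function makeCommand(name) {
    var params = Array.prototype.slice.call(arguments, 1);
    // JSON.stringify of plain values contains no literal newline, so the
    // message is already "minified"; only the \n terminator is appended.
    return JSON.stringify({ command: [name].concat(params) }) + "\n";
}

var line = makeCommand("set_property", "pause", true);
// line === '{"command":["set_property","pause",true]}\n'

// Each reply from mpv is one JSON object per line; a command succeeded
// if its "error" field is the string "success".
var reply = JSON.parse('{"error":"success","data":null}');
var ok = (reply.error === "success");
```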
See List of events for a list of all supported events.

Because events can occur at any time, it may be difficult at times to determine which response goes with which command. Commands may optionally include a request_id which, if provided in the command request, will be copied verbatim into the response. mpv does not interpret the request_id in any way; it is solely for the use of the requester. For example, this request:

    { "command": ["get_property", "time-pos"], "request_id": 100 }

Would generate this response:

    { "error": "success", "data": 1.468135, "request_id": 100 }

All commands, replies, and events are separated from each other with a line break character (\n). If the first character (after skipping whitespace) is not {, the command will be interpreted as a non-JSON text command, as they are used in input.conf (or mpv_command_string() in the client API). Additionally, lines starting with # and empty lines are ignored. Currently, embedded 0 bytes terminate the current line, but you should not rely on this.

Commands

get_property
Return the value of the given property. Example:

    { "command": ["get_property", "volume"] }
    { "data": 50.0, "error": "success" }

get_property_string
Like get_property, but the resulting data will always be a string. Example:

    { "command": ["get_property_string", "volume"] }
    { "data": "50.000000", "error": "success" }

set_property
Set the given property to the given value. See Properties for more information about properties. Example:

    { "command": ["set_property", "pause", true] }
    { "error": "success" }

set_property_string
Like set_property, but the argument value must be passed as string. Example:

    { "command": ["set_property_string", "pause", "yes"] }
    { "error": "success" }

observe_property
Watch a property for changes. If the given property is changed, then an event of type property-change will be generated.
Example:

    { "command": ["observe_property_string", 1, "volume"] }
    { "error": "success" }
    { "event": "property-change", "id": 1, "data": "52.000000", "name": "volume" }

unobserve_property
Undo observe_property or observe_property_string. This requires the numeric id passed to the observed command as argument. Example:

    { "command": ["unobserve_property", 1] }
    { "error": "success" }

request_log_messages
Enable output of mpv log messages. They will be received as events. The parameter to this command is the log-level (see mpv_request_log_messages C API function). Log message output is meant for humans only (mostly for debugging). Attempting to retrieve information by parsing these messages will just lead to breakages with future mpv releases. Instead, make a feature request, and ask for a proper event that returns the information you need.

enable_event, disable_event
Enables or disables the named event. Mirrors the mpv_request_event C API function. If the string all is used instead of an event name, all events are enabled or disabled. By default, most events are enabled, and there is not much use for this command.

get_version
Returns the client API version the C API of the remote mpv instance provides. See also: DOCS/client-api-changes.rst.

UTF-8.

CHANGELOG
There is no real changelog, but you can look at the following things:

· The release changelog, which should contain most user-visible changes, including new features and bug fixes:
· The git log, which is the "real" changelog
· The files client-api-changes.rst and interface-changes.rst in the DOCS sub directory on the git repository, which document API and user interface changes (the latter usually documents breaking changes only, rather than additions).

C PLUGINS

location
C plugins are put into the mpv scripts directory in its config directory (see the FILES section for details). They must have a .so file extension. They can also be explicitly loaded with the --script option.
API
A C plugin must export the following function:

    int mpv_open_cplugin(mpv_handle *handle)

The plugin function will be called at load time. This function does not return as long as your plugin is loaded (it runs in its own thread). The handle will be deallocated as soon as the plugin function returns.

The return value is interpreted as error status. A value of 0 is interpreted as success, while -1 signals an error. In the latter case, the player prints an uninformative error message that loading failed. Return values other than 0 and -1 are reserved, and trigger undefined behavior.

Within the plugin function, you can call libmpv API functions. The handle is created by mpv_create_client() (or actually an internal equivalent), and belongs to you. You can call mpv_wait_event() to wait for things happening, and so on. Note that the player might block until your plugin calls mpv_wait_event() for the first time. This gives you a chance to install initial hooks etc. before playback begins. The details are quite similar to Lua scripts.

Linkage to libmpv
The current implementation requires that your plugins are not linked against libmpv. What your plugins use are not symbols from a libmpv binary, but symbols from the mpv host binary.

Examples
See:

ENVIRONMENT VARIABLES
There are a number of environment variables that can be used to control the behavior of mpv.

HOME, XDG_CONFIG_HOME
Used to determine the mpv config directory. If XDG_CONFIG_HOME is not set, $HOME/.config/mpv is used. $HOME/.mpv is always added to the list of config search paths with a lower priority.

Notable environment variables:

http_proxy
URL to proxy for http:// and https:// URLs.

no_proxy
List of domain patterns for which no proxy should be used. List entries are separated by ,. Patterns can include *.

the roaming application data directory (%APPDATA%) under Windows.

HOME
FIXME: Document this.

EXIT CODES
Normally, mpv returns 0 as exit code after finishing playback successfully.
Note that quitting the player manually will always lead to exit code 0, overriding the exit code that would be returned normally. Also, the quit input command can take an exit code: in this case, that exit code is returned. FILES For Windows-specifics, see FILES ON WINDOWS section. . Only available when libass is built with fontconfig. ~/. Each file is a small config file which is loaded if the corresponding media file is loaded. It contains the playback position and some (not necessarily all) settings that were changed during playback. The filenames are hashed from the full paths of the media files. It's in general not possible to extract the media filename from this hash. However, you can set the --write-filename-in-watch-later-config option, and the player will add the media filename to the contents of the resume config file. ~/.config/mpv/lua-settings/osc.conf This is loaded by the OSC script. See the ON SCREEN CONTROLLER docs for details. Other files in this directory are specific to the corresponding scripts as well, and the mpv core doesn't touch them. Note that the environment variables $XDG_CONFIG_HOME and $MPV_HOME can override the standard directory ~/.config/mpv/. Also, the old config location at ~/.mpv/ is still read, and if the XDG variant does not exist, will still be preferred. FILES ON WINDOWS You can find the exact path by running echo %APPDATA%\mpv\mpv.conf in cmd.exe. Other config files (such as input.conf) are in the same directory. See the FILES section above. The environment variable $MPV_HOME completely overrides these, like on UNIX. If a directory named portable_config next to the mpv.exe exists, all config will be loaded from this directory only. Watch later config files are written to this directory as well. (This exists on Windows only and is redundant with $MPV_HOME. However, since Windows is very scripting unfriendly, a wrapper script just setting $MPV_HOME, like you could do it on other systems, won't work. 
portable_config is provided for convenience to get around this restriction.) Config files located in the same directory as mpv.exe are loaded with lower priority. Some config files are loaded only once, which means that e.g. of 2 input.conf files located in two config directories, only the one from the directory with higher priority will be loaded. A third config directory with the lowest priority is the directory named mpv in the same directory as mpv.exe. This used to be the directory with the highest priority, but is now discouraged to use and might be removed in the future. Note that mpv likes to mix / and \ path separators for simplicity. kernel32.dll accepts this, but cmd.exe does not. GPLv2+ MPV(1)
http://manpages.ubuntu.com/manpages/bionic/man1/mpv.1.html
Yesterday seemed like any other day until FOMC minutes. Then the markets began to seize. Given the limited trading intensity we've witnessed so far in 2017, one could be forgiven for thinking the world was about to end. But at the end of the day, how different was yesterday from any other day? Was it really that crazy?

To investigate, I started by grabbing all the 1-minute data for the SPY from 2016 onward. Then I transformed every day to start at 1.0, where the price represented the daily return up until that point. That yielded a graph for 2017-04-05 which looked like:

Next, for each day, I computed the 'distance' or 'dissimilarity' between every pairwise combination of days. I did this in order to find the most similar trading days to April 5th 2017. In practice, I used the LazyMatrix model from SliceMatrix-IO's Python SDK. This model is an example of a distance matrix, whereby we can build a list of all the most similar datapoints:

from slicematrixIO import SliceMatrix

sm = SliceMatrix(api_key)

# normed_data is all the daily data series starting at 1.0, 9:30 to the close
lz = sm.LazyMatrix(dataset = normed_data.T)
distances = lz.rankDist("2017-04-05")
distances.head()

Which yielded:

Then I plotted the resulting dates to see how well they compared to yesterday… (hint: yesterday is in GOLD)

Ok, so maybe yesterday wasn't so special… If that's the case, maybe the past can be a good guide to the future, so let's see what happened on the very next day for the most similar dates we found above:

Not a single debacle in sight… this could explain why all this fire-insurance isn't doing squat for me at the moment. Should have written this in the morning…

Interested in Machine Learning Applications for Trading? Create complex machine learning systems in just a few lines of code

This example is powered by SliceMatrix-IO, the next generation in machine intelligence Platform as a Service (PaaS).
SliceMatrix delivers end-to-end machine intelligence solutions to end-users so that they can seamlessly develop machine intelligent applications and systems. Get started building for free and get your api key here

Categories: Markets, Python, Quant Tools

This is an interesting post amidst a unique and interesting publication. I wonder what this same analysis would yield for the Wednesdays when the FOMC wraps up the live meeting and the 2pm insanity that historically ensues.
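The core of the method — normalize each day's 1-minute series to start at 1.0, then rank the other days by pairwise distance — can be sketched without the hosted LazyMatrix model. The snippet below is in JavaScript rather than the post's Python, uses plain Euclidean distance, and runs on made-up toy series; it only mirrors the idea of rankDist, not the actual SliceMatrix implementation:

```javascript
// Normalize a day's price series so it starts at 1.0: each point becomes
// the cumulative return up to that minute, as in the post's charts.
function normalize(series) {
    return series.map(function (p) { return p / series[0]; });
}

// Euclidean distance between two equal-length normalized days.
function distance(a, b) {
    var s = 0;
    for (var i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return Math.sqrt(s);
}

// Rank all other days by similarity to the target day (smaller = closer),
// loosely mirroring LazyMatrix's rankDist("2017-04-05").
function rankDist(days, target) {
    var t = normalize(days[target]);
    return Object.keys(days)
        .filter(function (d) { return d !== target; })
        .map(function (d) { return { day: d, dist: distance(t, normalize(days[d])) }; })
        .sort(function (x, y) { return x.dist - y.dist; });
}

// Toy 1-minute closes, purely illustrative:
var days = {
    "2017-04-05": [100, 101, 99],
    "2017-04-03": [200, 202, 198],  // identical shape after normalization
    "2017-01-10": [50, 55, 60]
};
var ranked = rankDist(days, "2017-04-05");
// ranked[0].day === "2017-04-03"
```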
https://mktstk.com/2017/04/06/fomc-minutes-a-day-like-no-other/
After connecting to the SmartFox server, the next step is usually logging in. If you wish to know how to connect to the SmartFox server, refer to this post. Try the code below to log in to SmartFox:

using Sfs2X;
using Sfs2X.Core;
using Sfs2X.Requests;
using System;

namespace TestSmartFox
{
    class Program
    {
        static private SmartFox client;
        static private string serverIP = "127.0.0.1";
        static private int serverPort = 9933;
        static public string zone = "BasicExamples";
        static public bool debug = true;

        static void Main(string[] args)
        {
            client = new SmartFox();

            // Important! Set it to true in Unity3D
            client.ThreadSafeMode = false;

            client.AddEventListener(SFSEvent.CONNECTION, (evt) =>
            {
                bool bSuccess = (bool)evt.Params["success"];
                Console.WriteLine(client.IsConnected
                    ? "Successfully connected to SmartFox Server"
                    : "Failed to connect to SmartFox Server");

                var request = new LoginRequest("Tracy", "", zone); //[1]
                client.Send(request); //[2]
            });

            client.AddEventListener(SFSEvent.LOGIN, (evt) => //[3]
            {
                Console.WriteLine("The User login success");
            });

            client.Connect(serverIP, serverPort);
        }
    }
}

It is very simple to log in using the client API:

- First create a login request, with the given user name, password (empty string "" here) and which zone to log in to;
- Send the request using the SmartFox client;
- When the user logs in, you will get notified

Check the results using AdminTool; you will clearly see this log:

Once logged in, you can then retrieve all the rooms and select one of them to join; add or edit the code below:

client.AddEventListener(SFSEvent.ROOM_JOIN, (evt) =>
{
    Console.WriteLine("Join room success");
});

client.AddEventListener(SFSEvent.LOGIN, (evt) =>
{
    Console.WriteLine("The User login success");

    var rooms = client.RoomList; //[1]
    rooms.ForEach((room) => Console.WriteLine(room.Name));

    if (rooms.Count > 0)
    {
        var request = new JoinRoomRequest(client.RoomList.First().Name); //[2]
        client.Send(request); //[3]
    }
});

Here, we can retrieve all rooms in the zone by calling Line [1],
and then join one of them as shown in Line [2-3] above. Pretty simple & straightforward! Running the code above, we can see: Happy coding! 2 responses to “Using SmartFox with C# (II) : Login and join room”
https://xinyustudio.wordpress.com/2015/09/14/using-smartfox-with-c-ii-login-and-join-room/
Interface customization

The Gurobi interactive shell lives within a full-featured scripting language. This allows you to perform a wide range of customizations to suit your particular needs. Creating custom functions requires some knowledge of the Python language, but you can achieve a lot by using a very limited set of language features.

Let us consider a simple example. Imagine that you store your models in a certain directory on your disk. Rather than having to type the full path whenever you read a model, you can create your own custom read method:

gurobi> def myread(filename):
.......     return read('/home/john/models/'+filename)

Note that the indentation of the second line is required. Defining this function allows you to do the following:

gurobi> m = myread('stein9')
Read MPS format model from file /home/john/models/stein9.mps

If you don't want to type this function in each time you start the Gurobi shell, you can store it in a file. The file would look like the following:

from gurobipy import *

def myread(filename):
    return read('/home/john/models/'+filename)

The from gurobipy import * line is required in order to allow you to use the read method from the Gurobi shell in your custom function. The name of your customization file must end with a .py suffix. If the file is named custom.py, you would then type the following to import this function:

gurobi> from custom import *

One file can contain as many custom functions as you'd like (see custom.py in <installdir>/examples/python for an example). If you wish to make site-wide customizations, you can also customize the gurobi.py file that is included in <installdir>/lib.
https://www.gurobi.com/documentation/8.1/quickstart_windows/interface_customization.html
On Sun, Sep 16, 2012 at 11:04 AM, damodar kulkarni <kdamodar2000 at gmail.com> wrote:

> A feature being "more complicated" for the users to use or for the
> implementers to implement cannot be the SOLE reason

Haskell started out with a completely flat namespace; a minimal and minimally disruptive "hierarchical" namespace was added, without any special semantics and without any intent to define such semantics. Haskell does not have the concept of modules as containers that you seem to have; it never has had them; and there are ongoing and unresolved arguments over whether some ML-like module system should be adopted, so it's unlikely that they would be added in the absence of such resolution.

You apparently believe such a thing is implicitly part of namespaces and that we need some reason to not have automatically created them to your specifications. Could you explain where you got this notion?

--
brandon s allbery                                  allbery.b at gmail.com
wandering unix systems administrator (available)   (412) 475-9364 vm/sms
http://www.haskell.org/pipermail/beginners/2012-September/010636.html
{-# OPTIONS -Wall -fno-warn-orphans -fno-warn-missing-signatures #-}
{-# LANGUAGE CPP #-}

-- If a work request is sent to the gang while another is already running
-- then just run it sequentially instead of dying.
#define SEQ_IF_GANG_BUSY 1

-- Trace all work requests sent to the gang.
#define TRACE_GANG 0

-- | Gang primitives.
module Data.Array.Parallel.Unlifted.Distributed.Gang
        ( Gang
        , seqGang
        , forkGang
        , gangSize
        , gangIO, gangST
        , traceGang, traceGangST )
where
import GHC.IO
import GHC.ST
import Control.Concurrent (forkOn)
import Control.Concurrent.MVar
import Control.Exception (assert)
import Control.Monad
#if TRACE_GANG
import GHC.Exts (traceEvent)
import System.Time ( ClockTime(..), getClockTime )
#endif

-- Requests and operations on them --------------------------------------------
-- | The 'Req' type encapsulates work requests for individual members of a gang.
data Req
        -- | Instruct the worker to run the given action then signal it's done
        --   by writing to the MVar.
        = ReqDo (Int -> IO ()) (MVar ())

        -- | Tell the worker that we're shutting the gang down.
        --   The worker should signal that it's received the request by
        --   writing to the MVar before returning to its caller (forkGang).
        | ReqShutdown (MVar ())

-- | Create a new request for the given action.
newReq :: (Int -> IO ()) -> IO Req
newReq p
 = do   mv <- newEmptyMVar
        return $ ReqDo p mv

-- | Block until a thread request has been executed.
--   NOTE: only one thread can wait for the request.
waitReq :: Req -> IO ()
waitReq req
 = case req of
        ReqDo _ varDone     -> takeMVar varDone
        ReqShutdown varDone -> takeMVar varDone

-- Thread gangs and operations on them ----------------------------------------
-- | A 'Gang' is a group of threads which execute arbitrary work requests.
data Gang
        = Gang !Int         -- Number of 'Gang' threads
               [MVar Req]   -- One 'MVar' per thread
               (MVar Bool)  -- Indicates whether the 'Gang' is busy

instance Show Gang where
  showsPrec p (Gang n _ _)
        = showString "<<"
        . showsPrec p n
        . showString " threads>>"

-- | A sequential gang has no threads.
seqGang :: Gang -> Gang
seqGang (Gang n _ mv) = Gang n [] mv

-- | The worker thread of a 'Gang'.
--   The thread blocks on the MVar waiting for a work request.
gangWorker :: Int -> MVar Req -> IO ()
gangWorker threadId varReq
 = do   traceGang $ "Worker " ++ show threadId ++ " waiting for request."
        req <- takeMVar varReq

        case req of
         ReqDo action varDone
          -> do traceGang $ "Worker " ++ show threadId ++ " begin"
                start <- getGangTime
                action threadId
                end <- getGangTime
                traceGang $ "Worker " ++ show threadId
                         ++ " end (" ++ diffTime start end ++ ")"
                putMVar varDone ()
                gangWorker threadId varReq

         ReqShutdown varDone
          -> do traceGang $ "Worker " ++ show threadId ++ " shutting down."
                putMVar varDone ()

-- | Finaliser for worker threads.
--   We want to shutdown the corresponding thread when its MVar becomes
--   unreachable. Without this the program can complain about
--   "Blocked indefinitely on an MVar" because worker threads are still
--   blocked on the request MVars when the program ends. Whether this finalizer
--   is called or not is very racey. It can happen 1 in 10 times, or less often.
--
--   We're relying on the comment in System.Mem.Weak that says
--   "If there are no other threads to run, the runtime system will check for
--   runnable finalizers."
finaliseWorker :: MVar Req -> IO ()
finaliseWorker varReq
 = do   varDone <- newEmptyMVar
        putMVar varReq (ReqShutdown varDone)
        takeMVar varDone
        return ()

-- | Fork a 'Gang' with the given number of threads (at least 1).
forkGang :: Int -> IO Gang
forkGang n
 = assert (n > 0)
 $ do   -- Create the vars we'll use to issue work requests.
        mvs  <- sequence . replicate n $ newEmptyMVar

        -- Add finalisers so we can shut the workers down cleanly if they
        -- become unreachable.
        mapM_ (\var -> addMVarFinalizer var (finaliseWorker var)) mvs

        -- Create all the worker threads
        zipWithM_ forkOn [0..]
                $ zipWith gangWorker [0 .. n-1] mvs

        -- The gang is currently idle.
        busy <- newMVar False

        return $ Gang n mvs busy

-- | O(1). Yield the number of threads in the 'Gang'.
gangSize :: Gang -> Int
gangSize (Gang n _ _) = n

-- | Issue work requests for the 'Gang' and wait until they have been executed.
--   If the gang is already busy then just run the action in the requesting
--   thread.
gangIO :: Gang -> (Int -> IO ()) -> IO ()
gangIO (Gang n [] _) p
        = mapM_ p [0 .. n-1]

#if SEQ_IF_GANG_BUSY
gangIO (Gang n mvs busy) p
 = do   traceGang "gangIO: issuing work requests (SEQ_IF_GANG_BUSY)"
        b <- swapMVar busy True
        traceGang $ "gangIO: gang is currently " ++ (if b then "busy" else "idle")
        if b
         then mapM_ p [0 .. n-1]
         else do
                parIO n mvs p
                _ <- swapMVar busy False
                return ()
#else
gangIO (Gang n mvs busy) p
        = parIO n mvs p
#endif

-- | Issue some requests to the worker threads and wait for them to complete.
parIO   :: Int            -- ^ Number of threads in the gang.
        -> [MVar Req]     -- ^ Request vars for worker threads.
        -> (Int -> IO ()) -- ^ Action to run in all the workers, it's
                          --   given the ix of the particular worker
                          --   thread it's running on.
        -> IO ()
parIO n mvs p
 = do   traceGang "parIO: begin"
        start <- getGangTime
        reqs  <- sequence . replicate n $ newReq p

        traceGang "parIO: issuing requests"
        zipWithM_ putMVar mvs reqs

        traceGang "parIO: waiting for requests to complete"
        mapM_ waitReq reqs
        end <- getGangTime

        traceGang $ "parIO: end " ++ diffTime start end

-- | Same as 'gangIO' but in the 'ST' monad.
gangST :: Gang -> (Int -> ST s ()) -> ST s ()
gangST g p = unsafeIOToST . gangIO g $ unsafeSTToIO . p

-- Tracing ---------------------------------------------------------------------
#if TRACE_GANG
getGangTime :: IO Integer
getGangTime
 = do   TOD sec pico <- getClockTime
        return (pico + sec * 1000000000000)

diffTime :: Integer -> Integer -> String
diffTime x y = show (y-x)

-- | Emit a GHC event for debugging.
traceGang :: String -> IO ()
traceGang s
 = do   t <- getGangTime
        traceEvent $ show t ++ " @ " ++ s

#else
getGangTime :: IO ()
getGangTime = return ()

diffTime :: () -> () -> String
diffTime _ _ = ""

-- | Emit a GHC event for debugging.
traceGang :: String -> IO ()
traceGang _ = return ()
#endif

-- | Emit a GHC event for debugging, in the `ST` monad.
traceGangST :: String -> ST s ()
traceGangST s = unsafeIOToST (traceGang s)
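The gang pattern in the module above (a fixed pool of workers, each fed requests through a one-slot mailbox, with the issuer blocking until every worker signals done) is language-independent. A minimal Python sketch of the same idea, using threads and per-worker queues in place of forkOn and MVars (all names here are illustrative, not taken from the Haskell module):

```python
import threading
import queue

class Gang:
    """A fixed group of worker threads that run indexed work requests."""

    def __init__(self, n):
        self.n = n
        # one-slot queues play the role of the per-worker MVar Req
        self.mailboxes = [queue.Queue(maxsize=1) for _ in range(n)]
        for i, box in enumerate(self.mailboxes):
            t = threading.Thread(target=self._worker, args=(i, box), daemon=True)
            t.start()

    def _worker(self, thread_id, box):
        while True:
            action, done = box.get()   # block until a request arrives
            if action is None:         # shutdown request
                done.set()
                return
            action(thread_id)          # run the work, given our worker index
            done.set()                 # signal completion, like putMVar varDone ()

    def gang_io(self, action):
        """Issue `action` to every worker and wait for all to finish."""
        dones = []
        for box in self.mailboxes:
            done = threading.Event()
            box.put((action, done))
            dones.append(done)
        for done in dones:             # like mapM_ waitReq reqs
            done.wait()

results = [0] * 4
g = Gang(4)
g.gang_io(lambda i: results.__setitem__(i, i * i))
print(results)  # each worker fills its own slot: [0, 1, 4, 9]
```

The one-slot queue gives the same back-pressure as an MVar: a second request to a busy worker blocks until the first has been taken, which is what lets the issuer reuse the mailboxes safely across calls.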
http://hackage.haskell.org/package/dph-prim-par-0.6.0.1/docs/src/Data-Array-Parallel-Unlifted-Distributed-Gang.html
Step 1: HARDWARE (WiFi Pants owners skip to SOFTWARE below)

You'll need 6x 33 ohm resistors (either through-hole or SMT), some wire, and an ESP-12F module. Some variants of the ESP-12E should work as well, but there are some that do not have the GPIO9 and GPIO10 pins connected to the edge, and others that have a different pinout than listed here. From what I can tell there is only one variant of the ESP-12F (labeled "ESP-12-F QIO L4" on the back), so that is a safer bet. The ESP-12F also has an onboard 10uF capacitor across the input supply, which saves you from having to add one yourself.

You probably want something to mount the ESP-12F on. Perfboard is fine, as would be cardboard or just double-sided foam tape on the back of the Pi. No matter what you use, make sure that nothing is under the antenna or range will be greatly reduced. The ESP-12F LED is a good mark for the start of this keepout region. I use outdoor double-sided foam tape on my hand-built boards since it's tough but not hard to reposition, and raises the module enough to not short against the board. Creative application of hot glue or epoxy putty might work fine as well.

Step 2

Make the connections between the ESP-12F or -12E and the Raspberry Pi HAT. Keep wires as short as possible, and minimize the number of times they cross over each other. If you're using magnet wire or Kynar (standard issue blue wire) you might want to use a heavier gauge for the 3.3V and GND. Ideally the wire lengths of the 6 SDIO signals should be identical, but even a couple of inches of difference shouldn't matter. If you can, keep SD_CLK close to the average length of the other wires.

The SDIO signals (names beginning "SD_") should have 33 ohm resistors in series (in between the Pi and ESP-12F connections). It may work with other slightly larger values, or with no resistor at all. If using 1-bit SDIO instead of 4-bit, omit D2 and D3. There's not much of a good reason for this though.
A capacitor of 10uF or more may be needed between 3.3V and GND near the module, though a legit ESP-12F already has a 10uF supply cap onboard.

Wire it like this:

Note that the CH_PD signal is connected to the ID_SD signal. This must be mapped as a GPIO (GPIO0), switched to an output and driven low, then switched to an input again before the driver is loaded in order to reset the module. This will be added in the near future in the driver.

The ESP-03 column above is for people who want to use an ESP-03 module instead. This requires soldering to the pins of the SPI flash chip for some of the signals. The SPI flash may be desoldered first, or left in place.

Step 3: SOFTWARE

Start with a fresh Raspbian SD image of 2016-05-10 or later. A network connection to the Pi is required for the initial installation for Raspbian to fetch dependencies to build the driver module. This could be the onboard Ethernet of a model B Pi, a supported USB-Ethernet or USB-WiFi adapter, or the Pi Zero connected through another PC over USB.

Step 4

Log in as "pi" and install the prerequisite packages to build the driver:

#make sure everything is up to date
sudo apt-get update && sudo apt-get -y upgrade
#install module build dependencies
sudo apt-get -y install dkms raspberrypi-kernel-headers

Step 5

Add the line "dtoverlay=sdio,poll_once=off" to /boot/config.txt, or replace any existing "dtoverlay=sdio" line.
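The sed and echo commands below perform a delete-then-append edit so the file ends up with exactly one dtoverlay=sdio line no matter how many times you run them. The same idempotent edit, sketched in Python for clarity (the function name is illustrative; the real steps just use sed):

```python
import re

def set_sdio_overlay(config_text, options="poll_once=off"):
    """Return config.txt content with exactly one dtoverlay=sdio line.

    Mirrors the sed + echo pair: delete any existing 'dtoverlay...sdio'
    lines, then append the desired one.
    """
    kept = [line for line in config_text.splitlines()
            if not re.match(r"^dtoverlay.*sdio", line)]
    kept.append("dtoverlay=sdio," + options)
    return "\n".join(kept) + "\n"

before = "gpu_mem=16\ndtoverlay=sdio\n"
print(set_sdio_overlay(before))
# gpu_mem=16
# dtoverlay=sdio,poll_once=off
```

For a 1-bit setup you would pass options="poll_once=off,bus_width=1" instead, matching the second command pair below.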
If you're using 4-bit SDIO, which you likely are, run this to set up the boot configuration:

#4-bit SDIO setup (for WiFi Pants board)
sudo sed -i -e "/^dtoverlay.*sdio/d" /boot/config.txt
sudo sh -c 'echo "dtoverlay=sdio,poll_once=off" >> /boot/config.txt'

If you're using 1-bit SDIO, use this instead (or the WiFi driver will crash when you load it):

#1-bit SDIO setup (NOT for WiFi Pants board)
sudo sed -i -e "/^dtoverlay.*sdio/d" /boot/config.txt
sudo sh -c 'echo "dtoverlay=sdio,poll_once=off,bus_width=1" >> /boot/config.txt'

Step 6

If you didn't use ID_SD (GPIO0, HAT pin 27) as the CH_PD GPIO, provide the kernel module with the right one via a modprobe.d conf file. Replace "5" below with the correct GPIO number. Skip this step if you are using a WiFi Pants board, as it uses the default of GPIO0.

sudo sh -c 'echo "options esp8089 esp_reset_gpio=5" > /etc/modprobe.d/esp.conf'

Step 7

Reboot to load the updated kernel and activate SDIO:

sudo reboot

Step 8

After the reboot, log in and fetch the DKMS package for esp8089:

wget
gunzip esp8089-dkms_1.9.20190603_all.deb.gz

Check for the latest release in case I forget to update these instructions.

Step 9

Install the package (sub the filename from the wget above if different):

sudo dpkg -i esp8089-dkms_1.9.20190603_all.deb

This package will be automatically rebuilt and installed every time Raspbian installs a new kernel.

Step 10

Load the driver.
sudo modprobe esp8089

If the ESP8089/8266 is found, you should see these lines spread through dmesg:

[ 2.253368] mmc1: new high speed SDIO card at address 0001
[ 5.416128] ***** EAGLE DRIVER VER:bdf5087c3deb*****
[ 5.618948] esp_sdio_init power up OK
[ 6.140907] first normal exit
[ 6.141112] esp_sdio_remove enter
[ 6.249295] eagle_sdio: probe of mmc1:0001:1 failed with error -123
[ 6.249485] mmc1: card 0001 removed
[ 6.333036] mmc1: new high speed SDIO card at address 0001
[ 6.716629] esp_host:bdf5087c3deb
[ 7.126350] esp_op_add_interface STA

The probe error is expected when the ESP restarts after the firmware is loaded.

Discussions

I'm running Raspbian Stretch Lite, clean install. I have a custom PCB with an ESP-12F configured in the same way as above but I'm not able to get it working. I've followed all of the above steps but the final line "sudo modprobe esp8089" returns the following error: "modprobe: ERROR: could not insert 'esp8089': Unknown symbol in module, or unknown parameter (see dmesg)" Any pointers so I can get this working?

Greetings, I had everything running nicely. Unfortunately I did an apt-get update and apt-get dist-upgrade. This forces the Wi-Fi to be reinstalled. Since I didn't expect this, I am providing a heads-up to others.

I am not getting good results with the 2017-03-02 Jessie-lite. Has anyone else given this a try?

It seems like this no longer works (situationally). I just set up a new OctoPi image, and its latest release as of today comes with kernel 4.4.19. But the raspberrypi-kernel-headers version that gets installed today is 4.4.38. So I had to also sudo apt-get install --reinstall raspberrypi-bootloader and then after a reboot I was on kernel 4.4.38, matching the installed headers, so the dkms compile would succeed.

Thanks for the heads-up.
I haven't updated any of my boards to this kernel, but I'll try tonight.

I ran into similar trouble with 4.4.38 on a plain raspbian install. What worked for me was to reboot after apt-get upgrade finished, then force DKMS to build manually: sudo dkms autoinstall; sudo depmod -a Then reboot.

Where in the steps was this done? Thanks!

Hi, I'm completely new to this but I have worked around with esp8266. My question is about the firmware of esp8266? Do you simply flash it with esp8089 firmware on that GitHub or something?

Hello ajlitt, thank you for this awesome hack. I had no success with an ESP-12-E QIO L2. I didn't have an F-L4 revision at home, but I still had a spare ESP-03 available. You can have a look at my proto-board here: I have tested my bandwidth with this method: # wget -O /dev/null In my case, the wifi speed is not very great. I got a mean speed of 1824 kbps for downloading this 11MB test file. I wonder if there could be a problem with my wiring, because the red wires are close to the ceramic antenna of the ESP-03.

From my experience the ESP-03 antenna is very sensitive to nearby metal. The best speed I saw from an ESP-03 was when I had the groundplane cutout for the antenna on the module hanging entirely off of the PCB. Look through my early project logs to see the two implementations I have with the ESP-03. The first had the antenna near a groundplane and I saw throughput much like you describe. My second attempt is what I described above and worked much better (though not as well as my chip-on-board WiFi HAT and ESP12F Pants designs). Also I recommend using a local endpoint for testing throughput to eliminate throttling from an internet endpoint. I find iperf to be a reliable and repeatable test.

(...was when I had the groundplane cutout for the antenna on the module hanging entirely off of the PCB...)
Seems very interesting but I need to see photos to understand clearly. I looked on hackaday, and did not find the early project you mention. Could you help me find it please?

Specifically these two logs: Note how I have the antenna portion of the module off the edge of the board?

In my realization, the antenna portion is already outside of the proto-board. Look here, in the second picture:

How does it work when it's very close to your wifi AP? My top speeds were with the router sitting on the same desk as the ESP.

I put rpi0-0 50cm away from the AP. Same wget command as before. The bandwidth is 1912 kbps.

I just received my ordered ESP-12F QIO L4. I first got it to work with aerial wiring, and it works very well, definitely more reliable than with my ESP-03 with unsoldered Flash chip. I got very good transfer speed now: # wget -O /dev/null # (3.33 MB/s) '/dev/null' saved [11536384/11536384] which is about 26 Mbps. I think that this is the Rpi-0 which is the limitation now. BTW, I would like to understand why there is such a difference of speed between both ESPs. This is the same ESP8266EX core, no? The only difference I see is that resistors on my ESP-03 breadboard are 50 Ohm, and on the ESP-12F it is 22 Ohm. I tried both setups without resistors, but the WiFi connection was failing or not stable in both cases. Do you see another reason that could explain such a data rate difference?

I don't know. I haven't dug into the design differences between the two besides the greater attention to detail in the ESP-12F. My guess is that it has something to do with the routing and bypassing of the power to the ESP8266 on the ESP-03, which IIRC didn't obey the recommendations in the Espressif docs for caps and routing.

I have a raspberry pi B rev 2 and it has no GPIO 26. What can I do now?
@guyf2010 attempted this below. Not sure if he got it working. All the pins except SD_D2 and SD_D3 are within the rev 1 Pi 26 pin header, which means it may work in 1-bit SDIO mode. Follow the above instructions to run the interface in 1-bit mode and let us know if it works for you.

I did get it working in the end. You are restricted to 1-bit SDIO. The only issue I ran into was solved by taking everything apart and doing it again; I suspect a bad connection to have been the problem.

Thanks for this! I did my own version ( ) and it turned out great, with one caveat, mentioned there: I have to unload the module before I reset the ESP as in step 10 of the instructions.

...and thank you for the patch for the recent kernels. I made the decision to use GPIO0 (ID_SD) as the reset signal on my board a while back in order to minimize pin utilization. I have noticed that relying on ID_SD to reset the ESP by way of the Pi's initialization code is unreliable. So I made this design choice with the intention that I'd add the reset twiddle in the driver. This would not only give a chance to reset GPIO0 before downloading firmware, but it would allow the driver to clear the ESP's state on a warm reboot. I still haven't implemented this, and honestly I forgot about it as I got bogged down in the hardware.

@Anthony Lieuallen: correction, it's now implemented. The driver twiddles GPIO0 (ID_SD) on load and on unload, with a module parameter to change the gpio#. Works great!

Confirmed, it works great. Awesome work!

I've been following this thread and now built up my own board. I'm trying the kernel 4.4 branch and did the required #include change. Module compiles and installs ok. But the ESP8266 is not detected, there is no mmc1 in the boot message, and toggling CH_PD gives nothing.
The ESP does issue "boot mode: (7, 6) waiting for host" from its own serial output though. I measured the SD_CLK and SD_CMD lines: SD_CLK gives a 125kHz clock and SD_CMD toggles occasionally. SD_D0 stays high. Here is /sys/kernel/debug/mmc1/ios:

root@raspberrypi:~# dmesg | grep mmc1
root@raspberrypi:~# cd /sys/kernel/debug
root@raspberrypi:/sys/kernel/debug/mmc1# cat ios
clock:          0 Hz
vdd:            0 (invalid)
bus mode:       2 (push-pull)
chip select:    0 (don't care)
power mode:     0 (off)
bus width:      0 (1 bits)
timing spec:    0 (legacy)
signal voltage: 0 (3.30 V)
driver type:    0 (driver type B)
root@raspberrypi:/sys/kernel/debug/mmc1# cat clock
0

Any clue please?

What esp8266 module are you using? That symptom means that the two aren't doing their handshake. Any chance that SD_CLK or SD_CMD are going to the wrong pins on the module?

It has ESP-12-F QIO L4 written on the module. I'll go replace the module and try again.

I haven't tried. I have used it successfully with hostapd so there's that.

hi, i installed the kernel on a new sdcard (followed the above steps), am facing a new problem now. please check dmesg.
[ 115.117299] ERROR::dwc_otg_hcd_urb_enqueue:505: Not connected [ 115.117299] [ 115.195098] mmc1: queuing unknown CIS tuple 0x01 (3 bytes) [ 115.203501] mmc1: queuing unknown CIS tuple 0x1a (5 bytes) [ 115.206877] mmc1: queuing unknown CIS tuple 0x1b (8 bytes) [ 115.211181] mmc1: queuing unknown CIS tuple 0x80 (1 bytes) [ 115.211327] mmc1: queuing unknown CIS tuple 0x81 (1 bytes) [ 115.211453] mmc1: queuing unknown CIS tuple 0x82 (1 bytes) [ 115.211528] mmc1: new high speed SDIO card at address 0001 [ 115.280354] usb 1-1: USB disconnect, device number 2 [ 115.280400] usb 1-1.1: USB disconnect, device number 3 [ 115.280807] smsc95xx 1-1.1:1.0 eth0: unregister 'smsc95xx' usb-20980000.usb-1.1, smsc95xx USB 2.0 Ethernet [ 115.280904] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup [ 115.292461] fib_del_ifaddr: bug: prim == NULL [ 115.297041] usb 1-1.4: USB disconnect, device number 4 [ 115.306156] usb 1-1.5: USB disconnect, device number 5 [ 115.333821] Unable to handle kernel paging request at virtual address 99945138 [ 115.341337] pgd = da554000 [ 115.344126] [99945138] *pgd=00000000 [ 115.347812] Internal error: Oops: 5 [#1] ARM [ 115.352174] Modules linked in: esp8089(O+) mac80211 cfg80211 rfkill snd_bcm2835 snd_pcm snd_seq snd_seq_device snd_timer snd joydev evdev bcm2835_gpiomem bcm2835_wdt uio_pdrv_genirq uio [ 115.369347] CPU: 0 PID: 2380 Comm: modprobe Tainted: G O 4.4.8+ #880 [ 115.376967] Hardware name: BCM2708 [ 115.380440] task: da58df60 ti: d9470000 task.ti: d9470000 [ 115.385960] PC is at load_module+0x19e8/0x1ef4 [ 115.390502] LR is at mutex_lock+0x1c/0x48 [ 115.394598] pc : [] lr : [] psr: 20000113 [ 115.394598] sp : d9471e88 ip : d9471e70 fp : d9471f34 [ 115.406265] r10: c0577174 r9 : 00000000 r8 : bf24e7c8 [ 115.411588] r7 : bf24e780 r6 : bf24e78c r5 : 99945124 r4 : d9471f3c [ 115.418231] r3 : bf24e93c r2 : 00000000 r1 : 00000000 r0 : 00000000 [ 115.424876] Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user [ 115.432140] 
Control: 00c5387d Table: 1a554008 DAC: 00000055 [ 115.437997] Process modprobe (pid: 2380, stack limit = 0xd9470188) [ 115.444294] Stack: (0xd9471e88 to 0xd9472000) [ 115.448745] 1e80: bf24e78c 00007fff bf24e780 c007bc08 00000000 bf24e78c [ 115.457074] 1ea0: bf24e93c bf24e78c bf24ea14 bf24e878 bf2214fc 00000000 00000000 00000000 [ 115.465400] 1ec0: de8de000 c05702cc d9471eec bf21f090 00000002 00000000 00000000 00000000 [ 115.473723] 1ee0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 [ 115.482050] 1f00: 00000000 00000000 b6ec6948 0000566d 00000000 b6c7e66d b6ec6948 d9470000 [ 115.490376] 1f20: de8e366d 00000000 d9471fa4 d9471f38 c007ea34 c007ca70 00000000 de89e000 [ 115.498703] 1f40: 0004566d de8d9ab8 de8d9993 de8e1728 00039a14 0003c1f4 00000000 00000000 [ 115.507028] 1f60: 00000000 00005178 0000001e 0000001f 00000014 00000018 0000000d 00000000 [ 115.515352] 1f80: 00000000 00060000 7fc9f560 00000080 c000f9e8 d9470000 00000000 d9471fa8 [ 115.523679] 1fa0: c000f820 c007e964 00000000 00060000 b6c39000 0004566d b6ec6948 b6c39000 [ 115.532007] 1fc0: 00000000 00060000 7fc9f560 00000080 7fc9f508 0004566d b6ec6948 00000000 [ 115.540335] 1fe0: 00000000 be8c596c b6ebdfb4 b6e28534 60000010 b6c39000 e3510000 0a00007b [ 115.548687] [] (load_module) from [] (SyS_init_module+0xdc/0x134) [ 115.556688] [] (SyS_init_module) from [] (ret_fast_syscall+0x0/0x1c) [ 115.564936] Code: e51b3094 e1530005 e2455008 0a000009 (e5953014) [ 115.571301] ---[ end trace 3d97829848ad5363 ]--- [ 115.690115] Indeed it is in host mode hprt0 = 00021501 [ 115.990018] usb 1-1: new high-speed USB device number 6 using dwc_otg [ 115.990242] Indeed it is in host mode hprt0 = 00001101 [ 116.370511] usb 1-1: New USB device found, idVendor=0424, idProduct=9514 [ 116.370545] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 116.372002] hub 1-1:1.0: USB hub found [ 116.372172] hub 1-1:1.0: 5 ports detected [ 116.660038] usb 1-1.1: new high-speed USB device number 7 using 
dwc_otg [ 116.760699] usb 1-1.1: New USB device found, idVendor=0424, idProduct=ec00 [ 116.772930] usb 1-1.1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 116.853312] smsc95xx v1.0.4 [ 116.969132] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-20980000.usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:e6:83:61 [ 117.049833] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup [ 117.120068] usb 1-1.4: new low-speed USB device number 8 using dwc_otg [ 117.267170] usb 1-1.4: New USB device found, idVendor=0458, idProduct=003a [ 117.279465] usb 1-1.4: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 117.291971] usb 1-1.4: Product: USB Optical Mouse [ 117.301792] usb 1-1.4: Manufacturer: Genius [ 117.372506] input: Genius USB Optical Mouse as /devices/platform/soc/20980000.usb/usb1/1-1/1-1.4/1-1.4:1.0/0003:0458:003A.0004/input/input3 [ 117.414064] hid-generic 0003:0458:003A.0004: input,hidraw0: USB HID v1.11 Mouse [Genius USB Optical Mouse] on usb-20980000.usb-1.4/input0 [ 117.550066] usb 1-1.5: new low-speed USB device number 9 using dwc_otg [ 117.703408] usb 1-1.5: New USB device found, idVendor=1c4f, idProduct=0026 [ 117.715635] usb 1-1.5: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 117.728130] usb 1-1.5: Product: USB Keyboard [ 117.737616] usb 1-1.5: Manufacturer: SIGMACHIP [ 117.811489] input: SIGMACHIP USB Keyboard as /devices/platform/soc/20980000.usb/usb1/1-1/1-1.5/1-1.5:1.0/0003:1C4F:0026.0005/input/input4 [ 117.913647] hid-generic 0003:1C4F:0026.0005: input,hidraw1: USB HID v1.10 Keyboard [SIGMACHIP USB Keyboard] on usb-20980000.usb-1.5/input0 [ 117.971602] input: SIGMACHIP USB Keyboard as /devices/platform/soc/20980000.usb/usb1/1-1/1-1.5/1-1.5:1.1/0003:1C4F:0026.0006/input/input5 [ 118.071605] hid-generic 0003:1C4F:0026.0006: input,hidraw2: USB HID v1.10 Device [SIGMACHIP USB Keyboard] on usb-20980000.usb-1.5/input1 [ 118.476422] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x4DE1 Are you sure? 
Did you toggle CH_PD before loading the driver? I see this if the Pi is warmbooted and the ESP is not reset before loading the driver. If you're doing that, does it oops if you load the esp8089 module when the ESP8266 is not plugged into the Pi? I'm wondering if your kernel and module are out of sync.

i installed the driver in the kernel and it is loading automatically. Changed CH_PD to other gpios, but no use.

this error is because of the OS version (wheezy). with jessie it is working fine.
yes | no pi@raspberrypi:~/esp8089 $ sudo dmesg -c [ 708.380468] ***** EAGLE DRIVER VER:bdf5087c3deb***** [ 719.380012] esp_sdio_init ------ RETRY ------ [ 730.380143] esp_sdio_init ------ RETRY ------ [ 741.380252] esp_sdio_init ------ RETRY ------ [ 752.380418] esp_sdio_init ------ RETRY ------ [ 752.380600] eagle sdio can not power up! I am toggling CH_PD also. help me? Are you sure? yes | no Do you see the "mmc1: new high speed SDIO card..." message in dmesg at all in step 9? If so, then sd_clk and sd_cmd are working, but the sd_dN lines may not be. Make sure that you're not overclocking the SD card in your config.txt. Ideally you should be using a stock config.txt from Raspbian with only the changes in the instructions. Do you have a cap across the supply going to the ESP-03? The ESP-12F has a 10uF cap onboard, but the ESP-03 only has 0.1uF bypass. Finally, I recommend scoping each of the signals. You should see a constant clock on sd_clk, periodic data on sd_cmd, and traffic on each of the sd_dN lines when the driver loads. Are you sure? yes | no now changed sdio to 1bit mode it is working fine (in config.txt) I have to check my board. Are you sure? yes | no I'm having issues getting the SDIO interface working on one of the earlier Pi models. I'm trying to use one of the 26 pin Pi 1Bs, but I am unable to get the mmc1 device to init. I've been trying to use 1-bit SDIO, as this Pi doesn't have D2 broken out. Attempting to run 'sudo modprobe esp8089' results in an error followed by the system hanging. (The error says "Unable to handle paging request at virtual address 904edc38"). Are there any modifications required to get the RPi 1 connected to an ESP8266? On an important but unrelated note, the pin table suggests HAT pins 2 and 4 as ground pins, but aren't they 5V? Thanks Are you sure? yes | no Looks like the mmc1 device is being created. But running cat /sys/kernel/debug/mmc1/ios says the clock speed is 0Hz. Any ideas on a fix? Are you sure? 
yes | no It's very likely the blobs for the 1st gen Pis didn't get the same treatment as the rest for enabling SDIO. Are you sure? yes | no Looking at some of the info online, it may be possible to wire up an ESP-03 module without removing/soldering onto the flash chip. The table under the ESP-03 at has CLK, MOSI, MISO and CS0 listed as alternate functions for GPIO pins that are exposed. I don't currently have any ESP-03 modules, nor the know-how on accessing the alternate functions. But it may well be the easiest way to get a module with ceramic antenna hooked up to a Pi. Another thing I noticed is the mention of using GPIO0, GPIO2 and MTDO as a 3-bit SDIO connection in the GitHub documentation at. It's unclear to me whether it would be usable as an interface, or simply used for configuring the SDIO interface used here already. Are you sure? yes | no Yes, I expect after all that just like the ESP-12E/F modules, the flash on the ESP-03 & ESP-04 can be left soldered in place. ESP-03 is where @ajlitt started all of this... 3-bit SDIO - that must be a typo! No idea what interface they might be hinting at. Are you sure? yes | no Is it possible it's a 3-cable SDIO interface? Maybe SD_CLK, SD_CMD and SD_D0. Which makes me wonder, is SD_D1 required in these instructions for 1-bit mode? Are you sure? yes | no Unfortunately only the pins I'm using are able to boot as SDIO. Those GPIOs aren't part of the SDIO bus. They are sampled by the ESP on reset to determine how to boot. There's a table on the 4th tab ("strapping") of Espressif's pinlist spreadsheet () that shows the enumeration. This is why we use GPIO0 to select between booting from flash (001) and from UART (011) for most ESP8266 projects. For this usage, we want the ESP to boot from an SDIO host. When MTDO(GPIO15) is high, the ESP boots from SDIO, while GPIO0 and GPIO2 choose the bus speed and protocol rev. 
I leave all three of these unconnected because the ESP8266 has internal pull-ups which configures it to boot in SDIO high speed v2 mode. Are you sure? yes | no I see. I had wondered if the reference was more to controlling the SDIO bus rather than being an SDIO bus. Looking further into the CLK/MOSI/MISO/CS0 pins available on the ESP-03 as alt functions, it turns out that the controller for that SPI bus is entirely separate from the one used to access the flash, so the SDIO signals would not be readable on that bus. (TL;DR: ajlitt, you were right) On a side note, I don't know the SDIO bus very well, but if four Dn pins are required for 4-bit mode, wouldn't it be possible to use one Dn pin for 1-bit mode? Or are two pins required for bi-directional connections? Are you sure? yes | no Thanks for your great instructions. I am having trouble with number 10. When I toggle the gpio, the led light on the ESP12F flashes and the system seems to become unresponsive. I am on the pi2. Thanks for your help. Are you sure? yes | no Does it eventually come back? This is what I usually see when the driver is having trouble loading firmware to the module. Are you attempting to overclock the SDIO bus? Have you checked the wiring on the SDIO_Dn signals? Are you sure? yes | no You should only do step 10 if the module does not show up ie driver not loaded. If the driver has loaded already & you do this, the system will hang. Precede it with "sudo modprobe -r esp8089" to avoid that happening. Are you sure? yes | no Great! I have stopped the system hang but now I am getting this error: ***** EAGLE DRIVER VER:bdf5087c3deb***** [ 65.861546] esp_sdio_dummy_probe enter [ 66.064687] esp_sdio_init power up OK [ 76.324728] resetting event timeout [ 76.324753] esp_init_all failed: -110 [ 76.324760] first error exit Any ideas? *******EDIT****** I was able to fix it by re-wiring with 33 ohm resistors. Thanks for the advice and the great project! I am getting about 25mb/s. Are you sure? 
yes | no I'd re-check the connections between Pi & module, both wiring-wise & for good contact. Are you sure? yes | no It is ALIVE! I had it wired wrong the first and second time! (my screenshot of the old instructions still had D3 and VCC swapped - oops) - thanks for helping me through! (it would be cool if you could re-add the esp03-table to the instructions) ifconfig detects a wlan0 and iwlist has found my access point! To recap for everyone having problems compiling the driver on 4.4.x: - in 'sdio_stub.c' change the line "#include <mach/gpio.h>" to "#include <linux/gpio.h>" - make sure the esp is wired correctly Are you sure? yes | no Congrats! Good point, I forgot that some people might still want to work with the ESP-03. Are you sure? yes | no Well done for battling through! Table 8, page 14 of the ESP8266EX Datasheet Version 4.4 is a solid reference for the pin mapping. Are you sure? yes | no It seems to fit here better: i can't compile esp8089 - it can't find <mach/gpio.h> what am i doing wrong? i followed your instructions and am hanging on number (8) In file included from /home/pi/esp8089/sdio_sif_esp.c:58:0: /home/pi/esp8089/sdio_stub.c:7:23: fatal error: mach/gpio.h: No such file or directory #include <mach/gpio.h> thanks Are you sure? yes | no It's been a while since I went through the instructions on a fresh install. I'll do that tonight. I don't want to hold your password manager up, it looks slick! Are you sure? yes | no Thanks! i forgot to mention that i run Kernel 4.4.2 (installed via rpi-update), because the USB gadget driver isn't available on 4.1 on another note: what resistor value do you recommend for the esp-03 with the flash desoldered? (currently i have forgotten to install them) Are you sure? yes | no Ah, your problem is due to kernel differences. 4.4.1 has no include/mach/gpio.h for bcm2708. Try changing the path to <linux/gpio.h>. Are you sure? yes | no It compiled now. But I can't get it to work. 
when I do `insmod esp8089.ko` it says: insmod: ERROR: could not insert module esp8089.ko: Unknown symbol in module dmesg output: Are you sure? yes | no Oh, and using no resistors is OK as long as it's working. If you see instability start with 33 ohms and go up from there. You may need that if you want to try for 62.5MHz SDIO. Are you sure? yes | no ah, do "sudo modprobe mac80211" first Are you sure? yes | no sorry for taking up your time, but it still doesn't work. now, `modprobe esp8089` takes a lot longer, but still fails with this message: modprobe: ERROR: could not insert 'esp8089': No such device dmesg: ***** EAGLE DRIVER VER:bdf5087c3deb***** [ 105.312618] esp_sdio_init ------ RETRY ------ [ 116.313184] esp_sdio_init ------ RETRY ------ [ 127.312652] esp_sdio_init ------ RETRY ------ [ 138.312617] esp_sdio_init ------ RETRY ------ [ 138.312742] eagle sdio can not power up! Are you sure? yes | no do you have by chance the pinout for the esp-03 still at hand? i figure, i might have swapped a signal or two Are you sure? yes | no That's progress. Did you wire CH_PD on the ESP-03 to a GPIO? Try toggling it low and then high, then try loading the module Are you sure? yes | no i have. it is on the pi's pin 7 (GPIO4?) and i did: root@hardpass:/home/pi# echo 4 > /sys/class/gpio/export root@hardpass:/home/pi# echo low > /sys/class/gpio/gpio4/direction root@hardpass:/home/pi# echo in > /sys/class/gpio/gpio4/direction and then modprobe again. but same result EDIT: SO....i found a screenshot of the old instructions for the esp03 - and i managed to flip the pinout (pin1 to pin5, etc). I'll rewire it and then it will hopefully work. thanks for helping me get it compiled! Are you sure? yes | no so, i rewired it, but it still doesn't work. its getting rather late now here so i'll try again tomorrow. thanks anyways ;) Are you sure? 
yes | no 1-bit mode still makes some sense - two components off the BoM and two fewer precious GPIOs claimed by the SDIO interface (with more chance to avoid conflicts with other add-on boards). To that end I've made a pull-request re adding another overlay specifically for 1-bit mode, configuring only pins 22-25. Are you sure? yes | no I agree, but for most of the people following the directions verbatim it doesn't make sense to not wire those last two. It's a shame the Pi bootloader overlay system doesn't allow for more complexity with parameters. It would be much easier if we could have a single boolean that could enable the correct number of GPIOs and set the bus width at the same time. Are you sure? yes | no
https://hackaday.io/project/8678/instructions/discussion-56635
CC-MAIN-2021-04
refinedweb
5,907
78.04
Showing messages 1 through 5 of 5.

- Answer to the self.puts dilema  2004-07-19 18:23:49  wkaha@yahoo.com

- Answer to the self.puts dilema  2004-07-19 20:06:35  Christopher Roach

  There really isn't any reason for the self.hello call other than to prove the point that a private method (hello, in this case) cannot be prefaced with a reference of any kind, even a reference to the object in which the private method resides. So, yes, you could try out the code without the self.hello call, but that would defeat the purpose of the example. The example is absolutely useless, I know, but it was created purely for pedagogical reasons rather than utility.

- Answer to the self.puts dilema  2004-07-19 16:12:40  Merc

  Just a style note. Unlike Python, Ruby doesn't require parentheses around method calls that take no arguments (in fact, under most circumstances, it doesn't require them around methods that take arguments). That's why you can say puts "Hello World!", rather than puts("Hello World!"). Most Ruby programmers omit them when they're not necessary, so they would write that code:

      class MyClass
        def sayHello()
          self.hello()
        end
        private
        def hello
          puts "Hello, World!!!"
        end
      end

      myClass = MyClass.new
      myClass.sayHello

  Part of the reason this is significant is that it allows you to treat functions as if they were attributes:

      class ComplexNumber
        attr_accessor :real, :complex
      end

      n = ComplexNumber.new
      n.real = 5
      n.complex = 6
      puts "(#{n.real}, #{n.complex})"

  n.real, and n.real= are actually method calls, but by omitting parentheses they look more natural.

- Answer to the self.puts dilema  2004-07-19 20:30:53  Christopher Roach

  You're absolutely correct — you do not need the parentheses in a Ruby method call. My use of the parentheses in the example was simply because I am so used to using them in C/C++ and Java in my daily work. I've tried without and it works.
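The access rule being debated in these messages can be checked directly. The sketch below is illustrative only (class and method names are made up, not from the thread): the implicit-receiver call inside the class reaches the private method, while an explicit receiver from outside the object raises NoMethodError. Note that Ruby 2.7 and later additionally permit a literal self receiver for private calls, so the historical self.hello restriction no longer applies there.

```ruby
class Greeter
  def greet
    hello               # implicit receiver: private methods are reachable here
  end

  private

  def hello
    "Hello, World!!!"
  end
end

g = Greeter.new
puts g.greet            # prints: Hello, World!!!

begin
  g.hello               # explicit receiver from outside the object
rescue NoMethodError
  puts "private method rejected"
end
```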
http://www.oreillynet.com//cs/user/view/cs_msg/41300
crawl-001
refinedweb
347
68.06
#include "libgomp.h"
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

/* This attribute contains PTHREAD_CREATE_DETACHED.  */
pthread_attr_t gomp_thread_attr;

pthread_key_t gomp_tls_key;
#endif

/* This is to enable best-effort cleanup after fork.  */
static bool gomp_we_are_forked;

/* This structure is used to communicate across pthread_create.  */

  return pool;
}

/* Free a thread pool and release its threads. */
static void
gomp_free_pool_helper (void *thread_pool)
{
  struct gomp_thread *thr = gomp_thread ();
  struct gomp_thread_pool *pool
    = (struct gomp_thread_pool *) thread_pool;
  gomp_barrier_wait_last (&pool->threads_dock);
  gomp_sem_destroy (&thr->release);
  thr->thread_pool = NULL;
  thr->task = NULL;
  pthread_exit (NULL);
}

static void
gomp_free_thread_pool (bool threads_are_running)
{
  struct gomp_thread *thr = gomp_thread ();
  struct gomp_thread_pool *pool = thr->thread_pool;
  if (pool)
    {
      if (pool->threads_used > 0)
	{
	  int i;
	  if (threads_are_running)
	    {
	      for (i = 1; i < pool->threads_used; i++)
		{
		  struct gomp_thread *nthr = pool->threads[i];
		  nthr->fn = gomp_free_pool_helper;
		  nthr->data = pool;
		}
	      /* This barrier undocks threads docked on pool->threads_dock.  */
	      gomp_barrier_wait (&pool->threads_dock);
	      /* And this waits till all threads have called
		 gomp_barrier_wait_last in gomp_free_pool_helper.  */
	      gomp_barrier_wait (&pool->threads_dock);
	      /* Now it is safe to destroy the barrier and free the pool.  */
	      gomp_barrier_destroy (&pool->threads_dock);
	    }
	  gomp_managed_threads -= pool->threads_used - 1L;
	  gomp_mutex_unlock (&gomp_managed_threads_lock);
	  /* Clean up thread objects */
	  gomp_sem_destroy (&nthr->release);
	  nthr->thread_pool = NULL;
	  nthr->task = NULL;
	}
      free (pool->threads);
      if (pool->last_team)
    }
}

/* This is called whenever a thread exits which has a non-NULL value for
   gomp_thread_destructor.  In practice, the only thread for which this
   occurs is the one which created the thread pool.  */

  gomp_free_thread_pool (true);

/* This is called in the child process after a fork.  According to POSIX,
   if a process which uses threads calls fork(), then there are very few
   things that the resulting child process can do safely -- mostly just
   exec().  However, in practice, (almost?) all POSIX implementations
   seem to allow arbitrary code to run inside the child, *if* the parent
   process's threads are in a well-defined state when the fork occurs.
   And this circumstance can easily arise in OMP-using programs, e.g.
   when a library function like DGEMM uses OMP internally, and some other
   unrelated part of the program calls fork() at some other time, when no
   OMP sections are running.  Therefore, we make a best effort attempt to
   handle the case:

     OMP section (in parent) -> quiesce -> fork -> OMP section (in child)

   "Best-effort" here means that:

   - Your system may or may not be able to handle this kind of code at
     all; our goal is just to make sure that if it fails it's not gomp's
     fault.

   - All threadprivate variables will be reset in the child.  Fortunately
     this is entirely compliant with the spec, according to the rule of
     nasal demons.

   - We must have minimal speed impact, and no correctness impact, on
     compliant programs.

   We use this callback to notice when a fork has a occurred, and if the
   child later attempts to enter an OMP section (via gomp_team_start),
   then we know that it is non-compliant, and are free to apply our
   best-effort strategy of cleaning up the old thread pool structures and
   spawning a new one.  Because compliant programs never call
   gomp_team_start after forking, they are unaffected.  */

gomp_after_fork_callback (void)
{
  /* Only "async-signal-safe operations" are allowed here, so let's keep
     it simple.  No mutex is needed, because we are currently
     single-threaded.  */
  gomp_we_are_forked = 1;
}

/* Launch a team.  */

  thr = gomp_thread ();
  nested = thr->ts.team != NULL;

  if (__builtin_expect (gomp_we_are_forked, 0))
    gomp_free_thread_pool (0);
  gomp_we_are_forked = 0;

  if (__builtin_expect (thr->thread_pool == NULL, 0))
    thr->thread_pool = gomp_new_thread_pool ();

  thr->thread_pool->threads_busy = nthreads;

  /* The pool should be cleaned up whenever this thread exits...  */
  pthread_setspecific (gomp_thread_destructor, thr);
  /* ...and also in any fork()ed children.  */
  pthread_atfork (NULL, NULL, gomp_after_fork_callback);

  pool = thr->thread_pool;
  task = thr->task;

/* { dg-do run } */
/* { dg-timeout 10 } */

#include <omp.h>
#include <sys/wait.h>
#include <unistd.h>
#include <assert.h>

static int saw[4];

check_parallel (int exit_on_failure)
{
  memset (saw, 0, sizeof (saw));
#pragma omp parallel num_threads (2)
  {
    int iam = omp_get_thread_num ();
    saw[iam] = 1;
  }
  // Encode failure in status code to report to parent process
  if (exit_on_failure)
    {
      if (saw[0] != 1)
	_exit(1);
      else if (saw[1] != 1)
	_exit(2);
      else if (saw[2] != 0)
	_exit(3);
      else if (saw[3] != 0)
	_exit(4);
      else
	_exit(0);
    }
  // Use regular assertions
  else
    {
      assert (saw[0] == 1);
      assert (saw[1] == 1);
      assert (saw[2] == 0);
      assert (saw[3] == 0);
    }
}

int
main ()
{
  // Initialize the OMP thread pool in the parent process
  check_parallel (0);

  pid_t fork_pid = fork();
  if (fork_pid == -1)
    return 1;
  else if (fork_pid == 0)
    // Call OMP again in the child process and encode failures in exit
    // code.
    check_parallel (1);

  // Check that OMP runtime is still functional in parent process after
  // the fork.
  check_parallel (0);

  // Wait for the child to finish and check the exit code.
  int child_status = 0;
  pid_t wait_pid = wait(&child_status);
  assert (wait_pid == fork_pid);
  assert (WEXITSTATUS (child_status) == 0);

  // Check that the termination of the child process did not impact
  // OMP in parent process.

  return 0;
}
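The strategy the patch uses — a pthread_atfork child handler that does nothing but set a flag (only async-signal-safe work), with the real cleanup deferred to the next entry point — can be modeled in ordinary user-space C. This is an illustrative sketch with invented names, not libgomp code:

```c
#include <pthread.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Set in the child by the atfork handler; only async-signal-safe
   operations are allowed there, so a flag write is all we do. */
static volatile int we_are_forked = 0;

static void after_fork_in_child(void)
{
    we_are_forked = 1;
}

/* Lazily repair state on the next "API entry", as the patch does in
   gomp_team_start.  Returns 1 if a (notional) cleanup was performed. */
static int enter_api(void)
{
    if (we_are_forked) {
        we_are_forked = 0;
        return 1;   /* would tear down and rebuild the thread pool here */
    }
    return 0;
}

/* Fork once and check what parent and child observe.
   Returns 0 on success, nonzero on any mismatch. */
static int demo(void)
{
    pthread_atfork(NULL, NULL, after_fork_in_child);

    pid_t pid = fork();
    if (pid == -1)
        return 1;
    if (pid == 0) {
        /* Child: flag must be set, and the first entry repairs it. */
        _exit(enter_api() == 1 && enter_api() == 0 ? 0 : 2);
    }
    /* Parent: flag must never have been set. */
    int parent_ok = (enter_api() == 0);
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return 3;
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        return 4;
    return parent_ok ? 0 : 5;
}
```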
https://gcc.gnu.org/bugzilla/attachment.cgi?id=32548&action=diff
CC-MAIN-2019-26
refinedweb
826
54.63
Calculating center of object

Hello, currently I am trying to calculate the center of an object using the following code:

    import cv2
    import numpy as np

    vid = cv2.VideoCapture(0)
    vid.set(10,.05)

    def processVid():
        while(True):
            ret, frame = vid.read()
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            lower_green = np.array([70,200,200])
            upper_green = np.array([90,255,255])
            mask = cv2.inRange(hsv, lower_green, upper_green)
            res = cv2.bitwise_and(frame,frame,mask=mask)
            getPixel(res)
            cv2.imshow('Masked Image',res)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        vid.release()
        cv2.destroyAllWindows()

    def getPixel(img):
        for r in range(0,479):
            for c in range(0,639):
                print img[r,c]

    processVid()

Currently, this code only prints out the pixel value at the row and column, as I'm having a bit of difficulty determining the best solution to what I'm trying to accomplish. I want this code to take a video feed, apply the designated filters to the video feed and then be able to determine the distance away that the object is from the center of the video (i.e. if res is 640x480, I want to find the distance away from the pixel at (320,240)). Here is an example of what my current image looks like, and I want to be able to find the center of that object to the center of the video feed.

What I had thought to do was to run through every pixel, and find whether it was black or colored, and then based on that be able to calculate the center. In doing this, it was a slow process (and I mean slow), not to mention I was running this off of a raspberry pi, and it is a video feed so the process would have to be almost instantaneous for accuracy in distance. I'm just curious if there is a built in OpenCV function that does this already? I haven't been able to find any documentation for this yet.
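For reference, the per-pixel loop above can be replaced by a vectorized centroid computation. OpenCV does offer this natively via cv2.moments (optionally after cv2.findContours); the NumPy version below shows the same idea without needing a camera, so it can be checked on a synthetic mask. The 640x480 frame size and (320, 240) center are taken from the question; the helper names are made up:

```python
import numpy as np

def mask_centroid(mask):
    """Return (cx, cy), the centroid of the nonzero pixels of a binary mask,
    or None if the mask is empty.  Equivalent to cv2.moments:
    m = cv2.moments(mask); cx = m['m10']/m['m00']; cy = m['m01']/m['m00']."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def offset_from_center(mask, center=(320, 240)):
    """Pixel offset of the object's centroid from the frame center."""
    c = mask_centroid(mask)
    if c is None:
        return None
    return c[0] - center[0], c[1] - center[1]

# Synthetic 480x640 mask with a 10x10 blob; columns 100..109, rows 50..59.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[50:60, 100:110] = 255
print(mask_centroid(mask))        # (104.5, 54.5)
print(offset_from_center(mask))   # (-215.5, -185.5)
```

Because the whole mask is reduced in one NumPy call per axis, this runs in a few milliseconds per frame even on a Raspberry Pi, unlike the nested Python loops.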
http://answers.opencv.org/question/87074/calculating-center-of-object/?answer=87075
CC-MAIN-2019-04
refinedweb
330
72.46
You may have often wondered how to generate a random number from within a certain range. This is exactly what we will look at in this recipe; we will obtain a random number that resides in an interval between a minimum (min) and maximum (max) value. This is simple; look at how it is done in random_range.dart:

    import 'dart:math';

    var now = new DateTime.now();
    Random rnd = new Random();
    Random rnd2 = new Random(now.millisecondsSinceEpoch);

    void main() {
      int min = 13, max = 42;
      int r = min + rnd.nextInt(max - min);
      print("$r is in the range of $min and $max"); // e.g. 31
      // used as a function nextInter:
      print("${nextInter(min, max)}"); // for example: 17
      int r2 = min + rnd2.nextInt(max - min);
      ...
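One detail worth noting about the expression min + rnd.nextInt(max - min): since nextInt(n) returns a value from 0 up to n-1, the result lies in the half-open interval [min, max) — max itself is never produced (use nextInt(max - min + 1) for an inclusive upper bound). The same arithmetic can be checked with Python's randrange, which has the analogous 0..n-1 behavior (Python is used here purely because it is easy to test; this is not part of the Dart recipe):

```python
import random

def next_in_range(rnd, lo, hi):
    """Random int in [lo, hi), mirroring lo + rnd.nextInt(hi - lo)."""
    return lo + rnd.randrange(hi - lo)

rnd = random.Random(42)  # seeded for reproducibility
samples = [next_in_range(rnd, 13, 42) for _ in range(10_000)]
print(min(samples), max(samples))  # both bounds stay inside [13, 41]
```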
https://www.oreilly.com/library/view/dart-scalable-application/9781787288027/ch25s08.html
CC-MAIN-2019-43
refinedweb
126
67.15
MQTT And Python

2015-02-18 20:01

Update: This article now also exists as a git repository.

I played around with MQTT and Python for an upcoming project of mine. Here are some notes and a simple example code for a subscriber.

Requirements

Requires Mosquitto to be installed. On Linux with the apt-get package manager:

    sudo apt-get install mosquitto
    sudo apt-get install mosquitto-clients

Note: mosquitto-clients is to get the mosquitto_pub to make it simple to try stuff from the command line.

Also install virtualenv if you want to use it (recommended):

    sudo apt-get install python-virtualenv

Working directory

The use of virtualenv is optional but recommended for playing around with this example code.

    mkdir mqtt-mosquitto-example
    cd mqtt-mosquitto-example
    virtualenv .
    source bin/activate
    pip install paho-mqtt

Subscriber code

    import paho.mqtt.client as paho

    def on_message(mosq, obj, msg):
        print "%-20s %d %s" % (msg.topic, msg.qos, msg.payload)
        mosq.publish('pong', 'ack', 0)

    def on_publish(mosq, obj, mid):
        pass

    if __name__ == '__main__':
        client = paho.Client()
        client.on_message = on_message
        client.on_publish = on_publish

        #client.tls_set('root.ca', certfile='c1.crt', keyfile='c1.key')

        client.connect("127.0.0.1", 1883, 60)

        client.subscribe("kids/yolo", 0)
        client.subscribe("adult/#", 0)

        while client.loop() == 0:
            pass

    # vi: set fileencoding=utf-8 :

Test the subscriber example

First start the subscriber which will enter a loop waiting for new messages:

    ./subscriber.py

Then open a new terminal and send a message:

    mosquitto_pub -d -h localhost -q 0 -t adult/pics -m "can i haz moar kittenz"

This should generate a message in the terminal running the subscriber.

Configure SSL/TLS

See mosquitto-tls on how to generate certificates and keys.
Once created you need to adjust /etc/mosquitto/mosquitto.conf and subscriber.py accordingly and then use mosquitto_pub with cert and key:

    mosquitto_pub -d -h localhost --cafile root.ca --cert c1.crt --key c1.key -q 0 -t adult/pics -m "can i haz moar kittenz"
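The subscription "adult/#" above relies on an MQTT topic wildcard, which is why publishing to adult/pics reaches the subscriber. As a sketch of how such filters match ('+' matches exactly one topic level, '#' matches all remaining levels and must come last), here is a small pure-Python matcher — illustrative only, not part of paho:

```python
def topic_matches(filter_, topic):
    """Minimal MQTT topic-filter matching: '+' is one level, '#' is the rest."""
    f_parts = filter_.split('/')
    t_parts = topic.split('/')
    for i, f in enumerate(f_parts):
        if f == '#':              # must be the last level of the filter
            return i == len(f_parts) - 1
        if i >= len(t_parts):
            return False
        if f != '+' and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches('adult/#', 'adult/pics'))   # True
print(topic_matches('adult/#', 'kids/yolo'))    # False
print(topic_matches('kids/+', 'kids/yolo'))     # True
```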
https://inb4.se/post/2015-02-18-mqtt-and-python/
CC-MAIN-2018-39
refinedweb
334
59.9
Java Server, PHP client

john price, Ranch Hand (Joined: Feb 24, 2011, Posts: 495), posted Jan 31, 2012 19:42:24

The Java Server is up and running. It is on one of my open ports (set on my router). It is also not blocked by the firewall (it is white listed). This happens if it is on the webpage: The PHP client just times out and does not successfully connect to the server. I see this message after 30 seconds: "Connection timed out". If I run it in BASH (Linux command line), it works:

    cc11rocks@cc11rocks-1005HA ~/Desktop/PHP/ChatApp $ php ChatApp.php
    cc11rocks@cc11rocks-1005HA ~/Desktop/PHP/ChatApp $ java Server
    Got something

It's at least working on the desktop, so I assume it has something to do with PHP and not Java at all. I could be wrong, so I will leave this up for the time being.

Server.java:

    import java.net.*;
    import java.io.*;

    public class Server {
        public static void main(String[] args) throws IOException {
            ServerSocket serverSocket = null;
            try {
                serverSocket = new ServerSocket(****);
            } catch (IOException e) {
                System.err.println("Could not listen on port: ****.");
                System.exit(1);
            }
            Socket clientSocket = null;
            try {
                clientSocket = serverSocket.accept();
                System.out.println("Got something");
            } catch (IOException e) {
                System.err.println("Accept failed.");
                System.exit(1);
            }
            PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            String inputLine, outputLine;
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
            }
            out.close();
            in.close();
            clientSocket.close();
            serverSocket.close();
        }
    }

If you guys know PHP, please look for the disconnect between the programs.

HTML File:

    <html>
    <head>
    </head>
    <p>
    <form method="post" action="app.php">
    <input type="submit" name="submit" value="Go">
    </form>
    </p>
    </body>
    </html>

PHP File:

    <?
    $host = "***.***.***.***";
    $port = "****";
    $timeout = 30; //timeout in seconds
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP) or die("Unable to create socket\n");
    socket_set_nonblock($socket) or die("Unable to set nonblock on socket\n");
    $time = time();
    while (!@socket_connect($socket, $host, $port)) {
        $err = socket_last_error($socket);
        if ($err == 115 || $err == 114) {
            if ((time() - $time) >= $timeout) {
                socket_close($socket);
                die("Connection timed out.\n");
            }
            sleep(1);
            continue;
        }
        die(socket_strerror($err) . "\n");
    }
    socket_set_block($socket) or die("Unable to set block on socket\n");
    ?>

I have posted this at

posted Feb 01, 2012 16:02:31

I also tried the following:

    <?php
    $server = "***.***.***.***";
    $port = "****";
    $socket = fsockopen($server, $port, $eN, $eS);
    if ($socket) {
        fwrite($socket, "Hello mate!");
    }
    ?>

It worked 100% on the desktop, but timed out on the server.

posted Feb 03, 2012 19:57:32

There was absolutely nothing wrong with the code. It was because of the hosting service I was using. I set up an Apache server on my computer and it works 100% now. Topic discussed and solved at:
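One way to separate the two halves of a problem like this is to exercise the server logic entirely in-process, over loopback, so that router, firewall, and hosting issues are ruled out before debugging the PHP side. The sketch below is illustrative only (the class and helper names are invented, not from the thread): it binds to an ephemeral port and round-trips one line, the same accept/readLine pattern as Server.java.

```java
import java.io.*;
import java.net.*;

public class LoopbackCheck {

    /** Round-trip one line through a local ServerSocket and return what
     *  the server side read.  Binding port 0 picks a free ephemeral port. */
    public static String roundTrip(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Client side runs on its own thread, like the PHP client would.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(message);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();
            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                String line = in.readLine();
                client.join();
                return line;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("server read: " + roundTrip("Hello mate!"));
    }
}
```

If this passes locally but the PHP page still times out, the fault is in the network path or the host, not the socket code — which is exactly the conclusion the thread eventually reached.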
http://www.coderanch.com/t/566092/sockets/java/Java-Server-PHP-client
CC-MAIN-2014-42
refinedweb
522
57.77
miocpullup(9F)

SYNOPSIS
    #include <sys/stream.h>
    #include <sys/strsun.h>

    int miocpullup(mblk_t *mp, size_t size);

INTERFACE LEVEL
    Solaris DDI specific (Solaris DDI).

PARAMETERS
    mp      M_IOCTL message.

    size    Number of bytes to prepare.

DESCRIPTION
    The miocpullup() function prepares the payload of the specified M_IOCTL
    message for access by ensuring that it consists of at least size bytes
    of data. If the M_IOCTL message is transparent, or its total payload is
    less than size bytes, an error is returned. Otherwise, the payload is
    concatenated as necessary to provide contiguous access to at least size
    bytes of data. As a special case, if size is zero, miocpullup() returns
    successfully, even if no payload exists.

RETURN VALUES
    Zero is returned on success. Otherwise an errno value is returned
    indicating the problem.

CONTEXT
    This function can be called from user, kernel or interrupt context.

SEE ALSO
    STREAMS Programming Guide
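The guarantee miocpullup() provides — at least size contiguous payload bytes, concatenating message blocks as needed — can be modeled in user space with an ordinary linked list of buffers. The sketch below is an illustrative analogue only, not the Solaris implementation; the struct, function name, and the choice of EINVAL/ENOMEM as error codes are all invented for the example:

```c
#include <stdlib.h>
#include <string.h>
#include <errno.h>

/* Toy stand-in for a STREAMS mblk_t: a chain of byte buffers. */
struct buf {
    unsigned char *data;   /* heap-allocated payload */
    size_t len;
    struct buf *next;
};

/* Ensure the first buffer of the chain holds at least `size` contiguous
 * bytes, pulling data up from continuation buffers.  Returns 0 on
 * success, an errno value on failure — mirroring miocpullup's
 * zero-on-success / errno-on-failure contract, including the size == 0
 * special case. */
int pullup(struct buf *head, size_t size)
{
    size_t total = 0;
    for (struct buf *b = head; b != NULL; b = b->next)
        total += b->len;
    if (total < size)
        return EINVAL;              /* payload too small */
    if (size == 0 || head->len >= size)
        return 0;                   /* already contiguous enough */

    unsigned char *merged = malloc(total);
    if (merged == NULL)
        return ENOMEM;
    size_t off = 0;
    for (struct buf *b = head; b != NULL; b = b->next) {
        memcpy(merged + off, b->data, b->len);
        off += b->len;
    }
    /* Free the old continuation buffers and collapse the chain. */
    struct buf *b = head->next;
    while (b != NULL) {
        struct buf *next = b->next;
        free(b->data);
        free(b);
        b = next;
    }
    free(head->data);
    head->data = merged;
    head->len = total;
    head->next = NULL;
    return 0;
}
```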
http://backdrift.org/man/SunOS-5.10/man9f/miocpullup.9f.html
CC-MAIN-2016-44
refinedweb
138
61.22
You can obtain an SSL certificate for your site when you:

- Use hosts that integrate SSL and configure HTTPS.
- Use a host that provides free SSL security.
- Use our or another third party's web host.
- Get an SSL certificate from a Certificate Authority.

View SSL for your domain

- Go to Google Domains.
- Select your domain.
- Click Menu.
- Click Security.
- Scroll to the SSL certificate box.
- If you have one or more SSL certificates, expand the certificate box for details.

For more information, go to Google Transparency Report.

Use hosts that integrate SSL and configure HTTPS

A number of web hosts provide SSL certificates and automatically configure web servers to support HTTPS connections.

Use a host that provides free SSL security

Google provides SSL security for the sites hosted on the following Google products for free.

Google My Business

You can create a site through Google My Business and integrate it with your secure namespace domain:

- Create your free website with Google in the Google My Business Help Centre.
- Use your existing domain name for your site in the Google My Business Help Centre.

Blogger

You can set up a custom domain name for Blogger to satisfy the security requirements of a domain in a secure namespace, such as . For more information, go to Set up a custom domain in the Blogger Help Centre.

Use our or another third party's web host

Google's web host partners can provide security at a variety of prices. They start from no additional cost:

- Bluehost (WordPress)
- Shopify
- Squarespace
- Weebly
- Wix

For more information, go to Web presence.

You can also integrate your domain with any other web host that provides SSL security. For instructions, go to Map your domain to a third-party web host. Check in your web host's Help Centre to make sure that it provides SSL.

Get an SSL certificate from a Certificate Authority

You can obtain an SSL certificate for your domain directly from a Certificate Authority (CA).
You'll then have to configure the certificate on your web host or on your own servers if you host it yourself. You can get a free SSL certificate from Let's Encrypt, a popular CA that provides certificates in the interest of creating a safer Internet: More help on configuring HTTPS:
https://support.google.com/domains/answer/7630973?hl=en-GB&ref_topic=9018335
CC-MAIN-2020-40
refinedweb
383
63.39
To create and write to a text file, you create a java.io.PrintWriter object with a given file name. This object has methods print and println for writing to the file. When you are done, you should call the close() method of the PrintWriter:

    import java.io.PrintWriter
    val s = new PrintWriter("test.txt")
    for (i <- 1 to 10) {
      s.print(i)
      s.print(" --> ")
      s.println(i * i)
    }
    s.close()

The PrintWriter object also has a printf method, but this doesn't work with numeric Scala types (PrintWriter is a Java class and doesn't integrate perfectly into the Scala type system). If you want to write formatted text, you should use the format method:

    import java.io.PrintWriter
    val s = new PrintWriter("test.txt")
    for (i <- 1 to 10)
      s.print("%3d --> %d\n".format(i, i*i))
    s.close()
http://otfried.org/scala/writing_files.html
CC-MAIN-2018-05
refinedweb
142
78.55
BeanShell Scripting

From Fiji

BeanShell is the scripting language in Fiji which is similar both to the ImageJ macro language and to Java. In fact, you can even execute almost verbatim Java code, but the common case is to write scripts, i.e. leave out all the syntactic sugar to make your code part of a class. BeanShell also does not require strict typing (read: you do not need to declare variables with types), making it easy to turn prototype code into proper Java after seeing that the code works.

Quickstart

If you are already familiar with Java or the macro language, the syntax of Beanshell will be familiar to you. The obligatory Hello, World! example:

    // This prints 'Hello, World!' to the output area
    print("Hello, World!");

Variables can be assigned values:

    someName = 1;
    someName = "Hello";

Variables are not strongly typed in BeanShell by default; if you use a variable name without specifying the type of it, you can assign anything to it. Optionally, you can declare variables with a data type, in which case the type is enforced:

    String s;
    s = 1; // this fails

Note: The builtin functions of the ImageJ macro language are not available in Beanshell.

Syntax

Variables

- A variable is a placeholder for a changing entity
- Each variable has a name
- Each variable has a value
- Values can be any data type (numeric, text, etc)
- Variables can be assigned new values

Set variables' values

Variables are assigned a value by statements of the form name = value ended by a semicolon. The value can be an expression.

    intensity = 255;
    value = 2 * 8 + 1;
    title = "Hello, World!";
    text = "title";

Using variables

You can set a variable's value to the value of a second variable:

    text = title;

Note that the variable name on the left hand side of the equal sign refers to the variable itself (not its value), but on the right hand side, the variable name refers to the current value stored in the variable.
As soon as the variables are assigned a new value, they simply forget the old value:

    x = y;
    y = x;

After the first statement, x took on the value of y, so that the second statement does not change the value of y.

The right hand side of an assignment can contain complicated expressions:

    x = y * y - 2 * y + 3;

Note that the right hand side needs to be evaluated first before the value is assigned to the variable:

    intensity = intensity * 2;

This statement just doubled the value of intensity.

It is important to use comments in your source code, not only for other people to understand the intent of the code, but also for yourself, when you come back to your code in 6 months. Comments look like this:

    // This is a comment trying to help you to
    // remember what you meant to do here:
    a = Math.exp(x * Math.sin(y)) + Math.atan(x * y - a);

You should not repeat the code in English, but describe the important aspects not conveyed by the code. For example, there might be a bug requiring a workaround, and you might want to explain that in the comments (lest you try to "fix" the workaround).

Comments are often abused to disable code, such as debugging statements:

    // x = 10; // hard-code x to 10 for now, just for debugging

If you have a substantial amount of things to say in a comment, you might use multi-line comments:

    /* Multi-line comments can be started by a slash followed by a star,
       and their end is marked by a star followed by a slash: */

Further reading

For more information, see BeanShell's Quickstart page.

Tips

You can source scripts (i.e.
interpret another script before continuing to interpret the current script) using this line:

    this.interpreter.source("the-other-script.bsh");

Examples

Add CIEL*a*b numbers to the status bar

If your monitor is calibrated to sRGB, there is an easy way to display also the L, a and b values in the status bar:

    import color.CIELAB;
    import java.awt.Label;
    import java.awt.event.KeyEvent;
    import java.awt.event.KeyListener;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // IJ1's API does not offer all I want
    setAccessibility(true);

    // press Escape on the Fiji window to stop it
    class Add_CIELab_to_Status extends Thread implements KeyListener {
        protected ImageJ ij;
        protected Label status;
        protected Pattern pattern = Pattern.compile("^.* value=([0-9]*),([0-9]*),([0-9]*)$");
        protected float[] lab, rgb;

        public Add_CIELab_to_Status() {
            ij = IJ.getInstance();
            status = ij.statusLine;
            ij.addKeyListener(this);
            lab = new float[3];
            rgb = new float[3];
        }

        public void run() {
            try {
                for (;;) {
                    String text = status.getText();
                    Matcher matcher = pattern.matcher(text);
                    if (matcher.matches()) {
                        for (int i = 0; i < 3; i++)
                            rgb[i] = Float.parseFloat(matcher.group(i + 1)) / 255;
                        CIELAB.sRGB2CIELAB(rgb, lab);
                        status.setText(text + ", L=" + IJ.d2s(lab[0], 2)
                            + ",a*=" + IJ.d2s(lab[1], 3)
                            + ",b*=" + IJ.d2s(lab[2], 3));
                    }
                    Thread.sleep(5);
                }
            } catch (InterruptedException e) {}
        }

        public void keyPressed(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_ESCAPE) {
                ij.removeKeyListener(this);
                interrupt();
            }
        }

        public void keyReleased(KeyEvent e) {}
        public void keyTyped(KeyEvent e) {}
    }

    new Add_CIELab_to_Status().start();

This example starts a new thread (make sure to implement the run() method but actually call the start() method!) which polls the status bar. It also registers itself as a key listener so it can stop the process when the user hits the Escape key when the main window is in focus.
Classes and functions are provided in the namespace ROOT::Math or, in the case of the fitting classes, ROOT::Fit. As of release 5.20, the GenVector (physics and geometry vectors) package is no longer part of MathCore; it belongs to a separate library (libGenVector). MathCore now instead contains classes which were originally part of libCore. These include:

See also:

N.B.: when consulting the MathCore reference documentation, it is strongly recommended to look at the online docs. The class documentation shown from the class links below is not complete; it is missing some template methods.

Last modified: Thu Mar 5 20:44:45 CET 2009
- Urgent requirement
- Key requirement for business analyst: What are the key requirements of a business analyst? A business analyst plays a key role in handling organisational business and projects, and thus needs to possess technological…
- REQUIREMENT (Java Interview Questions): I have a requirement like this: I want to print 1; 1,2,2; 1,2,2,3,3,3; 1,2,2,3,3,3,4,4,4,4. I want the source code, please reply. /** * @author Rajesh Sirangu */ public class…
- What is the most important requirement for OLTP?: Hi, in the transaction server, the client component usually includes the GUI and the server components usually…
- JRequisite, a requirement management tool: JRequisite 0.0.1 is released! The first version includes a flowchart diagram editor. See more about this agile requirement…
- System and hardware requirements for Windows 8: Microsoft has launched its new Windows 8, but most users are wondering whether their computer, laptop or tablet is enough to run it. Here are the minimum system and hardware requirements that your computer or laptop…
- Hardware requirements for Linux: For installing Linux Server/Desktop… Intel processor. Minimum requirement for the 32-bit version. Fedora Core 6… Hard disk space requirement: minimum space required at the initial stage…
- jsp - JDBC: How to link JSP code with the next HTML code. Hi keerthi, can you explain your question clearly? What is your exact requirement?
- jsp hosts: Hi, what is the meaning of JSP hosts or JSP hosting environments? Thanks. JSP hosts are hosting companies providing a hosting environment for JSP and servlets. These companies provide: 1) shared…
- Charts in JSP (JSP-Servlet): …of charts in JSP. So, can I know the prerequisite for that? Do I need…
- the following code (JSP-Servlet): Hi, my requirement is to generate dynamic drop-down lists… to generate the second drop-down list using JSP? Please help. …options[cid].text; window.location.replace(…
- JSP (JSP-Servlet): Need your kind help. The requirement is as follows: 1) My HTML page must consist of… With this requirement it is not possible to do it like this; tell me your requirements clearly… the data in an HTML form. It must be JSP because HTML is static, whereas JSP…
- JSP and JDBC (JSP-Servlet): Respected Sir/Madam, I am R. Ragavendran. I am in urgent need of programming from the roseindia team. My requirement is as follows: 1) There must be a home.jsp page in which three fields must be present…
- Session Tracking JSP (JSP-Servlet): R. Ragavendran. I immediately need code for session tracking in JSP. Actually, when… and logout time on the next page. This is my requirement. OUTPUT: UserID…
- Servlet, JSP: This is my requirement: I have a login page and a register page. If I give the URL, it should go to the login page; then from the login page, on click of the register button, I can register.
- How to generate a captcha in a JSP page (JSP-Servlet): Hi friends, I would like to implement a captcha on the login screen. I'm using Struts. Could you please give some… =41499. Thanks. As per your requirement for captcha…
- Jsp Code (Java Beginners): Hi, I am new to Java programming and, as per the requirement, I need to implement a SEARCH functionality which will search the database and display a unique record. The design contains 4 input boxes…
- Paypal (JSP-Servlet): Could you please give me any idea how to integrate the PayPal payment gateway from a simple JSP page? I want the integration code… Use the action attribute of the form tag, as per your requirement, and send the values as hidden fields.
- Populating values from a child popup JSP to a parent JSP: Hi, my requirement is as follows: I have a parent JSP with a search button. If you click it, a popup JSP opens, and based on certain criteria in the popup JSP we…
- Calendar (JSP-Servlet): Display a calendar. I want a calendar to be displayed on my home page as JSP… As per your requirement, I have an application to solve the problem.
- JSP, JDBC and HTML, very urgent (JSP-Servlet): Respected Sir/Madam, thanks for your response. You asked me to state my requirements clearly. My requirement… details from the database using JDBC and JSP. The home page, i.e. the HTML page, must contain…
- Admin and User Login JSP (JSP-Servlet): Respected Sir, I am R. Ragavendran. I need a JSP-based program immediately. On the home page there must be a login… login and logout time for the end of the day. This is my requirement.
- Disable the form (JSP-Servlet): I wrote a JSP page with 8 forms; all forms are the same but the input values… Ajax. After the response comes to the JSP page, a particular form button… Multiple forms are created in JSP, a servlet is used to insert the data, and Ajax is used…
- JSP Code (Java Beginners): Dear friends, I am using multiple selection in a list box and trying to get all the values selected in the list box. Here is the line which I… ("drpdwnAllRole"); this is giving me only the first selected value, but my requirement needs…
- JSP: need response ASAP please (JSP-Servlet): Hi friend, if you could respond… and then sends this data in the URL to the next JSP page. The code works fine… another similar requirement where instead of the id I have to get a String (name…
- JavaScript error in IE (JSP-Servlet): Please help me, I have an urgent requirement: the JavaScript prompt box is not working in IE. It works in Firefox, Safari, etc. Actually, my requirement is, without changing the settings…
- JSP Life Cycle: …can be overridden if there is a requirement to do JSP-specific initialization… This method can be overridden if there is a requirement to perform JSP-specific… In this section we will discuss the life cycle of JSP.
- JSP uploading and downloading files (JSP-Servlet): Respected Sir/Madam, very… and downloading files in JSP; I am facing a problem: 1) the file name is getting inserted… requirement ASAP, because I am in the middle of a big ocean!
- JSP Code (Java Beginners): Dear friends, I have a problem where I need to display… Please help me find the solution. Let me know if I am unclear on my requirement… :8080/examples/jsp/country.jsp?value="+val); --Please Select--
- Employee Details (JSP-Servlet): Will it work for me? Will it execute according to my requirement? If yes, I will try… in process.jsp, on the home page, because for JSP or HTML elements like radio buttons… to do that, you have to forward the form to either a Java action class or to some JSP.
- Need to implement paging and field-based sorting in JSP/Servlet: Hi… sorting on records. Following is my requirement: suppose we fetch the records… Roll No., Class, etc. As per my requirement, I need to first…
- java (JSP-Servlet): How to write JavaScript in a servlet? Hi friend, to write JavaScript in a servlet, follow some steps: 1. Create a JavaScript file "valid.js" according to your requirement; "valid.js": function validate…
- Array Creation (JSP-Servlet): Hi, I have a requirement in which I need to convert a 10000-value comma-separated string into multiple arrays. That is, suppose I have 1000 comma-separated string values; now I convert them into a single array like…
- Embed a Gantt chart on a JSP page: How can I embed a Gantt chart on a JSP page? …schedule1.add(new Task("Requirement Analysis", new SimpleTimePeriod(date…))); schedule2.add(new Task("Requirement Analysis", new SimpleTimePeriod(date(16…
- jsp-jdbc (JDBC): Hi! An HTML-JSP-JDBC program: from the HTML form where… JSP frequently. I am getting an error through request.getParameter(). Can you please… Write clearly what your requirement is, and include your code as well.
- Program Urgent (JSP-Servlet): Respected Sir/Madam, I am R. Ragavendran. I am in urgent need of the code. My requirement is as follows: beside "Enter Employee ID"… for more information. Thanks. Amardeep.
- Displaying output on the web page immediately when the JSP buffer is full, and how to set the JSP buffer size in bytes: Here is my requirement: I have to display output in the browser after the JSP buffer is full, and I have to set…
- Clearing the output on the web page generated by a JSP: My requirement is as follows: first I have to retrieve the records from the database and display them on the web page using JSP (display.jsp), and if these records… on the web page using the same JSP (display.jsp). Please help me resolve this problem.
- JSP: Hi, what is JSP? What is the use of JSP? JSP stands for Java Server Pages. It is a Java technology for developing web applications. JSP is very easy to learn and allows developers to use…
- servlet program (JSP-Servlet): …the servlet is executed. Can you please send me the code? I hope my requirement…
- How to use one form out of multiple forms from one JSP in another JSP: Hi all, please give me your valuable advice on the requirement below. I have a .jsp file (say abc.jsp) which contains multiple action forms. I am required to…
- Retrieving newly inserted records and displaying them in JSP forever: Sir, here is my requirement: first I have to retrieve the 10 newly added records from a table and display them in JSP (each row contains a field called "session"…
- Radio button problem in JSP: I have a small doubt about my application. My requirement is to get a single selectable row, so I generated a radio button… radio button values from JSP to the action using JavaScript.
- jsp: In any JSP, all implicit objects are available directly; in that case, why do we need the PageContext object?
I recently read about a campaign to fix Outlook, and it got me thinking: why hasn't Microsoft considered starting fresh on any of its software? My personal choice would be a reset of Internet Explorer. The browser wars are over and Microsoft won. It's too bad the web lost. At work we've been considering trying out Flex for some of the more intensive UI elements, and it occurred to me that the consistency alone would probably be worth losing the hacky tests we have to keep up in order to feel confident releasing JavaScript-intensive code on multiple platforms.

Random web developer wish lists aside, it is tough to recognize when some piece of software should be rewritten from scratch. Weighing the pros and cons rarely provides a true measure of whether you'll be successful or not. Yet, even without a good means of measuring needs, it is clear that a rewrite can be very helpful.

I think Apple provides an excellent model for rewrites. They have rewritten their operating system, a web browser, and an office suite, to great success (in my opinion at least). One theme in all these rewrites has been the inclusion of other pieces of software. WebKit (which began as KHTML) has been critical to providing a new suite of tools, simply by making it possible to pay attention to other aspects of the applications. Likewise, OS X utilizing FreeBSD was not necessarily innovative, but rather an effective means of raising the level of abstraction.

Like programming languages, abstracting away low-level detail enables thinking at a level closer to how people think. In programming, this allows programmers to reduce complexity. When you can safely say "x = 5" without having to think about memory management or the scope of the variable, you free up mental space for other details. In the same way, introducing a well-established library or piece of code can help take an application to the next level by moving the innovations closer to the user.
Going back to Microsoft, I wonder why they have not really made an effort to capitalise on these kinds of changes in the software landscape. I'm sure lawyers would be involved in the equation, but as a participant in the software landscape, it is clear it hurts users. It is always difficult to standardise on things, as competition and differences are where innovation occurs, but just as with programming languages, when there is a low-level standard to build on, using something else just adds complexity.

I think the time to start over is never clear. One sign that it might be worth it, though, is a low-level library or system that might free up your resources for features closer to the user. At the end of the day, software is there for people to use, so anything you can do to improve the experience for the person using it is worth more than keeping a code base out of convenience.

I ran into this today and wanted to write it down somewhere, both for myself and posterity (OK, mostly for myself). As you may know, Python dictionaries are mutable. This means you can do things like this:

```python
x = {'foo': 'bar'}

def change_it(d, new_value):
    d['foo'] = new_value

change_it(x, 'baz')
print x  # should print {'foo': 'baz'}
```

As you can see, the dictionary bound to x can be changed anywhere. With that in mind, what I was doing was basically this:

```python
x = {'foo': 'bar'}

def change_it(new_value):
    y = x['foo']
    y = new_value

change_it('baz')
print x  # we get {'foo': 'bar'}, as expected
```

So, this might give you the impression that when you access a value in a dictionary, that value is not a reference to the original. But what about this?

```python
x = {'foo': {1: 'one'}}

def change_it(new_value):
    y = x['foo']
    y.update(new_value)

change_it({2: 'two'})
print x  # do you expect {'foo': {1: 'one'}}?
```
We can test this with this bit of code: x = {'foo': {1: 'one'}} y = x['foo'] print y is x['foo'] # True When you consider python always passes by reference this makes sense. At the same time, it can be a little sneaky if you're not thinking about it since it is usually pretty simple to just use code like "y = x['foo']" and expect it to act like a copy. Thanks to a few folks in #cherrypy on oftc.net for clearing up my explanation on this. The other day I had something of an issue at work. I was working on retooling our testing environment when there was a need to provide a fix for something in production. I couldn't reproduce the issue, so I decided to add some extra logging to help try and gather some data on the issue. With the code in place, it became clear that I didn't know how I was going to move those changes to the production repo while keeping my other work safe. After looking into the issue further, I thought rebase might be helpful. Rebase is a great extension, but it wasn't going to provide a fix (that I know of). The rebase extension allows you to choose the order of two existing heads. The classic example is when you are working on a feature, you pull to get the most recent changes and you want to upgrade to the latest from the remote repo, while keeping your changes "in front" or after the pulled code in the history. My description might be a little off, but it was how I understood the process. In my situation the scenario was as if I already rebased and did so incorrectly. Fortunately, the transplant extension came to the rescue. What I wanted to do was effectively recreate my local repo and correct the order of commits so my unfinished work was "in front of" my production fix. To put things plainly, I had a sequence of commits 'ACB' where 'C' was unfinished, so I wanted to move it to the front and have 'ABC'. What I gathered is it is not really possible to reorder the commits since the time is always attached to the changeset. 
But I was able to push my production fixes without having to push my working changes, which was good enough for me. I started by cloning the remote repo. Then I transplanted the production changesets I needed from my local repo, and pushed back to the remote repo. I then transplanted the rest of my changes into the new clone. Just for good measure, I pulled my new remote changes into my local repo and merged to see what would happen in terms of history. It actually made it clear that things had been transplanted at different points in time and reordered. Here is what it looked like:

```
> ls
local
> hg clone ssh://user@remote/hg/repo remote
> ls
local  remote
> cd remote
> hg transplant -s ../local
... interactively choose changesets to apply ...
> hg transplant --continue   # if any merges failed
> hg push
```

Being able to push my production changes without also having to push my working changes means the person doing the release can merge with default without having to exclude my working changesets. This doesn't seem like a huge win, but I think it is a pretty helpful way to avoid someone working with changesets they didn't write themselves.

It seems like a decent workflow as well. Keeping your own "production" or "pusher" repo as an intermediary for a remote production repo can be a helpful way of introducing atomicity while still keeping your changes in version control. I've found that the more commit points you create, the easier it is to see where things might have gone wrong. The downside is that your changes might become interspersed with other changes. Rebase definitely helps in this case, and I believe using a local production repo for pushing also provides another means of keeping merges simple and obvious.

When I first started using twitter for Ume, I thought it might be kind of cool if people did interviews over twitter. The character limit is an interesting constraint, there is a real-time aspect, and the person being interviewed can review an answer.
For myself, I enjoy handling email interviews because you have a chance to edit your thoughts, which means readers get a more interesting and informative response. With that in mind, I set out to write a crawler of sorts to monitor a set of twitter accounts and compile an interview between some people. The idea for a crawler type of application also partly stemmed from my desire to get more comfortable with threads. In addition to my own perceived concurrency issues, I started playing with Berkeley DB and, later, Tokyo Cabinet. In the end, though, simplicity won: I dropped the persistence and just do everything on demand.

The result is TwtrView! All it does is pull the last 50 tweets from the users you enter and see which messages include @replies to any of those users. This makes it possible to see an interview as well as find conversations folks might be having. It has actually been somewhat useful: when I see a conversation someone might be having, I enter the two usernames and I can see where things came from.

There are definitely some limits. I'm using one twitter account for the API calls, so if something doesn't work, it might have made too many requests. If people use this, then I'm sure OAuth will move much higher on my list of things to do. Also, the requests are made from my server, so it is very much network-bound in terms of processing. I hope someone else might find it useful. If you have any issues, feel free to send me an email or comment below.

I just read this article by Seth Godin regarding the Next Google. While it is a lot of fun to talk about the "next" big thing, what struck me was the competitive perspective Microsoft has taken regarding Google. Microsoft became the definition of the PC through partnering with IBM-compatible computers. For whatever reason, it was appealing to IBM and the computer industry to effectively work together to create an ecosystem for personal computing.
It is unclear why the web has ushered in an environment of extreme competition when the societal impact is easily as important. In my mind, the collaborative nature of the PC and of the web seem very similar: both treated availability as more important than exclusivity. This addresses users' need to feel that what they create and consume is something they control. All in all, it makes sense that when you enable people to communicate on their own terms, the environment is extremely valuable.

So why, when Microsoft had such a huge portion of the market, did it choose to focus on web presence instead of collaborating with folks like Yahoo! and Google? Why did they waste the time fighting a browser war? Why win the war and then leave the winning browser to rot? It just doesn't make sense to me. Nothing they have done promotes users' need to communicate on their own terms. Google meets users' needs by providing a way to find things to consume. If Microsoft were acting as it did before, it seems like the focus should be on integrating its creative software with the web. This is not a web-based word processor, but rather an obvious way for people to create web content that is Google-compatible.

In reality, I'm OK with how things have gone. I could do without testing Internet Explorer, but I am glad that Apple has gained market share. I'm glad that Microsoft has been ignoring its duties as the singular provider of operating systems in order to create subpar web entities. The lack of success seems only to help new systems become the "next" big thing. With that said, I do wonder what sort of web we'd have if Microsoft had taken a compatibility approach to the web instead of the exclusivity path.

Recently, I started using my Linux partition on my MacBook, and so far things have been running pretty smoothly. I'm enjoying stumpwm with my two monitors, and I'm pretty comfortable with the setup.
That said, there are always some small oddities and frustrations that seem to creep in. My biggest gripe is sound. It is unclear what Apple does to make the internal speakers sound reasonable. What is clear is that Ubuntu is not doing the same thing. While I was pretty happy my Apple keys worked for turning the volume up and down, the actual sound out of the speakers is pretty terrible. It's nothing headphones can't fix, but still a hassle.

Another issue is the sound server. In Linux there is a set of back ends that "talk" to your sound hardware. GNOME (or Ubuntu) seems to have adopted PulseAudio, which is a compatibility layer on top of the lower-level back ends. From a programmer's perspective, this abstracts away some of the more complicated issues and allows for a simpler API. From the user's perspective, things just work and all seems to be well. In actuality, what really happens is that Firefox eventually stops playing sound (i.e. Flash), and both Firefox and the PulseAudio session need to be restarted in order to listen to hip new indie music. I'm sure much of this has to do with the Flash plugin, but from the standpoint of a user, it's a pain.

Generally, I've found everything to be rather stable, with the exception of Firefox. It seems to always crash at some point. Flash is most likely the culprit, although I really don't spend very much time on sites using Flash. It just occurred to me that ads might be the main source of bad Flash, so I wonder if an ad-blocker plugin might help. I'm not really a fan of ad blockers, since advertising is such a critical part of the web ecosystem, yet I'm downright sick of Firefox crashing, so it might just be the way to go.

There are other issues, of course. I'd like to use gnome-terminal, but I can't get the fonts to look like a regular xterm's. I'm using GNOME with stumpwm so Adobe AIR apps don't complain; this adds the GNOME toolbar, which is rather unnecessary.
Gripes aside, it has been really easy to get back into Linux on the desktop. It is a lot of fun to have a system you can tweak to your liking. It is also kind of fun to have things to work on. While my sound issues are definitely a pain in the neck, they also give me something to try and fix, which generally means understanding more about my system. Most of the time it is not the most useful knowledge, but often it is really helpful stuff; for example, setting up my VPN to only be used for selected traffic. I can't say I'd recommend Linux to everyone, but if you are using OS X because it's a *nix, it is worth trying out Linux on the desktop.

Over the weekend we went to the Austin Zoo. It was a really good time. One of the tigers was in a play area where they put meat in the trees, so he was really active "hunting" for the surprises.

The Austin Zoo isn't a huge zoo. There is no enormous facility or any sort of major feature. From what I gather, it is much more focused on helping animals that were forced into captivity. This makes the zoo feel a little closer to a rescue facility than a typical zoo. Unfortunately, this also makes it clear that the zoo needs better funding. You can see cages where they simply put sheet metal on top with some large rocks to secure the makeshift roof. There are empty cages that serve as storage for tools. Sometimes the cages seem a little smaller than expected. While I'm sure the zookeepers know what they are doing, it does make one wonder what kind of oversight is involved in running a zoo. I also wonder whether the Austin Zoo is really associated with the city or is just a name some group was able to use.

In any case, my impression is not meant to be condemning. Instead, I'd hope someone reading this might consider how they can help. We made sure to spend some money in the gift shop as well as make a donation. It wasn't much, but if more people visit, it can only help improve the situation.
And despite the humble surroundings, it really is a ton of fun.

I know for a fact that non-profits are hurting right now. The economic downturn really puts a damper on this critical part of our society, since non-profits already run on the extra resources people have. When it seems like there isn't any extra, the non-profits get cut out. This is really a shame, if you ask me. If you work, you are likely to pay taxes, and that means funding non-profits, as the government is a pretty huge contributor. When you pay taxes, you effectively lose the choice of where that money goes, and in effect you might very well be funding something you don't believe in. When you contribute to a non-profit, it is your choice. You can stay local if you'd like, or try helping with a global issue. The important thing is that you get the chance to make a choice. This kind of turned into a rant, so my apologies. I'm not asking for money or anything, just sharing an observation I had at the zoo.

Since SXSW, I've been in something of a catch-up mode, trying to finish a major upgrade to the latest jQuery at work, and it has been pretty time-consuming. I work on a pretty JavaScript-heavy application that has gone somewhat stale. They say if it ain't broke, don't fix it, and we did just that. The impetus to take the plunge and upgrade jQuery was a bug in IE 8 that had to do with the scrolling plugin we were using. A few tests were put together, we found that the latest jQuery would do the trick, and that started us down the path. We found out later the bug was actually an IE 8 release-candidate bug that had been fixed in the released version. It was somewhat disheartening to do so much work on a bug that wasn't actually a bug, but since we now have a shiny new jQuery, I'm not all that upset.

In our case the 80/20 rule was very much true. The first step was to change the script tag to the latest version, run the tests, and see what breaks.
After changing some of the API calls, the vast majority of things worked. I'd say around 80%, in fact! The things that didn't work were a bit more troubling.

One of the major changes was that we moved from the old Interface library to the new jQuery UI. This seemed like a good idea, as there were some helpful updates to the widget APIs, as well as an entirely new widget and theme pattern that could be helpful. Unfortunately, what caused the most trouble was upgrading our code that used the old widgets.

One interesting aspect the jQuery team added was the inclusion of themes for UI widgets. This required a difficult decision regarding how a UI widget is implemented in the DOM. The basic options are to 1) ask the user to declare the different components on the page, or 2) ask the user to declare a container that the widget will fill with the required DOM elements. The first option is tedious, since you need to code for many different cases and define not only the required elements but also the rigidity of the markup. The second option is also difficult, because you take on the obligation to expose the HTML the widget will write in order to allow setting the widget's state (i.e. when a user goes back). The jQuery team chose the second option, and it seems to be the correct decision. While it definitely caused me quite a bit of grief understanding how the widgets changed, in the end the pattern seems like the best trade-off.

Another aspect of the upgrade I found interesting was the disparate documentation out there for writing a jQuery plugin. I have been writing most of my JavaScript code as jQuery plugins in order to keep everything within the framework. My reasoning is that I can push the cross-browser issues down to the library and hopefully save myself some time. The problem is that there are quite a few plugin tutorials that all offer slightly different views of how to properly write a plugin.
After reading some mailing list posts and seeing the new widget API in jQuery UI, it is clear that this problem is being addressed, albeit with caution. The unanswered questions about how to write a plugin stem from the flexibility found in JavaScript. You can write code in such a wide variety of styles that choosing one, or even recommending one, is difficult. While I was not able to use it, the jQuery UI widget API is a huge step towards a plugin pattern that I hope will eventually be extended to general jQuery plugins.

This upgrade has been pretty tough, and I'm sure part of that was my own fault. My tendency was to try not to change the way each widget worked. My logic was that the algorithm had worked previously, so changing things radically would only create opportunities for new, unknown bugs. While I still think this is true, one side effect of reimplementing things is that you get to see what the underlying library is doing with fewer distractions, as well as understand potential pitfalls the previous author might have already solved. It is also relatively painless to take a stab at reimplementing something and drop it if it becomes obvious you are on the wrong path.

As I move forward, my goal will be to make creating one-off revisions of things as quick and easy as possible. I'm not sure how best to do that, but I have a feeling that if I streamline my experimentation workflow, the result will not only be more ideas getting fleshed out, but a better understanding of the bugs I'm fixing.

I've tried and failed to learn Lisp more times than I can count. At this point I have no theories as to why it has been such a challenge. Fortunately, I'm pleased to announce that this last foray has proved much more fruitful. It started with the simple task of wanting a better view of a CSV file. At work we export data, and one of the simple formats is CSV.
Since my computer life increasingly revolves around Emacs, it seemed like it would be nice to find a way to convert a buffer with a CSV file to a table in org-mode. Emacs makes it really easy to send a selection to a shell command, so the pattern was already in place to implement this in Python. After my Python version was done though, it seemed clear it should be relatively easy to write it in Lisp. The first steps involved doing a little string manipulation. Taking a line and turning it into an org-mode table line was pretty easy:

(defun addpipe (line)
  (concat "| " (mapconcat 'identity (split-string line ",") " | ") " |"))

That effectively splits the line by commas and immediately puts it back together, replacing the comma with a pipe. It then adds a pipe on either end. Next up was to select the text in the current buffer, split it by new lines, and format each line, placing it in another buffer for viewing.

(defun csv2org ()
  (interactive)
  (setq buffer-text (buffer-substring (point-min) (point-max)))
  (setq prebuffer (delete "" (split-string buffer-text "\n")))
  (setq mapped-buffer (mapcar 'addpipe prebuffer))
  (with-current-buffer (get-buffer-create "*csv2org*")
    (dolist (line mapped-buffer)
      (insert (format "%s\n" line))))
  (set-buffer "*csv2org*"))

I'm sure smarter folks than me have a more elegant way of doing this, but it worked for me and fit my understanding of the problem. For example, creating the "*csv2org*" buffer most likely has its usability issues. That said, I'm pretty happy that I was able to get what I wanted basically working. Obviously there is a great deal more to learn when it comes to working with Emacs in Lisp, but this experiment is getting marked down as a success. While there were not any major revelations, things started clicking more than before. One helpful bit was becoming more comfortable with executing Lisp in Emacs. Using the scratch buffer, it became easy to play with the function and set up a simple development pattern for trying things.
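For comparison, the same transformation in Python takes only a few lines. This is a sketch along the lines of the author's earlier Python script, which isn't shown in the post:

```python
def addpipe(line):
    # "a,b,c" -> "| a | b | c |"
    return "| " + " | ".join(line.split(",")) + " |"

def csv2org(text):
    # Drop empty lines, then convert each remaining line to an org table row.
    lines = [l for l in text.split("\n") if l]
    return "\n".join(addpipe(l) for l in lines)

print(csv2org("name,qty\nwidget,3"))
# | name | qty |
# | widget | 3 |
```

Hooking this up as a shell filter would let Emacs pipe a region through it with shell-command-on-region, which is the pattern the post describes.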
Also, all the Emacs and Lisp bookmarks I've recorded over the past couple years ended up providing useful tips for getting further along than before. For my next trick, I'm going to implement a simple way to paste code to. Fortunately, I've already started making progress and despite not finding much information on the URL Package, it seems like it shouldn't take much to get it working good enough.
http://ionrock.org/blog
Today seems to have been mostly about porting. I’ve not really developed anything new, but I have learned a lot by moving things around between machines. Yesterday I was a little disappointed with the un-tuned performance of the FFT code I had found and got running on the Freescale Freedom KL25Z, so I decided to give it a go on some other platforms. The first step was to move it to my development PC to see how that stacks up. The code needed a few changes – using an operating system timer rather than a hardware one, and sending output to the console rather than a serial port. The FFT code itself needed no changes, though.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <sys/time.h>
#include "FFT.h"

#define POINTS 1024
#define BKSP 8
#define SCALE 4

const double PI = 3.14159265358979323846;
const double CIRCLE = 2 * PI;
float data[POINTS];

float height(int i) {
    return fabs(data[i]);
}

clock_t when() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (tv.tv_sec % 1000) * 1000000L + tv.tv_usec;
}

void run(float freq) {
    float step = freq * CIRCLE / POINTS;
    for (int i = 0; i < POINTS; ++i) {
        data[i] = (sin(i * step) * SCALE * log(freq)) + SCALE;
    }
    clock_t t = when();
    printf("start, t=%ld\n", t);
    vRealFFT(data, POINTS);
    t = when() - t;
    printf("stop, t=%ld\n", t);
    printf("FFT with %d points took %f seconds (%ld ticks at %ld ticks/sec)\r\n",
           POINTS, ((float)t) / CLOCKS_PER_SEC, t, CLOCKS_PER_SEC);
    float max = 0;
    for (int i = 0; i < POINTS/2; ++i) {
        if (height(i) > max) max = data[i];
    }
    for (float level = max; level > 0; level -= max/10) {
        for (int i = 0; i < POINTS; i += 2) {
            putchar( (height(i) > level) ? '|' : ' ');
        }
        printf("\r\n");
    }
    for (int i = 0; i < POINTS/2; +);
    }
}

When I ran it, I got the expected output, but massively faster than the KL25z! (somewhere between 100-150 us to convert 1024 points, compared with over 100 ms on the KL25z). Of course, to complete the test I had to run the same code on the Raspberry Pi. As expected it fell somewhere in the middle at about 3ms.
Note though, that this test was done the “lazy way” by shipping the PC code over to a Raspberry Pi and compiling it and running it under Linux. I’m guessing it might be slightly faster if it had the whole machine to itself. I might even try running it on the Arduino at some point. I assume that will be even slower than the KL25z, though. Later I decided to also port the LED traffic light example from Arduino to KL25z. Unfortunately the KL25z comes without headers. Connecting to the pcb holes is possible, but clumsy. So I soldered some headers to allow me to plug in hook-up wires, just as I did with the Arduino. The code is slightly modified to use the mbed abstractions rather than the Arduino ones, but the basic code is identical:

#include "mbed.h"

DigitalOut red(PTB0);
DigitalOut amber(PTB1);
DigitalOut green(PTB2);

int red_flag = 1;
int amber_flag = 2;
int green_flag = 4;

void show_colour(int flags) {
    if (flags & red_flag) red = 1;
    if (flags & amber_flag) amber = 1;
    if (flags & green_flag) green = 1;
    wait(1);
    red = 0;
    amber = 0;
    green = 0;
}

int main() {
    while(1) {
        show_colour(red_flag);
        show_colour(red_flag | amber_flag);
        show_colour(green_flag);
        show_colour(amber_flag);
    }
}

Sounds like you’re testing the written-for-mbed FFT code on your desktop PC and RasPi? I wonder if you’d get different performance figures if you tried on your PC and RasPi? (I’ve not used it myself, I just happened to spot a link to it on another website, and it reminded me of your bat-detector project!)
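As a rough point of comparison on the desktop side, even a naive pure-Python FFT handles 1024 points quickly on a PC. This is a stdlib-only radix-2 Cooley-Tukey sketch, not the FFT library used in the post (whose vRealFFT implementation isn't shown):

```python
import cmath
import time

def fft(xs):
    # Naive recursive radix-2 Cooley-Tukey FFT; len(xs) must be a power of two.
    n = len(xs)
    if n == 1:
        return [complex(xs[0])]
    evens = fft(xs[0::2])
    odds = fft(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out

signal = [float(i % 8) for i in range(1024)]
start = time.perf_counter()
spectrum = fft(signal)
elapsed_ms = (time.perf_counter() - start) * 1000
print("1024-point pure-Python FFT: %.2f ms" % elapsed_ms)
```

The printed timing will vary by machine, but it gives a feel for how far even interpreted code on a PC outpaces a small microcontroller at this task.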
http://raspberryalphaomega.org.uk/2013/03/28/porting-fft-and-traffic-lights/
Opened 11 years ago
Closed 11 years ago
Last modified 10 years ago

#8233 closed (wontfix) Decimal Field can accept float value

Description

The DecimalField accepts the following types:
- a decimal.Decimal instance value;
- a string representation of a float;

What about accepting float values too? The float value can be converted using the DecimalField format_number method, see the code below:

def to_python(self, value):
    if value is None:
        return value
    try:
        # if the value is a float, convert it to a string and let decimal.Decimal do the work
        if isinstance(value, float):
            value = self.format_number(value)
        return decimal.Decimal(value)
    except decimal.InvalidOperation:
        raise validators.ValidationError(
            _("This value must be a decimal number."))

Attachments (1)

Changed 11 years ago — Patch over the trunk (r8301).

Change History (4)

comment:1 Changed 11 years ago

comment:2 Changed 11 years ago

The point of a Decimal type is that it doesn't have the precision problems that floats do. The Python decimal library correctly refuses to do this kind of conversion implicitly, and so should we. From the decimal docs: "To create a Decimal from a float, first convert it to a string. This serves as an explicit reminder of the details of the conversion (including representation error)."

comment:3 Changed 10 years ago

Milestone 1.0 maybe deleted
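The representation error that comment 2 refers to is easy to demonstrate on a modern Python 3, where (unlike the decimal module of that era) Decimal accepts floats directly. The helper name to_decimal here is mine, not Django's; it just mirrors the ticket's route-floats-through-str idea:

```python
from decimal import Decimal

def to_decimal(value):
    # Route floats through str() so Decimal sees the rounded repr,
    # not the raw binary value of the float.
    if isinstance(value, float):
        value = str(value)
    return Decimal(value)

# Passing a float straight in captures its full binary representation error:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(to_decimal(0.1))
# 0.1
```

This is exactly why the decimal docs ask for an explicit str() conversion: it forces the caller to acknowledge the rounding involved.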
https://code.djangoproject.com/ticket/8233
If you’re familiar with relational databases like MySQL or PostgreSQL, you’re probably also familiar with auto incrementing IDs. You select a primary key for a table and make it auto incrementing. Every row you insert afterwards, each of them gets a new ID, automatically incremented from the last one. We don’t have to keep track of what number comes next or ensure the atomic nature of this operation (what happens if two different clients want to insert a new row at the very same time? do they both get the same id?). This can be very useful where sequential, numeric IDs are essential. For example, let’s say we’re building a url shortener. We can base62 encode the url’s ID to quickly generate a short slug for that long url. Fast forward to MongoDB, the popular NoSQL database doesn’t have any equivalent to sequential IDs. It’s true that you can insert anything unique as the required _id field of a mongodb document, so you can take things into your own hands and try to insert unique ids yourself. But you have to ensure the uniqueness and atomicity of the operation. A very popular workaround to this is to create a separate mongodb collection. Then maintain documents with a numeric value to keep track of your auto incrementing IDs. Now, every time we want to insert a new document that needs a unique ID, we come back to this collection, use the $inc operator to atomically increment this number and then use the incremented number as the unique id for our new document. Let me give an example, say we have a messages collection. Each new message needs a new, sequential ID. We create a new collection named sequences. Each document in this sequences collection will hold the last used ID for a collection. So, for tracking the unique ID in the messages collection, we create a new document in the sequences collection like this:

{ "_id" : "messages", "value" : 0 }

Next, we will write a function that can give us the next sequential ID for a collection by its name.
The code is in Python, using the PyMongo library.

def get_sequence(name):
    collection = db.sequences
    document = collection.find_one_and_update(
        {"_id": name},
        {"$inc": {"value": 1}},
        return_document=True)
    return document["value"]

If we need the next auto incrementing ID for the messages collection, we can call it like this:

{"_id": get_sequence("messages")}

Find and Modify – Deprecated

If you have searched on Google, you might have come across many StackOverflow answers as well as individual blog posts which refer to the findAndModify() call (find_and_modify in PyMongo). This was the way to do things. But it’s deprecated now, so please use the new find_one_and_update function instead.

(How) Does this scale?

We would only call the get_sequence function before inserting a new mongo document. The function uses the $inc operator, which is atomic in nature. Mongo guarantees this. So even if 100s of different clients are trying to increment the value for the same document, the increments will all be applied one after another. So each value they get will be a unique, new ID. I personally haven’t been able to test this strategy at a larger scale but according to people on StackOverflow and other forums, people have scaled this to thousands and millions of users. So I guess it’s pretty safe.
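Tying this back to the URL-shortener example from the start: once get_sequence hands you a unique integer, a base62 encoder turns it into a short slug. This is a sketch (the alphabet ordering is my choice, not from the post):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(n):
    # Turn a sequential integer ID into a short slug by repeated
    # division, collecting base-62 digits least-significant first.
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

print(base62_encode(125))  # "21"
```

Six base62 characters already cover 62**6 (about 56 billion) IDs, which is why sequential integers plus base62 make such compact short-url slugs.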
http://polyglot.ninja/
When writing tests for Django views, especially for projects at work, I’ve almost completely abandoned any sort of detailed test for the template being rendered. My tests usually look something like this:

def test_link_archive_should_show_published_links(self):
    """Links in draft status shouldn't appear in the archive."""
    # create some data for testing, optionally use a fixture
    l = Link(url="", title="Worst. Blog. Ever.", published=False)
    l.save()

    r = self.client.get("/links/archive/")
    self.assertValidResponse(r)
    self.assertContains(r, l.title, 0)

    l.published = True
    l.save()

    r = self.client.get("/links/archive/")
    self.assertValidResponse(r)
    self.assertContains(r, l.title, 1)

It’s slightly heretical, I know. But you don’t want your test suite breaking because of a change in HTML, or a change in your date format, or the addition of new output, etc. Those changes are frequently made far down the road from the initial launch (say during a redesign) when you’re likely not to remember why you tested for such fine-grained output. Since I’m generally testing first (and who isn’t?) I use the view test as a chance to figure out what information — no matter the whims of the designer, destiny or the sands of time — really has to be presented to the user for this view to fulfill its job. Test for the presence of those strings and move on. I usually test for context. Which is then testing the backend code instead of the templates actually rendering the code. Seems like a good middle ground. Eric, I’m of two minds on testing the context. If you work with designers who change and deploy templates at will, then the context is almost a contract between the developer and the designer. It should be tested to ensure that you’re always providing the right data with the right template variable names. But if the developer is making the template changes then it feels like you’re actually testing an internal of the app.
It doesn’t matter that the blog title is available as {{ blog.title }} in the template, it matters that the blog title appears when I GET the page. I just found your blog post from two years ago and I feel tempted to add something myself. I agree that very strict testing of templates is a bad idea. However, when writing views for Android I tend to test the behavior of widgets or ‘views’ (call them whatever you like), and I like to do it for Django templates too. In the example you present, you make a false assumption that some ‘drafts’ can be injected into the context and rendered. Well, that’s not true, and shouldn’t be true unless your design is broken. It’s the view’s obligation to pass only valid data (no drafts) to the template. So it’s not the test that’s bad, but the design. There are some cases when you must test templates. Simple example: pagination. Let’s say we’ve got three pages, each on a different url. When on the first page there should be a link to ‘next’ pointing to the second page. On the second page we should have ‘previous’ and ‘next’, and only ‘previous’ on the third one. Presence of those links is essential. Otherwise the end user won’t be able to navigate, even if views work correctly. So we should test for the presence of those links, shouldn’t we? It’s good to make tests as non-specific as possible. Just test the presence of the link anywhere in the template output. There may be some uber-clever way to deal with this problem in the view layer (controller), but nothing comes to my mind right now.
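The commenter's pagination cases can be pinned down in a framework-agnostic way. This sketch (names are illustrative; a real Django test would assertContains on the rendered link text, as in the post's example) captures exactly which links must be present on each page:

```python
def pagination_links(page, num_pages):
    # Which navigation links must appear on a given page for the
    # user to be able to navigate at all?
    links = []
    if page > 1:
        links.append("previous")
    if page < num_pages:
        links.append("next")
    return links

# The commenter's three-page scenario:
assert pagination_links(1, 3) == ["next"]
assert pagination_links(2, 3) == ["previous", "next"]
assert pagination_links(3, 3) == ["previous"]
print("all pagination cases pass")
```

Keeping the expected links as data like this makes the eventual template assertions short and non-specific, in the spirit of the advice above.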
https://chrisheisel.com/2009/01/21/django-testing-tip-dont-test-template-output/
Machine Learning for Beginners: An Introduction to Neural Networks

A simple explanation of how they work and how to implement one from scratch in Python.

3 things are happening here. First, each input is multiplied by a weight:

x1 → x1 * w1
x2 → x2 * w2

Next, all the weighted inputs are added together with a bias b:

(x1 * w1) + (x2 * w2) + b

Finally, the sum is passed through an activation function:

y = f(x1 * w1 + x2 * w2 + b)

The activation function is used to turn an unbounded input into an output that has a nice, predictable form. A commonly used activation function is the sigmoid function:

f(x) = 1 / (1 + e^(-x))

The sigmoid function only outputs numbers in the range (0, 1). You can think of it as compressing (-∞, +∞) to (0, 1) - big negative numbers become ~0, and big positive numbers become ~1.

A Simple Example

Assume we have a 2-input neuron that uses the sigmoid activation function and has the following parameters:

w = [0, 1]
b = 4

w = [0, 1] is just a way of writing w1 = 0, w2 = 1 in vector form. Now, let’s give the neuron an input of x = [2, 3]. We’ll use the dot product to write things more concisely:

(w · x) + b = (w1 * x1) + (w2 * x2) + b = (0 * 2) + (1 * 3) + 4 = 7
y = f(7) = 0.999

The neuron outputs 0.999 given the inputs x = [2, 3]. That’s it! This process of passing inputs forward to get an output is known as feedforward.

Coding a Neuron

Time to implement a neuron! We’ll use NumPy, a popular and powerful computing library for Python, to help us do math:

import numpy as np

def sigmoid(x):
  # Our activation function: f(x) = 1 / (1 + e^(-x))
  return 1 / (1 + np.exp(-x))

class Neuron:
  def __init__(self, weights, bias):
    self.weights = weights
    self.bias = bias

  def feedforward(self, inputs):
    # Weight inputs, add bias, then use the activation function
    total = np.dot(self.weights, inputs) + self.bias
    return sigmoid(total)

weights = np.array([0, 1]) # w1 = 0, w2 = 1
bias = 4                   # b = 4
n = Neuron(weights, bias)

x = np.array([2, 3])       # x1 = 2, x2 = 3
print(n.feedforward(x))    # 0.9990889488055994

Recognize those numbers? That’s the example we just did! We get the same answer of 0.999.

2. Combining Neurons into a Neural Network

A neural network is nothing more than a bunch of neurons connected together.
Here’s what a simple neural network might look like:

This network has 2 inputs, a hidden layer with 2 neurons (h1 and h2), and an output layer with 1 neuron (o1). Notice that the inputs for o1 are the outputs from h1 and h2 - that’s what makes this a network. A hidden layer is any layer between the input (first) layer and output (last) layer. There can be multiple hidden layers!

An Example: Feedforward

Let’s use the network pictured above and assume all neurons have the same weights w = [0, 1], the same bias b = 0, and the same sigmoid activation function. Let h1, h2, o1 denote the outputs of the neurons they represent. What happens if we pass in the input x = [2, 3]?

h1 = h2 = f(w · x + b) = f((0 * 2) + (1 * 3) + 0) = f(3) = 0.9526
o1 = f(w · [h1, h2] + b) = f((0 * h1) + (1 * h2) + 0) = f(0.9526) = 0.7216

The output of the neural network for input x = [2, 3] is 0.7216. Pretty simple, right? Let’s implement feedforward for our neural network. Here’s the image of the network again for reference:

import numpy as np

# ... code from previous section here

class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)
  Each neuron has the same weights and bias:
    - w = [0, 1]
    - b = 0
  '''
  def __init__(self):
    weights = np.array([0, 1])
    bias = 0

    # The Neuron class here is from the previous section
    self.h1 = Neuron(weights, bias)
    self.h2 = Neuron(weights, bias)
    self.o1 = Neuron(weights, bias)

  def feedforward(self, x):
    out_h1 = self.h1.feedforward(x)
    out_h2 = self.h2.feedforward(x)

    # The inputs for o1 are the outputs from h1 and h2
    out_o1 = self.o1.feedforward(np.array([out_h1, out_h2]))

    return out_o1

network = OurNeuralNetwork()
x = np.array([2, 3])
print(network.feedforward(x)) # 0.7216325609518421

We got 0.7216 again! Looks like it works.

Liking this post so far? Subscribe to my newsletter to get more ML content in your inbox.

3.
Training a Neural Network, Part 1

Say we have the following measurements:

Name    | Weight (lb) | Height (in) | Gender
Alice   | 133         | 65          | F
Bob     | 160         | 72          | M
Charlie | 152         | 70          | M
Diana   | 120         | 60          | F

Let’s train our network to predict someone’s gender given their weight and height:

We’ll represent Male with a 0 and Female with a 1, and we’ll also shift the data to make it easier to use:

Name    | Weight (minus 135) | Height (minus 66) | Gender
Alice   | -2                 | -1                | 1
Bob     | 25                 | 6                 | 0
Charlie | 17                 | 4                 | 0
Diana   | -15                | -6                | 1

I arbitrarily chose the shift amounts (135 and 66) to make the numbers look nice. Normally, you’d shift by the mean.

Loss

Before we train our network, we first need a way to quantify how “good” it’s doing so that it can try to do “better”. That’s what the loss is. We’ll use the mean squared error (MSE) loss:

MSE = (1 / n) * Σ (y_true - y_pred)²

Let’s break this down:

- n is the number of samples, which is 4 (Alice, Bob, Charlie, Diana).
- y represents the variable being predicted, which is Gender.
- y_true is the true value of the variable (the “correct answer”). For example, y_true for Alice would be 1 (Female).
- y_pred is the predicted value of the variable. It’s whatever our network outputs.

(y_true - y_pred)² is known as the squared error. Our loss function is simply taking the average over all squared errors (hence the name mean squared error). The better our predictions are, the lower our loss will be!

Better predictions = Lower loss.

Training a network = trying to minimize its loss.

An Example Loss Calculation

Let’s say our network always outputs 0 - in other words, it’s confident all humans are Male 🤔. What would our loss be?

MSE = (1/4) * ((1-0)² + (0-0)² + (0-0)² + (1-0)²) = 0.5

Code: MSE Loss

Here’s some code to calculate loss for us:

import numpy as np

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

y_true = np.array([1, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0])

print(mse_loss(y_true, y_pred)) # 0.5

Nice. Onwards!

4. Training a Neural Network, Part 2

We now have a clear goal: minimize the loss of the neural network. We know we can change the network’s weights and biases to influence its predictions, but how do we do so in a way that decreases loss? This section uses a bit of multivariable calculus.
If you’re not comfortable with calculus, feel free to skip over the math parts. For simplicity, let’s pretend we only have Alice in our dataset. Then the mean squared error loss is just Alice’s squared error:

L = (1 - y_pred)²

Another way to think about loss is as a function of weights and biases. Let’s label each weight and bias in our network: Then, we can write loss as a multivariable function:

L(w1, w2, w3, w4, w5, w6, b1, b2, b3)

Imagine we wanted to tweak w1. How would loss L change if we changed w1? That’s a question the partial derivative ∂L/∂w1 can answer. How do we calculate it? Here’s where the math starts to get more complex. Don’t be discouraged! I recommend getting a pen and paper to follow along - it’ll help you understand. To start, let’s rewrite the partial derivative in terms of ∂y_pred/∂w1 instead:

∂L/∂w1 = (∂L/∂y_pred) * (∂y_pred/∂w1)

We can calculate ∂L/∂y_pred because we computed L = (1 - y_pred)² above:

∂L/∂y_pred = -2 * (1 - y_pred)

Now, let’s figure out what to do with ∂y_pred/∂w1. Just like before, let h1, h2, o1 be the outputs of the neurons they represent. Then

y_pred = o1 = f(w5 * h1 + w6 * h2 + b3)

Since w1 only affects h1 (not h2), we can write

∂y_pred/∂w1 = (∂y_pred/∂h1) * (∂h1/∂w1)
∂y_pred/∂h1 = w5 * f'(w5 * h1 + w6 * h2 + b3)

We do the same thing for ∂h1/∂w1:

h1 = f(w1 * x1 + w2 * x2 + b1)
∂h1/∂w1 = x1 * f'(w1 * x1 + w2 * x2 + b1)

x1 here is weight, and x2 is height. This is the second time we’ve seen f'(x) (the derivative of the sigmoid function) now! Let’s derive it:

f'(x) = f(x) * (1 - f(x))

We’ll use this nice form for f'(x) later. We’re done! We’ve managed to break down ∂L/∂w1 into several parts we can calculate:

∂L/∂w1 = (∂L/∂y_pred) * (∂y_pred/∂h1) * (∂h1/∂w1)

This system of calculating partial derivatives by working backwards is known as backpropagation, or “backprop”. Phew. That was a lot of symbols - it’s alright if you’re still a bit confused. Let’s do an example to see this in action!

Example: Calculating the Partial Derivative

We’re going to continue pretending only Alice is in our dataset (x1 = -2, x2 = -1, y_true = 1). Let’s initialize all the weights to 1 and all the biases to 0. If we do a feedforward pass through the network, we get:

h1 = f(w1 * x1 + w2 * x2 + b1) = f(-2 + -1 + 0) = 0.0474
h2 = f(w3 * x1 + w4 * x2 + b2) = 0.0474
o1 = f(w5 * h1 + w6 * h2 + b3) = f(0.0474 + 0.0474 + 0) = 0.524

The network outputs y_pred = 0.524, which doesn’t strongly favor Male (0) or Female (1). Let’s calculate ∂L/∂w1:

∂L/∂y_pred = -2 * (1 - y_pred) = -2 * (1 - 0.524) = -0.952
∂y_pred/∂h1 = w5 * f'(0.0948) = 1 * f(0.0948) * (1 - f(0.0948)) = 0.249
∂h1/∂w1 = x1 * f'(-3) = -2 * f(-3) * (1 - f(-3)) = -0.0904
∂L/∂w1 = -0.952 * 0.249 * -0.0904 = 0.0214

Reminder: we derived f'(x) = f(x) * (1 - f(x)) for our sigmoid activation function earlier. We did it! This tells us that if we were to increase w1, L would increase a tiiiny bit as a result.
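One way to convince yourself the chain-rule result is right is to compare it against a numerical finite-difference estimate. This sketch hard-codes the same setup as the worked example (Alice only, all weights 1, all biases 0); the function names are mine:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def loss(w1):
    # Alice only: x = (-2, -1), y_true = 1; every other weight 1, every bias 0.
    h1 = sigmoid(w1 * -2 + 1 * -1)
    h2 = sigmoid(1 * -2 + 1 * -1)
    o1 = sigmoid(1 * h1 + 1 * h2)
    return (1 - o1) ** 2

def analytic_dL_dw1(w1):
    # The three chain-rule factors derived above, multiplied together.
    sum_h1 = w1 * -2 + 1 * -1
    h1 = sigmoid(sum_h1)
    h2 = sigmoid(-3)
    o1 = sigmoid(h1 + h2)
    d_L_d_ypred = -2 * (1 - o1)
    d_ypred_d_h1 = 1 * o1 * (1 - o1)   # w5 * f'(sum_o1)
    d_h1_d_w1 = -2 * h1 * (1 - h1)     # x1 * f'(sum_h1)
    return d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1

eps = 1e-6
numeric = (loss(1 + eps) - loss(1 - eps)) / (2 * eps)
print(analytic_dL_dw1(1))  # ~0.0215, in line with the worked example
print(numeric)             # should agree to many decimal places
```

If the analytic and numeric values disagree, one of the chain-rule factors is wrong; this check is a handy habit whenever you derive gradients by hand.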
Training: Stochastic Gradient Descent

We have all the tools we need to train a neural network now! We’ll use an optimization algorithm called stochastic gradient descent (SGD) that tells us how to change our weights and biases to minimize loss. It’s basically just this update equation:

w1 ← w1 - η * (∂L/∂w1)

η is a constant called the learning rate that controls how fast we train. All we’re doing is subtracting η * (∂L/∂w1) from w1:

- If ∂L/∂w1 is positive, w1 will decrease, which makes L decrease.
- If ∂L/∂w1 is negative, w1 will increase, which makes L decrease.

If we do this for every weight and bias in the network, the loss will slowly decrease and our network will improve. Our training process will look like this:

- Choose one sample from our dataset. This is what makes it stochastic gradient descent - we only operate on one sample at a time.
- Calculate all the partial derivatives of loss with respect to weights or biases (e.g. ∂L/∂w1, ∂L/∂w2, etc).
- Use the update equation to update each weight and bias.
- Go back to step 1.

Let’s see it in action!

Code: A Complete Neural Network

It’s finally time to implement a complete neural network:

import numpy as np

def sigmoid(x):
  # Sigmoid activation function: f(x) = 1 / (1 + e^(-x))
  return 1 / (1 + np.exp(-x))

def deriv_sigmoid(x):
  # Derivative of sigmoid: f'(x) = f(x) * (1 - f(x))
  fx = sigmoid(x)
  return fx * (1 - fx)

def mse_loss(y_true, y_pred):
  # y_true and y_pred are numpy arrays of the same length.
  return ((y_true - y_pred) ** 2).mean()

class OurNeuralNetwork:
  '''
  A neural network with:
    - 2 inputs
    - a hidden layer with 2 neurons (h1, h2)
    - an output layer with 1 neuron (o1)

  *** DISCLAIMER ***:
  The code below is intended to be simple and educational, NOT optimal.
  Real neural net code looks nothing like this. DO NOT use this code.
  Instead, read/run it to understand how this specific network works.
  '''
  def __init__(self):
    # Weights
    self.w1 = np.random.normal()
    self.w2 = np.random.normal()
    self.w3 = np.random.normal()
    self.w4 = np.random.normal()
    self.w5 = np.random.normal()
    self.w6 = np.random.normal()

    # Biases
    self.b1 = np.random.normal()
    self.b2 = np.random.normal()
    self.b3 = np.random.normal()

  def feedforward(self, x):
    # x is a numpy array with 2 elements.
    h1 = sigmoid(self.w1 * x[0] + self.w2 * x[1] + self.b1)
    h2 = sigmoid(self.w3 * x[0] + self.w4 * x[1] + self.b2)
    o1 = sigmoid(self.w5 * h1 + self.w6 * h2 + self.b3)
    return o1

  def train(self, data, all_y_trues):
    '''
    - data is a (n x 2) numpy array, n = # of samples in the dataset.
    - all_y_trues is a numpy array with n elements.
      Elements in all_y_trues correspond to those in data.
    '''
    learn_rate = 0.1
    epochs = 1000 # number of times to loop through the entire dataset

    for epoch in range(epochs):
      for x, y_true in zip(data, all_y_trues):
        # --- Do a feedforward (we'll need these values later)
        sum_h1 = self.w1 * x[0] + self.w2 * x[1] + self.b1
        h1 = sigmoid(sum_h1)

        sum_h2 = self.w3 * x[0] + self.w4 * x[1] + self.b2
        h2 = sigmoid(sum_h2)

        sum_o1 = self.w5 * h1 + self.w6 * h2 + self.b3
        o1 = sigmoid(sum_o1)
        y_pred = o1

        # --- Calculate partial derivatives.
        # --- Naming: d_L_d_w1 represents "partial L / partial w1"
        d_L_d_ypred = -2 * (y_true - y_pred)

        # Neuron o1
        d_ypred_d_w5 = h1 * deriv_sigmoid(sum_o1)
        d_ypred_d_w6 = h2 * deriv_sigmoid(sum_o1)
        d_ypred_d_b3 = deriv_sigmoid(sum_o1)

        d_ypred_d_h1 = self.w5 * deriv_sigmoid(sum_o1)
        d_ypred_d_h2 = self.w6 * deriv_sigmoid(sum_o1)

        # Neuron h1
        d_h1_d_w1 = x[0] * deriv_sigmoid(sum_h1)
        d_h1_d_w2 = x[1] * deriv_sigmoid(sum_h1)
        d_h1_d_b1 = deriv_sigmoid(sum_h1)

        # Neuron h2
        d_h2_d_w3 = x[0] * deriv_sigmoid(sum_h2)
        d_h2_d_w4 = x[1] * deriv_sigmoid(sum_h2)
        d_h2_d_b2 = deriv_sigmoid(sum_h2)

        # --- Update weights and biases
        # Neuron h1
        self.w1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1
        self.w2 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w2
        self.b1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_b1

        # Neuron h2
        self.w3 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w3
        self.w4 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w4
        self.b2 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_b2

        # Neuron o1
        self.w5 -= learn_rate * d_L_d_ypred * d_ypred_d_w5
        self.w6 -= learn_rate * d_L_d_ypred * d_ypred_d_w6
        self.b3 -= learn_rate * d_L_d_ypred * d_ypred_d_b3

      # --- Calculate total loss at the end of each epoch
      if epoch % 10 == 0:
        y_preds = np.apply_along_axis(self.feedforward, 1, data)
        loss = mse_loss(all_y_trues, y_preds)
        print("Epoch %d loss: %.3f" % (epoch, loss))

# Define dataset
data = np.array([
  [-2, -1],  # Alice
  [25, 6],   # Bob
  [17, 4],   # Charlie
  [-15, -6], # Diana
])
all_y_trues = np.array([
  1, # Alice
  0, # Bob
  0, # Charlie
  1, # Diana
])

# Train our neural network!
network = OurNeuralNetwork()
network.train(data, all_y_trues)

You can run / play with this code yourself. It’s also available on Github.
Our loss steadily decreases as the network learns:

We can now use the network to predict genders:

# Make some predictions
emily = np.array([-7, -3]) # 128 pounds, 63 inches
frank = np.array([20, 2])  # 155 pounds, 68 inches
print("Emily: %.3f" % network.feedforward(emily)) # 0.951 - F
print("Frank: %.3f" % network.feedforward(frank)) # 0.039 - M

Now What?

You made it! A quick recap of what we did:

- Introduced neurons, the building blocks of neural networks.
- Used the sigmoid activation function in our neurons.
- Saw that neural networks are just neurons connected together.
- Created a dataset with Weight and Height as inputs (or features) and Gender as the output (or label).
- Learned about loss functions and the mean squared error (MSE) loss.
- Realized that training a network is just minimizing its loss.
- Used backpropagation to calculate partial derivatives.
- Used stochastic gradient descent (SGD) to train our network.

There’s still much more to do:

- Experiment with bigger / better neural networks using proper machine learning libraries like Tensorflow, Keras, and PyTorch.
- Build your first neural network with Keras.
- Read the rest of my Neural Networks from Scratch series.
- Tinker with a neural network in your browser.
- Discover other activation functions besides sigmoid, like Softmax.
- Discover other optimizers besides SGD.
- Read my introduction to Convolutional Neural Networks (CNNs). CNNs revolutionized the field of Computer Vision and can be extremely powerful.
- Read my introduction to Recurrent Neural Networks (RNNs), which are often used for Natural Language Processing (NLP).

I may write about these topics or similar ones in the future, so subscribe if you want to get notified about new posts. Thanks for reading!
https://victorzhou.com/blog/intro-to-neural-networks/
FlagsAttribute Class

Indicates that an enumeration can be treated as a bit field; that is, a set of flags.

System.Attribute
  System.FlagsAttribute

Namespace: System
Assembly: mscorlib (in mscorlib.dll)

The FlagsAttribute type exposes the following members.

using System;

class Example
{
   // Define an Enum without FlagsAttribute.
   enum SingleHue : short
   {
      None = 0,
      Black = 1,
      Red = 2,
      Green = 4,
      Blue = 8
   };

   // Define an Enum with FlagsAttribute.
   [FlagsAttribute]
   enum MultiHue : short
   {
      None = 0,
      Black = 1,
      Red = 2,
      Green = 4,
      Blue = 8
   };

   static void Main( )
   {
      // Display all possible combinations of values.
      Console.WriteLine(
           "All possible combinations of values without FlagsAttribute:");
      for (int val = 0; val <= 16; val++)
         Console.WriteLine("{0,3} - {1:G}", val, (SingleHue)val);

      // Display all combinations of values, and invalid values.
      Console.WriteLine(
           "\nAll possible combinations of values with FlagsAttribute:");
      for (int val = 0; val <= 16; val++)
         Console.WriteLine("{0,3} - {1:G}", val, (MultiHue)val);
   }
}
// The example displays the following output:
//       All possible combinations of values without FlagsAttribute:
//         0 - None
//         1 - Black
//         2 - Red
//         3 - 3
//         4 - Green
//         5 - 5
//         6 - 6
//         7 - 7
//         8 - Blue
//         9 - 9
//        10 - 10
//        11 - 11
//        12 - 12
//        13 - 13
//        14 - 14
//        15 - 15
//        16 - 16
//
//       All possible combinations of values with FlagsAttribute:
//         0 - None
//         1 - Black
//         2 - Red
//         3 - Black, Red
//         4 - Green
//         5 - Black, Green
//         6 - Red, Green
//         7 - Black, Red, Green
//         8 - Blue
//         9 - Black, Blue
//        10 - Red, Blue
//        11 - Black, Red, Blue
//        12 - Green, Blue
//        13 - Black, Green, Blue
//        14 - Red, Green, Blue
//        15 - Black, Red, Green, Blue
//        16 - 16
https://msdn.microsoft.com/en-us/library/System.FlagsAttribute.aspx
gd_alias_target man page

gd_alias_target — determine the target of an alias defined in a Dirfile database

Synopsis

#include <getdata.h>

const char *gd_alias_target(DIRFILE *dirfile, const char *alias_name);

Description

The gd_alias_target() function queries a dirfile(5) database specified by dirfile and determines the target field code of the alias specified by alias_name. The dirfile argument must point to a valid DIRFILE object previously created by a call to gd_open(3). Note: the target may itself be an alias, which will have its own target. To obtain the canonical name of the field ultimately referenced by alias_name, pass it to gd_entry(3) and inspect the field member of the gd_entry_t structure returned.

Return Value

Upon successful completion, gd_alias_target() returns a pointer to a read-only character string containing the name of the target of the specified alias. On error, gd_alias_target() returns NULL and sets the dirfile error to a non-zero error value. Possible error values are:

- GD_E_BAD_CODE: The name alias_name was not found in the dirfile.
- GD_E_BAD_DIRFILE: The supplied dirfile was invalid.
- GD_E_BAD_FIELD_TYPE: The entry specified by alias_name was not an alias.

The dirfile error may be retrieved by calling gd_error(3). A descriptive error string for the last error encountered can be obtained from a call to gd_error_string(3).

History

The function gd_alias_target() appeared in GetData-0.8.0.

See Also

gd_aliases(3), gd_entry(3), gd_open(3), dirfile(5)

Referenced By

gd_add_alias(3), gd_aliases(3), gd_naliases(3).
https://www.mankier.com/3/gd_alias_target
- Will you have barbecue afterwards? Because we really want OMGWTFBBQ! Admin What a shame it requires Windows... after being a professional programmer for 25 years, I still don't own a Windows machine (and I've written hundreds of thousands of lines of C++ too...) I don't exactly understand why you did this, either. Seems like there's absolutely no need for it as the code is pure C++. Care to explain? Admin The winner should be dubbed OMGWTFLOL, the latter piece standing for "Language Obliteration Laureate". Admin I love the idea :) Also i only dabbled in C/C++ under linux, and did most of my gtk design in glade, so i'll be perfect for a high score ;) Now off to find some windows vmware image. Admin The explanation is pretty simple: you can use linux too. RTFA. Admin Let me get this straight: one of the prizes is a Mac, but if you actually write code on Mac OS X then you can't enter the contest? Talk about a daily WTF! Admin If the promised GTK+ 2.0 skeleton solution shows up, any Mac users should be able to use that... Admin This is gonna be a cheap idea, but... You've posted the test cases, all of them. What's to prevent me from simply hardcoding the answers to those, and calling that my WTF and submitting? Admin Maybe I'm just blind, but I don't see anything about a deadline. When is the submission deadline for this contest? Admin Um, isn't the mac now an extension of linux? Captcha: muhahaha Yes I know that I am a moron Admin Nevermind, I am blind. The deadling is May 14. Admin Come on, let me do it in C#. Pretty please? Admin That'd be perfectly legit. Form the contest site: "If it passes all of these, then congratulations, you’ve built a fully-functioning calculator so far as we’re concerned." Admin Maybe you should RTFA before you start up your bitch and moan routine. Admin Test cases. What is up with the test cases? Is it still considered a valid entry if all those cases are hard coded into the application? Please answer this! 
It is very important to how I will pick my design. Admin There's nothing stopping you from that. You could even hardcode the thing for the exact order of key presses required to pass the test cases. This is for fun, we don't care if your calculator barely works as long as it passes the guidelines we've posted. Admin It seems we may have some really astounding submissions in the queue... Admin The whole UI thing scares me away. I don't have Visual Studio and getting the skeleton to work in MINGW or coLinux is probably going to be too much of a bother. Besides, I'm coding some slightly iffy code to keep me amused already. So while I will probably not participate in this context, nonetheless I would like to take this opportunity to give you guys some more rope:You can write a nice object oriented expression evaluator (leeking memory all the way), overload the + operator for char*, or you can write a compiler (lots of byte codes and black magic). Oh, and throw proper Unicode support in there - there's no end of fun to be had with that. "We wanted to make the engine as generic as possible, to optimally prepare ourselves for future requirements." Admin Design??? Well you've already lost! Admin Can our entry use a database, like mysql for example? Admin "Nor is it like the International Obfuscated C Coding Contest; in fact, writing code like that would be a surefire way to lose this contest." Are you sure? I remember a Factorial solution a few years back that was pretty WTF. It created a source file that solved factorial for n-1, compiled it, exec'ed it, and multiplied the result by n to get the final result. Wouldn't it be cool to implement addition by creating a whole bunch of files, and then a whole bunch more, and then counting them? Ooh, I'm giving away my best ideas! Admin Ooh, that's a good one! And XML! Everything will be better if it uses XML extensively. 
Admin reminds me of one of my favorite webcomics of all time: Admin Hey Alex, I see you finally got around to it :). Joel Admin Much more amusing than hard coding, would be to open an internet connection, connect to the web page containing the desired results, and then look up the answer by screen-scraping. Extra credit to anyone who displays their answers using a flame effect. ;) Admin No. Mac OS X shares some bits with the various BSDs, but the only thing in common with Linux is certain GNU tools in userspace. More to the point. Macs don't come with GTK and it's not easy getting it running. Admin WTF is C/C++? C or C++, pick one or the other. Admin If you have a Mac, why would you want another one? =p Admin Admin Nah, screen scrape the total directory size from 'dir'! Admin Web services! By the way, here is my submission, I'm done. I worked real hard.

package test;
public class paulaBean {
    private String paula = "Brillant";
    public String getPaula() { return paula; }
}

Admin So, apparently this site's mission now is to come up with incredibly lame new meanings for established acronyms? WTF. Admin What's worse than failure? Admin A GUI is required? Fuck, fuck, fuck. Who the fuck needs a GUI? And then fucking C/C++? I think I'll pass. Admin What happened did you run out of tampons today? You must be a pretty crappy coder if you can't afford a $300 windows box. And you must be extremely retarded if you can't figure out how to dual boot. Admin Are you limited to one submission or can you submit several? Admin Will my new laptop come with award winning calculator software pre-installed? Admin Sec. 380. Use of Olympic symbols, emblems, trademarks and names Cheers. Admin Oh that's good. It should come installed and you should be contractually obligated to keep it installed for one year on the laptop if you win, and it should be the standard replacement for the calculator that comes with the OS. Admin So wait a minute. Joel Spolsky from JoelOnSoftware is a judge? 
Well, I guess if anyone in the world knew how to write WTF code, it would be him... I just read his site to see if he still writes WTF stuff. Yep, still writes WTF stuff. Hey Joel, there's a bigass "FIND" button right on the toolbar. Click it, enter what you want, hit enter. DONE. 2,450 emails in my inbox right now, searched in under 10 seconds. Outlook 2003, SP2. eesh. Admin I think to sum up a lot of the questions others are asking: does it have to be self-contained? My guess is that entries probably cannot depend on opening HTTP or database connections but it doesn't hurt to make sure. Anyway, using web services or screen-scraping would certainly be "WTF-ey", but short of building your own TCP/IP stack for it, it's really not very clever and there are certainly better and faster ways to litter the code with bugs and hacks. Admin How about a program that crashes the OS and the number of times you have to reboot the system is the correct answer? Yes, I am the evil twin. Admin Admin Admin I have submitted my submission, entitled "Fast and Deadly". It was originally going to be a copy of the Win32 example source code, but when I clicked that link I got a 404, so I submitted the 404 page with a .zip extension instead. I got submission number 10001 (I think that's the right number of 0s anyway) so I'm pretty sure I was first. And I've read stories here about various enterprises which have done worse. So. :) ... why yes, I did read the rules. I'm even fairly sure as to which of them apply to make my submission invalid. :) Admin I'm not going to make an entry, but if I did, it would require the following things: A physical four-function calculator. A webcam. A robotic arm. OCR and robotic arm-control software. The rest is left as an exercise for the reader. Admin Everybody knows that all the best WTFs are in Java or VB. Why on earth was a language specified? I'm happy to pollute my mind with my deliberate WTF code, but I'm not going back to the monstrosity that is C++. 
Admin Hmm, some snarky people out there. Sorry, I was misled by the Win32 framework described towards the top. Regarding dual boot, since I don't have any Intel-like machines at home, I think that'd be a little tricky. (My work machine IS Linux but I'm not going to futz with that for this contest...) Onward and upward! Admin The obvious solution is to embed an interpreter of your choice. Start your submission with #include <Python.h>. The rest is left as an exercise. Admin I have half a mind to embed Perl. And then write beautiful, clear, fully documented Perl code to actually perform the function of the application. But the Perl innards will win the contest.
https://thedailywtf.com/articles/comments/The-Worse-Than-Failure-Programming-Contest/?parent=133411
QNAP NAS - File Station (Code)

Hi, I am a complete novice when it comes to programming, but I do my best to try and pick up new things, and the reason I bought Pythonista was because I saw a github post where someone had posted some python scripts on how to work with a QNAP. But while I have copied and pasted the content over, into a new folder called QNAP, it does not seem to work? I'm particularly interested in one of the python scripts here, as I want to find a way that it can be run, so it asks me for my password and encodes it for me etc. Any help would be appreciated.

Add these lines to the bottom of get_sid.py and run it.

if __name__ == '__main__':
    import console
    print(ezEncode(console.password_alert('Enter your password')))
And if you don't use the site-packages dir, your code have to be in the same dir (QNAP). modules I tested it with my localhost and it shows an expected error. ERROR:root:GET error: ... So it should work with a Qnap at the other side. Hi, I have put everything into the same folder now (called Qnap ) but whenever I run that file I continue to get the same syntax error on the first line - yet there is a file in the folder called filestation.py ? Here is the qnaprun.py from filestation import #and now follow the README.md sample host = '10.168.1.111' user = 'your_user' password = 'your_password' filestation = FileStation(host, user, password) shares = filestation.list_share() #and now you should start with list or search command... Add an *to the end of that the first line.
https://forum.omz-software.com/topic/2715/qnap-nas-file-station-code/1
CC-MAIN-2020-45
refinedweb
488
81.83
Using Elastic APM to visualize asyncio behavior A few weeks ago, Chris Wellons’ blog “Latency in Asynchronous Python” was shared on our internal #python channel at Elastic, and as someone looking into the performance characteristics of a recent asyncio-related change, I gave it a read and you should too. It was a timely post as it made me think ahead about a few things, and coincided nicely with our recent usage of Elastic’s Application Performance Monitoring solution. “It’s not what you know, it’s what you can prove.” —Detective Alonzo Harris in the movie Training Day Background I work on the billing team for the Elastic Cloud, where we process usage data and charge customers accordingly. One of the ways we charge is via integrations with the AWS and GCP marketplaces—with Azure on the way—giving customers the ability to draw from committed spend on those platforms and consolidate their infrastructure spending under one account. It’s a feature a lot of customers want, but it introduces some complexity compared to how we bill our direct payment customers. While most of our billing is done in arrears—at the end of the month—marketplaces require that charges are reported to them as they occur. As such, we have a system that reports marketplace customer usage every hour. The initial version of this service, while built using Python’s async/await syntax, didn’t leverage a lot of the concurrency opportunities the asyncio library provides…”premature optimization” and all0. It was designed to be able to take advantage of them, but worked initially as a generator pipeline. It was a greenfield project starting with no customers, so building something that worked correctly was more important than its speed. It is billing software, after all. 
async def report_usage(bills): async for bill in bills: await submit_bill(bill) async def main(): users = get_users_to_bill() clusters = get_clusters_used(users) bills = generate_bills(clusters) await report_usage(bills) This is basically what our pipeline looked like, where its coroutines all the way down. Because none of those first coroutines were awaited they don’t initially do anything. That is, until we hit Line 9. Once that is awaited, Line 2 is what starts pulling everything through, so this finds one user with usage, gets their cluster details, computes their bill, submits it, then keeps iterating through all of the users with new usage. With a small userbase this runs “fast enough”, but we’re doing a lot of stuff sequentially that we don’t need to. Submitting one user’s bill shouldn’t block us from finding another user’s clusters which shouldn’t block us from generating a third user’s bills, all of which involve platform-specific REST APIs, Postgres queries, Elasticsearch searches, and more. It’s a very I/O heavy application. The following sentence of Knuth’s “premature optimization” quote0—”Yet we should not pass up our opportunities in that critical 3%.”—recently become more relevant, or critical. This system is not only processing more marketplace users than before, it’s processing multiple platforms of users instead of only GCP. Maybe it’s time to dig deeper into asyncio and optimize. Let’s Get Concurrent Once we have all of the users, a first cut would be to process them concurrently and get an immediate boost. There’s a lot of users now so we need to restrict how concurrent we can be so we’re not overloading anything, especially not the cloud platforms we’re reporting to. 
async def process_user(user, semaphore): async with semaphore: clusters = await get_clusters_used(user) bill = await generate_bill(clusters) await report_usage(bill) async def main(): tasks = [] sem = asyncio.Semaphore(MAX_CONCURRENCY) async for user in get_users_to_bill(): tasks.append(asyncio.create_task(process_user(user, sem))) await asyncio.gather(*tasks) That works fairly well! It runs many times faster than our old pipeline if you look at wall clock time, but if we have 1,000 users and MAX_CONCURRENCY=50, this starts 1,000 tasks and we allow 50 of them to do work at a time. The final task to run might do 1 second of real work but it waited many seconds for the opportunity. A quick aside: Chris’ post is mostly about how this situation affects background tasks when a big group of new tasks like our process_usertasks start up. That problem doesn’t really apply to this system as it’s a standalone application initiated by cronand processes users then exits. There aren’t any other background tasks to interrupt…yet. Bringing APM Into It We already had logs and metrics flowing into Elasticsearch, but those only tell part of the story. Elastic’s observability solutions include a pretty cool APM product, and we love to use our own products, so we started using the elastic-apm package to send performance data to APM Server. import elasticapm client = elasticapm.Client() # get config from env async def process_user(user, semaphore, parent_tx): client.begin_transaction( "user", trace_parent=parent_tx.trace_parent ) elasticapm.set_user_context(user_id=user.user_id) async with semaphore: ... client.end_transaction("user task") This creates a "user task" transaction for each user that we’re going to process, and inside of it we have several spans for some internal functions that do some of the work, like gathering different types of usage via searches and reporting usage to platform-specific REST APIs. 
If we look at what a run of the service shows in the APM timeline with MAX_CONCURRENCY=2 and then process three users, we see the following: Here we have three "user task" transactions stacked on top of each other, all started at the same time, but with only the top two making progress. The third task begins doing actual work after one of those first two ended. Given the way the code was written we expect that, and without APM we might even think this is fine. Frankly, given how the system needs to work today, it is sort of fine. That is until we need to look at how long user tasks are running for and realize our metrics and understanding are way off. If we’re running for three users or three thousand users, each individual user should take around the same amount of time to run, but the way we wrote this just about every metric will be useless. Even in our tiny example above, the third task clearly completed its actual work the fastest but it gets penalized by having done nothing for for about two-thirds of its lifetime. Refactoring Chris’ post mentions using an asyncio.Queue for the concurrent tasks we want to run. Before I mentioned that our service doesn’t need to worry about anything but those "user task" instances, so if we could live with useless metrics we’d be ok [narrator’s voice: they can’t], but we do have plans that will need to solve what Chris is solving. After implementing roughly what he did, we end up with a much more useful timeline1. We see the same three stacked transactions with the first two beginning right away, and now we see the third doesn’t begin until one of the first two ends. Perfect! Now the third or three thousandth task will take only as long as it actually takes that task to run. Not only are our metrics more useful in the short term, we’re now ready for those changes I mentioned where we need to give background tasks a fair chance. 
If we’re 200 requests into those 3,000 and a new task needs to run, it gets a chance as the 201st instead of as the 3,001st. Using APM lets us visualize impact our changes have and gives us the ability to track that all over time. It’s a pretty great tool and we’re excited to use it even more. Conclusion asynciois pretty powerful, but it can be a bit difficult to understand and it’s not the right solution for every problem. If you’re interested in learning more about it, Łukasz Langa, a core CPython developer who works on EdgeDB—a database built on top of Postgres via the excellent asyncpg library (which we use a lot!)—has created a video series all about how asyncio does what it does. I highly recommend it. APM—Elastic’s or otherwise—is a really valuable tool. I thought I knew how some of our stuff was working, but proving it via APM has helped inform a bunch of decisions already in the short time this service has been using it. - 0(1,2) “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.” — Donald Knuth, “Structured Programming With Go To Statements” - 1 Don’t pay attention to the exact timing differences between the two implementations. These timelines are generated from an integration test on my laptop that is varying levels of overheating.
https://briancurtin.com/articles/using-elastic-apm-to-visualize-asyncio-behavior/
CC-MAIN-2022-27
refinedweb
1,494
58.82
In this article I intend to share some of the common issues encountered during Outlook Add-in (VSTO) development and how these may be addressed. There is interesting information available out there on the web that would help out the VSTO community and I have also tried to provide the links to these articles. In an earlier article Walkthrough - Automatic Update Process for Outlook Add-in Solutions, I had discussed setting up the prerequisites through the Launch Conditions Editor. This may be a time-consuming process and may have to be repeated for other VSTO projects. I found some interesting information here on how you can achieve it by selecting the VSTO pre-requisites directly in your Setup project. You only have to do it once and it makes life easier. Read the instructions in file 'InstallNotes.txt' that is included as part of this download on how to set it up on your development machine. Once installed, in VS2005 you have to right click the 'VSTO Setup' project and selected 'Properties'. Click on the 'Prerequisites' button that would pop-up something similar to the screen shot shown below. Click here to download Visual Studio Tools for Office runtime (vstor.exe). Click here to download Office 2003 Primary Interop Assemblies (PIA) redistributable (O2003PIA.exe). Your VSTO application needs to be signed with a strong name prior to deployment. To sign the assembly in VS2005, follow these steps: Once you have the .msi file generated from your VSTO Setup project, you are all set to install your solution. But you still need to consider what permissions your application requires on the end user machine and this is where CAS policy comes in. For more information of setting up CAS at the User, Machine and Enterprise levels, refer to the article in this link. 
To set up CAS for your application at the "User level", follow these steps: Note: This needs to be handled as part of the installation whereby you need to accomplish the equivalent of running Caspol.exe with no user intervention. Something close to this objective is discussed in the following section. I found a sample code (available in both C# and VB.NET) published here, which may be used to handle CAS programmatically. The author has excellently handled both Install and UnInstall in his code. Follow these steps: using System.Security; using System.Security.Policy; using System.Security.Permissions; PolicyLevel PermissionSet private readonly string installPolicyLevel = "Machine"; private readonly string namedPermissionSet = "FullTrust"; A concern normally raised by the user community is that the Manifest file on the .msi installed machine does not have any reference to the published URL. There is a roundabout way to handle this. The objective of writing this article was to share the information I gathered from both experience and reading what the experts out there had to say, so as to provide a helping hand to the VSTO.
http://www.codeproject.com/Articles/16614/Solutions-to-Common-Issues-encountered-during-Outl?PageFlow=FixedWidth
Hi, I am attempting to test a REST application using Python and the Requests package. The URL I am accessing is supposed to return a JSON part and a binary part. I can retrieve the json part w/out any problems:

>>> import requests
>>> headers = {'content-type': 'multipart/mixed'}
>>> r = requests.get(url, headers=headers)
>>> r.headers['content-type']
'application/json'
>>> r.json()
{'timestamp': 1465566160924, 'org': 'MATT', 'functions': ['radio', '0'], 'groupid': '', 'msgseq': 0, 'namespace': 'STATUS', 'nodeid': 'C0EE400ABCC9', 'correlationid': 0, 'statuscode': 0, 'isbinarydata': True}

What I can't figure out is how to retrieve the elusive second part of my response. The developer who wrote the code told me that as long as I format the request properly, it should respond with both a json part and a binary part. Google searching on this topic results in endless questions about performing multipart POSTs with Requests, but as far as I can tell, zilch questions/answers about GETs.

Any pointers on how to retrieve the binary portion using Requests would be very much appreciated!

thanks! Jim
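One way to split a multipart/mixed body once it does arrive is to hand the raw bytes to the standard library's email parser, re-attaching the Content-Type header first, since in HTTP it lives in the headers rather than the body. This is a generic sketch—the boundary and payload bytes below are made up, not the service's actual response; with a real Requests response you would build `raw` from `r.headers['content-type']` and `r.content`.

```python
import email.parser

# Simulated multipart/mixed HTTP body. With requests this would be:
#   b"Content-Type: " + r.headers["content-type"].encode() + b"\r\n\r\n" + r.content
raw = (b"Content-Type: multipart/mixed; boundary=frontier\r\n\r\n"
       b"--frontier\r\n"
       b"Content-Type: application/json\r\n\r\n"
       b'{"isbinarydata": true}\r\n'
       b"--frontier\r\n"
       b"Content-Type: application/octet-stream\r\n\r\n"
       b"\x00\x01\x02\r\n"
       b"--frontier--\r\n")

msg = email.parser.BytesParser().parsebytes(raw)
# walk() yields the multipart container itself first, then each part
parts = [p for p in msg.walk() if not p.is_multipart()]
json_bytes = parts[0].get_payload(decode=True)
binary_bytes = parts[1].get_payload(decode=True)
print(parts[0].get_content_type(), parts[1].get_content_type())
```

The same approach works regardless of how many parts the server sends, since each part carries its own Content-Type header.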
http://python-forum.org/viewtopic.php?f=22&t=20046&sid=b138a1145c343055704fd1fa3a62f14c
fwide - Sets stream orientation to byte or wide-character

LIBRARY

Standard C Library (libc.so, libc.a)

SYNOPSIS

#include <stdarg.h>
#include <wchar.h>

int fwide(
    FILE *stream,
    int mode);

STANDARDS

Interfaces documented on this reference page conform to industry standards as follows:

fwide(): ISO C

Refer to the standards(5) reference page for more information about industry standards and associated tags.

PARAMETERS

stream
    Points to a FILE structure specifying an open stream.

mode
    Specifies an integer that determines the orientation of the stream. If the integer is greater than zero, the function attempts to make the stream wide-character oriented. If the integer is less than zero, the function attempts to make the stream byte oriented. If the integer is zero, the function does not attempt to alter the stream orientation.

DESCRIPTION

The fwide() function, depending on the value of the mode parameter, sets or queries the orientation of a stream of data specified by a FILE structure. If the stream has no orientation, a positive or negative mode value causes the function to change the stream orientation to wide character or byte, respectively. The function does not change stream orientation when it is already set. When the stream orientation is already set, the application must first call freopen() to reinitialize the stream and then call fwide() to reset the stream orientation. A mode value of zero causes the function to return the current stream orientation without attempting to change it.

RETURN VALUES

The fwide() function returns an integer value to indicate the stream orientation after the call. The function returns a value greater than zero if the stream orientation is wide-character, a value less than zero if the stream orientation is byte, and 0 (zero) if the stream has no orientation. On error, the function returns 0 (zero) and sets errno to indicate the condition.

ERRORS

The file descriptor associated with stream is invalid.

SEE ALSO

Functions: fopen(3)
http://backdrift.org/man/tru64/man3/fwide.3.html
[Solved] Use of signal-slot connect in Windows 10 - koahnig Moderators ? - koahnig Moderators? @mrjj Yes, I have seen similar messages in the past. Yesterday when the assert was triggered I have checked output, but no message appeared anywhere. Not in the console window and not in the application output window in Qt creator. At first I did rebuilt and noticed that I had started in release. But after the same procedure including a rebuilt the same in debug mode. The only difference I see is that I have upgraded to Win10 a couple of weeks back. Therefore my suspicion that this might be releated. Note, I am using Qt 5.4.1 and receive a message each time starting an application that this Qt version is not tested with windows 10. However, it seem to come only with Qt 5.4.1 and not previous versions such as Qt 5.3.1. Despite the message the applications are running ok. Certainly I could upgrade to Qt 5.5, but my target platforms are currently Windows 2008 server. Therefore, I save the time. @koahnig Ok. understand. I think I saw such messages on win 10 using Qt 5.5, but Im only 80% sure and I can't check as i ran W10 over win W7 again. However, its hard to imagine the OS having an effect on internal meta system, but I would never say never. Do you have the same problem if you use the new syntax ? Do you get any output when you add qDebug("Hello!")or qWarning("Hello!")to your code? If not, that means your debug output has been disabled (that's why you don't see the message). Make sure your project doesn't define QT_NO_DEBUG_OUTPUT, QT_NO_WARNING_OUTPUT, etc. I'm on Windows 10 Pro (64-bit), using Qt 5.4.2 for MSVC 2013 32-bit. When I try to connect a signal to a non-existent slot, I get this message in my Qt Creator "Application Output" pane: QObject::connect: No such slot MyObject::fakeSlot() in ..\TestProg\main.cpp:32 Do you have the same problem if you use the new syntax ? 
The problem won't exist with the new syntax, because the compiler will detect the error and stop the build ;) @JKSH That's what I wanted @koahnig to check (but I haven't been really clear with my intention) Like you wrote, using the new syntax would avoid the need to check for that warning in the console. @SGaist @mrjj @JKSH Sorry guys. I had apparently another notification issue here in the forum. Not sure, if I simply missed the notification. Thanks for pointing towards the new syntax. Have not been aware of this, because I do not read each time I am using Qt constructs in the docs. However, when the syntax causes the compiler to complain, that is really cool and saves the time. Actually I consider it also as strange that the OS shall have such an effect. By the way, since you're new to the new syntax, I hope this article will be helpful to you: Just for closing the issue here. I have the new syntax for connect. I love it, even though it has "drawbacks" of different functionality according the documentation presented by JKSH above. For me it is perfect! Thanks again for pointing towards teh new syntax. May I humbly suggest to follow your signature ? ;) @mrjj Actually, my initial problem is not solved. ;) The warning at wrong connects still seems to be gone. Which might be a either a problem with my installation or a bug in Qt. - mrjj Qt Champions 2016 @koahnig oh. Well. My bad :) I tried to install win 10 to but it wont upgrade. Pretty strange what ever made it stop to display. The warning at wrong connects still seems to be gone. am using qDebug all the time. It does work. @JKSH qWarning is also working. However, your last question brought up a clue of what might have been the case. I am using a message handler for redirecting the output of qDebug. It allows to ignore all output, to store all to a file and/or to the screen. Since there is a lot of output, the screen output is slowing down dramatically. Therefore, I have redirected the output only to the file. 
The assert was probably kicking in before the message was written to the file. I have seen this with other output before, but never with the warning for connection failures (Probably I had also screen output then). Anyway even when the problem was between chair and keyboard, it was good to have the discussion. Otherwise I would not have learnt about the new syntax. Thanks again. A point I have missed in my previous response. I have written a small test this time. #include "Clas.h" #include <QTimer> #include <QDebug> Clas::Clas(QObject *parent) : QObject(parent) { QTimer *timea = new QTimer ( this ); connect ( timea, SIGNAL ( timerout() ), this, SLOT (sltQuit() ) ); qDebug() << "debug"; qWarning() << "warning"; } void Clas::sltQuit() { } and here is the output Qt: Untested Windows version 10.0 detected! QObject::connect: No such signal QTimer::timerout() in ..\..\Test\CheckConnect\Clas.cpp:9 debug warning So it is working perfectly also on windows 10 with MinGW version Qt 5.4.1. Just saw something: the warning's right, the signal is timeout not timerout Great! :) @SGaist Yes. That was intensional. I just tested that my installation is still providing the message. Thanks a lot anyway.
https://forum.qt.io/topic/59512/solved-use-of-signal-slot-connect-in-windows-10
CC-MAIN-2018-13
refinedweb
917
75.81
var foodDemand = function(food); { console.log("I want to eat" + " " + "food"); foodDemand("Burrito"); }; How does a function work? What is the problem you're experiencing? Please follow the template that newtopic gives you it makes it easier for people to assist you. But I found your issue anyway. var foodDemand = function(food); { console.log("I want to eat" + " " + "food"); //3. your }; should be here! foodDemand("Burrito"); <--- 2. The way you have it right now the program is trying to call your function inside your function, which won't work. So make sure that the }; is at 3. }; //<---1. this should be after your console log statement. var foodDemand = function(food) { return ("I want to eat" + " " + food); } foodDemand("Burrito") // This is working for me... Yup. You basically needed to take the foodDemand() and put it olputside of the foodDemand function. Thank you guys , yeah it was my mistake. You can't declare and call the function betwen curly braces.
https://discuss.codecademy.com/t/how-does-a-function-work/55487
Introduction

- Some facts about Python
- Where to use Python
- Python and typeface design
- Python 2 vs. Python 3
- Python's Design Philosophy

Thousands of people from different fields have learned Python, and so can you. It's just a language, and a way of thinking in terms of rules, systems and patterns. There is no magic involved! With a bit of time and effort you too can become fluent in Python.

Some facts about Python

The Python programming language was developed by Guido van Rossum in the late 1980s at the Centrum Wiskunde & Informatica in The Netherlands. Python is a high-level programming language with an emphasis on readability. 'High level' means that it is distant from the 0s and 1s of the computer, and closer to human language (English, in this case). Python is free/open-source software and is available for all major platforms. Because Python is a dynamic programming language (executed at runtime), it is also often used as a scripting language in various kinds of applications.

Why is Python called Python? The name 'Python' was inspired by the British comedy group Monty Python.

Where to use Python

Python can be used in many different environments. macOS comes with Python out of the box, as do most Linux distributions; downloads for Windows and several other platforms are available from the Python website. Some applications are written entirely in Python, while others support it as a scripting language, to automate things which would otherwise have to be done 'by hand' (clicking around in menus and icons).

Python and typeface design

Python plays an important role in type design and font production. Most font editors today support scripting with Python.

Python 2 vs. Python 3

Python 3 was a major update to the Python language. The language was cleaned up and some mistakes were fixed, but in a way that broke full compatibility with the 2.X branch.

Some of the new features in Python 3 have been ported back to Python 2.6 and 2.7, and are available via the __future__ module.

Python 3 adoption was slow at first. But with the retirement of Python 2.7 scheduled for 2020, Python 3 has finally become the default version of Python in a RoboFont-based production workflow.

- A py3 version of DrawBot was announced in November 2017.
- RoboFont 3 (the first py3 version of RoboFont) was announced in March 2018.
- Most core libraries and RoboFont extensions have also been upgraded.

The latest version of macOS (10.14 Mojave) still comes with Python 2.7 pre-installed as its default version. Python 3 can be installed separately.

Python's Design Philosophy

According to Guido, readability and simplicity were primary goals in Python's design. These values are carried over to much of the code written in Python, including RoboFont.

The Zen of Python

The philosophy behind the design of the Python language is summarized in a series of aphorisms called The Zen of Python. This text is available as an 'easter egg' in all Python distributions. Simply type the following line of code in your favorite Python console:

import this

…and the complete Zen of Python will be printed out:

>>> Beautiful is better than ugly.
>>> Explicit is better than implicit.
>>> Simple is better than complex.
>>> Complex is better than complicated.
>>> Flat is better than nested.
>>> Sparse is better than dense.
>>> Readability counts.
>>> ...

Try it out for yourself to read the rest :)
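The __future__ imports mentioned above can be tried directly. This short sketch opts a Python 2 interpreter into two Python 3 behaviors; under Python 3 the imports are accepted but change nothing, so the same file runs identically on both branches:

```python
# Sketch: enabling Python 3 behavior in Python 2 via the __future__ module.
from __future__ import print_function, division

print("print is now a function")  # Python 3 style print()
print(3 / 2)                      # true division: 1.5 instead of 1
```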
https://doc.robofont.com/documentation/building-tools/python/introduction/
Silverlight User Control Inheritance

Question:

<UserControl x:Class="SL2Test.Page" xmlns="" xmlns: ...

All replies:

Thank you very much. I'll try it. :)

Both ways make you crazy? :) haha!!! ;)

Hi, ...? Thx, Nuno

Hi, even if I add the prefix I have errors. If you can make some sample, I would be happy to take a look. You can reach me at this address: mchlsync AT gmail DOT com.

Hi, ...? Thx, Nuno

AFAIK, there is no workaround for that. If we use user control inheritance in Silverlight, we will lose the visual designer in Blend and VS.

I've got another problem, very similar to the previous one. I've got my own User Control with some XAML definition code and I want to inherit from this class:

<UserControl x:Class="Test1.ControlX" xmlns="" xmlns: ...
    <Grid x: ...
        <Ellipse Width="50" Height="30" Grid. ...
    </Grid>
</UserControl>

public partial class ControlX : UserControl
{
    public ControlX()
    {
        InitializeComponent();
    }
}

and then ControlTwo. ControlTwo does not have any XAML code, it just derives from ControlX. I guess that it's not against the rules, but the XAML parser thinks otherwise.

Small update: Page.xaml contains:

<UserControl ...

XAML:

<BaseListItemControl x:Class="ItemsControls.TestItemListControl" xmlns="" xmlns: ...

Hi, I also had a similar problem. The base classes (including user controls) are in one class library, and other classes (from a different class library) need to use those classes (customized user control). Here is how I solved this:

Jeff, I looked at your project and fixed your error. You need to namespace the BaseControl tag in InheritedControl.xaml. It needs this so that it knows that BaseControl is coming from your assembly. It will look like this:

<my:BaseControl x: ...
    <Grid x: ...
    </Grid>
</my:BaseControl>

-Jeff

Hi,

<... x:Class="CustomControls.TestControl" xmlns= xmlns:x= xmlns: ...
...

and,

public partial class TestControl : UserControlBase
{
    ...
}

- Shahed.
...to get rid of "The property 'Content' does not exist on the type..." warnings in Visual Studio and fix Blend's "Cannot add content to an object of type..." issue, just add this property to your base class:

public new UIElement Content
{
    get { return base.Content; }
    set { base.Content = value; }
}

Note: unfortunately, your base class cannot be abstract if you want to use the visual designer in Blend.

If it's a nested namespace, please check my reply in this post.

I'm still having problems with this. I'm getting this exception: System.Windows.Markup.XamlParseException occurred, and it is related to this XAML:

<exs2:ItemControl x:Class="Exchange_S2.EmptyItem" xmlns="" xmlns:x="" xmlns:d="" xmlns:mc= xmlns:exs2="clr-namespace:Exchange_S2" "clr-namespace:TodayIT.Avalanche2.Web.Controls.Basic">
..
</exs2:ItemControl>

If the above doesn't work, try the following, on top of the solution mentioned above: add a reference in AssemblyInfo.cs:

[assembly: XmlnsDefinition("", "Exchange_S2")]

I've just blogged on how I got UserControl inheritance to work for me. Hope this helps.

Hi Guys, I also encountered the same problem. Like wjchristenson2, I created a blog post about it. You can solve this problem with two methods: create a wrapper as discussed above, or use the strategy pattern. Briefly, here's my code:

namespace MyApp
{
    public class BasePage : UserControl { ... }
    public class BaseEditPage<T> : BasePage where T : WCFService.EntityObject, new() { ... }
    public class SomethingEdit : BaseEditPage<WCFService.SomethingEO> { ... }
}

I have no error with this code BUT my XAML doesn't compile:

<my:BaseEditPage x:Class="MyApp.SomethingEdit"
    x:TypeArguments="myWCF:SomethingEO"
    xmlns=
    xmlns:x=
    xmlns:d=
    xmlns:mc=
    xmlns:my="clr-namespace:MyApp;assembly=MyAppProject"
    xmlns: ...
</my:BaseEditPage>

Some supplementary info: the WCFService represents the WCF service that is linked to my project, and SomethingXXX is a class with the [DataContract] attribute. x:TypeArguments is there to specify the generic class in the UserControl XAML!

This doesn't run because the file SomethingEdit.g.cs (the hidden partial class file) doesn't get generated with the normal generic specification, like this: SomethingEdit<WCFService.SomethingEO>!! Any help will be very great!!! To read you, Patrice

I am doing this in Windows Phone 7.5 development. I didn't touch AssemblyInfo.cs. I created the derived user control XAML this way:

<dialogs:BaseUserControl x:Class="......"

and inserted one line:

xmlns:dialogs="clr-namespace:<your base usercontrol class path>"

Everything is fine, but I got a warning like:

Warning 1 '....LayoutRoot' hides inherited member '<base usercontrol class>.LayoutRoot'. Use the new keyword if hiding was intended...

This is because the base user control has defined LayoutRoot, and the automatically created .g.i.cs of the derived user control defines it again. Since it's a warning, the app is working fine; I didn't notice any problem so far, will see how it goes. Hope MS can improve the way the .g.i.cs is created when the user control is inherited from another user control rather than from UserControl directly. I am sure the .g.i.cs only needs to handle the new XAML elements in the derived user control. Good luck! Frank
http://social.msdn.microsoft.com/Forums/silverlight/en-US/ead8f174-8c4f-4983-b569-83a00de4605e/silverlight-user-control-inheritance?forum=silverlightnet
The assignment is:

A. Present a menu like the following:

1. Enter a char
2. Enter a string
3. Enter a float
4. Enter an integer
5. Display the square of numbers from 1-100
9. Quit the Program

Enter your choice.

B. The program should not quit until the user chooses to quit by entering "9". You must use either a do/while loop or a while loop for this purpose.

C. Depending on the choice the user makes, allow the user to enter that variable type, display it on the screen and prompt for the menu again.

D. For choice 2 (Enter a String) in the menu, output the string the user entered as all lowercase and all uppercase letters.

E. For choices 3 and 4 from the menu in step 1, if the number the user entered was negative, display a message: "You entered a negative number." Similarly, if the user entered a positive number, display: "You entered a positive number."

F. For choice 5, use an integer variable. Must use a "for" loop to display the squares of all numbers from 1-100. Example output:

Square of 1 is 1
Square of 2 is 4
Square of 3 is 9
.....
Square of 100 is 10000

Below is the code I wrote so far. I'm stuck on Part C, and I don't know where to add the code for Parts C-E. Any help would be appreciated.

#include "stdafx.h"
#include <stdio.h>

int main (void)
{
    int myChoice;

    do {
        printf ("\nMAIN MENU\n\n");
        printf ("1. Enter a char\n");
        printf ("2. Enter a string\n");
        printf ("3. Enter a float\n");
        printf ("4. Enter an integer\n");
        printf ("5. Display the square of numbers from 1-100\n");
        printf ("9. Quit the Program\n\n");
        printf ("Enter your choice. ");
        scanf_s ("%i", &myChoice);
    } while (myChoice != 9);

    return 0;
}
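One common way to handle Part C is a switch on myChoice inside the do/while loop, with each case doing the input and output for that menu item. Here is a sketch of helpers for Parts E and F; the function names are my own, not part of the assignment:

```c
#include <stdio.h>
#include <assert.h>

/* Sketch: helper functions a switch on myChoice could call.
   Names here are illustrative, not part of the assignment. */

/* Parts D/E: print the sign message; return -1 for negative, 1 otherwise. */
int classifySign(double value)
{
    if (value < 0) {
        printf("You entered a negative number.\n");
        return -1;
    }
    printf("You entered a positive number.\n");
    return 1;
}

/* Part F: print the squares of 1-100; return the last square. */
int printSquares(void)
{
    int last = 0;
    for (int i = 1; i <= 100; i++) {
        last = i * i;
        printf("Square of %d is %d\n", i, last);
    }
    return last; /* 10000, handy as a sanity value */
}

/* Inside main, after scanf_s("%i", &myChoice), the dispatch could be:

   switch (myChoice) {
   case 3: { float f; scanf_s("%f", &f); classifySign(f); break; }
   case 4: { int n;   scanf_s("%i", &n); classifySign(n); break; }
   case 5: printSquares(); break;
   }
*/
```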
https://www.daniweb.com/programming/software-development/threads/262411/need-help-on-what-to-do-next-in-this-c-code-assignment
How fair is Codility's score system when the same algorithm is implemented in a fast and a slow technology, like C++ and JavaScript? Look here at the test.

C++ is faster than JavaScript for many reasons; perhaps the biggest is that C++ code is compiled into CPU instructions and then executed. JavaScript, on the other hand, is interpreted, so there are more layers between the code and CPU execution, and this consequently takes more time.

Are C++ developers getting higher scores on the same problem compared with JavaScript developers, for the same algorithm? The short answer is no! And this is great! But in some cases... perhaps!

A test was done using the MaxCounters problem; you can check it here. The same algorithm was implemented in C++ and JavaScript and submitted two times, as follows:

JavaScript

function solution(N, A) {
    var counters = new Array(N).fill(0);
    var biggest = 0;
    for (var x = 0; x < A.length; x++) {
        if (A[x] <= N) {
            counters[A[x] - 1]++;
            if (biggest < counters[A[x] - 1])
                biggest = counters[A[x] - 1];
        } else
            counters.fill(biggest);
    }
    return counters;
}

C++

#include <iostream>
#include <vector>
using namespace std;

vector<int> solution(int N, vector<int> A) {
    vector<int> counters (N);
    int biggest = 0;
    for (int x = 0; x < A.size(); x++) {
        if (A[x] <= N) {
            counters[A[x] - 1]++;
            if (biggest < counters[A[x] - 1])
                biggest = counters[A[x] - 1];
        } else
            fill (counters.begin(), counters.end(), biggest);
    }
    return counters;
}

The algorithm is deliberately not the best in terms of performance, so that we could see how much time it took versus the maximum allowed by Codility for the problem. And here is the short answer: both got the same score.

That is great, so the score system is fair, right? Let's analyze a little more.

JavaScript - First and Second submission result

As we can see, on the first submission Codility's server took 2.12s to run, and on the second 2.13s.
The large_random2 test uses random numbers, and the server may also be overloaded with many other tests, so it's normal for the execution time to vary a little, although the 0.99s limit seems to remain the same. But on the extreme_large test we can see that Codility's score system somehow compensates for that 'slow execution moment' on the server, raising the limit from 1.02s to 1.04s, which is nice/fair.

Now let's see how the C++ performed:

A similar situation happens here: on the first submission Codility's server took 3.73s to execute the extreme_large test, versus 3.70s on the second try. But pay attention to the large_random2 test, because now things get very interesting: the submission failed for being 0.02s (0.13s total) slower than the 0.11s maximum limit, while the difference between the extreme_large runs was 0.03s. The limits, however, seem not to change for the C++ tests.

Conclusion

As expected, C++ performs faster than JavaScript, and Codility's score system seems to compensate for its servers' performance when evaluating the tests, raising the limits applied to each performance test, at least for the JavaScript language.

When a slow (or just not the best) approach is used in a slow language like JavaScript, the difference between the execution time and the maximum allowed for the test is large, so you can easily say 'yes, my algorithm sucks'. For the C++ tests, on the other hand, the margin for the 'slow/not the best' approach was just 20 milliseconds.

Perhaps 20ms wouldn't be a problem if you got a server with a low execution time and passed the test? Could we take advantage of this to score better than other languages? Could the random-number tests take 20ms less and pass? Well, it's a possibility, so... who knows?!
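As a side note, the fill-on-every-max-operation pattern used above is exactly what makes the algorithm slow in the worst case. A common O(N + M) variant defers the reset with a lazily applied floor value; the naming below is my own:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Sketch of a faster variant of the same MaxCounters algorithm: instead of
// filling all N counters on every "max counter" operation (worst case
// O(N*M)), remember the reset level lazily and apply it once at the end,
// for O(N + M) overall.
vector<int> solutionLazy(int N, const vector<int> &A) {
    vector<int> counters(N, 0);
    int biggest = 0;   // current maximum counter value
    int floorVal = 0;  // level every counter was last (lazily) reset to
    for (int op : A) {
        if (op <= N) {
            int idx = op - 1;
            counters[idx] = max(counters[idx], floorVal) + 1;
            biggest = max(biggest, counters[idx]);
        } else {
            floorVal = biggest;  // defer the fill
        }
    }
    for (int &c : counters)
        c = max(c, floorVal);    // settle the deferred resets
    return counters;
}
```

On Codility's sample input for MaxCounters (N = 5, A = {3, 4, 4, 6, 1, 4, 4}) this returns {3, 2, 2, 4, 2}, the same as the eager version.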
http://www.l3oc.com/2017/06/codility-does-technology-influences.html