# You may distribute under the terms of either the GNU General Public License
# or the Artistic License (the same terms as Perl itself)
#
# (C) Paul Evans, 2015 -- leonerd@leonerd.org.uk

package Device::Chip::MCP23S17;

use strict;
use warnings;
use base qw( Device::Chip::MCP23x17 );

our $VERSION = '0.01';

=head1 NAME

C<Device::Chip::MCP23S17> - chip driver for a F<MCP23S17>

=head1 DESCRIPTION

This subclass of L<Device::Chip::MCP23x17> provides the required methods to
allow it to communicate with the SPI-attached F<Microchip> F<MCP23S17>
version of the F<MCP23x17> family.

=cut

use constant PROTOCOL => "SPI";

sub SPI_options
{
   return (
      mode        => 0,
      max_bitrate => 1E6,
   );
}

sub write_reg
{
   my $self = shift;
   my ( $reg, $data ) = @_;

   $self->protocol->write( pack "C C a*", ( 0x20 << 1 ), $reg, $data );
}

sub read_reg
{
   my $self = shift;
   my ( $reg, $len ) = @_;

   $self->protocol->readwrite( pack "C C a*", ( 0x20 << 1 ) | 1, $reg, "\x00" x $len )
      ->transform( done => sub { substr $_[0], 2 } );
}

=head1 AUTHOR

Paul Evans <leonerd@leonerd.org.uk>

=cut

0x55AA;
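To illustrate what write_reg and read_reg put on the SPI bus, here is a hypothetical Python sketch of the same byte framing. The register numbers in the usage lines are examples for illustration, not values taken from the module:

```python
import struct

DEVICE_ADDR = 0x20  # base address, as hard-coded in the module above

def write_frame(reg, data):
    # control byte 0x40 = (0x20 << 1): address with the R/W bit clear
    return struct.pack("BB", DEVICE_ADDR << 1, reg) + bytes(data)

def read_frame(reg, length):
    # control byte 0x41 = (0x20 << 1) | 1: address with the R/W bit set;
    # dummy 0x00 bytes clock the response out, and the driver discards the
    # first two echoed bytes of the reply (substr $_[0], 2)
    return struct.pack("BB", (DEVICE_ADDR << 1) | 1, reg) + b"\x00" * length

print(write_frame(0x00, b"\xff").hex())  # -> "4000ff"
print(read_frame(0x12, 2).hex())         # -> "41120000"
```

The frame layout mirrors the two `pack "C C a*"` calls above: one control byte, one register byte, then the payload or dummy bytes.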
https://metacpan.org/release/Device-Chip-MCP23x17/source/lib/Device/Chip/MCP23S17.pm
One of the best things about Node.js is its massive module ecosystem. With bundlers like webpack we can leverage these even in the browser, outside of Node.js. Let's look at how we can build a module with TypeScript that is usable by both JavaScript developers and TypeScript developers.

Before we get started, make sure that you have Node.js installed – you should ideally have version 6.11 or higher. Additionally, make sure that you have npm or a similar package manager installed.

Let's build a module that exposes a function that filters out all emojis in a string and returns the list of emoji shortcodes. Because who doesn't love emojis?

✨ Installing dependencies

First create a new directory for your module and initialize the package.json by running this in your command line:

```bash
mkdir emoji-search
cd emoji-search
npm init -y
```

The resulting package.json looks like this:

```json
{
  "name": "emoji-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```

Now let's install some dependencies. First install the TypeScript compiler as a devDependency by running:

```bash
npm install typescript --save-dev
```

Next install the emojione module. We'll use this to convert emojis to their shortcodes, like 🐵 to :monkey_face:. Because we will be using the module in TypeScript and the module doesn't expose the types directly, we also need to install the types for emojione:

```bash
npm install emojione @types/emojione --save
```

With the project dependencies installed we can move on to configuring our TypeScript project.

🔧 Configuring the TypeScript project

Start by creating a tsconfig.json file which we'll use to define our TypeScript compiler options.
You can create this file manually and place the following lines into it:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "declaration": true,
    "outDir": "./dist",
    "strict": true
  }
}
```

Alternatively you can auto-generate the tsconfig.json file with all available options by running:

```bash
./node_modules/.bin/tsc --init
```

If you opt for this approach, just make sure to adjust the declaration and outDir options according to the JSON above. Setting the declaration attribute to true ensures that the compiler generates the respective TypeScript definition files alongside compiling the TypeScript files to JavaScript files. The outDir parameter defines the output directory as the dist folder.

Next modify the package.json to have a build script that builds our code:

```json
{
  "name": "emoji-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "build": "tsc",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "typescript": "^2.3.2"
  },
  "dependencies": {
    "@types/emojione": "^2.2.1",
    "emojione": "^3.0.3"
  }
}
```

That's all we have to do to configure the TypeScript project. Let's move on to writing some module code!

💻 Create the module code

Create a lib folder where we can place all of our TypeScript files, and in it create a file called index.ts. Place the following TypeScript into it:

```typescript
import { toShort } from 'emojione';

const EMOJI_SHORTCODES = /:[a-zA-Z0-9_]+:/g;

export function findEmojis(str: string): string[] {
  // add runtime check for use in JavaScript
  if (typeof str !== 'string') {
    return [];
  }
  return toShort(str).match(EMOJI_SHORTCODES) || [];
}
```

Compile the code by running:

```bash
npm run build
```

You should see a new dist directory that has two files, index.js and index.d.ts. The index.js contains all the logic that we coded, compiled to JavaScript, and index.d.ts is the file that describes the types of our module for use in TypeScript.
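As a quick cross-check of the shortcode pattern outside the Node toolchain, the same match can be sketched in Python. The input strings below are hypothetical stand-ins for what emojione's toShort() would produce, not output from the actual library:

```python
import re

# Mirrors the colon-delimited shortcode pattern used in index.ts.
SHORTCODE = re.compile(r":[a-zA-Z0-9_]+:")

def find_shortcodes(text):
    """Return every :shortcode: token in an already-converted string."""
    return SHORTCODE.findall(text)

# emojione's toShort() would turn "Hello 🐵!" into "Hello :monkey_face:!";
# here we start from the converted form.
print(find_shortcodes("Hello :monkey_face:! :sparkles:"))
# -> [':monkey_face:', ':sparkles:']
print(find_shortcodes("no emojis here"))  # -> []
```

This is the same idea as findEmojis: convert emojis to shortcodes first, then collect the tokens with a global regular-expression match.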
Congratulations on creating your first module accessible to both TypeScript and JavaScript! Let's prep the module for publishing.

🔖 Prepare for publishing

Now that we have our module, we have to make three easy changes to the package.json to get ready to publish the module.

- Change the `main` attribute to point at our generated JavaScript file
- Add the new `types` parameter and point it to the generated TypeScript types file
- Add a `prepublish` script to make sure that the code will be compiled before we publish the project

```json
{
  "name": "emoji-search",
  "version": "1.0.0",
  "description": "",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "prepublish": "npm run build",
    "build": "tsc",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "typescript": "^2.3.2"
  },
  "dependencies": {
    "@types/emojione": "^2.2.1",
    "emojione": "^3.0.3"
  }
}
```

We should also make sure to exclude unnecessary files from the installation of our module. In our case the lib/ folder is unnecessary because we only need the built files in the dist/ directory. Create a new file called .npmignore and place the following content into it:

```
lib/
```

That's it! 🎉 You are now ready to publish your module using npm publish. Unfortunately someone already built a module called emoji-search 😕 so if you want to publish this module, just change the name in the package.json to another name.

🍽 Consume the module

The great thing about our module is that it can now be seamlessly used in JavaScript or TypeScript projects. Simply install it via npm or yarn:

```bash
npm install emoji-search --save
```

If you want to try this out without publishing the module yourself, you can also install the demo-emoji-search module. It is the same code published on npm. Afterwards we can use the module in JavaScript:

```javascript
const emojiSearch = require('demo-emoji-search');
console.log(emojiSearch.findEmojis("Hello 🐼! What's up? ✌️"));
```

Or in TypeScript with full type support:

```typescript
import { findEmojis } from 'demo-emoji-search';

const foundEmojis: string[] = findEmojis(`Hello 🐵! What's up? ✌️`);
console.log(foundEmojis);
```

🎊 Conclusion

Now this was obviously just a very simple module to show you how easy it is to publish a module usable in both JavaScript and TypeScript. There are a boatload of other benefits provided by TypeScript to the author of the module, such as:

- Better authoring experience through better autocomplete
- Type safety to catch bugs, especially in edge cases, early
- Down-transpilation of cutting-edge and experimental features such as decorators

As you've seen, it's very easy to build a module in TypeScript to provide a kickass experience with our module to both JavaScript and TypeScript developers. If you would like a more comprehensive starter template to work off that includes a set of best practices and tools, check out Martin Hochel's typescript-lib-starter on GitHub.

✌️ I would love to hear about your experience with TypeScript, and feel free to reach out if you have any problems:
https://www.twilio.com/blog/2017/06/writing-a-node-module-in-typescript.html
Question: I cannot understand why this piece of code does not compile:

```cpp
namespace A {
    class F {};            // line 2
    class H : public F {};
}

namespace B {
    void F(A::H x);        // line 7
    void G(A::H x) {
        F(x);              // line 9
    }
}
```

I am using gcc 4.3.3, and the error is:

```
s3.cpp: In function 'void B::G(A::H)':
s3.cpp:2: error: 'class A::F' is not a function,
s3.cpp:7: error: conflict with 'void B::F(A::H)'
s3.cpp:9: error: in call to 'F'
```

I think that because in line 9 there is no namespace prefix, F(x) should definitely mean only B::F(x). The compiler tries to cast x into its own superclass. In my understanding it should not. Why does it do that?

Solution 1:

That's because the compiler also searches for the function in the namespaces its arguments come from. There it finds the identifier A::F, but A::F is not a function, so you get the error. This is standard behaviour, as far as I can remember:

3.4.2 Argument-dependent name lookup

When an unqualified name is used as the postfix-expression in a function call (5.2.2), other namespaces not considered during the usual unqualified lookup (3.4.1) may be searched, and namespace-scope friend function declarations (11...

This rule allows you to write the following code:

```cpp
std::vector<int> x;
// adding some data to x
//...
// now sort it
sort( x.begin(), x.end() ); // no need to write std::sort
```

And finally: because of Core Issue 218, some compilers would compile the code in question without any errors.

Solution 2:

Have you tried using other compilers yet? There is a gcc bug report here which is suspended (whatever that means).

EDIT: After some research, I found this more official bug.

Solution 3:

Very strange, I copied and pasted directly to VS 2005 and I get an error, which I expected:

```
Error 1 error LNK2001: unresolved external symbol "void __cdecl B::F(class A::H)"
```

Because we haven't actually defined F(x) in namespace B... not sure why gcc is giving this error.

Solution 4:

I just tried compiling it on Visual Studio 2005 and it worked fine.
I wonder if it's a broken implementation of Argument Dependent Lookup where the namespace from the arguments was accidentally brought in?
http://www.toontricks.com/2018/10/tutorial-namespace-clashing-in-c.html
The next step is to add an Excel Add-in function, using the XLL+ Function Wizard. In this example, we're going to write a function to return the cumulative normal (Gaussian) distribution.

Note: Readers familiar with Excel may ask why we are writing a cumulative normal distribution function when Excel already contains the NORMSDIST() function. There are three reasons:

- It may be useful to have an add-in function whose formula is precisely the same as that used in code elsewhere, such as in a library, or within other functions. This can make testing more precise and straightforward.
- NORMSDIST() is inaccurate at extreme values, and the inverse function NORMSINV() fails beyond 8 standard deviations. In Excel 2003 and 2007, the functions are better implemented than in older versions of Excel, but can still be improved upon.
- Most importantly, it makes a good example function.

It is good practice to put all important business functions in separate functions that are not Excel-dependent. If you do this, you will be able to reuse the code unchanged in other environments. That is what we will do here.

The code for a stand-alone implementation of the cumulative normal distribution and its inverse is shown below. The Normal() and CumNormal() functions cannot fail, so they simply return their result. InverseCumNormal() can fail if the input is out of range, so it returns 1 for success and 0 for failure. The inverted value is passed back via the pointer result.
```c
#include <math.h>

// Normal distribution function
double Normal(double x)
{
#define SQRT2PI 2.50662827463
    return exp(-x * x / 2.0) / SQRT2PI;
}

// Cumulative normal distribution function
double CumNormal(double x)
{
#define gamma 0.2316419
#define a1  0.319381530
#define a2 -0.356563782
#define a3  1.781477937
#define a4 -1.821255978
#define a5  1.330274429
    double k;
    if (x < 0.0) {
        return 1.0 - CumNormal(-x);
    }
    else {
        k = 1.0 / (1.0 + gamma * x);
        return 1.0 - Normal(x) *
            ((((a5 * k + a4) * k + a3) * k + a2) * k + a1) * k;
    }
}

// Inverse cumulative normal function
// Returns 1 for success, 0 for failure
int InverseCumNormal(double u, double* result)
{
    int i;
    double Y, num, den;
    static double p[] = { -0.322232431088, -1.0, -0.342242088547,
                          -0.0204231210245, -0.0000453642210148 };
    static double q[] = { 0.099348462606, 0.588581570495, 0.531103462366,
                          0.10353775285, 0.0038560700634 };

    if (u <= 0.0 || u >= 1.0)
        return 0;

    if (fabs(u - 0.5) < 10e-8) {
        *result = 0.0;
        return 1;
    }

    if (u < 0.5) {
        InverseCumNormal(1.0 - u, result);
        *result *= -1.0;
        return 1;
    }

    Y = sqrt(-log((1.0 - u) * (1.0 - u)));
    num = p[4];
    den = q[4];
    for (i = 3; i >= 0; i--) {
        num = num * Y + p[i];
        den = den * Y + q[i];
    }
    *result = Y + num / den;
    return 1;
}
```

In Visual Studio, open the file Tutorial1.cpp, and add the code above at the end of the file.

Tip: It is a very bad idea to type in all the code shown above. You can copy it from here and paste into your source file, or you can find all the code for this tutorial in the Samples/Tutorial1 sub-directory.

All the important code is now written. All we need to do is to generate the Excel add-in function, and plug it into a stand-alone function. Next: Define the function >>
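Before wiring the function into Excel, it is easy to sanity-check the approximation above against the exact CDF, 0.5 * (1 + erf(x / sqrt(2))). The following is a verification sketch in Python, not part of the tutorial's C code:

```python
import math

def cum_normal(x):
    """Python transcription of CumNormal() above (Abramowitz & Stegun 26.2.17)."""
    gamma = 0.2316419
    a = (0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429)
    if x < 0.0:
        return 1.0 - cum_normal(-x)
    k = 1.0 / (1.0 + gamma * x)
    poly = ((((a[4] * k + a[3]) * k + a[2]) * k + a[1]) * k + a[0]) * k
    normal = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return 1.0 - normal * poly

def exact(x):
    """Exact CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# The polynomial approximation agrees with the exact CDF to well under 1e-6.
for x in (-2.5, -1.0, 0.0, 0.5, 1.0, 2.5):
    assert abs(cum_normal(x) - exact(x)) < 1e-6
```

A check like this also documents the accuracy you can expect from the add-in at the values your spreadsheets actually use.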
http://planatechsolutions.com/xllplus7-online/start_create_function.htm
- NAME
- SYNOPSIS
- DESCRIPTION
- DISCLAIMER
- METHODS
- after
- any
- before
- before_template
- cookies
- config
- content_type
- dance
- debug
- dirname
- error
- false
- from_dumper ($structure)
- from_json ($structure, %options)
- from_yaml ($structure)
- from_xml ($structure, %options)
- get
- halt
- headers
- header
- layout
- logger
- load
- load_app
- load_plugin
- mime_type
- params
- pass
- path
- prefix
- del
- options
- put
- r
- redirect
- request
- send_error
- send_file
- set
- setting
- set_cookie
- session
- splat
- start
- status
- template
- to_dumper ($structure)
- to_json ($structure, %options)
- to_yaml ($structure)
- to_xml ($structure, %options)
- true
- upload
- uri_for
- captures
- var
- vars
- warning

DISCLAIMER

This documentation describes all the exported symbols of Dancer. If you want a quick start guide to discover the framework, you should look at Dancer::Introduction. If you want to have specific examples of code for real-life problems, see the Dancer::Cookbook. If you want to see configuration examples of different deployment solutions involving Dancer and Plack, see Dancer::Deployment.

METHODS

after

Add a hook at the after position:

    after sub {
        my $response = shift;
        # do something with request
    };

The anonymous function which is given to after will be executed after having executed a route. You can define multiple after filters, using the after helper as many times as you wish; each filter will be executed in the order you added them.

any

Defines a route for multiple HTTP methods at once:

    any ['get', 'post'] => '/myaction' => sub {
        # code
    };

Or even a route handler that would match any HTTP method:

    any '/myaction' => sub {
        # code
    };

before

Defines a before filter:

    before sub {
        # do something with request, vars or params
    };

The anonymous function which is given to before will be executed before looking for a route handler to handle the request.
You can define multiple before filters, using the before helper as many times as you wish; each filter will be executed in the order you added them.

before_template

Defines a before_template filter:

    before_template sub {
        my $tokens = shift;
        # do something with request, vars or params
    };

The anonymous function which is given to before_template will be executed before sending data and tokens to the template. It receives a hashref of the tokens that will be inserted into the template. This filter works like the before and after filters.

cookies

Accesses cookie values; returns a hashref of Dancer::Cookie objects:

    get '/some_action' => sub {
        my $cookie = cookies->{name};
        return $cookie->value;
    };

config

Accesses the configuration of the application:

    get '/appname' => sub {
        return "This is " . config->{appname};
    };

content_type

Sets the content-type rendered, for the current route handler:

    get '/cat/:txtfile' => sub {
        content_type 'text/plain';
        # here we can dump the contents of params->{txtfile}
    };

Note that if you want to change the default content-type for every route, you have to change the setting content_type instead.

dance

Alias for the start keyword.

debug

Logs a message of debug level:

    debug "This is a debug message";

dirname

Returns the dirname of the path given:

    my $dir = dirname($some_path);

error

Logs a message of error level:

    error "This is an error message";

false

Constant that returns a false value (0).

from_dumper ($structure)

Deserializes a Data::Dumper structure.

from_json ($structure, %options)

Deserializes a JSON structure. Can receive optional arguments. Those arguments are valid JSON arguments to change the behavior of the default JSON::from_json function.

from_yaml ($structure)

Deserializes a YAML structure.

from_xml ($structure, %options)

Deserializes an XML structure. Can receive optional arguments. Those arguments are valid XML::Simple arguments to change the behavior of the default XML::Simple::XMLin function.
get

Defines a route for HTTP GET requests to the given path:

    get '/' => sub {
        return "Hello world";
    }

halt

Sets a response object with the content given. When used as a return value from a filter, this breaks the execution flow and renders the response immediately:

    before sub {
        if ($some_condition) {
            return halt("Unauthorized");
        }
    };

    get '/' => sub {
        "hello there";
    };

headers

Adds custom headers to responses:

    get '/send/headers', sub {
        headers 'X-Foo' => 'bar', 'X-Bar' => 'foo';
    }

header

Adds a custom header to the response:

    get '/send/header', sub {
        header 'X-My-Header' => 'shazam!';
    }

layout

Allows you to set the default layout to use when rendering a view. Syntactic sugar around the layout setting:

    layout 'user';

logger

Allows you to set the logger engine to use. Syntactic sugar around the logger setting:

    logger 'console';

load

Loads one or more perl scripts in the current application's namespace. Syntactic sugar around Perl's require:

    load 'UserActions.pl', 'AdminActions.pl';

load_app

Loads a Dancer package. This method takes care to set the libdir to the current ./lib directory:

    # if we have lib/Webapp.pm, we can load it like:
    load_app 'Webapp';

Note that a package loaded using load_app must import Dancer with the :syntax option, in order not to change the application directory (which has been previously set for the caller script).

load_plugin

Loads a plugin in the current namespace. As with load_app, the method takes care to set the libdir to the current ./lib directory:

    package MyWebApp;
    use Dancer;
    load_plugin 'Dancer::Plugin::Database';

mime_type

Returns all the user-defined mime-types when called without parameters. Behaves as a setter/getter when given parameters:

    # get the global hash of user-defined mime-types:
    my $mimes = mime_types;

    # set a mime-type
    mime_types foo => 'text/foo';

    # get a mime-type
    my $m = mime_types 'foo';

params

This method should be called from a route handler. Alias for the Dancer::Request params accessor.
pass

This method should be called from a route handler. Tells Dancer to pass the processing of the request to the next matching route. You should always return after calling pass:

    get '/some/route' => sub {
        if (...) {
            # we want to let the next matching route handler process this one
            return pass();
        }
    };

path

Concatenates multiple paths together, without worrying about the underlying operating system:

    my $path = path(dirname($0), 'lib', 'File.pm');

del

Defines a route for HTTP DELETE requests to the given URL:

    del '/resource' => sub { ... };

options

Defines a route for HTTP OPTIONS requests to the given URL:

    options '/resource' => sub { ... };

put

Defines a route for HTTP PUT requests to the given URL:

    put '/resource' => sub { ... };

r

Defines a route pattern as a regular Perl regexp. This method is DEPRECATED. Dancer now supports real Perl Regexp objects instead; you should not use r() but qr{} instead.

Don't do this:

    get r('/some/pattern(.*)') => sub { };

But rather this:

    get qr{/some/pattern(.*)} => sub { };

redirect

Redirects the current request to another URL:

    get '/old/path' => sub {
        redirect '/new/path';
    };

request

Returns a Dancer::Request object representing the current request.

send_error

Returns a HTTP error. By default the HTTP code returned is 500:

    get '/photo/:id' => sub {
        if (...) {
            send_error("Not allowed", 403);
        } else {
            # return content
        }
    }

This will not cause your route handler to return immediately, so be careful that your route handler doesn't then override the error. You can avoid that by saying return send_error(...) instead.

send_file

Lets the current route handler send a file to the client.

    get '/download/:file' => sub {
        send_file(params->{file});
    }

The content-type will be set depending on the current mime-types definition (see mime_type if you want to define your own).
set

Defines a setting:

    set something => 'value';

setting

Returns the value of a given setting:

    setting('something'); # 'value'

set_cookie

Creates or updates cookie values:

    get '/some_action' => sub {
        set_cookie 'name'    => 'value',
                   'expires' => (time + 3600),
                   'domain'  => '.foo.com';
    };

In the example above, only 'name' and 'value' are mandatory.

session

Provides access to all data stored in the current session engine (if any). It can also be used as a setter to add new data to the current session engine:

    # getter example
    get '/user' => sub {
        if (session('user')) {
            return "Hello, ".session('user')->name;
        }
    };

    # setter example
    post '/user/login' => sub {
        ...
        if ($logged_in) {
            session user => $user;
        }
        ...
    };

You may also need to clear a session:

    # destroy session
    get '/logout' => sub {
        ...
        session->destroy;
        ...
    };

splat

Returns the list of captures made from a route handler with a route pattern which includes wildcards:

    get '/file/*.*' => sub {
        my ($file, $extension) = splat;
        ...
    };

status

Changes the status code of the current response. You can give either the numeric HTTP status code or its name in lower case, with underscores as a separator for blanks:

    status 'not_found'; # equivalent to status 404

template

Tells the route handler to build a response with the current template engine:

    get '/' => sub {
        ...
        template 'some_view', { token => 'value' };
    };

The layout can be disabled for a given view:

    template 'some_view.tt', {}, { layout => undef };

to_dumper ($structure)

Serializes a structure with Data::Dumper.

to_json ($structure, %options)

Serializes a structure to JSON. Can receive optional arguments. Those arguments are valid JSON arguments to change the behavior of the default JSON::to_json function.

to_yaml ($structure)

Serializes a structure to YAML.

to_xml ($structure, %options)

Serializes a structure to XML. Can receive optional arguments. Those arguments are valid XML::Simple arguments to change the behavior of the default XML::Simple::XMLout function.

true

Constant that returns a true value (1).

upload

Provides access to files uploaded in the current request, as Dancer::Request::Upload objects.

var

Defines a variable shared between filters and route handlers:

    before sub {
        var foo => 42;
    };

Route handlers and other filters will be able to read that variable with the vars keyword.

vars

Returns the hashref of all shared variables set during the filter/route chain:

    get '/path' => sub {
        if (vars->{foo} eq 42) {
            ...
        }
    };

warning

Logs a warning message through the current logger engine.
https://metacpan.org/pod/release/XSAWYERX/Dancer-1.200/lib/Dancer.pm
file and displays them on the page. We'll also need an intermediary PHP file that will actually retrieve the XML file from the remote server and pass it back to the Connection Manager. Because of the security restrictions placed upon all browsers, we can't use the XHR object to obtain the XML file directly for this example.

In order to complete this example you'll need to use a full web server setup, with PHP installed and configured. Our proxy PHP file will also make use of the cURL library, so this will also need to be installed on your server. The installation of cURL varies depending on the platform in use, so full instructions for installing it are beyond the scope of this article.

```html
<script type="text/javascript" src="yui/yahoo-dom-event.js"></script>
<script type="text/javascript" src=">
Broadcasting Corporation</a></div>
<div>
</body>
</html>
```

We'll start off with this very simple page, which at this stage contains just the markup for the newsreader and the references to the required library files. There's also a <link> to a custom stylesheet which we'll create in a little while.
Adding the JavaScript

Directly after the final closing </div> tag, add the following <script>:

```javascript
<script type="text/javascript">
  //create namespace object for this example
  YAHOO.namespace("yuibook.newsreader");

  //define the initConnection function
  YAHOO.yuibook.newsreader.initConnection = function() {

    //define the AJAX success handler
    var successHandler = function(o) {
      ...
      YAHOO.util.Dom.addClass(p1, "title");
      YAHOO.util.Dom.addClass(p2, "desc");
      ...
    }

    //define the AJAX failure handler
    var failureHandler = function(o) {
      //alert the status code and error text
      alert(o.status + " : " + o.statusText);
    }

    //define the callback object
    var callback = {
      success:successHandler,
      failure:failureHandler
    };

    //initiate the transaction
    var transaction = YAHOO.util.Connect.asyncRequest("GET", "myproxy.php", callback, null);
  }

  //execute initConnection when DOM is ready
  YAHOO.util.Event.onDOMReady(YAHOO.yuibook.newsreader.initConnection);
</script>
```

In the success handler we first define three arrays; they need to be proper arrays so that some useful array methods can be called on the items we extract from the remote XML file, which is why we need the arrays. To populate the arrays with the data from our collections, we can use the for loop that follows the declaration of these three variables.

Because of the structure of the RSS in the remote XML file, some of the title, description, and link elements are irrelevant to the news items and instead refer to the RSS file itself and the service provided by the BBC. These are the first few examples of each of the elements in the file. So how can we get rid of the first few items in each array?

The standard JavaScript .pop() method allows us to discard the last item from the array, so if we reverse the arrays, the items we want to get rid of will be at the end of each array. Calling reverse a second time once we've popped the items we no longer need puts the array back into the correct order. The number of items popped is specific to this implementation; other RSS files may differ in their structural presentation and therefore their .pop() requirements.
Now that we have the correct information in our arrays, we're ready to add some of the news items to our reader. The RSS file will contain approximately 20 to 30 different news items depending on the day, which is obviously far too many to display all at once in our reader.

There are several different things we could do in this situation. The first, and arguably the most technical, method would be to include all of the news items and then make our reader scroll through them. Doing this however would complicate the example and take the focus off of the Connection utility. What we'll do instead is simply discard all but the five newest news items. The for loop that follows the .reverse() and .pop() methods will do this.

The loop will run five times, and on each pass it will first create a series of new <p>, <div>, and <a> elements using standard JavaScript techniques. These will be used to hold the data from the arrays, and to ultimately display the news items. We can then use the DOM utility's highly useful .addClass() method (which IE doesn't ignore, unlike setAttribute("class")) to give class names to our newly created elements. This will allow us to target them with some CSS styles. Then we obtain the nodeValue of each item in each of our arrays and add these to our new elements. Once this is done we can add the new elements and their textNodes to the newsitems container on the page.

Save the file as responseXML.html.

Styling the Newsreader

To make everything look right in this example and to target our newly defined classes, we can add a few simple CSS rules.
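The reverse/pop juggling described above is only needed because the flat element collections mix channel-level and item-level tags. Scoping the search to the <item> elements avoids it entirely; here is a sketch of that idea in Python, with a made-up feed standing in for the BBC RSS:

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS document with channel-level title/description/link
# that would pollute a flat tag search, plus six items.
RSS = """<rss version="2.0"><channel>
  <title>Example News</title>
  <description>Channel-level description</description>
  <link>http://news.example/</link>
  <item><title>Story 1</title><description>d1</description><link>l1</link></item>
  <item><title>Story 2</title><description>d2</description><link>l2</link></item>
  <item><title>Story 3</title><description>d3</description><link>l3</link></item>
  <item><title>Story 4</title><description>d4</description><link>l4</link></item>
  <item><title>Story 5</title><description>d5</description><link>l5</link></item>
  <item><title>Story 6</title><description>d6</description><link>l6</link></item>
</channel></rss>"""

def latest_items(xml_text, limit=5):
    """Walk only <item> elements, skipping the channel's own title/link,
    and keep the first `limit` entries (RSS lists newest first)."""
    root = ET.fromstring(xml_text)
    items = root.findall(".//item")[:limit]
    return [(i.findtext("title"), i.findtext("description"), i.findtext("link"))
            for i in items]

for title, desc, link in latest_items(RSS):
    print(title, link)
```

Scoping by parent element rather than popping a known number of leading entries also survives feeds whose channel metadata differs from the BBC's.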
In a fresh page in your text editor, add the following selectors and rules:

```css
#newsreader {
  width:240px;
  border:2px solid #980000;
  background-color:#cccccc;
}
.header {
  font-weight:bold;
  font-size:123.1%;
  background-color:#980000;
  color:#ffffff;
  width:96%;
  padding:10px 0px 10px 10px;
}
.title {
  font-weight:bold;
  margin-left:10px;
  margin-right:10px;
}
.desc {
  margin-top:-14px;
  *margin-top:-20px;
  margin-left:10px;
  margin-right:10px;
  font-size:85%;
}
.newslink {
  text-decoration:none;
  color:#000000;
}
.link {
  text-decoration:none;
  color:#ffffff;
}
#footer {
  background-color:#980000;
  color:#ffffff;
  font-size:77%;
  padding:10px 0px 10px 10px;
}
```

Save this file as responseXML.css. All of the files used in this example, including the YUI files, will need to be added to the content-serving directory of your web server in order to function correctly.

Finally, in order to actually get the XML file in the first place, we'll need a little PHP. As I mentioned before, this will act as a proxy to which our application makes its request. In a blank page in your text editor, add the following PHP code:

```php
<?php
define ('HOSTNAME', ' newsonline_uk_edition/world/rss.xml');
$session = curl_init();
curl_setopt($session, CURLOPT_URL, HOSTNAME);
curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
$xml = curl_exec($session);
curl_close($session);
if (empty($xml)) {
  print "Error extracting RSS file!";
} else {
  header("Content-Type: text/xml");
  echo $xml;
}
?>
```

Save this as myproxy.php. The newsreader we've created takes just the first (latest) five news items and displays them in our reader. This is all we need to do to expose the usefulness of the Connection utility, but the example could easily be extended to scroll using the Animation and DragDrop utilities.
Everything is now in place. If you run the HTML file in your browser (not by double-clicking it, but by actually requesting it properly from the web server) you should see the newsreader as in the following screenshot:

Useful Connection Methods

As you saw in the last example, the Connection Manager interface provides some useful methods for working with the data returned by an AJAX request. Let's take a moment to review some of the other methods provided by the class that are available to us.

The .abort() method can be used to cancel a transaction that is in progress, and must be used prior to the readyState property (a standard AJAX property as opposed to a YUI-flavoured one) being set to 4 (complete).

The .asyncRequest() method is a key link in Connection's chain and acts like a kind of pseudo-constructor used to initiate requests. We already looked at this method in detail, so I'll leave it there as far as this method is concerned.

A public method used to determine whether the transaction has finished being processed is the .isCallInProgress() method. It simply returns a boolean indicating whether it is true or not and takes just a reference to the connection object.

Finally, .setForm() provides a convenient means of obtaining all of the data entered into a form and submitting it to the server via a GET or a POST request. The first argument is required and is a reference to the form itself; the remaining two arguments are both optional and are used when uploading files. They are both boolean: the first is set to true to enable file upload, while the third is set to true to allow SSL uploads in IE.

A Login System Fronted by YUI

In our first Connection example we looked at a simple GET request to obtain a remote XML file provided by the BBC.
In this example, let's look at the sending, or POSTing, of data as well. For the example we'll need at least some data in the table, so add in some fake data that can be entered into the login form once we've finished coding it. A couple of records like those shown in the figure below should suffice.

The markup used here should be more than familiar to most of you. Save the file as login.css and view it in your browser. The code we have so far should set the stage for the rest of the example and appear as shown in the figure below.

Now let's move on to the real nuts and bolts of this example—the JavaScript that will work with the Connection Manager utility to produce the desired results. Directly before the closing </body> tag, add the following <script>:

```javascript
<script type="text/javascript">
  //create namespace object
  YAHOO.namespace("yuibook.login");

  //define the submitForm function
  YAHOO.yuibook.login.submitForm = function() {
    ...
    YAHOO.util.Connect.setForm(form);

    //define a transaction for a GET request
    var transaction = YAHOO.util.Connect.asyncRequest("GET", "login.php", callback);
  }

  //execute submitForm when login button clicked
  YAHOO.util.Event.addListener("login", "click", YAHOO.yuibook.login.submitForm);
</script>
```

Save the PHP file as login.php in the same directory as the web page and everything should be good to go. Try it out and reflect upon the ease with which our task has been completed. Upon entering the username and password of one of our registered users, you should see something similar to the figure below.

So that covers GET requests, but what about POST requests? As I mentioned before, the .setForm() method can be put to equally good use with POST requests as well. To illustrate this, we can add some additional code which will let unregistered visitors sign up.
Add the following code to the script block:

//define the registerForm function
YAHOO.yuibook.login.registerForm = function() {
  // ...
  YAHOO.util.Connect.setForm(form);
  //define transaction to send stuff to server
  var transaction = YAHOO.util.Connect.asyncRequest("POST", "register.php", callback);
}

//execute registerForm when join button clicked
YAHOO.util.Event.addListener("join", "click", YAHOO.yuibook.login.registerForm);

This time we initiate a POST request, passing the name of the PHP file in the second argument of the .asyncRequest() method. All we need now is another PHP file to process the registration request; something simple should suffice. If you register a new user now and then take a look at your database with the MySQL Command Line Client, you should see the new data appear in your database.

The Connection Manager can also be used with a PHP (or other form of) proxy for negotiating cross-domain requests. If you have read this article you may be interested to view:
https://www.packtpub.com/books/content/ajax-and-connection-manager-yahoo-user-interface-yui
10 common beginner mistakes in Python

Python is an easy language to learn, and there are many self-taught programmers who don't really follow the best practices from the start. During development, and when reviewing the solutions of our users on CheckiO, we run into these mistakes a lot. For this reason, this article highlights the most common beginner mistakes, so that nobody repeats them.

Incorrect indentation, tabs and spaces

Never use tabs, only spaces, and use 4 spaces per indentation level. Follow a consistent indentation pattern, because many Python features rely on indentation. If your code is executing a task when it shouldn't, review the indentation you're using.

Using a Mutable Value as a Default Value

def foo(numbers=[]):
    numbers.append(9)
    return numbers

The function takes a list and appends 9 to it.

>>> foo()
[9]
>>> foo(numbers=[1,2])
[1, 2, 9]
>>> foo(numbers=[1,2,3])
[1, 2, 3, 9]

And here is what happens when calling foo without arguments repeatedly:

>>> foo() # first time, like before
[9]
>>> foo() # second time
[9, 9]
>>> foo() # third time...
[9, 9, 9]
>>> foo() # WHAT IS THIS BLACK MAGIC?!
[9, 9, 9, 9]

In Python, default argument values are evaluated when the function is defined, not each time it is called.

Solution

def foo(numbers=None):
    if numbers is None:
        numbers = []
    numbers.append(9)
    return numbers

Some other default values do work as expected:

def foo(count=0):
    count += 1
    return count

>>> foo()
1
>>> foo()
1
>>> foo(2)
3
>>> foo(3)
4
>>> foo()
1

The reason is not in the default value assignment, but in the value itself. An integer is an immutable type, so count += 1 doesn't change the original default value of count.

Now try calling a function as the default value:

import time

def get_now(now=time.time()):
    return now

Even though time.time() returns a new value on every call, get_now keeps returning the same time, because the default was evaluated just once, at definition time.
>>> get_now()
1373121487.91
>>> get_now()
1373121487.91
>>> get_now()
1373121487.91

Handle specific exceptions

If you know which exception the code is going to throw, catch that specific exception. Don't write code like this:

try:
    # do something here
except Exception as e:
    # handle exception

When you call the get() method on a Django model and the object is not present, Django raises an ObjectDoesNotExist exception. You need to catch that specific exception, not Exception.

Write a lot of comments and docstrings

As we've already discussed in one of our articles, if you can't make the code simpler and something is not obvious, you need to write a comment. Writing docstrings for functions and methods is also a good habit.

def create_user(name, height=None, weight=None):
    '''Create a user entry in database.
    Returns database object created for user.'''
    # Logic to create entry into db and return db object

Scoping

Python understands global variables if we access them within a function:

bar = 42
def foo():
    return bar

Here we're using a global variable called bar inside foo, and it works as it should:

>>> foo()
42

It also works if we mutate a global object:

bar = [42]
def foo():
    bar.append(0)

>>> foo()
>>> bar
[42, 0]

But what if we assign to bar?

>>> bar = 42
>>> def foo():
...     bar = 0
...
>>> foo()
>>> bar
42

Here the line bar = 0, instead of changing the global bar, created a new local variable also called bar and set its value to 0.

Let's look at a less common version of this mistake to understand when and how Python decides to treat a variable as global or local. We'll add an assignment to bar after we print it:

bar = 42
def foo():
    print(bar)
    bar = 0

This shouldn't break our code. But it does.
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'bar' referenced before assignment

There are two parts to this misunderstanding: 1) Python executes statement-by-statement, NOT line-by-line; 2) Python statically gathers information about the local scope of a function when the def statement is executed. On reaching bar = 0, it adds bar to the list of local variables for foo.

Whether the assignment can actually be reached doesn't matter:

bar = 42
def foo(baz):
    if baz > 0:
        return bar
    bar = 0

Python can't know at definition time whether the local bar will ever be assigned, so this still fails. Even unreachable code has the same effect:

bar = 42
def foo():
    return bar
    if False:
        bar = 0

Running foo we get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'bar' referenced before assignment

Python still declares bar as statically local.

Solutions:

1. Use the global keyword. This is very often a bad idea; if you can avoid using global, it is better to do so.

>>> bar = 42
>>> def foo():
...     global bar
...     print(bar)
...     bar = 0
...
>>> foo()
42
>>> bar
0

2. Don't use a global that isn't constant. If you want to keep a value that is used throughout your code, define it as a class attribute of a new class:

>>> class Baz(object):
...     bar = 42
...
>>> def foo():
...     print(Baz.bar)  # global
...     bar = 0         # local
...     Baz.bar = 8     # global
...     print(bar)
...
>>> foo()
42
0
>>> print(Baz.bar)
8

"How To Use Variables in Python 3" is a very detailed explanation of this topic by Lisa Tagliaferri.

Edge cases first

Pseudo code 1:

if request is post:
    if parameter 'param1' is specified:
        if the user can perform this action:
            # Execute core business logic here
        else:
            return saying 'user cannot perform this action'
    else:
        return saying 'parameter param1 is mandatory'
else:
    return saying 'any other method apart from POST method is disallowed'

Pseudo code 2:

if request is not post:
    return saying 'any other method apart from POST method is disallowed'
if parameter 'param1' is not specified:
    return saying 'parameter param1 is mandatory'
if the user cannot perform this action:
    return saying 'user cannot perform this action'
# Execute core business logic here

Both pseudo codes handle a request, but the second one is much easier to read: by returning early on the edge cases, it avoids deep nesting and the problems that come with it.

Copying

>>> a = [2, 4, 8]
>>> b = a
>>> a[1] = 10
>>> b
[2, 10, 8]

Here a and b point at the same object, so changing a also changes b. You can avoid this by copying the list, for example with a slice:

>>> a = [2, 4, 8]
>>> b = a[:]
>>> a[1] = 10
>>> b
[2, 4, 8]
>>> a
[2, 10, 8]

Or with the copy method:

>>> a = [2, 4, 8]
>>> b = a.copy()
>>> a[1] = 10
>>> b
[2, 4, 8]
>>> a
[2, 10, 8]

In both cases b is a copy of a, so changing a doesn't change b. You can still get tripped up by nested lists, though:

>>> a = [[1,2], [8,9]]
>>> b = a.copy()
>>> a[0][0] = 5
>>> a
[[5, 2], [8, 9]]
>>> b
[[5, 2], [8, 9]]

The same happens with list multiplication:

>>> a = [[1,2]] * 3
>>> a
[[1, 2], [1, 2], [1, 2]]
>>> a[0][0] = 8
>>> a
[[8, 2], [8, 2], [8, 2]]

To avoid this once and for all, use deepcopy:

>>> from copy import deepcopy
>>> a = [[5, 2], [8, 9]]
>>> b = a.copy()
>>> c = deepcopy(a)
>>> a[0][0] = 10
>>> a
[[10, 2], [8, 9]]
>>> b
[[10, 2], [8, 9]]
>>> c
[[5, 2], [8, 9]]

Creating off-by-one errors in loops

Remember that a range doesn't include the last number you specify.
So if you specify range(1, 11), you actually get the values from 1 to 10:

>>> a = list(range(1, 11))
>>> a
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> a[0]
1
>>> a[0:5]
[1, 2, 3, 4, 5]

Wrong capitalization

If you can't access a value you expected, check the capitalization. Python is case sensitive, so MyVar is different from myvar and MYVAR.

>>> MyVar = 1
>>> MYVAR
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'MYVAR' is not defined

Using class variables incorrectly

>>> class A(object):
...     x = 1
...
>>> class B(A):
...     pass
...
>>> class C(A):
...     pass
...
>>> print(A.x, B.x, C.x)
1 1 1

This example makes sense so far.

>>> B.x = 2
>>> print(A.x, B.x, C.x)
1 2 1

And again:

>>> A.x = 3
>>> print(A.x, B.x, C.x)
3 2 3

Why did C.x change when we only changed A.x? Class variables in Python are internally handled as dictionaries and follow the Method Resolution Order (MRO). Since x is not found in class C, it is looked up in the base classes (only A in the above example, although Python supports multiple inheritance). In other words, C doesn't have its own x attribute, independent of A, so references to C.x are in fact references to A.x. This causes problems unless it's handled properly.

Conclusion

As you can see, there are a lot of things that can go wrong, especially if you're new to Python. But by keeping these mistakes in mind you can avoid them pretty easily. You just need a little practice and it'll become a habit. Nevertheless, this is by no means the whole list, and there might be things you'd want to add. So, what kind of mistakes have you encountered besides those mentioned above?

Related articles

- /r/learnpython specific FAQ
- "Python Binding a Name to an Object" by John Philip J
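The "Handle specific exceptions" section above only shows a skeleton. Here is a minimal runnable sketch of the same idea; the function and the config strings are hypothetical, and json.JSONDecodeError stands in for whatever specific exception your own code can actually recover from:

```python
import json

def load_config(text):
    """Parse a JSON config string, falling back to defaults on bad input."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        # Catch only the failure we know how to recover from,
        # instead of a blanket "except Exception".
        print(f"Invalid config ({e}); using defaults")
        return {}

print(load_config('{"debug": true}'))  # {'debug': True}
print(load_config('not json'))         # prints a warning, then {}
```

A bare except Exception here would also have silently swallowed unrelated bugs, such as a TypeError from passing the wrong argument type; catching only the specific exception lets those surface.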
https://py.checkio.org/blog/10-common-beginner-mistakes-in-python/
Update 2012-01-06: Warning: this post is what you get if you don't know about the function __import__ and try to reinvent it.

Imagine I have two different Python files that are drop-in replacements for each other, perhaps for some kind of plugin system:

$ cat a/c.py
def d():
    print "hello world a"

$ cat b/c.py
def d():
    print "hello world b"

And further imagine I want to use both of them in the same program. I might write:

import c
c.d()
import c
c.d()

But that has no chance of working: Python doesn't know where to find 'c'. So I need to tell Python where to look for my code by changing sys.path:

sys.path.append("a")
import c
c.d()
sys.path.append("b")
import c
c.d()

This will work, ish. It will print "hello world a" twice. Part of the problem is that the path "a" comes before "b" in sys.path. I really only want this sys.path change to last long enough for my import to work. So maybe I should do:

class sys_path_containing(object):
    def __init__(self, fname):
        self.fname = fname
    def __enter__(self):
        self.old_path = sys.path
        sys.path = sys.path[:]
        sys.path.append(self.fname)
    def __exit__(self, type, value, traceback):
        sys.path = self.old_path

with sys_path_containing("a"):
    import c
    c.d()
with sys_path_containing("b"):
    import c
    c.d()

This is closer to working. I define a context manager so that code run inside sys_path_containing sees a different sys.path. My first "import c" will see a sys.path like ["foo", "bar", "a"] and my second import will see ["foo", "bar", "b"]. Each is isolated from the other and from other system changes. Unfortunately, it still won't work, because Python remembers what it has imported before and doesn't do it again, so this will still only print "hello world a" twice. Switching the second "import c" to a "reload(c)" does fix this problem, but at the expense of you already having to know whether something is loaded. Switching to "del sys.modules['c']" and using __import__ would work, though.
Let's make that change and put it all into a context manager that does most of the work for us:

import os
import sys

class imported(object):
    def __init__(self, fname):
        self.fname = os.path.abspath(fname)
    def __enter__(self):
        if not os.path.exists(self.fname):
            raise ImportError("Missing file %s" % self.fname)
        self.old_path = sys.path
        sys.path = sys.path[:]
        file_dir, file_name = os.path.split(self.fname)
        sys.path.append(file_dir)
        file_base, file_ext = os.path.splitext(file_name)
        module = __import__(file_base)
        del sys.modules[file_base]
        return module
    def __exit__(self, type, value, traceback):
        sys.path = self.old_path

with imported("a/c.py") as c:
    c.d()
with imported("b/c.py") as c:
    c.d()

This will print "hello world a" and then "hello world b". Yay!
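As the 2012 update at the top hints, you rarely need to hand-roll this today. Here is a sketch of the modern approach (Python 3.5+), using importlib to import a file by path without touching sys.path at all; the helper name is mine, not from the original post:

```python
import importlib.util
import os

def import_by_path(fname):
    """Load a module from an explicit file path, without modifying sys.path."""
    fname = os.path.abspath(fname)
    name = os.path.splitext(os.path.basename(fname))[0]
    spec = importlib.util.spec_from_file_location(name, fname)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# With the a/c.py and b/c.py files from the post:
# c_a = import_by_path("a/c.py")
# c_b = import_by_path("b/c.py")
# c_a.d()  # hello world a
# c_b.d()  # hello world b
```

Unlike __import__, this doesn't register the module in sys.modules, so the two copies of c.py never collide with each other.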
https://www.jefftk.com/p/importing-python-by-path
Pass NULL value as function parameter in C++

In this tutorial, we are going to discuss how to pass a NULL value as a function parameter in C++. Before starting, let's first recall what NULL is.

- NULL in C++ by default has the value zero (0). Put differently, NULL is a macro that yields a null pointer, i.e. no address for that variable.
- If you look closely at that definition, you will notice that NULL acts both as a value and as a pointer.
- You can treat NULL as a value when you assign it directly to a variable at the time of definition. For example:

#include <iostream>

int main(void) {
    int var = NULL;
    std::cout << "The value of var is " << var;
    return 0;
}

This will give you the output:

The value of var is 0

- This will not generate any error, but it will give you a warning saying:

warning: converting to non-pointer type 'int' from NULL [-Wconversion-null]

Pass NULL as a parameter

- The situation is similar when you pass NULL as a parameter. It will not generate any error, but a warning pops up.
- The code snippet below makes this clearer:

#include <bits/stdc++.h>
using namespace std;

void nULL_fun(int a) {
    cout << "Value of a: " << a;
}

int main() {
    nULL_fun(NULL);
    return 0;
}

- This will give you a warning saying:

warning: passing NULL to non-pointer argument 1 of 'void nULL_fun(int)'

- But you will still get the output:

Value of a: 0

- While working on big projects you should not ignore such warnings. To fix this one, declare the formal argument as a pointer type.
- With that change, the above program can be written as:

#include <bits/stdc++.h>
using namespace std;

void nULL_fun(int *a) {
    cout << "Value of a: " << a;
}

int main() {
    nULL_fun(NULL);
    return 0;
}

- The output (without the warning message) will be:

Value of a: 0

- This 0 is the null pointer value being printed: the address NULL stands for, which is zero.
Similarly, in the first program you can replace "int var" with "int *var" to remove the warning message. (Note: ambiguity arises if there are two overloaded functions with the same name taking arguments of type "int" and "int *" respectively. This is discussed in my tutorial "Difference between NULL and nullptr".)
https://www.codespeedy.com/pass-null-value-as-function-parameter-in-cpp/
Profiling JavaScript Code in Node.js

Content expert: Denis Pravdin

Ingredients

This section lists the hardware and software tools used for the performance analysis scenario.

- Application: sample.js. The application is used as a demo and not available for download.
- JavaScript environment: Node.js version 8.0.0 with Chrome* V8 version 5.8.283.41
- Operating system: Windows* 10

Enable VTune Amplifier Support in Node.js

Download the Node.js sources (nightly build). Run the vcbuild.bat script from the root node-v8.0.0 folder:

vcbuild.bat enable-vtune

This script builds Node.js with VTune Amplifier support for profiling JavaScript code.

Note: On Linux* systems, avoid using the enable-vtune flag with the fully-static configure flag. This combination is not compatible and causes the Node.js environment to crash.

If you use the Microsoft Visual Studio* 2015 IDE or higher, make sure to add #define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS to the node-v8.0.0-win\deps\v8\src\third_party\vtune\vtune-jit.cc file:

#include <string.h>
#ifdef WIN32
#define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS
#include <hash_map>
using namespace std;
#else
...

Profile JavaScript Code Running in Node.js

This recipe uses a sample JavaScript application:

function say(word) {
    console.log("Calculating ...");
    var res = 0;
    for (var i = 0; i < 20000; i++) {
        for (var j = 0; j < 20000; j++) {
            res = i * j / 2;
        }
    }
    console.log("Done.");
    console.log(word);
}

function execute(someFunction, value) {
    someFunction(value);
}

execute(say, "Hello from Node.js!");

To profile this application with the VTune Amplifier:

1. Launch the VTune Amplifier: amplxe-gui.exe
2. Click the New Project icon on the toolbar to create a new project.
3. In the Analysis Target tab, specify node.exe in the Application field and sample.js in the Application parameters field.
4. Switch to the Analysis Type tab, select the Advanced Hotspots analysis type from the left pane, and click Start to run the analysis.
Note: Advanced Hotspots analysis was integrated into the generic Hotspots analysis starting with VTune Amplifier 2019, and is available via the Hardware Event-based Sampling collection mode.

When the analysis is complete, the VTune Amplifier opens the result in the default Hotspots viewpoint. Use the Bottom-up window to explore how the samples are distributed through the JavaScript functions. Double-click the function that took the most CPU time to view its source code and identify the hottest code line.
https://software.intel.com/en-us/vtune-amplifier-cookbook-profiling-javascript-code-in-node-js
[Java] Program 10x faster with parallelStream

Conclusion

Java programs can be 10x faster with parallelStream().

Environment

- CPU: Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz, 2208 MHz, 6 cores, 12 logical processors
- Memory: 16GB
- Java: Amazon Corretto-11.0.3.7.1

Verification contents

Verification code

Main.java

import java.util.List;
import java.util.ArrayList;

public class Main {
    public static void main(String[] args) {
        List<Integer> n1 = new ArrayList<>();
        List<Integer> n2 = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            n1.add(i);
            n2.add(i);
        }

        long s1 = System.currentTimeMillis();
        n1.stream().forEach(i -> { try { Thread.sleep(10); } catch (Exception e) { e.printStackTrace(); } });
        long t1 = System.currentTimeMillis();

        long s2 = System.currentTimeMillis();
        n2.parallelStream().forEach(i -> { try { Thread.sleep(10); } catch (Exception e) { e.printStackTrace(); } });
        long t2 = System.currentTimeMillis();

        System.out.printf("n1 exec time: %d [ms]\n", t1 - s1);
        System.out.printf("n2 exec time: %d [ms]\n", t2 - s2);
    }
}

There isn't much to explain here: n1 uses a sequential stream and n2 uses a parallel stream, and each calls Thread.sleep(10) 500 times. The try-catch is needed to handle the checked exception thrown by Thread.sleep(). The value of i in forEach is simply discarded.

Execution result

n1 exec time: 5298 [ms]
n2 exec time: 505 [ms]

n2 is more than 10 times faster than n1! For n1, the expected result of 10 ms × 500 calls shows up almost exactly.

Supplement

According to the official documentation, the default number of parallel threads is equal to the number of processors available on the computer: "For applications that require separate or custom pools, a ForkJoinPool may be constructed with a given target parallelism level; by default, equal to the number of available processors."
It seems that the number of logical processors, rather than the number of physical cores, is applied as the default value, since the execution time for the parallel stream is 1/10 or less of that for the sequential stream.

"It's 10 times faster" is, of course, an invitation for objections: the speedup depends entirely on the hardware. But since most modern PCs have around eight or more logical processors, please forgive the generalization.

Note: As a rule, avoid updating or deleting objects shared between threads from inside a parallel stream (for example, calling add on a shared List).
https://linuxtut.com/java-program-10x-faster-with-parallelstream-d6191/
Node is mostly JavaScript, except for a few differences which I'll highlight in this essay. The code is in the You Don't Know Node GitHub repository, under the code folder.

Why care about Node? Node is JavaScript, and JavaScript is almost everywhere! Wouldn't the world be a better place if more developers mastered Node? Better apps equals better life!

This is a kitchen sink of, subjectively, the most interesting core features. The key takeaways of this essay are:

- Event loop: Brush up on the core concept which enables non-blocking I/O
- Global and process: How to access more info
- Event emitters: Crash course in the event-based pattern
- Streams and buffers: Effective way to work with data
- Clusters: Fork processes like a pro
- Handling async errors: AsyncWrap, Domain and uncaughtException
- C++ addons: Contributing to the core and writing your own C++ addons

Event Loop

We can start with the event loop, which is at the core of Node.

Node.js Non-Blocking I/O

The event loop allows processing of other tasks while I/O calls are in progress. Think Nginx vs. Apache. It makes Node very fast and efficient, because blocking I/O is expensive!

Take a look at this basic example of a delayed println call in Java:

System.out.println("Step: 1");
System.out.println("Step: 2");
Thread.sleep(1000);
System.out.println("Step: 3");

It's comparable (but not really) to this Node code:

console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
}, 1000)
console.log('Step: 2')

It's not quite the same, though. You need to start thinking in the asynchronous way.
The output of the Node script is 1, 2, 3, but if we had more statements after "Step 2", they would have been executed before the callback of setTimeout. Look at this snippet:

console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
  console.log('Step 5')
}, 1000);
console.log('Step: 2')
console.log('Step 4')

It produces 1, 2, 4, 3, 5. That's because setTimeout puts its callback into a future cycle of the event loop. Think of the event loop as an ever-spinning loop, like a for or a while loop. It stops only if there is nothing to execute either now or in the future.

Blocking I/O: Multi-Threading Java

The event loop allows systems to be more effective, because you can do more things while you wait for your expensive input/output task to finish.

Non-Blocking I/O: Node.js

This is in contrast to today's more common concurrency model, where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node are free from worries of dead-locking the process: there are no locks.

A quick side note: it's still possible to write blocking code in Node.js. Consider this simple but blocking Node.js code:

console.log('Step: 1')
var start = Date.now()
for (var i = 1; i < 1000000000; i++) {
  // This will take 100-1000ms depending on your machine
}
var end = Date.now()
console.log('Step: 2')
console.log(end - start)

Of course, most of the time we don't have empty loops in our code. Spotting synchronous, and thus blocking, code might be harder when using other people's modules. For example, the core fs (file system) module comes with two sets of methods. Each pair performs the same functions, but in a different way.
There are blocking fs Node.js methods, which have the word Sync in their names:

var fs = require('fs')

var contents = fs.readFileSync('accounts.txt', 'utf8')
console.log(contents)
console.log('Hello Ruby\n')

var contents = fs.readFileSync('ips.txt', 'utf8')
console.log(contents)
console.log('Hello Node!')

The results are very predictable, even to people new to Node/JavaScript:

data1->Hello Ruby->data2->Hello Node!

Things change when we switch to asynchronous methods. This is non-blocking Node.js code:

var fs = require('fs');

var contents = fs.readFile('accounts.txt', 'utf8', function(err, contents) {
  console.log(contents);
});
console.log('Hello Python\n');

var contents = fs.readFile('ips.txt', 'utf8', function(err, contents) {
  console.log(contents);
});
console.log("Hello Node!");

It prints the file contents last, because reading takes some time and the results arrive in the callbacks. The event loop gets to them when the file reading is over:

Hello Python->Hello Node->data1->data2

So the event loop and non-blocking I/O are very powerful, but you need to code asynchronously, which is not how most of us learned to code in school.

Global

When switching to Node.js from browser JavaScript or another programming language, these questions arise:

- Where to store passwords?
- How to create global variables (no window in Node)?
- How to access CLI input, OS, platform, memory usage, versions, etc.?

There's a global object. It has certain properties. Some of them are as follows:
global.__filename: File name and path to the currently running script where this statement is global.__dirname: Absolute path to the currently running script global.module: Object to export code making this file a module global.require(): Method to import modules, JSON files, and folders Then, we’ve got the usual suspects, methods from browser JavaScript: global.console() global.setInterval() global.setTimeout() Each of the global properties can be accessed with capitalized name GLOBAL or without the namespace at all, e.g., process instead of global.process . Process Process object has a lot of info so it deserves its own section. I’ll list only some of the properties: process.pid: Process ID of this Node instance process.versions: Various versions of Node, V8 and other components process.arch: Architecture of the system process.argv: CLI arguments process.env: Environment variables Some of the methods are as follows: process.uptime(): Get uptime process.memoryUsage(): Get memory usage process.cwd(): Get current working directory. Not to be confused with __dirnamewhich doesn’t depend on the location from which the process has been started. process.exit(): Exit current process. You can pass code like 0 or 1. process.on(): Attach an event listener, e.g., `on(‘uncaughtException’) Tough question: Who likes and understands callbacks? :raising_hand: Some people love callbacks too much so they created . If you are not familiar with this term yet, here’s an illustration: fs.readdir(source, function (err, files) {)) } }) }) } }) Callback hell is hard to read, and it’s prone to errors. How do we modularize and organize asynchronous code, besides callbacks which are not very developmentally scalable? Event Emitters To help with callback hell, or pyramid of doom, there’s Event Emitters . They allow to implement your asynchronous code with events. Simply put, event emitter is something that triggers an event to which anyone can listen. 
In Node.js, an event can be described as a string with a corresponding callback. Event emitters serve these purposes:

- Event handling in Node uses the observer pattern
- An event, or subject, keeps track of all the functions that are associated with it
- These associated functions, known as observers, are executed when the given event is triggered

To use event emitters, import the module and instantiate the object:

var events = require('events')
var emitter = new events.EventEmitter()

After that, you can attach event listeners and trigger/emit events:

emitter.on('knock', function() {
  console.log('Who\'s there?')
})

emitter.on('knock', function() {
  console.log('Go away!')
})

emitter.emit('knock')

Let's make something more useful with EventEmitter by inheriting from it. Imagine that you are tasked with implementing a class to perform monthly, weekly and daily email jobs. The class needs to be flexible enough for developers to customize the final output. In other words, whoever consumes this class needs to be able to put in some custom logic when the job is over.

The diagram below shows how we inherit from the events module to create Job, and then use a done event listener to customize the behavior of the Job class:

Node.js Event Emitters: Observer Pattern

The Job class will retain its properties, but will gain events as well. All we need is to trigger done when the process is over:

// job.js
var util = require('util')
var Job = function Job() {
  var job = this
  // ...
  job.process = function() {
    // ...
    job.emit('done', { completedOn: new Date() })
  }
}

util.inherits(Job, require('events').EventEmitter)
module.exports = Job

Now, our goal is to customize the behavior of Job at the end of the task.
Because it emits done, we can attach an event listener:

// weekly.js
var Job = require('./job.js')
var job = new Job()

job.on('done', function(details){
  console.log('Job was completed at', details.completedOn)
  job.removeAllListeners()
})

job.process()

There are more features to emitters:

- emitter.listeners(eventName): List all event listeners for a given event
- emitter.once(eventName, listener): Attach an event listener which fires just one time
- emitter.removeListener(eventName, listener): Remove an event listener

The event pattern is used all over Node, and especially in its core modules. For this reason, mastering events will give you a great bang for your time.

Streams

There are a few problems when working with large data in Node. The speed can be slow, and the buffer limit is ~1Gb. Also, how do you work with a resource that is continuous and was never designed to end? To overcome these issues, use streams.

Node streams are abstractions for continuous chunking of data. In other words, there's no need to wait for the entire resource to load. Take a look at the diagram below showing the standard buffered approach:

Node.js Buffer Approach

We have to wait for the entire buffer to load before we can start processing and/or output. Now contrast it with the next diagram, depicting streams. In it, we can process data and/or output it right away, from the first chunk:

Node.js Stream Approach

You have four types of streams in Node:

- Readable: You can read from them
- Writable: You can write to them
- Duplex: You can read and write
- Transform: You use them to transform data

Streams are virtually everywhere in Node. The most used stream implementations are:

- HTTP requests and responses
- Standard input/output
- File reads and writes

Streams inherit from the Event Emitter object to provide the observer pattern, i.e., events. Remember them? We can use this to implement streams.
Readable Stream Example

An example of a readable stream is process.stdin, the standard input stream. It contains data going into an application, typically coming from the keyboard used to start the process.

To read data from stdin, use the data and end events. The data event's callback will have chunk as its argument:

process.stdin.resume()
process.stdin.setEncoding('utf8')

process.stdin.on('data', function (chunk) {
  console.log('chunk: ', chunk)
})

process.stdin.on('end', function () {
  console.log('--- END ---')
})

So chunk is the input fed into the program. Depending on the size of the input, this event can trigger multiple times. An end event is necessary to signal the conclusion of the input stream.

Note: stdin is paused by default, and must be resumed before data can be read from it.

Readable streams also have a read() interface, which works synchronously. It returns a chunk, or null when the stream has ended. We can use this behavior and put null !== (chunk = readable.read()) into the while condition:

var readable = getReadableStreamSomehow()
readable.on('readable', () => {
  var chunk
  while (null !== (chunk = readable.read())) {
    console.log('got %d bytes of data', chunk.length)
  }
})

Ideally, we want to write asynchronous code in Node as much as possible to avoid blocking the thread. However, data chunks are small, so we don't worry about blocking the thread with the synchronous readable.read().

Writable Stream Example

An example of a writable stream is process.stdout. The standard output stream contains data going out of an application. Developers can write to the stream with the write method:

process.stdout.write('A simple message\n')

Data written to standard output is visible on the command line, just like when we use console.log().

Pipe

Node provides developers with an alternative to events: we can use the pipe() method.
This example reads from a file, compresses it with GZip, and writes the compressed data to a file:

var r = fs.createReadStream('file.txt')
var z = zlib.createGzip()
var w = fs.createWriteStream('file.txt.gz')
r.pipe(z).pipe(w)

Readable.pipe() takes a writable stream and returns the destination, therefore we can chain pipe() methods one after another. So you have a choice between events and pipes when you use streams.

HTTP Streams

Most of us use Node to build web apps, either traditional (think server) or RESTful API (think client). So what about an HTTP request? Can we stream it? The answer is a resounding yes.

Request and response are readable and writable streams and they inherit from event emitters. We can attach a data event listener. In its callback, we'll receive chunk, and we can transform it right away without waiting for the entire response. In this example, I'm concatenating the body and parsing it in the callback of the end event:

const http = require('http')
var server = http.createServer( (req, res) => {
  var body = ''
  req.setEncoding('utf8')
  req.on('data', (chunk) => {
    body += chunk
  })
  req.on('end', () => {
    var data = JSON.parse(body)
    res.write(typeof data)
    res.end()
  })
})

server.listen(1337)

Note: ()=>{} is ES6 syntax for fat arrow functions, while const is a new operator. If you're not familiar with ES6/ES2015 features and syntax yet, refer to the article, Top 10 ES6 Features Every Busy JavaScript Developer Must Know.

Now let's make our server a bit closer to a real-life example by using Express.js. In this next example, I have a huge image (~8Mb) and two sets of Express routes: /stream and /non-stream.
server-stream.js:

app.get('/non-stream', function(req, res) {
  var file = fs.readFile(largeImagePath, function(error, data){
    res.end(data)
  })
})

app.get('/stream', function(req, res) {
  var stream = fs.createReadStream(largeImagePath)
  stream.pipe(res)
})

I also have an alternative implementation with events in /stream2 and a synchronous implementation in /non-stream2. They do the same thing when it comes to streaming or non-streaming, but with a different syntax and style. The synchronous method in this case is more performant, because we are only sending one request, not concurrent requests.

To launch the example, run in your terminal:

$ node server-stream

Then open the /stream and /non-stream routes in Chrome. The Network tab in DevTools will show you headers. Compare X-Response-Time. In my case, it was an order of magnitude lower for /stream and /stream2: 300ms vs. 3–5s.

Your result will vary, but the idea is that with streams, users/clients will start getting data earlier. Node streams are really powerful! There are some good stream resources to master them and become a go-to streams expert in your team: the Stream Handbook and stream-adventure, which you can install with npm:

$ sudo npm install -g stream-adventure
$ stream-adventure

Buffers

What data type can we use for binary data? If you remember, browser JavaScript doesn't have a binary data type, but Node does. It's called buffer. It's a global object, so we don't need to import it as a module.

To create a binary data type, use one of the following statements:

new Buffer(size)
new Buffer(array)
new Buffer(buffer)
new Buffer(str[, encoding])

The official Buffer docs list all the methods and encodings. The most popular encoding is utf8. A typical buffer will look like some gibberish, so we must convert it to a string with toString() to have a human-readable format.
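For example, a buffer can also be created directly from a string (a small sketch; note that the new Buffer() constructor used throughout this essay was later deprecated in favor of Buffer.from()):

```javascript
// create a buffer from a string, using the utf8 encoding
var buf = new Buffer('abc', 'utf8')

console.log(buf)            // <Buffer 61 62 63>
console.log(buf.toString()) // abc
```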
The for loop will create a buffer with an alphabet:

let buf = new Buffer(26)
for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97 // 97 is ASCII a
}

The buffer will look like an array of numbers if we don't convert it to a string:

console.log(buf) // <Buffer 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 78 79 7a>

And we can use toString to convert the buffer to a string.

buf.toString('utf8') // outputs: abcdefghijklmnopqrstuvwxyz
buf.toString('ascii') // outputs: abcdefghijklmnopqrstuvwxyz

The method takes a starting position and an end position if we need just a substring:

buf.toString('ascii', 0, 5) // outputs: abcde
buf.toString('utf8', 0, 5) // outputs: abcde
buf.toString(undefined, 0, 5) // encoding defaults to 'utf8', outputs abcde

Remember fs? By default the data value is a buffer too:

fs.readFile('/etc/passwd', function (err, data) {
  if (err) return console.error(err)
  console.log(data)
});

data is a buffer when working with files.

Clusters

You might often hear an argument from Node skeptics that it's single-threaded, therefore it won't scale. There's a core module, cluster (meaning you don't need to install it; it's part of the platform), which allows you to utilize all the CPU power of each machine. This will allow you to scale Node programs vertically.

The code is very easy. We need to import the module, create one master and multiple workers. Typically we create as many processes as the number of CPUs we have. It's not a rule set in stone. You can have as many new processes as you want, but at a certain point the law of diminishing returns kicks in and you won't get any performance improvement.

The code for master and worker is in the same file. The worker can listen on the same port and send a message (via events) to the master. The master can listen to the events and restart clusters as needed. The way to write code for the master is to check cluster.isMaster, and for the worker it is cluster.isWorker.
Most of the server code will reside in the worker (isWorker).

// cluster.js
var cluster = require('cluster')
var numCPUs = require('os').cpus().length

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork()
  }
} else if (cluster.isWorker) {
  // your server code
}

In the cluster.js example, my server outputs process IDs, so you see that different workers handle different requests. It's like a load balancer, but it's not a true load balancer because the loads won't be distributed evenly. You might see way more requests falling on just one process (the PID will be the same).

To see different workers serving different requests, use loadtest, which is a Node-based stress (or load) testing tool:

- Install loadtest with npm: $ npm install -g loadtest
- Run code/cluster.js with node ($ node cluster.js); leave the server running
- Run load testing with: $ loadtest -t 20 -c 10 in a new window
- Analyze results both on the server terminal and the loadtest terminal
- Press control+c on the server terminal when the testing is over. You should see different PIDs. Write down the number of requests served.

The -t 20 -c 10 in the loadtest command means there will be 10 concurrent requests and the maximum time is 20 seconds.

The cluster module is part of the core and that's pretty much its only advantage. When you are ready to deploy to production, you might want to use a more advanced process manager:

- strong-cluster-control ($ slc run): good choice
- pm2: good choice

pm2

Let's cover the pm2 tool, which is one of the best ways to scale your Node application vertically, as well as to get some production-level performance and features. In a nutshell, pm2 has these advantages:

- Load-balancer and other features
- 0s reload down-time, i.e., forever alive
- Good test coverage

You can find the pm2 docs on the pm2 website and GitHub. Take a look at this Express server (server.js) as the pm2 example.
There’s no boilerplate code isMaster() which is good because you don’t need to modify your source code like we did with cluster . All we do in this server is log pid and keep stats on them. var express = require('express') var port = 3000 global.stats = {} console.log('worker (%s) is now listening to', process.pid, port) var app = express() app.get('*', function(req, res) { if (!global.stats[process.pid]) global.stats[process.pid] = 1 else global.stats[process.pid] += 1; var l ='cluser ' + process.pid + ' responded /n'; console.log(l, global.stats) res.status(200).send(l) }) app.listen(port) To launch this pm2 example, use pm2 start server.js . You can pass the number of the instances/processes to spawn ( -i 0 means as many as number of CPUs which is 4 in my case) and the option to log to a file ( -l log.txt ): $ pm2 start server.js -i 0 -l ./log.txt Another nice thing about pm2 is that it goes into foreground. To see what’s currently running, execute: $ pm2 list Then, utilize loadtest as we did in the core cluster example. In a new window, run these commands: $ loadtest -t 20 -c 10 Your results might vary, but I get more or less evenly distributed results in log.txt : cluser 67415 responded { '67415': 4078 } cluser 67430 responded { '67430': 4155 } cluser 67404 responded { '67404': 4075 } cluser 67403 responded { '67403': 4054 } Spawn vs Fork vs Exec Since we’ve used fork() in the cluter.js example to create new instances of Node servers, it’s worth mentioning that there are three ways to launch an external process from within the Node.js one. They are spawn() , fork() and exec() , and all three of them come from the core child_process module. 
The differences can be summed up in the following list:

- require('child_process').spawn(): Used for large data, supports streams, can be used with any command, and doesn't create a new V8 instance
- require('child_process').fork(): Creates a new V8 instance, instantiates multiple workers, and works only with Node.js scripts (node command)
- require('child_process').exec(): Uses a buffer which makes it unsuitable for large data or streaming, works in an async manner to get you all the data at once in the callback, and can be used with any command, not just node

Let's take a look at this spawn example in which we execute node program.js, but the command can start bash, Python, Ruby or any other commands or scripts. If you need to pass additional arguments to the command, simply put them in the array which is a parameter to spawn(). The data comes as a stream in the data event handler:

var childProcess = require('child_process')

var p = childProcess.spawn('node', ['program.js'])
p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})

From the perspective of the node program.js command, data is its standard output; i.e., the terminal output from node program.js.

The syntax for fork() is strikingly similar to the spawn() method with one exception: there is no command, because fork() assumes all processes are Node.js:

var childProcess = require('child_process')

var p = childProcess.fork('program.js', [], { silent: true }) // silent pipes the child's stdout back to the parent
p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})

The last item on our agenda in this section is exec(). It's slightly different, because it's not using the event pattern, but a single callback.
In it, you have error, standard output and standard error parameters:

var childProcess = require('child_process')

var p = childProcess.exec('node program.js', function (error, stdout, stderr) {
  if (error) console.log(error.code)
})

The difference between error and stderr is that the former comes from exec() (e.g., permission denied to program.js), while the latter comes from the error output of the command you're running (e.g., database connection failed within program.js).

Handling Async Errors

Speaking of errors, in Node.js and almost all programming languages, we have try/catch which we use to handle errors. For synchronous errors, try/catch works fine.

try {
  throw new Error('Fail!')
} catch (e) {
  console.log('Custom Error: ' + e.message)
}

Modules and functions throw errors which we catch later. This works in Java and in synchronous Node. However, the best Node.js practice is to write asynchronous code so we don't block the thread.

The event loop is the mechanism which enables the system to delegate and schedule code which needs to be executed in the future when expensive input/output tasks are finished. The problem arises with asynchronous errors, because the system loses the context of the error.

For example, setTimeout() works asynchronously by scheduling the callback in the future. It's similar to an asynchronous function which makes an HTTP request, reads from a database or writes to a file:

try {
  setTimeout(function () {
    throw new Error('Fail!')
  }, Math.round(Math.random()*100))
} catch (e) {
  console.log('Custom Error: ' + e.message)
}

There is no try/catch context when the callback is executed, and the application crashes. Of course, if you put another try/catch in the callback, it will catch the error, but that's not a good solution. Those pesky async errors are harder to handle and debug. Try/catch is not good enough for asynchronous code.

So async errors crash our apps. How do we deal with them?
:sweat_smile: You’ve already seen that there’s an error argument in most of the callbacks. Developers need to check for it and bubble it up (pass up the callback chain or output an error message to the user) in each callback: if (error) return callback(error) // or if (error) return console.error(error) Other best practices for handling async errors are as follows: - Listen to all “on error” events - Listen to uncaughtException - Use domain(soft deprecated) or AsyncWrap - Log, log, log & Trace - Notify (optional) - Exit & Restart the process on(‘error’) Listen to all on('error') events which are emitted by most of the core Node.js objects and especially http . Also, anything that inherits from or creates an instance of Express.js, LoopBack, Sails, Hapi, etc. will emit error , because these frameworks extend http . js server.on('error', function (err) { console.error(err) console.error(err) process.exit(1) }) uncaughtException Always listen to uncaughtException on the process object! uncaughtException is a very crude mechanism for exception handling. An unhandled exception means your application – and by extension Node.js itself – is in an undefined state. Blindly resuming means anything could happen. process.on('uncaughtException', function (err) { console.error('uncaughtException: ', err.message) console.error(err.stack) process.exit(1) }) or process.addListener('uncaughtException', function (err) { console.error('uncaughtException: ', err.message) console.error(err.stack) process.exit(1) Domain Domain has nothing to do with web domains that you see in the browser. domain is a Node.js core module to handle asynchronous errors by saving the context in which the asynchronous code is implemented. 
A basic usage of domain is to instantiate it and put your crashy code inside of the run() callback:

var domain = require('domain').create()
domain.on('error', function(error){
  console.log(error)
})
domain.run(function(){
  throw new Error('Failed!')
})

domain is softly deprecated in 4.0, which means the Node core team will most likely separate domain from the platform, but there are no alternatives in core as of now. Also, because domain has strong support and usage, it will live on as a separate npm module, so you can easily switch from the core to the npm module, which means domain is here to stay.

Let's make the error asynchronous by using the same setTimeout():

// domain-async.js:
var d = require('domain').create()
d.on('error', function(e) {
  console.log('Custom Error: ' + e)
})
d.run(function() {
  setTimeout(function () {
    throw new Error('Failed!')
  }, Math.round(Math.random()*100))
});

The code won't crash! We'll see a nice error message, "Custom Error", from the domain's error event handler, not your typical Node stack trace.

C++ Addons

The reason why Node became popular with hardware, IoT and robotics is its ability to play nicely with low-level C/C++ code. So how do we write C/C++ bindings for your IoT, hardware, drone, smart devices, etc.?

This is the last core feature of this essay. Most Node beginners don't even know that you can write your own C++ addons! In fact, it's so easy that we'll do it from scratch right now.

Firstly, create the hello.cc file which has some boilerplate imports in the beginning. Then, we define a method which returns a string and exports that method.
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one")); // String
}

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method); // Exporting
}

NODE_MODULE(addon, init)

}

Even if you are not an expert in C, it's easy to spot what is happening here because the syntax is not that foreign to JavaScript. The string is capital one:

args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one"));

And the exported name is hello:

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}

Once hello.cc is ready, we need to do a few more things. One of them is to create binding.gyp which has the source code file name and the name of the addon:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}

Save the binding.gyp in the same folder with hello.cc and install node-gyp:

$ npm install -g node-gyp

Once you have node-gyp, run these configuring and building commands in the same folder in which you have hello.cc and binding.gyp:

$ node-gyp configure
$ node-gyp build

The commands will create the build folder. Check for compiled .node files in build/Release/.

Lastly, create the Node.js script hello.js and include your C++ addon:

var addon = require('./build/Release/addon')
console.log(addon.hello()) // 'capital one'

To run the script and see our string capital one, simply use:

$ node hello.js

There are more C++ addons examples available online.
I know it’s been a long read, so here’s a 30-second summary: - Event loop: Mechanism behind Node’s non-blocking I/O - Global and process: Global objects and system information - Event Emitters: Observer pattern of Node.js - Streams: Large data pattern - Buffers: Binary data type - Clusters: Vertical scaling - Domain: Asynchronous error handling - C++ Addons: Low-level addons Most of Node is JavaScript except for some core features which mostly deal with system access, globals, external processes and low-level code. If you understand these concepts (feel free to save this article and re-read it a few more times), you’ll be on a quick and short path to mastering Node.js. To contact Azat, the author of this post, » You Don’t Know Node: Quick Intro to Core Features 评论 抢沙发
http://www.shellsec.com/news/8689.html
Moore's Law states that we can expect the speed and capability of our computers to increase every couple of years, and we…

Have you ever had to import a module with import ModuleName from '../../../one/two/component'? This article will remove this pain point and allow TypeScript developers to import modules efficiently and effectively for both front-end and back-end code bases! It's time to say goodbye to long import paths. In this article, I split the guides into front-end and back-end respectively due to the different bundlers. In a React code base, we often find ourselves importing shareable components far away from our JSX.Element files. …

TypeORM is an ORM that has gained a lot of attention. The native integration with TypeScript enables developers to effectively integrate TypeORM into their TypeScript projects. In this tutorial, I will be covering CRUD functions/APIs in Express.js (Node.js). No front-end is required in this tutorial, as we will be using Swagger to test our APIs.

In this tutorial, you will learn to set up an Express + TypeScript application in Node.js, mainly for backend purposes. Among the most common backend services are CRUD (Create, Read, Update, Delete) services. These services require data persistence; as such, it is essential for the Express…

More and more packages are added to the npm database every day, resulting in confusion for new developers trying to set up a simple Express app with TypeScript support. This article provides a simple-to-follow guide to setting up a well-structured Node.js Express app with TypeScript support. Express.js applications have gained some traction in recent years. However, plain JavaScript remains an issue when it comes to type-safe programming. Hence, there is a need to integrate TypeScript into Express.js. …

I would like to share my expensive lesson: my MongoDB on DigitalOcean (cloud infrastructure) was hacked and deleted, and I was blackmailed into paying bitcoin to retrieve my data.
Please read this to the end because I don't want you to experience what I had. This is how the story goes… I am a tech enthusiast and software developer. As such, I subscribed to numerous cloud infrastructure service providers to host my applications and databases. One such server that I subscribed to is DigitalOcean's Ubuntu server. …

A method to execute a piece of code only during build time. Do you want to include a segment of code that should only be executed during the build time of a Create React App (CRA) environment? For instance, version control or even the timestamp at which the CRA is bundled. In this article, I will refer to an external library: preval.macro. Install the following node module.

npm install --save-dev preval.macro

To use the module, the following structure has to be followed.

import preval from 'preval.macro'
const one = preval`module.exports = ANY_JAVASCRIPT_CODE`;

module.exports is the keyword for importing a Node.js module prior…

A React conditional import, especially useful for conditional CSS imports. Have you ever tried to conditionally import React components or CSS files? The first instinct you have is to do a conditional import using if-else. Soon… you will find yourself in trouble, because your React app will not bundle, with the error message Parsing error: 'import' and 'export' may only appear at the top level in the terminal. Then… you start googling and changing the word import to require. And you think it works. However, the require method only worked in development (or during the React bundling if you use…

Tech Geek | TypeScript Full-Stacker (MERN) | Engineer and Economist by Training | Software Engineer in Finance Industry.
https://prawira.medium.com/?source=post_internal_links---------1----------------------------
How to: Add a Region

Overview

A region is a mechanism that allows developers to expose Windows Presentation Foundation controls to the application as components that encapsulate a particular visual way of displaying views. Regions can be accessed in a decoupled way by their name, and they support dynamically adding or removing views at run time.

Showing controls through regions allows you to consistently display and hide views, independently of the visual style in which they are displayed. This allows the appearance and behavior (look and feel) and layout of your application to evolve independently of the views hosted within it.

This topic describes how to add a region to a view or the Shell window through XAML. For information about how to access a region through code, see How to: Show a View in a Shell Region and How to: Show a View in a Scoped Region.

Prerequisites

This topic assumes that the Shell class has a region manager instance registered. If you are working on a solution created with the Composite Application Library, the Shell already contains a region manager instance. For information about how to create a solution, see How to: Create a Solution Using the Composite Application Library.

Steps

The following procedure describes how to add a region to a view or the Shell window.

To add a region to a view or the Shell window

- Open the XAML view of your view or the Shell window.
- Add the Composite Application Library XML namespace to the root element of the XAML file.
- Add the control you want to use as a region.
- Add an attribute named cal:RegionManager.RegionName to the control. This attribute is an attached property that indicates that a region has to be created and associated with the control when the view is being built. As the attribute value, provide a name for the region. Developers will use this name to get a reference to the region.
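The steps above can be sketched in markup like this (a hypothetical shell window; the exact cal XML namespace depends on your Composite Application Library version):

```xml
<Window x:
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:
        xmlns:
    <!-- The attached property turns this ItemsControl into a region -->
    <ItemsControl cal:
</Window>
```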
In the UI Composition QuickStart, a region named "MainRegion" is associated with an ItemsControl control. A recommended practice consists of using constants for region names.

Outcome

After adding a region, developers will be able to get a reference to it by its name and dynamically add views to it.

Next Steps

After you add a region, a typical task to perform next is to add views to it. For details about how to do this, see How to: Show a View in a Scoped Region.
http://msdn.microsoft.com/en-us/library/ff647819.aspx
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago

#17118 closed Bug (duplicate)

Description

If there is a model with fields which are "list editable" and the number of rows for that model enables pagination, an update on those fields might not work as expected (other rows get updated).

- Let's say that the user updates fields on the first page of the change list view.
- While saving those changes, the information in the database changes (another user or another process added rows that should be shown before the rows updated by the user, shifting the rows modified by the user to a second page).
- Django will update the editable fields on the newest rows with the data entered for other rows.

I debugged the code and I think that the bug is in the BaseModelFormSet. For some reason, if it can't retrieve a model instance using the pk, it will use the index to retrieve it. However, the queryset that it's using has been limited by pagination. Therefore, if the updated row has been moved to another page, it won't be found. This belongs to BaseModelFormSet; take a look at the final if in the _construct_form method.

def _existing_object(self, pk):
    if not hasattr(self, '_object_dict'):
        self._object_dict = dict([(o.pk, o) for o in self.get_queryset()])
    return self._object_dict.get(pk)

def _construct_form(self, i, **kwargs):
    if self.is_bound and i < self.initial_form_count():
        # Import goes here instead of module-level because importing
        # django.db has side effects.
        from django.db import connections
        pk_key = "%s-%s" % (self.add_prefix(i), self.model._meta.pk.name)
        pk = self.data[pk_key]
        pk_field = self.model._meta.pk
        pk = pk_field.get_db_prep_lookup('exact', pk,
            connection=connections[self.get_queryset().db])
        if isinstance(pk, list):
            pk = pk[0]
        kwargs['instance'] = self._existing_object(pk)
    if i < self.initial_form_count() and not kwargs.get('instance'):
        kwargs['instance'] = self.get_queryset()[i]
    return super(BaseModelFormSet, self)._construct_form(i, **kwargs)

Change History (4)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

No, it's still happening even with ordering. Let me explain how I'm testing it:

- I built a simple model. It has an id (primary key) and a name (char field).
- I made an admin for that model with name as list_editable and a page size = 3.
- I ran the server in debug mode with a breakpoint at the beginning of the ModelAdmin.changelist_view method.
- I opened the app and I edited the three elements on the first page.
- The debugger stopped at the breakpoint and I added three new elements to the model table using SQL. The three elements are meant to be shown before the three edited elements.
- I resumed my submit.

The result is that the three new elements are shown on the first page with their names changed. The other three elements have not been edited at all. I don't understand why, if it has the primary key in the list editable form, Django is not using it to recover the element to edit.
I'm trying the following solution (so far it's working):

class FixedBaseModelFormSet(BaseModelFormSet):
    def _existing_object(self, pk):
        if not hasattr(self, '_object_dict'):
            self._object_dict = dict([(o.pk, o) for o in self.get_queryset()])
        object = self._object_dict.get(pk)
        if object is None:
            object = self.model._default_manager.get_query_set().get(pk = pk)
        return object

By the way, if list_editable isn't multithread safe, it would be good to mention that in the docs.

comment:3 Changed 5 years ago by
However, I think the particular problem you're running into is that there is no consistent default ordering set on the changelist. Please take a look at #16819 and see whether this ticket is a duplicate of that one.
https://code.djangoproject.com/ticket/17118
Wakeup with UART (LoPy4)

- RodrigoMunoz last edited by

Hello guys. I am trying to communicate with a LoPy4 through its serial port, in particular via UART 1 (P3 and P4). Furthermore, the LoPy4 must be in sleep mode. I need to wake up the LoPy4 using the UART port. I have tried using the following code:

import machine
from machine import UART
import time

print("Starting Loop...")

uart = UART(1, baudrate=9600, bits=8, parity=None, stop=1, pins=('P3', 'P4'))

i = 0
while True:
    print("*****************************: %d" % i)
    print("Going to sleep")
    machine.pin_sleep_wakeup(['P4'], mode=machine.WAKEUP_ALL_LOW, enable_pull=True)
    time.sleep(1)
    machine.sleep()
    print("Wake up...")
    (wake_reason, gpio_list) = machine.wake_reason()
    if(wake_reason==machine.PWRON_WAKE):
        print("Wake up for Power On")
    elif(wake_reason==machine.PIN_WAKE):
        print("Wake up for PIN")
    elif(wake_reason==machine.RTC_WAKE):
        print("Wake up for RTC")
    elif(wake_reason==machine.ULP_WAKE):
        print("Wake up for ULP")
    else:
        print("Wake up undefined")
    n_bytes = uart.any()
    print(n_bytes)
    buff = uart.read(n_bytes)  # read the bytes that are available
    print(buff)
    i = i + 1

The problem is that the LoPy cannot read the bytes sent to it correctly. Has anyone been able to wake up a LoPy using the serial port?

Thanks

- Nicolas Harel last edited by

Topic is old but I want to bump it anyway. Same problem here. The received message with this method is unusable. This might be why (from the ESP32 documentation on UART wakeup): "Note that the character which triggers wakeup (and any characters before it) will not be received by the UART after wakeup. This means that the external device typically needs to send an extra character to the ESP32 to trigger wakeup, before sending the data."
https://forum.pycom.io/topic/6189/wakeup-with-uart-lopy4
CC-MAIN-2022-21
refinedweb
278
69.48
Hi, I am having problems making a constructor for an image so that I can call it whenever I want from another class. The problem I'm having is that I don't know what to put in g.drawImage(blah, blah, blah, THIS SECTION). I know normally you put "this" but I need to call it into another class.

Obstacle class:

```java
public class Obstacle {
    public void drawUrn(Graphics g, String obstacle, int x, int y) {
        g.drawImage(obstacle, x, y, );  // <-- what goes in the last slot?
    }
}
```

What would I need to do so I can call that in my Player class?

"this" is a shorthand reference to the instance/object which is invoking the method. If your drawImage method needs to have an ImageObserver instance/object to act upon which is not the current instance/object, you can use the name of that ImageObserver instance/object you want the method to act on as an argument in place of "this". So, if your Player class extends ImageObserver, and you have created an instance of Player - let's say, player1 - you would use

```java
g.drawImage(obstacle, x, y, player1);
```

HOWEVER, you might want to directly invoke the player1 instance's drawImage method ...

Last edited by nspils; 08-22-2006 at 02:18 PM.

Well, right now, Player is the only other class. It's an applet right now. I just wanted a way to add an obstacle whenever I wanted, but I wanted to make it a separate class. What I'm trying to do is, in Player, I want to be able to type something like

```java
obstacle1.drawUrn(urn, 10, 20);
```

and have it draw an urn at the coordinates provided. But since the constructor is not in Player, I am unsure of how to make that work.

I'm not sure exactly what you're looking for, but you could just make a subclass of Obstacle called Urn and then give it a static final member which is an image. Then when you want to draw it, you would just say:

```java
// note: in Graphics.drawImage the Image goes first and the ImageObserver last
g.drawImage(Urn.IMAGE, x, y, this);
```

Or something like that. Hope this helps.

~evlich
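For reference, the argument order the replies are wrestling with is Graphics.drawImage(Image, x, y, ImageObserver): the image comes first and the observer last, and the observer may be null when drawing offscreen. A small self-contained sketch using an offscreen BufferedImage (the class name and pixel values are invented for illustration):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DrawDemo {

    // Draw a sprite onto an offscreen canvas; a null ImageObserver is fine here
    // because a BufferedImage is already fully loaded.
    public static BufferedImage drawAt(BufferedImage sprite, int x, int y) {
        BufferedImage canvas = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = canvas.createGraphics();
        g.drawImage(sprite, x, y, null);  // image first, observer last
        g.dispose();
        return canvas;
    }

    public static void main(String[] args) {
        // A 10x10 white sprite.
        BufferedImage sprite = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < 10; i++)
            for (int j = 0; j < 10; j++)
                sprite.setRGB(i, j, 0xFFFFFF);

        BufferedImage out = drawAt(sprite, 20, 30);
        // A pixel inside the blitted region is white; outside stays black.
        System.out.println((out.getRGB(25, 35) & 0xFFFFFF) == 0xFFFFFF);
        System.out.println((out.getRGB(0, 0) & 0xFFFFFF) == 0x000000);
    }
}
```

Inside an applet or component, passing `this` as the observer works because Component implements ImageObserver; offscreen, null avoids the question entirely.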
http://forums.devx.com/showthread.php?155437-How-to-use-constructor-from-one-class-to-draw-image-in-another&p=462501
CC-MAIN-2013-48
refinedweb
357
73.78
hi, I don't how to generate applet window in wed. This applet window have File to save/load. Would you give me source code or related java class to let me see API please????? thanks wilson^^

i wrote this in JCreator LE:

```java
import java.applet.*;

public class title extends Applet {
    public void paint(Grahpics screen) {
        //code in here
    }
}
```

umm once you get used to that you can add on features to interact with the mouse.. make sure in your html (because applets run in html) that you have the code

```html
<applet code="filename.class" width=400 height=400></applet>
```

It helps if you write your html file in the same file that your .class file is in. IM me on aim if you have any questions - Sportsdude11751

They say if you play a Microsoft Windows CD backwards it will play satanic messages. But thats nothing, if you play it forwards it installs Windows.

Originally posted by Sportsdude11751:

```java
import java.applet.*;

public class title extends Applet {
    public void paint(Grahpics screen) {
        //code in here
    }
}
```

JCreator must be a crock of censored then, because i've never seen an object in the class hierarchy called Grahpics

i dont know what WED is.. wednesday? hmm anyways.. i dont know how much you know about applets, but i'll tell you for free that they cant save or load anything - that would be a security risk.. so creating a save and load menu in an applet, youre only going to have to grey it out anyway, because it wont
http://forums.devx.com/showthread.php?139972-how-to-generate-applet-window&p=414304
CC-MAIN-2017-51
refinedweb
262
77.47
The below code, which creates and uses a bitset, is from the following tutorial "Intro to Bit Vectors". I am rewriting this code to try to learn and understand more about C structs and pointers.

```c
#include <stdio.h>
#include <stdlib.h>

#define WORDSIZE 32
#define BITS_WORDSIZE 5
#define MASK 0x1f

// Create a bitset
int initbv(int **bv, int val) {
    *bv = calloc(val/WORDSIZE + 1, sizeof(int));
    return *bv != NULL;
}

// Place int 'i' in the bitset
void set(int bv[], int i) {
    bv[i>>BITS_WORDSIZE] |= (1 << (i & MASK));
}

// Return true if integer 'i' is a member of the bitset
int member(int bv[], int i) {
    int boolean = bv[i>>BITS_WORDSIZE] & (1 << (i & MASK));
    return boolean;
}

int main() {
    int *bv, i;
    int s1[] = {32, 5, 0};
    int s2[] = {32, 4, 5, 0};

    initbv(&bv, 32);

    // Fill bitset with s1
    for(i = 0; s1[i]; i++) {
        set(bv, s1[i]);
    }

    // Print intersection of bitset (s1) and s2
    for(i = 0; s2[i]; i++) {
        if(member(bv, s2[i])) {
            printf("%d\n", s2[i]);
        }
    }

    free(bv);
    return 0;
}
```

Here is my rewritten version using a struct:

```c
#include <stdio.h>
#include <stdlib.h>

#define WORDSIZE 32
#define BITS_WS 5
#define MASK 0x1f

struct bitset {
    int *bv;
};

/* Create bitset that can hold 'size' items */
struct bitset * bitset_new(int size) {
    struct bitset * set = malloc(sizeof(struct bitset));
    set->bv = calloc(size/WORDSIZE + 1, sizeof(int));
    return set;
}

/* Add an item to a bitset */
int bitset_add(struct bitset * this, int item) {
    return this->bv[item>>BITS_WS] |= (1 << (item & MASK));
}

/* Check if an item is in the bitset */
int bitset_lookup(struct bitset * this, int item) {
    int boolean = this->bv[item>>BITS_WS] & (1 << (item & MASK));
    printf("%d\n", boolean);
    return boolean;
}

int main() {
    struct bitset * test = bitset_new(32);
    int num = 5;

    bitset_add(test, num);
    printf("%d\n", bitset_lookup(test, num));
    return 0;
}
```

That is not a tutorial, it is misleading examples at best. First of all, use an unsigned type. I recommend unsigned long (for various reasons, none of them critical).
The <limits.h> header file defines constant CHAR_BIT, and the number of bits you can use in any unsigned integer type is always CHAR_BIT * sizeof (unsigned_type). Second, you can make the bit map (or ordered bit set) dynamically resizable, by adding the size information to the structure. The above boils down to

```c
#include <stdlib.h>
#include <limits.h>

#define ULONG_BITS (CHAR_BIT * sizeof (unsigned long))

typedef struct {
    size_t ulongs;
    unsigned long *ulong;
} bitset;

#define BITSET_INIT { 0, NULL }

void bitset_init(bitset *bset)
{
    if (bset) {
        bset->ulongs = 0;
        bset->ulong = NULL;
    }
}

void bitset_free(bitset *bset)
{
    if (bset) {
        free(bset->ulong);
        bset->ulongs = 0;
        bset->ulong = NULL;
    }
}

/* Returns:  0 if successfully set
            -1 if bset is NULL
            -2 if out of memory. */
int bitset_set(bitset *bset, const size_t bit)
{
    if (bset) {
        const size_t i = bit / ULONG_BITS;

        /* Need to grow the bitset? */
        if (i >= bset->ulongs) {
            const size_t ulongs = i + 1; /* Use better strategy! */
            unsigned long *ulong;
            size_t n = bset->ulongs;

            ulong = realloc(bset->ulong, ulongs * sizeof bset->ulong[0]);
            if (!ulong)
                return -2;

            /* Update the structure to reflect the changes */
            bset->ulongs = ulongs;
            bset->ulong = ulong;

            /* Clear the newly acquired part of the ulong array */
            while (n < ulongs)
                ulong[n++] = 0UL;
        }

        bset->ulong[i] |= 1UL << (bit % ULONG_BITS);
        return 0;
    } else
        return -1;
}

/* Returns:  0 if SET
             1 if UNSET
            -1 if outside the bitset */
int bitset_get(bitset *bset, const size_t bit)
{
    if (bset) {
        const size_t i = bit / ULONG_BITS;
        if (i >= bset->ulongs)
            return -1;
        return !(bset->ulong[i] & (1UL << (bit % ULONG_BITS)));
    } else
        return -1;
}
```

In a bitset structure, the ulong member is a dynamically allocated array of unsigned longs. Thus, it stores ulongs * ULONG_BITS bits. BITSET_INIT is a preprocessor macro you can use to initialize an empty bitset. If you cannot or do not want to use it, you can use bitset_init() to initialize a bitset. The two are equivalent.
bitset_free() releases the dynamic memory allocated for the bitset. After the call, the bit set is gone, and the variable used is re-initialized. (Note that it is perfectly okay to call bitset_free() on an un-used but initialized bit set, because calling free(NULL) is perfectly safe and does nothing.) Because the OS/kernel will automatically release all memory used by a program (except for certain types of shared memory segments), it is not necessary to call bitset_free() just before a program exits. But, if you use bit sets as part of some algorithm, it is obviously a good practice to release the memory no longer needed, so that the application can potentially run indefinitely without "leaking" (wasting) memory. bitset_set() automatically grows the bit set when necessary, but only to as large as is needed. This is not necessarily a good reallocation policy: malloc()/ realloc() etc. calls are relatively slow, and if you happen to call bitset_set() in increasing order (by increasing bit number), you end up calling realloc() for every ULONG_BITS. Instead, it is often a good idea to adjust the new size ( ulongs) upwards -- the exact formula you use for this is your reallocation policy --, but suggesting a good policy would require practical testing with practical programs. The shown one works, and is quite robust, but may be a bit slow in some situations. (You'd need to use at least tens of thousands of bits, though.) The bitset_get() function return value is funky, because I wanted the function to return a similar value for both "unset" and "outside the bit set", because the two are logically similar. (That is, I consider the bit set, a set of set bits; in which case it is logical to think of all bits outside the set as unset.) 
A much more traditional definition is

```c
int bitset_get(bitset *bset, const size_t bit)
{
    if (bset) {
        const size_t i = bit / ULONG_BITS;
        if (i >= bset->ulongs)
            return 0;
        return !!(bset->ulong[i] & (1UL << (bit % ULONG_BITS)));
    } else
        return 0;
}
```

which returns 1 only for bits set, and 0 for bits outside the set. Note the !!. It is just two not operators, nothing too strange; making it a not-not operator. !!x is 0 if x is zero, and 1 if x is nonzero. (A single not operator, !x, yields 1 if x is zero, and 0 if x is nonzero. Applying not twice yields the not-not I explained above.)

To use the above, try e.g.

```c
#include <stdio.h>  /* for printf; needed in addition to the earlier includes */

int main(void)
{
    bitset train = BITSET_INIT;

    printf("bitset_get(&train, 5) = %d\n", bitset_get(&train, 5));

    if (bitset_set(&train, 5)) {
        printf("Oops; we ran out of memory.\n");
        return EXIT_FAILURE;
    } else
        printf("Called bitset_set(&train, 5) successfully\n");

    /* the original post was missing the value argument in this printf */
    printf("bitset_get(&train, 5) = %d\n", bitset_get(&train, 5));

    bitset_free(&train);
    return EXIT_SUCCESS;
}
```

Because we do not make any assumptions about the hardware or system we are running (unless I goofed somewhere; if you notice I did, let me know in the comments so I can fix my goof!), and only stuff that the C standard says we can rely on, this should work on anything you can compile the code for with a standards-compliant compiler. Windows, Linux, BSDs, old Unix, macOS, and others. With some changes, it can be made to work on microcontrollers, even. I'm not sure if all development libraries have realloc(); even malloc() might not be available. Aside from that, on things like 32-bit ARMs this should work as-is just fine; on 8-bit AVRs and such it would be a good idea to use unsigned char and CHAR_BIT, instead, since they tend to emulate the larger types rather than supporting them in hardware. (The code above would work, but be slower than necessary.)
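As a cross-language sanity check (my addition, not part of the answer above): the grow-on-set, reads-outside-as-unset semantics hand-built here are exactly what Java's standard java.util.BitSet provides, which makes it a convenient reference for the expected behavior:

```java
import java.util.BitSet;

public class BitSetDemo {
    public static void main(String[] args) {
        BitSet set = new BitSet();   // starts empty and grows on demand
        set.set(5);                  // like bitset_set(&bset, 5)
        set.set(1000);               // triggers internal growth, like the realloc path

        System.out.println(set.get(5));       // a set bit
        System.out.println(set.get(6));       // an unset bit
        System.out.println(set.get(100000));  // beyond the backing array: reads as unset
    }
}
```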
https://codedump.io/share/6Dq8Hn9nezVx/1/c-bitset-implementation-using-structs-unexpected-output
CC-MAIN-2018-13
refinedweb
1,247
57.71
Hey everyone!! I'm kinda scratching my head on this project I'm on right now since I cannot figure out how to make it work. Hopefully you can help me a little bit... I have three stepper motors (with Big Easy Drivers) and two joysticks in the following setup.

Joystick1 controls: Stepper1 (Tilt, X), Stepper2 (Shift, Y)
Joystick2 controls: Stepper3 (Rotate, Z)

However let's forget Joystick2 and Stepper3 for a second. I want to control Stepper1 and Stepper2 according to the position of Joystick1. So if I went for a diagonal movement on the stick, both motors should start running simultaneously. If I were to move the stick up, only Stepper1 should move, etc. You get the deal. Here's a video which describes it quite well (forget the XBOX controller).

I understand that a real simultaneous movement of multiple steppers is not possible at all. Rather, microsteps are made "nearly" at the same time to make it look like both are running at the same time. Therefore I want to use the AccelStepper.h library, which did a great job in the past for me. However, I never encountered the need to use two or more motors at the same time, and that's where my problem is.

Here's my current code, which just rotates the steppers (very slow btw) but the input of my joystick is not considered at all. I feel there's something I'm missing, as this program seems nearly too simple for it to work... In this case the steppers "rotate" and "tilt" should move according to the position of my joystick.

```cpp
#include <AccelStepper.h>

// Initializing the stepper motors
AccelStepper
  rotate(1, 3, 2),
  tilt(1, 9, 8),
  shift(1, 7, 6);

// Init Joysticks
#define JoyX1 A8
#define JoyY1 A9
#define JoyX2 A10
#define JoyY2 A11

int joyX1 = analogRead(JoyX1),
    joyY1 = analogRead(JoyY1),
    joyX2 = analogRead(JoyX2),
    joyY2 = analogRead(JoyY2);

void setup() {
  Serial.begin(9600);
  rotate.setMaxSpeed(2000);
  rotate.setAcceleration(100);
  tilt.setMaxSpeed(2000);
  tilt.setAcceleration(100);
  shift.setMaxSpeed(2000);
  shift.setAcceleration(100);
}

void loop() {
  joyX1 = analogRead(JoyX1);
  joyY1 = analogRead(JoyY1);
  joyX2 = analogRead(JoyX2);
  joyY2 = analogRead(JoyY2);

  Serial.print(joyX1);
  Serial.print(' ');   // Values displayed on Serial are correct
  Serial.print(joyY1);
  Serial.println(' ');

  rotate.moveTo(joyX1 * 50);  // I feel this is too simple...
  rotate.setSpeed(joyX1);     // Value of joystick to set the speed according to the position of the joystick.
  rotate.run();

  shift.moveTo(joyY1 * 50);
  rotate.setSpeed(joyY1);
  shift.run();
}
```

Thanks a lot for reading and any help in advance! Mario
https://forum.arduino.cc/t/controlling-multiple-steppers-with-joystick/382143
CC-MAIN-2021-43
refinedweb
419
54.02
Great! Well, do you "believe" that data is passed through stdin, or are you exactly 100% certain? And also, Rubberman, would you please tell me how to capture it? Thanks

Thanks jabirali, ;-)

Have you tried doing something like this?

```cpp
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    stringstream ss;
    ss << cin.rdbuf();

    cout << "Content-type: text/html\n\n<html><body>\n"
         << "Data received:\n" << ss.str()
         << "\n</body></html>\n";
    return 0;
}
```
http://www.linux.com/community/forums/software-development/writing-cgi-using-c/11376?limit=6&offset=6
CC-MAIN-2014-35
refinedweb
102
60.41
Hey guys,

So recently our organization tightened up permissions and stopped allowing all users to create new repositories. The issue is, if you attempt to create one without the correct permissions, it just produces the repository under your namespace. Really, it should return an error of some kind so we know what's going on. Just thought you guys would want to know.

Hi Daniil,

The flow is not quite like that. If I go into one of my organization's projects and hit 'create repo', it'll take me to a page that looks like this:

The owner will be listed as me, but it won't display any error when going to that page and will obviously create a repo under my name.
https://community.atlassian.com/t5/Bitbucket-discussions/Create-Permissions/td-p/1221307
CC-MAIN-2021-21
refinedweb
297
71.34
PSR-1 (4:49) with Phil Sturgeon

This recommendation was designed to avoid code from one package negatively affecting the global state of the application, or conflicting with another package, and is important to any OOP code put on Composer.

- 0:01 Now you know how to structure your PHP code and you know how to auto load it.
- 0:04 It's the huge amount of the work.
- 0:06 You can create all sorts of code using various different design patterns but
- 0:11 You just have to use an auto loader and namespaces to structure your code.
- 0:15 Over the last few years, it's become more and
- 0:17 more common for people to use established standards and recommendations to
- 0:20 increase interoperability and consistency between various projects.
- 0:24 As this course is all about best practices,
- 0:26 it seems only reasonable to use these for components.
- 0:28 PSR-1 is a standard recommendation which provides you
- 0:32 with a list of good ideas to implement.
- 0:35 If you only implement a few of them,
- 0:36 then you are not technically compliant with PSR-1.
- 0:38 But the more of these rules you follow, the better.
- 0:41 In fact, there's barely any reason to ignore a rule in PSR-1.
- 0:44 Here are some of the rules.
- 0:46 PHP code must use the long tags, or the short echo tags.
- 0:49 It must not use other tag variations.
- 0:52 Historically speaking, there are a few different tags you can use in PHP.
- 0:54 There are long tags, short tags, short echo tags, and some other ones like,
- 0:58 asp star tags.
- 1:00 PSR-1 suggests that you only use long tags and short echo tags.
- 1:05 We discussed character encoding in UTF8 in an earlier lesson,
- 1:08 and PHP should have its own files written in UTF8.
- 1:12 This allows unicode characters to be used in actual PHP code and
- 1:15 not just when worrying about output.
- 1:17 Many editors like sublime and PHP storm, will use the setting by default, but
- 1:21 it's worth checking your settings or preferences.
- 1:25 If we go to the php-fig.org website and
- 1:27 scroll down a little bit, we can see PSR-1.
- 1:30 This will take us to a page containing all of the rules,
- 1:33 some of which I've already discussed.
- 1:35 One rule I'd really like to highlight can be found about halfway down the page
- 1:39 under 2.3 side effects.
- 1:42 The rule starts here.
- 1:43 A file should declare new symbols, classes, functions,
- 1:47 constants, et cetera, and cause no other side effects.
- 1:50 Or, it should execute logic with side effects but should not do both.
- 1:54 And this sounds like a bit of a complicated statement.
- 1:57 But it's a really, really good rule.
- 1:59 It basically means that when you build your components, packages, classes or
- 2:02 whatever, you should do one of two things.
- 2:05 Either define functions, classes, constants, et cetera.
- 2:08 Or, do things like setup, changing config values including on the files output and
- 2:13 content like HTML or JSON.
- 2:15 You can only do one type of action on the same file, never both.
- 2:18 Let's have a look at some examples of this.
- 2:21 On the PSR-1 document itself.
- 2:24 You can see a few examples here.
- 2:26 So, the first example, here we're changing the error reporting value to E_ALL.
- 2:31 Now, that might be fine if you're in a bootstrap file.
- 2:33 But if this is in some other class, it could be terrible.
- 2:36 Imagine you include a file to use a function or a class that's declared there.
- 2:40 But at some point in that class, maybe before, or
- 2:42 maybe inside of a function or a method, the author decided to turn off notices.
- 2:47 Some developers do this when their code is written quite poorly, and it has a lot
- 2:50 of notices, it's generally quicker to turn them off than actually fix them.
- 2:55 If you expect notices to be turned on and then they're suddenly turned off
- 2:58 without you noticing, you can end up having a really bad time.
- 3:01 The same can be said for any global setting.
- 3:03 Don't change the default time zone or the display errors setting or memory limit or
- 3:08 anything inside the same file as a class or
- 3:10 function you are expecting people to use.
- 3:12 Including another file from another file can lead to some complex issues.
- 3:17 If I include file A then file A includes file B but
- 3:20 I've already included file B myself,
- 3:23 then we'll get a fatal error because it's trying to re-declare the same class.
- 3:26 If I avoid that by not including file B myself, and
- 3:29 somebody changes file A to not include file B,
- 3:31 then we get another fatal error because that class hasn't been included.
- 3:35 Generally, try and keep file loading to a minimum.
- 3:38 Especially in files where you're defining functions and classes.
- 3:41 This next example highlights a side effect which is generating output.
- 3:45 So outputting any output in a file when you only expect to
- 3:49 include a class is a recipe for disaster.
- 3:51 Accidentally outputting HTML can break redirects that might happen earlier in
- 3:54 the page, or cause broken sessions, or all sorts of other really weird things.
- 3:59 So finally the last rule here is an example of a declaration.
- 4:01 As PSR-1 suggests, you should either declare or run side effects,
- 4:06 but never both.
- 4:07 So a declaration, as mentioned, could be a function or a class.
- 4:11 It could also be a variable or a constant or basically anything you're defining.
- 4:16 Another rule of PSR-1 is the namespaces and classes must follow PSR-0 or PSR-4.
- 4:22 That should be fairly easy as we know how PSR-4 works already.
- 4:24 They're both autoloading standards and do essentially the same thing.
- 4:28 Neither of them care too much what you actually name your Namespaces, Classes and
- 4:32 Methods, PSR-1 does.
- 4:34 Namespaces and Classes need to use studly caps, which means use a capital letter for
- 4:39 each new word and everything else is a lowercase letter.
- 4:42 Methods need to use camel case, which is just like studly caps, but
- 4:46 the first character in the whole method should be lowercase as well.
https://teamtreehouse.com/library/php-standards-and-best-practices/creating-distributable-oop-packages/psr1
CC-MAIN-2017-13
refinedweb
1,218
71.44
I have a problem with this code. The program is supposed to read a vector with Standard Input and choose randomly from that vector, put those numbers in another array and then print that array out. How large the first array is depends on N, from args; the size of the sample vector is M. Vector M is not supposed to choose the same number twice. It seems that the calling method doesn't work from main readInt to the static readInt method, and I don't know how to fix it.

```java
public class Permexamp {

    // Reading the vector with StdIn
    public static int[] readInt(int N) {
        int[] vectora = new int[N];
        for (int i = 0; i < N; i++) {
            vectora[i] = StdIn.readInt();
        }
        return vectora;
    }

    // Building a new vector chosen randomly from vectora, not choosing the same number twice
    public static int[] randomchoose(int[] a, int M) {
        int[] vectorb = new int[M];
        for (int i = 0; i < M; i++) {
            int r = i + (int) (Math.random() * (M - i));
            int t = vectorb[r];
            vectorb[r] = vectorb[i];
            vectorb[i] = t;
        }
        return vectorb;
    }

    // Main calls the other methods and prints out vectorb
    public static void main(String args[]) {
        int N = Integer.parseInt(args[0]);
        int M = Integer.parseInt(args[1]);

        int[] vectora = readInt(N);
        int[] vectorb = randomchoose(vectora, M);
        for (int i = 0; i < M; i++)
            StdOut.println(vectorb[i]);
    }
}
```

When I compile it there are no errors, but the program seems to be in an infinite loop. When I press Ctrl-Z to stop the program I get this error:

```
^Z
Exception in thread "main" java.util.NoSuchElementException
    at java.util.Scanner.throwFor(Unknown Source)
    at java.util.Scanner.next(Unknown Source)
    at java.util.Scanner.nextInt(Unknown Source)
    at java.util.Scanner.nextInt(Unknown Source)
    at StdIn.readInt(StdIn.java:139)
    at Permexamp.readInt(Permexamp.java:8)
    at Permexamp.main(Permexamp.java:29)
```
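For contrast with the randomchoose above, which shuffles an all-zero array and never reads from a, here is one way the intended "M distinct picks from vectora" could be written, as a sketch using a partial Fisher-Yates shuffle of a copy of the source (the class and method names are mine; note the random index ranges over the remaining source elements, not just M):

```java
import java.util.Arrays;
import java.util.Random;

public class SampleDemo {

    // Pick m distinct elements from src by partially shuffling a copy.
    static int[] sample(int[] src, int m, Random rnd) {
        int[] copy = Arrays.copyOf(src, src.length);
        int[] out = new int[m];
        for (int i = 0; i < m; i++) {
            int r = i + rnd.nextInt(copy.length - i); // random index in [i, length)
            int t = copy[r];                          // swap the pick into position i
            copy[r] = copy[i];
            copy[i] = t;
            out[i] = copy[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] data = {32, 5, 17, 8, 99};
        int[] picked = sample(data, 3, new Random());
        System.out.println(Arrays.toString(picked));  // 3 distinct values from data
    }
}
```

Because each chosen element is swapped into the already-consumed prefix, no position can be picked twice.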
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/32643-put-random-number-one-vector-another-printingthethread.html
CC-MAIN-2016-26
refinedweb
312
58.58
This chapter contains these topics:

- Introduction to the XSQL Pages Publishing Framework
- Using the XSQL Pages Publishing Framework: Overview
- Generating and Transforming XML with XSQL Servlet
- Using XSQL in Java Programs
- XSQL Pages Tips and Techniques

Introduction to the XSQL Pages Publishing Framework

The Oracle XSQL pages publishing framework is an extensible platform for publishing XML in multiple formats. The Java-based XSQL servlet, which is the center of the framework, provides a declarative interface for publishing dynamic Web content based on relational data.

The XSQL framework combines the power of SQL, XML, and XSLT. You can use it to create declarative templates called XSQL pages to perform the following actions:

- Assemble dynamic XML datagrams based on parameterized SQL queries
- Transform datagrams with XSLT to generate a result in an XML, HTML, or text-based format

An XSQL page, so called because its default extension is .xsql, is an XML file that contains instructions for the XSQL servlet. Example 14-1 shows a simple XSQL page. It uses the <xsql:query> action element to query the hr.employees table.

Example 14-1 Sample XSQL Page

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="..."?>
<xsql:query connection="hr" xmlns:
  SELECT * FROM employees
</xsql:query>
```

You can present a browser client with the data returned from the query in Example 14-1. Assembling and transforming information for publishing requires no programming. You can perform most tasks in a declarative way. If one of the built-in features does not fit your needs, however, then you can use Java to integrate custom data sources or perform customized server-side processing. In the XSQL pages framework, the assembly of information to be published is separate from presentation.
This architectural feature enables you to do the following:

- Present the same data in multiple ways, including tailoring the presentation appropriately to the type of client device making the request (browser, cellular phone, PDA, and so on)
- Reuse data by aggregating existing pages into new ones
- Revise and enhance the presentation independently of the content

This chapter assumes that you are familiar with the following technologies:

- Oracle Database SQL. The XSQL framework accesses data in a database.
- PL/SQL. The XDK supplies a PL/SQL API for XSU that mirrors the Java API.
- Java Database Connectivity (JDBC). The XSQL pages framework depends on a JDBC driver for database connections.
- eXtensible Stylesheet Language Transformation (XSLT). You can use XSLT to transform the data into a format appropriate for delivery to the user.
- XML SQL Utility (XSU). The XSQL pages framework uses XSU to query the database.

Using the XSQL Pages Publishing Framework: Overview

This section contains the following topics:

- Using the XSQL Pages Framework: Basic Process
- Setting Up the XSQL Pages Framework
- Running the XSQL Pages Demo Programs
- Using the XSQL Pages Command-Line Utility
Regardless of which way you use the XSQL page processor, it performs the following actions to generate a result: Receives a request to process an XSQL page. The request can come from the command line utility or programmatically from an XSQLRequest object. Assembles an XML datagram by using the result of one or more SQL queries. The query is specified in the <xsql:query> element of the XSQL page. Returns this XML datagram to the requestor. Optionally transforms the datagram into any XML, HTML, or text-based format. Figure 14-2 shows a typical Web-based scenario in which a Web server receives an HTTP request for Page.xsql, which contains a reference to the XSLT stylesheet Style.xsl. The XSQL page contains a database query. Figure 14-2 Web Access to XSQL Pages The XSQL page processor shown in Figure 14-2 performs the following steps: Receives a request from the XSQL Servlet to process Page.xsql. Parses Page.xsql with the Oracle XML Parser and caches it. Connects to the database based on the value of the connection attribute on the document element. Generates the XML datagram by replacing each XSQL action element, for example, <xsql:query>, with the XML results returned by its built-in action handler. Parses the Style.xsl stylesheet and caches it. Transforms the datagram by passing it and the Style.xsl stylesheet to the Oracle XSLT processor. Returns the resulting XML or HTML document to the requester. 
During the transformation step in this process, you can use stylesheets that conform to the W3C XSLT 1.0 or 2.0 standard to transform the assembled datagram into document formats such as the following:

- HTML for browser display
- Wireless Markup Language (WML) for wireless devices
- Scalable Vector Graphics (SVG) for data-driven charts, graphs, and diagrams
- XML Stylesheet Formatting Objects (XSL-FO), for rendering into Adobe PDF
- Text documents such as e-mails, SQL scripts, Java programs, and so on
- Arbitrary XML-based document formats

You can develop and use XSQL pages in various scenarios. This section describes the following topics:

- Creating and Testing XSQL Pages with Oracle JDeveloper
- Setting the CLASSPATH for XSQL Pages
- Configuring the XSQL Servlet Container
- Setting Up the Connection Definitions
To test an XSQL page, select the page in the navigator and right-click Run. JDeveloper automatically starts up a local Web server, properly configured to run XSQL pages, and tests your page by launching your default browser with the appropriate URL to request the page. After you have run the XSQL page, you can continue to make modifications to it in the IDE as well as to any XSLT stylesheets with which it might be associated. After saving the files in the IDE you can immediately refresh the browser to observe the effect of the changes. You must add the XSQL runtime library to your project library list so that the CLASSPATH is properly set. The IDE adds this entry automatically when you go through the New Gallery dialog to create a new XSQL page, but you can also add it manually to the project as follows: Right-click the project in the Applications Navigator. Select Project Properties. Select Profiles and then Libraries from the navigation tree. Move XSQL Runtime from the Available Libraries pane to Selected Libraries. Outside of the JDeveloper environment, you should make sure that the XSQL page processor engine is properly configured. Make sure that the appropriate JAR files are in the CLASSPATH of the JavaVM that processes the XSQL Pages. The complete set of XDK JAR files is described in Table 3-1, "Java Libraries for XDK Components". The JAR files for the XSQL framework include the following: xml.jar, the XSQL page processor xmlparserv2.jar, the Oracle XML parser xsu12.jar, the Oracle XML SQL utility (XSU) ojdbc5.jar, the Oracle JDBC driver (or ojdbc6.jar) Note:The XSQL servlet can connect to any database that has JDBC support. Indicate the appropriate JDBC driver class and connection URL in the XSQL configuration file connection definition. Object-relational functionality only works when using Oracle database with the Oracle JDBC driver. 
If you have configured your CLASSPATH as instructed in "Setting Up the Java XDK Environment", then you only need to add the directory where the XSQL pages configuration file resides. In the database installation of the XDK, the directory for XSQLConfig.xml is $ORACLE_HOME/xdk/admin.

On Windows your %CLASSPATH% variable should contain the following entries:

%ORACLE_HOME%\lib\ojdbc5.jar;%ORACLE_HOME%\lib\xmlparserv2.jar;
%ORACLE_HOME%\lib\xsu12.jar;%ORACLE_HOME%\lib\xml.jar;%ORACLE_HOME%\xdk\admin

On UNIX the $CLASSPATH variable should contain the following entries:

$ORACLE_HOME/lib/ojdbc5.jar:$ORACLE_HOME/lib/xmlparserv2.jar:
$ORACLE_HOME/lib/xsu12.jar:$ORACLE_HOME/lib/xml.jar:$ORACLE_HOME/xdk/admin

Note: If you are deploying your XSQL pages in a J2EE WAR file, then you can include the XSQL JAR files in the ./WEB-INF/lib directory of the WAR file.

You can install the XSQL servlet in a variety of different Web servers, including OC4J, Jakarta Tomcat, and so forth. You can find complete instructions for installing the servlet in the Release Notes for the OTN download of the XDK. Navigate to the setup instructions as follows:

Log on to OTN and navigate to the following URL:

Click Getting Started with XDK Java Components.

In the Introduction section, scroll down to XSQL Servlet in the bulleted list and click Release Notes.

In the Contents section, click Downloading and Installing the XSQL Servlet.

Scroll down to the Setting Up Your Servlet Engine to Run XSQL Pages section and look for your Web server.

XSQL pages specify database connections by using a short name for a connection that is defined in the XSQL configuration file, which by default is named $ORACLE_HOME/xdk/admin/XSQLConfig.xml.

Note: If you are deploying your XSQL pages in a J2EE WAR file, then you can place the XSQLConfig.xml file in the ./WEB-INF/classes directory of your WAR file.
The sample XSQL page shown in Example 14-1 contains the following connection information:

<xsql:query connection="hr" ...>

Connection names are defined in the <connectiondefs> section of the XSQL configuration file. Example 14-2 shows the relevant section of the sample configuration file included with the database, with the hr connection in bold.

Example 14-2 Connection Definitions Section of XSQLConfig.xml

<connectiondefs>
  ...
  <connection name="hr">
    <username>hr</username>
    <password>hr_password</password>
    <dburl>jdbc:oracle:thin:@localhost:1521:ORCL</dburl>
    <driver>oracle.jdbc.driver.OracleDriver</driver>
    <autocommit>false</autocommit>
  </connection>
  ...
</connectiondefs>

For each database connection, you can specify the following elements:

<username>, the database username
<password>, the database password
<dburl>, the JDBC connection string
<driver>, the fully-qualified class name of the JDBC driver to use
<autocommit>, which optionally forces AUTOCOMMIT to TRUE or FALSE

Specify an <autocommit> child element to control the setting of the JDBC autocommit for any connection. If no <autocommit> child element is set for a <connection>, then the autocommit setting is not set by the XSQL connection manager. In this case, the setting is the default autocommit setting for the JDBC driver. You can place an arbitrary number of <connection> elements in the XSQL configuration file to define your database connections.

An individual XSQL page refers to the connection it wants to use by putting a connection="xxx" attribute on the top-level element in the page (also called the "document element").

Caution: The XSQLConfig.xml file contains sensitive database username and password information that you want to keep secure on the database server. Refer to "Security Considerations for XSQL Pages" for instructions.

Demo programs for the XSQL servlet are included in $ORACLE_HOME/xdk/demo/java/xsql. Table 14-1 lists the demo subdirectories and explains the included demos.
The Demo Name column refers to the title of the demo listed on the XSQL Pages & XSQL Servlet home page. "Running the XSQL Demos" explains how to access the home page. To set up the XSQL demos perform the following steps:

Change into the $ORACLE_HOME/xdk/demo/java/xsql directory (UNIX) or %ORACLE_HOME%\xdk\demo\java\xsql directory (Windows).

Start SQL*Plus, connect to your database as ctxsys (the schema owner for the Oracle Text packages), and issue the following statement:

GRANT EXECUTE ON ctx_ddl TO scott;

Connect to your database as a user with DBA privileges and issue the following statement:

GRANT QUERY REWRITE TO scott;

The preceding statement enables scott to create a function-based index that one of the demos requires to perform case-insensitive queries on descriptions of airports.

Connect to your database as scott. You will be prompted for the password.

Run the SQL script install.sql in the current directory. This script runs all SQL scripts for all the demos:

@install.sql

Change to the ./doyouxml subdirectory, and run the following command to import sample data for the "Do You XML?" demo (you will be prompted for the password):

imp scott file=doyouxml.dmp

To run the Scalable Vector Graphics (SVG) demonstration, install an SVG plug-in, such as the Adobe SVG plug-in, into your browser.

The XSQL demos are designed to be accessed through a Web browser. If you have set up the XSQL servlet in a Web server as described in "Configuring the XSQL Servlet Container", then you can access the demos through the following URL, substituting appropriate values for yourserver and port:

Figure 14-3 shows a section of the XSQL home page in Internet Explorer. Note that you must use browser version 5 or higher.

Figure 14-3 XSQL Home Page

The demos are designed to be self-explanatory. Click the demo titles (Hello World Page, Employee Page, and so forth) and follow the online instructions. Often the content of a dynamic page is based on data that does not frequently change.
To optimize performance of your Web publishing, you can use operating system facilities to schedule offline processing of your XSQL pages. This technique enables the processed results to be served statically by your Web server. The XDK includes a command-line Java interface that runs the XSQL page processor. You can process any XSQL page with the XSQL command-line utility. The $ORACLE_HOME/xdk/bin/xsql and %ORACLE_HOME%\xdk\bin\xsql.bat shell scripts run the oracle.xml.xsql.XSQLCommandLine class. Before invoking the class make sure that your environment is configured as described in "Setting Up the XSQL Pages Framework". Depending on how you invoke the utility, the syntax is either of the following: java oracle.xml.xsql.XSQLCommandLine xsqlpage [outfile] [param1=value1 ...] xsql xsqlpage [outfile] [param1=value1 ...] If you specify an outfile, then the result of processing xsqlpage is written to it; otherwise the result goes to standard out. You can pass any number of parameters to the XSQL page processor, which are available for reference by the XSQL page processed as part of the request. However, the following parameter names are recognized by the command-line utility and have a pre-defined behavior: xml-stylesheet=stylesheetURL Provides the relative or absolute URL for a stylesheet to use for the request. You can also set it to the string none to suppress XSLT stylesheet processing for debugging purposes. posted-xml=XMLDocumentURL Provides the relative or absolute URL of an XML resource to treat as if it were posted as part of the request. useragent=UserAgentString Simulates a particular HTTP User-Agent string from the command line so that an appropriate stylesheet for that User-Agent type is selected as part of command-line processing of the page. 
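For instance, a scheduled job could pre-generate a static XML file with an invocation along these lines; the output file name and parameter value here are illustrative, not taken from the demos:

```
xsql AvailableFlightsToday.xsql flights.xml City=JFK xml-stylesheet=none
```

This processes AvailableFlightsToday.xsql with the City parameter bound to JFK, suppresses any stylesheet processing so the raw datagram is preserved, and writes the result to flights.xml, which the Web server can then serve statically.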
This section describes the most basic tasks that you can perform with your server-side XSQL page templates:

Producing Datagrams from SQL Queries
Transforming XML Datagrams into an Alternative XML Format
Transforming XML Datagrams into HTML for Display

You can serve database information in XML format over the Web with XSQL pages. For example, suppose your aim is to serve a real-time XML datagram from Oracle of all available flights landing today at JFK airport. Example 14-3 shows a sample XSQL page in a file named AvailableFlightsToday.xsql.

Example 14-3 AvailableFlightsToday.xsql

<?xml version="1.0"?>
<xsql:query connection="demo" bind-params="City"
            xmlns:
  SELECT ...                 /* "?" represents a bind variable bound */
  WHERE ... = ?              /* to the value of the City parameter.  */
  ORDER BY ExpectedTime
</xsql:query>

The XSQL page is an XML file that contains any mix of static XML content and XSQL action elements. The file can have any extension, but .xsql is the default extension for XSQL pages. You can modify your servlet engine configuration settings to associate other extensions by using the same technique described in "Configuring the XSQL Servlet Container". Note that the servlet extension mapping is configured inside the ./WEB-INF/web.xml file in a J2EE WAR file.

The XSQL page in Example 14-3 begins with the following declaration:

<?xml version="1.0"?>

The first, outermost element in an XSQL page is the document element. AvailableFlightsToday.xsql contains a single XSQL action element <xsql:query>, but no static XML elements. In this case the <xsql:query> element is the document element. Example 14-3 represents the simplest useful XSQL page: one that contains a single query. The results of the query replace the <xsql:query> section in the XSQL page.

Note: Chapter 30, "XSQL Pages Reference" describes the complete set of built-in action elements.
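To make the shape of such a single-query page concrete, here is a complete hypothetical page of the same form; the table and column names are invented for illustration and are not the ones used by the demo schema:

```xml
<?xml version="1.0"?>
<xsql:query
  SELECT carrier, flight_number, expected_time
    FROM flight_schedule
   WHERE arrival_city = ?
   ORDER BY expected_time
</xsql:query>
```

A request that supplies City=JFK would cause the processor to bind 'JFK' to the question mark and return the query results as a <ROWSET> of <ROW> elements in place of the <xsql:query> element.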
The <xsql:query> action element includes an xmlns attribute that declares the xsql namespace prefix as a synonym for the urn:oracle-xsql value, which is the Oracle XSQL namespace identifier:

<xsql:query xmlns:

The element also contains a connection attribute whose value is the name of one of the pre-defined connections in the XSQL configuration file:

<xsql:query connection="demo" ...>

The details concerning the username, password, database, and JDBC driver that will be used for the demo connection are centralized in the configuration file.

To include more than one query on the page, you can invent an XML element to wrap the other elements. Example 14-4 illustrates this technique.

Example 14-4 Wrapping the <xsql:query> Element

<page connection="demo" xmlns:
  <xsql:query
    SELECT ...               /* "?" represents a bind variable bound */
    WHERE ... = ?            /* to the value of the City parameter.  */
    ORDER BY ExpectedTime
  </xsql:query>
  <!-- Other xsql:query actions can go here inside <page> and </page> -->
</page>

In Example 14-4, the connection attribute and the xsql namespace declaration go on the document element, whereas the bind-params attribute is specific to the <xsql:query> action.

The <xsql:query> element shown in Example 14-3 contains a bind-params attribute that associates the values of parameters in the request with bind variables in the SQL statement included in the <xsql:query> tag. The bind variables in the SQL statement are represented by question marks.

You can use SQL bind variables to parameterize the results of any of the actions in Table 30-1, "Built-In XSQL Elements and Action Handler Classes" that allow SQL statements. Bind variables enable your XSQL page template to produce results based on the values of parameters passed in the request. To use a bind variable, include a question mark anywhere in a statement where bind variables are allowed by SQL, and specify the bind-params attribute on the action element so that the XSQL engine binds the parameter values to the variables whenever the SQL statement in the page is executed.
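The positional semantics of bind-params can be sketched outside the framework. This toy resolver is not part of the XSQL API; it simply models how a space-delimited bind-params value and a set of request parameters map onto the values bound to the statement's question marks, left to right:

```python
def resolve_bind_params(bind_params: str, params: dict) -> list:
    """Map a space-delimited bind-params value onto positional bindings.

    Each name in bind-params, left to right, supplies the value for the
    corresponding "?" in the SQL statement. Repeating a name binds the
    same value to several positions; a parameter that is neither defaulted
    nor supplied binds NULL (modeled here as None).
    """
    return [params.get(name) for name in bind_params.split()]

# Repeating a name binds the same request parameter to multiple positions.
print(resolve_bind_params("custid custid", {"custid": "101"}))  # ['101', '101']

# A missing parameter is bound as NULL (None).
print(resolve_bind_params("dev prod", {"dev": "smuench"}))      # ['smuench', None]
```

Note that the real engine raises an error when the number of names in bind-params does not match the number of question marks in the statement; this sketch has no view of the statement text, so it cannot model that check.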
Example 14-5 illustrates an XSQL page that binds the bind variables to the value of the custid parameter in the page request.

Example 14-5 Binding the custid Parameter

The customer with ID of 101 can then be requested by passing the custid parameter in the request.

The value of the bind-params attribute is a space-delimited list of parameter names. The left-to-right order indicates the positional bind variable to which each value will be bound in the statement. Thus, if your SQL statement contains five question marks, then the bind-params attribute needs a space-delimited list of five parameter names. If the same parameter value must be bound to several different occurrences of a bind variable, then repeat the name of the parameter in the value of the bind-params attribute at the appropriate positions. Failure to include the same number of parameter names in the bind-params attribute as there are question marks in the query results in an error when the page is executed.

You can use bind variables in any action that expects a SQL statement or PL/SQL block. The page shown in Example 14-6 illustrates this technique. The XSQL page contains three action elements:

<xsql:dml> binds useridCookie to an argument in the log_user_hit procedure.
<xsql:query> binds parameter custid to a variable in a WHERE clause.
<xsql:include-owa> binds parameters custid and userCookie to two arguments in the historical_data procedure.

Example 14-6 Binding Variables in Multiple Actions

You can also include the value of a parameter anywhere in an action by means of a lexical substitution parameter. Thus, you can parameterize how actions behave as well as substitute parts of the SQL statements that they perform. Lexical substitution parameters are referenced with the following syntax: {@ParameterName}.

Example 14-7 illustrates how you can use two lexical substitution parameters. One parameter in the <xsql:query> element sets the maximum number of rows to be returned, whereas the other controls the list of columns by which the results are ordered.

Example 14-7 DevOpenBugs.xsql

Example 14-7 also contains two bind parameters: dev and prod.
Suppose that you want to obtain the open bugs for developer smuench against product 817. You want to retrieve only 10 rows and order them by bug number. You can fetch the XML for the bug list by specifying parameter values as follows: You can also use the XSQL command-line utility to make the request as follows: xsql DevOpenBugs.xsql dev=smuench prod=817 max=10 orderby=bugno Lexical parameters also enable you to parameterize the XSQL pages connection and the stylesheet used to process the page. Example 14-8 illustrates this technique. You can switch between stylesheets test.xsql and prod.xsl by specifying the name/value pairs sheet=test and sheet=prod. Example 14-8 DevOpenBugs.xsql <> You may want to provide a default value for a bind variable or a substitution parameter directly in the page. In this way, the page is parameterized without requiring the requester to explicitly pass in all values in every request. To include a default value for a parameter, add an XML attribute of the same name as the parameter to the action element or to any ancestor element. If a value for a given parameter is not included in the request, then the XSQL page processor searches for an attribute by the same name on the current action element. If it does not find one, it keeps looking for such an attribute on each ancestor element of the current action element until it gets to the document element of the page. The page in Example 14-9 defaults the value of the max parameter to 10 for both <xsql:query> actions in the page. Example 14-9 Setting a Default Value <example max="10" connection="demo" xmlns: <xsql:querySELECT * FROM TABLE1</xsql:query> <xsql:querySELECT * FROM TABLE2</xsql:query> </example> This page in Example 14-10 defaults the first query to a max of 5, the second query to a max of 7, and the third query to a max of 10. 
Example 14-10 Setting Multiple Default Values

<example max="10" connection="demo" xmlns:
  <xsql:query
  <xsql:query
  <xsql:query>SELECT * FROM TABLE3</xsql:query>
</example>

All defaults are overridden if a value of max is supplied in the request.

Bind variables respect the same defaulting rules. Example 14-11 illustrates how you can set the val parameter to 10 by default.

Example 14-11 Defaults for Bind Variables

<example val="10" connection="demo" xmlns:
  <xsql:query
    SELECT ? AS somevalue FROM DUAL WHERE ? = ?
  </xsql:query>
</example>

If the page in Example 14-11 is requested without any parameters, it returns the following XML datagram:

<example>
  <rowset>
    <row>
      <somevalue>10</somevalue>
    </row>
  </rowset>
</example>

Alternatively, assume that the page is requested with a URL that supplies val=3. The request returns the following datagram:

<example>
  <rowset>
    <row>
      <somevalue>3</somevalue>
    </row>
  </rowset>
</example>

You can remove the default value for the val parameter from the page by removing the val attribute. Example 14-12 illustrates this technique.

Example 14-12 Bind Variables with No Defaults

<example connection="demo" xmlns:
  <xsql:query
    SELECT ? AS somevalue FROM DUAL WHERE ? = ?
  </xsql:query>
</example>

A URL request for the page that does not supply a name/value pair returns the following datagram:

<example>
  <rowset/>
</example>

A bind variable that is bound to a parameter with neither a default value nor a value supplied in the request is bound to NULL, which causes the WHERE clause in Example 14-12 to return no rows.

XSQL pages can make use of parameters supplied in the request as well as page-private parameters. The names and values of page-private parameters are determined by actions in the page.
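The attribute-defaulting search illustrated in the preceding examples (a request value wins; otherwise the processor looks for an attribute of the same name on the action element and then on each ancestor up to the document element) can be sketched as a small resolver. This is an illustration of the rule, not XSQL code:

```python
def default_for(param, element_chain):
    """Search for an attribute named `param`, innermost element first.

    element_chain lists attribute dictionaries from the action element
    up to the document element, mirroring the processor's ancestor walk.
    """
    for attrs in element_chain:
        if param in attrs:
            return attrs[param]
    return None  # no default found anywhere on the chain

def resolve(param, request_params, element_chain):
    # A value supplied in the request overrides any attribute default.
    if param in request_params:
        return request_params[param]
    return default_for(param, element_chain)

# Mirrors Example 14-10: a query with max="5" overrides the document
# element's max="10"; a query with no max attribute inherits 10.
doc = {"max": "10"}
print(resolve("max", {}, [{"max": "5"}, doc]))   # 5: query-level default wins
print(resolve("max", {}, [{}, doc]))             # 10: inherited from document element
print(resolve("max", {"max": "25"}, [{}, doc]))  # 25: request value overrides
```

A parameter that resolves to None here corresponds to the NULL binding (or empty string, for lexical parameters) described in the text.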
If an action encounters a reference to a parameter named param in either a bind-params attribute or in a lexical parameter reference, then the value of the param parameter is resolved in the following order:

The value of the page-private parameter named param, if set
The value of the request parameter named param, if supplied
The default value provided by an attribute named param on the current action element or one of its ancestor elements
The value NULL for bind variables and the empty string for lexical parameters

For XSQL pages that are processed by the XSQL servlet over HTTP, you can also set and reference HTTP session-level variables and HTTP cookies as parameters. For XSQL pages processed through the XSQL servlet, the value of a parameter param is resolved in the following order:

The value of the page-private parameter param, if set
The value of the cookie named param, if set
The value of the session variable named param, if set
The value of the request parameter named param, if supplied
The default value provided by an attribute named param on the current action element or one of its ancestor elements
The value NULL for bind variables and the empty string for lexical parameters

The resolution order means that a user cannot supply a parameter value in a request to override a parameter of the same name that is set in the HTTP session or set as a cookie that persists across browser sessions.

With XSQL servlet properly installed on your Web server, you can access XSQL pages by following these basic steps:

Copy an XSQL file to a directory under the virtual hierarchy of your Web server. Example 14-3 shows the sample page AvailableFlightsToday.xsql. You can also deploy XSQL pages in a standard J2EE WAR file, which occurs when you use Oracle JDeveloper 10g to develop and deploy your pages to Oracle Application Server.

Load the page in your browser.
For example, if the root URL is yourcompany.com, then you can access the AvailableFlightsToday.xsql page through a Web browser by requesting the following URL:

The XSQL page processor automatically materializes the results of the query in your XSQL page as XML and returns them to the requester. Typically, another server program requests this XML-based datagram for processing, but if you use a browser such as Internet Explorer, then you can directly view the XML result as shown in Figure 14-4.

Figure 14-4 XML Result From XSQL Page (AvailableFlightsToday.xsql) Query

If the canonical <ROWSET> and <ROW> XML output from Figure 14-4 is not the XML format you need, then you can associate an XSLT stylesheet with your XSQL page. The stylesheet can transform the XML datagram in the server before returning the data.

When exchanging data with another program, you typically agree on a DTD that describes the XML format for the exchange. Assume that you are given the flight-list.dtd definition and are told to produce your list of arriving flights in a format compliant with the DTD. You can use a visual tool such as XML Authority to browse the structure of the flight-list DTD, as shown in Figure 14-5.

Figure 14-5 Exploring flight-list.dtd with XML Authority

Figure 14-5 shows that the standard XML formats for flight lists are as follows:

<flight-list> element, which contains one or more <flight> elements
<flight> elements, which have attributes airline and number, and each of which contains an <arrives> element
<arrives> elements, each of which contains text

Example 14-13 shows the XSLT stylesheet flight-list.xsl. By associating the stylesheet with the XSQL page, you can change the default <ROWSET> and <ROW> format into the industry-standard <flight-list> and <flight>.
Example 14-13 flight-list.xsl

An XSLT stylesheet is a template that includes the literal elements that you want to produce in the resulting document, such as <flight-list>, <flight>, and <arrives>, interspersed with XSLT actions that enable you to do the following:

Loop over matching elements in the source document with <xsl:for-each>
Plug in the values of source document elements where necessary with <xsl:value-of>
Plug in the values of source document elements into attribute values with the {some_parameter} notation

The following item has been added to the top-level <flight-list> element in the Example 14-13 stylesheet:

xmlns:

This attribute defines the XML namespace named xsl and identifies the URL string that uniquely identifies the XSLT specification. Although it looks just like a URL, think of the string as the "global primary key" for the set of elements defined in the XSLT 1.0 specification. When the namespace is defined, you can use the <xsl:XXX> action elements in the stylesheet.

You can associate the flight-list.xsl stylesheet with AvailableFlightsToday.xsql in Example 14-3 by adding an <?xml-stylesheet?> instruction to the top of the page. Example 14-14 illustrates this technique.

Example 14-14 Associating flight-list.xsl with the Page

<?xml-stylesheet type="text/xsl" href="flight-list.xsl"?>

Associating an XSLT stylesheet with the XSQL page causes the requesting program or browser to view the XML in the format specified by the flight-list.dtd you were given. Figure 14-6 illustrates a sample browser display.

Figure 14-6 XSQL Page Results in XML Format

To return the same XML data in HTML instead of an alternative XML format, use a different XSLT stylesheet. For example, rather than producing elements such as <flight-list> and <flight>, you can write a stylesheet that produces HTML elements such as <table>, <tr>, and <td>. The result of the dynamically queried data then looks like the HTML page shown in Figure 14-7.
Instead of returning raw XML data, the XSQL page leverages server-side XSLT transformation to format the information as HTML for delivery to the browser.

Figure 14-7 Using an XSLT Stylesheet to Render HTML

Similar to the syntax of the flight-list.xsl stylesheet, the flight-display.xsl stylesheet shown in Example 14-15 looks like a template HTML page. It contains <xsl:for-each>, <xsl:value-of>, and attribute value templates such as {DUE} to plug in the dynamic values from the underlying <ROWSET> and <ROW> structured XML query results.

Example 14-15 flight-display.xsl

Note: The stylesheet produces well-formed HTML. Each opening tag is properly closed (for example, <td>…</td>); empty tags use the XML empty element syntax <br/> instead of just <br>.

You can achieve useful results quickly by combining the power of the following:

Parameterized SQL statements to select information from the Oracle database
Industry-standard XML as a portable, interim data exchange format
XSLT to transform XML-based datagrams into any XML- or HTML-based format

The oracle.xml.xsql.XSQLRequest class enables you to use the XSQL page processor in your Java programs. To use the XSQL Java API, follow these basic steps:

Construct an instance of XSQLRequest, passing the XSQL page to be processed into the constructor as one of the following:

String containing a URL to the page
URL object for the page
In-memory XMLDocument

Invoke one of the process() methods on the object to write the result to a PrintWriter or OutputStream.

The ability to pass the XSQL page as an in-memory XMLDocument object means that you can dynamically generate any valid XSQL page for processing. You can then pass the page to the XSQL engine for evaluation. When processing a page, you may want to perform additional tasks as part of the request, such as passing a set of parameters to the request or supplying a posted XML document with the XSQLRequest.setPostedDocument() method.
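A minimal sketch of these steps follows. It assumes the three-argument process(params, out, err) overload accepting a Dictionary of parameters, and it requires the Oracle XDK jars on the classpath to compile; the file path and parameter name are illustrative:

```java
import java.io.PrintWriter;
import java.util.Hashtable;
import oracle.xml.xsql.XSQLRequest;

public class XSQLRequestSketch {
    public static void main(String[] args) throws Exception {
        // Construct the request from a URL string naming the page.
        XSQLRequest req =
            new XSQLRequest("file:///pages/AvailableFlightsToday.xsql");

        // Parameters passed here become available to bind-params
        // and {@...} references inside the page.
        Hashtable<String, String> params = new Hashtable<>();
        params.put("City", "JFK");

        // Process the page, writing the datagram to standard out and
        // any page-processing errors to standard error.
        req.process(params,
                    new PrintWriter(System.out),
                    new PrintWriter(System.err));
    }
}
```

Because the page is named by URL, the same code works for pages on the file system or served over HTTP; to process a dynamically generated page instead, pass an in-memory XMLDocument to the constructor.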
Example 14-16 shows how you can process a page by using XSQLRequest.

Example 14-16 XSQLRequestSample Class

See Also: Chapter 15, "Using the XSQL Pages Publishing Framework: Advanced Topics" to learn more about the XSQL Java API

This section describes the following topics:

Hints for Using the XSQL Servlet
Resolving Common XSQL Connection Errors
Security Considerations for XSQL Pages

HTTP parameters with multibyte names, for example, a parameter whose name is in Kanji, are properly handled when they are inserted into your XSQL page with the <xsql:include-request-params> element. An attempt to refer to a parameter with a multibyte name inside the query statement of an <xsql:query> tag returns an empty string for the parameter value. As a workaround use a nonmultibyte parameter name. The parameter can still have a multibyte value that can be handled correctly.

This section lists the following XSQL Servlet hints:

Specifying a DTD While Transforming XSQL Output to a WML Document
Testing Conditions in XSQL Pages
Passing a Query Result to the WHERE Clause of Another Query
Handling Multi-Valued HTML Form Parameters
Invoking PL/SQL Wrapper Procedures to Generate XML Datagrams
Accessing Contents of Posted XML
Changing Database Connections Dynamically
Retrieving the Name of the Current XSQL Page

You can specify a DTD while transforming XSQL output to a WML document for a wireless application. The technique is to use a built-in facility of the XSLT stylesheet called <xsl:output>. The following example illustrates this technique:

<xsl:stylesheet xmlns:
  <xsl:output
  <xsl:template
    ...
  </xsl:template>
  ...
</xsl:stylesheet>

The preceding stylesheet produces an XML result that includes the following code, where "your.dtd" can be any valid absolute or relative URL:

<!DOCTYPE xxxx SYSTEM "your.dtd">

You can include if-then logic in your XSQL pages. Example 14-17 illustrates a technique for executing a query based on a test of a parameter value.
Example 14-17 Conditional Statements in XSQL Pages

<xsql:if-param
  <xsql:query>
    SELECT ....
  </xsql:query>
</xsql:if-param>
<xsql:if-param
  <xsql:query>
    SELECT ....
  </xsql:query>
</xsql:if-param>

See Also: Chapter 30, "XSQL Pages Reference" to learn about the <xsql:if-param> action

If you have two queries in an XSQL page, then you can use the value of a select list item of the first query in the second query by using page parameters. Example 14-18 illustrates this technique.

Example 14-18 Passing Values Among SQL Queries

In some situations you may need to process multi-valued HTML <form> parameters, such as those produced by <input name="choices" type="checkbox"> elements. Use the parameter array notation on your parameter name (for example, choices[]) to refer to the array of values from the selected check boxes. Assume that you have a multi-valued parameter named guy. You can use the array parameter notation in an XSQL page as shown in Example 14-19.

Example 14-19 Handling Multi-Valued Parameters

Assume that this page is requested with a URL that contains multiple parameters of the same name to produce a multi-valued parameter. The page returned looks like the following:

<page>
  <guy-list>Curly,Larry,Moe</guy-list>
  <quoted-guys>'Curly','Larry','Moe'</quoted-guys>
  <guy>
    <value>Curly</value>
    <value>Larry</value>
    <value>Moe</value>
  </guy>
</page>

You can also use the value of a multi-valued page parameter in a SQL statement WHERE clause by using the code shown in Example 14-20.

Example 14-20 Using Multi-Valued Page Parameters in a SQL Statement

<page connection="demo" xmlns:
  <xsql:set-page-param
  <xsql:query>
    SELECT * FROM sometable WHERE name IN ({@quoted-guys})
  </xsql:query>
</page>

You cannot set parameter values by binding them in the position of OUT variables with <xsql:dml>. Only IN parameters are supported for binding. You can create a wrapper procedure, however, that constructs XML elements with the HTP package.
Your XSQL page can then invoke the wrapper procedure with <xsql:include-owa>. Example 14-21 shows a PL/SQL procedure that accepts two IN parameters, adds them and puts the sum in one OUT parameter, then multiplies them and puts the product in a second OUT parameter.

Example 14-21 addmult PL/SQL Procedure

CREATE OR REPLACE PROCEDURE addmult(arg1 NUMBER, arg2 NUMBER,
                                    sumval OUT NUMBER, prodval OUT NUMBER) IS
BEGIN
  sumval  := arg1 + arg2;
  prodval := arg1 * arg2;
END;

You can write the PL/SQL procedure in Example 14-22 to wrap the procedure in Example 14-21. The addmultwrapper procedure accepts the IN arguments that the preceding addmult procedure expects, and then encodes the OUT values as an XML datagram that it prints to the OWA page buffer.

Example 14-22 addmultwrapper PL/SQL Procedure

The XSQL page shown in Example 14-23 constructs an XML document by including a call to the PL/SQL wrapper procedure.

Example 14-23 addmult.xsql

<page connection="demo" xmlns:
  <xsql:include-owa ...>
    BEGIN addmultwrapper(?,?); END;
  </xsql:include-owa>
</page>

Suppose that you invoke addmult.xsql from a browser with a URL that supplies the values 30 and 45 for the two arguments. The XML datagram returned by the servlet reflects the OUT values as follows:

<page>
  <addmult><sum>75</sum><product>1350</product></addmult>
</page>

The XSQL page processor can access the contents of any XML document posted with the request. For example, an XSQL page can access the contents of an inbound SOAP message by using the xpath="XPathExpression" attribute in the <xsql:set-page-param> action. Alternatively, custom action handlers can gain direct access to the SOAP message body by calling getPageRequest().getPostedDocument(). To create the SOAP response body to return to the client, use an XSLT stylesheet or a custom serializer implementation to write the XML response in an appropriate SOAP-encoded format.
See Also: The Airport SOAP demo for an example of using an XSQL page to implement a SOAP-based Web Service

Suppose that you want to choose database connections dynamically when invoking an XSQL page. For example, you may want to switch between a test database and a production database. You can achieve this goal by including an XSQL parameter in the connection attribute of the XSQL page. Make sure to define an attribute of the same name to serve as the default value for the connection name. Assume that in your XSQL configuration file you define connections for database testdb and proddb. You then write an XSQL page with the following <xsql:query> element:

<xsql:query
  ...
</xsql:query>

If you request this page without any parameters, then the value of the conn parameter is testdb, so the page uses the connection named testdb defined in the XSQL configuration file. If you request the page with conn=proddb, then the page uses the connection named proddb instead.

An XSQL page can access its own name in a generic way at runtime in order to construct links to the current page. You can use a helper method like the one shown in Example 14-24 to retrieve the name of the page inside a custom action handler.

Example 14-24 Obtaining the Name of the Current XSQL Page

This section contains tips for responding to XSQL errors:

Receiving "Unable to Connect" Errors
Receiving "No Posted Document to Process" When Using HTTP POST

Suppose that you are unable to connect to a database and receive errors similar to the following when running the helloworld.xsql sample program:

Oracle XSQL Servlet Page Processor
XSQL-007: Cannot acquire a database connection to process page.
Connection refused(DESCRIPTION=(TMP=)(VSNNUM=135286784)(ERR=12505)
(ERROR_STACK=(ERROR=(CODE=12505)(EMFI=4))))

The preceding errors indicate that the XSQL servlet is attempting the JDBC connection based on the <connectiondef> information for the connection named demo, assuming you did not modify the helloworld.xsql demo page.
By default the XSQLConfig.xml file comes with the entry for the demo connection that looks like the following (use the correct password):

<connection name="demo">
  <username>scott</username>
  <password>password</password>
  <dburl>jdbc:oracle:thin:@localhost:1521:ORCL</dburl>
  <driver>oracle.jdbc.driver.OracleDriver</driver>
</connection>

The error is probably due to one of the following reasons:

Your database is not on the localhost machine.
Your database SID is not ORCL.
Your TNS Listener Port is not 1521.

When trying to post XML information to an XSQL page for processing, it must be sent by the HTTP POST method. This transfer can be effected by an HTML form or an XML document sent by HTTP POST. If you try to use HTTP GET instead, then there is no posted document, and hence you get the "No posted document to process" error. Use HTTP POST instead to cause the correct behavior.

This section describes best practices for managing security in the XSQL servlet:

Installing Your XSQL Configuration File in a Safe Directory
Disabling Default Client Stylesheet Overrides
Protecting Against the Misuse of Substitution Parameters

The XSQLConfig.xml configuration file contains sensitive database username and password information. This file should not reside in any directory that maps to a virtual path of your Web server, nor in any of its subdirectories. The only required permissions for the configuration file are read permission granted to the UNIX account that owns the servlet engine. Failure to follow this recommendation could mean that a user of your site could browse the contents of your configuration file, thereby obtaining the passwords to database accounts.

By default, the XSQL page processor enables the user to supply a stylesheet in the page request by passing a value for the special xml-stylesheet parameter.
If you want the stylesheet referenced by the server-side XSQL page to be the only legal stylesheet, then include the allow-client-style="no" attribute on the document element of your page. You can also globally change the default setting in the XSQLConfig.xml file to disallow client stylesheet overrides. If you take either approach, then the only pages that allow client stylesheet overrides are those that include the allow-client-style="yes" attribute on their document element.

Any product that supports the use of lexical substitution variables in a SQL query can cause a developer problems. Any time you deploy an XSQL page that allows part or all of a SQL statement to be substituted by a lexical parameter, you must make sure that you have taken appropriate precautions against misuse. For example, one of the demonstrations that comes with XSQL Pages is the Adhoc Query Demo. It illustrates how you can supply the entire SQL statement of an <xsql:query> action handler as a parameter. This technique is a powerful and beneficial tool in the right hands, but if you deploy a similar page to your production system, then the user can execute any query that the database security privileges for the connection associated with the page allow. For example, the Adhoc Query Demo is set up to use a connection that maps to the scott account, so a user can query any data that scott would be allowed to query from SQL*Plus.

You can use the following techniques to make sure your pages are not abused:

Make sure the database user account associated with the page has only the privileges for reading the tables and views you want your users to see.

Use true bind variables instead of lexical bind variables when substituting single values in a SELECT statement. If you need to parameterize syntactic parts of your SQL statement, then lexical parameters are the only way to proceed.
Otherwise, you should use true bind variables so that any attempt to pass an invalid value generates an error instead of producing an unexpected result.
http://docs.oracle.com/cd/E11882_01/appdev.112/e23582/adx_j_xsqlpub.htm
Jan 17, 2012 06:09 PM | SamuelCampbell

I want the headline of a story to be an action link that will load a page with only that one story displayed. This is what I am trying at the moment; I think I am on the right lines but I'm not sure. I am converting the headline to a string and then want to assign it to a ViewBag so that I can use it in SQL, or is there a better way to do this to show only that one story? I have tried to do this two different ways; in the second I wanted to use ViewBag.Selection in the SQL:

<td class = "tdone">@Html.ActionLink(item.Headline.ToString(), "../home/NewsStory")</td>

<td class = "tdone">@Html.ActionLink(item.Headline.ToString(), "../home/NStory", new { Headline = ViewBag.Selection })</td>

Jan 17, 2012 09:03 PM | martincjarvis

I'm not 100% certain what you're trying to do. Did you want your final markup to be along the lines of:

<a href="/news/?itemid=56">Item Headline</a>

or

<a href="/news/item-headline/">Item Headline</a>

If it's the former then you'll need an action method to take a parameter of 'itemid' holding the unique identifier of the headline. This is an old-school link though; the power of MVC is to make friendly aliases. If it's the latter then the url form of the headline (i.e. alphanumeric and dashes only) should be enough to resolve to the news item. You'll need to define a new route to capture the headline as a variable.

Jan 17, 2012 09:22 PM | martincjarvis

If you've not got any requirements around SEO for the link, then go for the simple item id action link. It will be simpler to debug and work with, as you'll have a known key (probably a unique identifier in the database).

Jan 17, 2012 10:24 PM | martincjarvis

Ah, no. It's very bad practice to put SQL into a url/form variable/cookie, etc (basically anything that comes from the client) as it's a security risk (google for SQL Injection).
What you do instead is provide the key parameter (such as an integer) as the action argument and then pass that into a parameterised SQL statement (or pass it to an ORM, etc). For example, if my news item model was:

public class NewsItem
{
    public int Id { get; set; }
    public string Headline { get; set; }
    [DataType(System.ComponentModel.DataAnnotations.DataType.Html)]
    public string Body { get; set; }
}

My controller class can have two methods: one to list all entries and one to display the linked-to item. In this case I've just hard-coded a dictionary of news items.

public class NewsController : Controller
{
    // this represents your database
    public static Dictionary<int, NewsItem> _items = new Dictionary<int, NewsItem>{
        {1, new NewsItem{Id=1, Headline="Important News", Body="<p>This is fairly important</p>"}},
        {2, new NewsItem{Id=2, Headline="News", Body="<p>This is just news</p>"}},
        {3, new NewsItem{Id=3, Headline="M'eh News", Body="<p>This probably just about a kitten</p>"}}
    };

    // this is your show-all-items action
    public ActionResult Index()
    {
        ViewData.Model = _items.Values.OrderBy(ni => ni.Id).ToArray();
        return View();
    }

    // this is your show-a-single-item action
    public ActionResult Item(int id)
    {
        NewsItem ni = _items[id];
        ViewData.Model = ni;
        return View();
    }
}

You'll notice that the Item method takes the id of the item to display, and the body of the method uses that value to select the correct item. The index view for this is:

@model BasicMVC3.Models.NewsItem[]
@{
    ViewBag.Title = "Index";
}
<h2>Index</h2>
<ul>
    @foreach (var ni in Model)
    {
        <li>@Html.ActionLink(ni.Headline, "Item", "News", new { id = ni.Id }, null)</li>
    }
</ul>

You'll see that the ActionLink is configured to display the headline as the link text, and to refer to the Item action on the News controller passing in a value for id. I believe that this is what you're looking for.
I've uploaded a very basic functional project for you to look at. I hope it helps.

Jan 18, 2012 09:02 PM | SamuelCampbell

I have tried to implement this but I have hit a snagging point. This is the action link that takes an int through to the controller, and then the ActionResult method:

<td class = "tdone">@Html.ActionLink(item.Headline, "NewsStory", "Home", new { id = item.Id }, null)</td>

public ActionResult NewsStory(int id)
{
    //NewsItem news = item[id];
    NewsItem item = News[id];
    ViewData.Model = item;
    return View();
}

Jan 18, 2012 09:03 PM | SamuelCampbell

Sorry, forgot to include how I create the data object and the error message.

Jan 19, 2012 10:25 AM | martincjarvis

SamuelCampbell - the error suggests that 'News' hasn't been defined anywhere; try looking at the line number in the exception (57?).

Jan 19, 2012 11:31 AM | SamuelCampbell

I have looked at line 57 and the error is "The name does not exist in the current context". var news is declared within the public ActionResult Index method but it is not declared within the public ActionResult NewsStory method; does this mean that I should declare or instantiate it again within this method, or declare it as a global variable? Saying that, I have tried to declare it outside of the functions and Visual Studio did not like this. The first block of code is the HomeController, and the second is the Index.cshtml view from which the call is made to show just the one story. The third block of code is the NewsStory.cshtml view that I intend to show just the one story. I am hoping that I have just made a small oversight somewhere and that the code in the majority is good to go?
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.Entity;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using CHT2520_U0858987_1.Models; //brings the model into play
using PagedList;

namespace CHT2520_U0858987_1.Controllers
{
    public class HomeController : Controller
    {
        private UniversityNewsEntities db = new UniversityNewsEntities();

        public ActionResult NewsStory(int id)
        {
            //NewsItem news = item[id];
            NewsItem item = News[id];
            ViewData.Model = item;
            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

@model PagedList.IPagedList<CHT2520_U0858987_1.Models.NewsItem>
@using System.Linq;
@{
    ViewBag.

@using (Html.BeginForm())
{
    <!-- TEXT INPUT BOX TO CAPTURE SEARCH STRING TO SEARCH HEADLINES -->
    @Html.TextBox("SearchString", ViewBag.CurrentFilter as string)
    <input type="submit" value="Search news archive" class="button1" />
}
</form>

<table>
    <!-- LOOP THROUGH MODEL AND FOR EACH ITEM CREATE AN ENTRY IN TABLE -->
    @foreach (var item in Model)
    {
        <tr>
            <!-- CONTAINS IMAGE SCR TAG FOR IMAGE -->
            <td class = "smallpic"><img alt = "relative picture" src='@item.Image' width = "70" height = "70"/></td>
            <!-- CAPTURE HEADLINE TO USE IN SQL -->
            <!-- <td class = "tdone">@item.Headline</td> -->
            <td class = "tdone">@Html.ActionLink(item.Headline, "NewsStory", "Home", new { id = item.Id }, null)</td>
            <!-- CAPTURE DATE TO USE IN SWITCH STATEMENT FOR FURTHER DATE SORT FUNCTIONALITY -->
            <td class = "tdtwo">@item.Posted.ToString("dddd, MMMM d, yyyy")</td>
            <!-- USE RAW TO REMOVE THE HTML TAGS FROM STORY -->
            <td class = "tdthree">@Html.Raw(item.Story)</td>
        </tr>
    }
</table>

<div>
    Page @(Model.PageCount < Model.PageNumber ?
0 : Model.PageNumber) of @Model.PageCount

    @if (Model.HasPreviousPage)
    {
        @Html.ActionLink("<<", "Index", new { page = 1, sortOrder = ViewBag.CurrentSort, currentFilter = ViewBag.CurrentFilter })
        @Html.Raw(" ");
        @Html.ActionLink("< Prev", "Index", new { page = Model.PageNumber - 1, sortOrder = ViewBag.CurrentSort, currentFilter = ViewBag.CurrentFilter })
    }
    else
    {
        @:<<
        @Html.Raw(" ");
        @:< Prev
    }

    @if (Model.HasNextPage)
    {
        @Html.ActionLink("Next >", "Index", new { page = Model.PageNumber + 1, sortOrder = ViewBag.CurrentSort, currentFilter = ViewBag.CurrentFilter })
        @Html.Raw(" ");
        @Html.ActionLink(">>", "Index", new { page = Model.PageCount, sortOrder = ViewBag.CurrentSort, currentFilter = ViewBag.CurrentFilter })
    }
    else
    {
        @:Next >
        @Html.Raw(" ")
        @:>>
    }
</div>

@{
    Layout = "../Shared/_Layout.cshtml"; <!-- CALL LAYOUT -->
}

Jan 19, 2012 11:50 AM | martincjarvis

Yes, in .NET variables are scoped, so variables declared within one method are not available in others and will need to be re-declared. Also, C# is case sensitive, so News != news. You could try replacing the line:

NewsItem item = News[id];

with something along the lines of:

var item = db.NewsItems.Where(ni => ni.Id == id).First();

I hope this helps.

Jan 19, 2012 12:37 PM | SamuelCampbell

Thanks Martin, it is now redirecting to the page and the url contains the id of the story, so I guess that this part is working fine, but there is no story displaying on the page? Here is an example of the url with the id.

Jan 20, 2012 02:39 PM | SamuelCampbell

Cracked it! Thanks for your help Martin. I think I used the process of finding 101 ways of how not to do it, but the following works fine!

public ActionResult NewsStory(int id)
{
    var item = from s in db.NewsItems.Where(ni => ni.Id == id)
               select s;
    return View(item);
}
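For what it's worth, a slightly more defensive version of the final action (a hypothetical refinement, reusing the same db context and model names as above) would materialise a single item and handle unknown ids:

```csharp
// Hypothetical refinement of the working action above: FirstOrDefault
// materialises one NewsItem (or null for an unknown id) instead of
// handing an IQueryable to the view.
public ActionResult NewsStory(int id)
{
    var item = db.NewsItems.FirstOrDefault(ni => ni.Id == id);
    if (item == null)
        return HttpNotFound();  // MVC 3: respond 404 for an id that isn't there
    return View(item);          // the view's @model is then a single NewsItem
}
```

If NewsStory.cshtml declares @model CHT2520_U0858987_1.Models.NewsItem, passing a single entity keeps the model types aligned.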
http://forums.asp.net/t/1759986.aspx
Aros/Platforms/68k support/Developer/Libraries

Contents
1 Introduction
2 Contribution
3 Commodities

Introduction

Libraries/Applications "automagically" open libraries that were available when they were compiled. If there is an issue, it would be easiest to downgrade the AROS version of graphics.library to 39 (in the graphics.conf file) for the purpose of "frankenrom". Another option: define a global GfxBase variable somewhere in layers; this should prevent the "automagic" from working, and thus you will need to open graphics manually.

- autoconfig (wip, fast ram expansions already work, autoboot rom support to do. Very important for UAE harddrive testing)
- expansion.library is currently set as noautolib. Is it because expansion (autoconfig) is not needed in non-Amiga systems, or does it need to be initialized some other way?
- utility.library math functions are an ugly piece of work. Double return values (D0 and D1), and A0 and A1 must be preserved. Eww. Just eww. Looks like a bunch of ASM work needs to be done here for m68k.

Did you check arch/.unmaintained/m68k-native/utility? You only need to dynamically select between 68020+ and 68000/010 versions.

Getting rid of the rom directory, as different archs may put more or fewer modules in rom. Some archs may even make it configurable which modules to put in rom, depending on how much non-volatile storage is available. Having the rom/ directory made the m68k port easier, simply because it only had to support the librom.a minimal AROS C library, not the full AROS C. In ABI V1 librom.a is gone. There is one library, arosstdc.library, that will be usable by all modules. It will be initialized as one of the first modules. Therefore the IO functions are also moved to arosstdcdos.library, which is a disk-based module. How are you handling the ctype.h family of functions? I would like to see an option for a LANG=C-only stripped-down set.

I think there should be defines for TaggedOpenLibrary() library id numbers. (but where?
It should be only used by rom code). It would be quite useful in m68k-amiga because it makes the rom file slightly smaller (currently there are lots of library name strings here and there). UtilityBase = TaggedOpenLibrary(7); looks quite ugly..

workbench/libs/gallium: -Wall cleanup. Different enum types are being compared, and the compiler is unhappy. I looked at the enum lists for each type, and they seem very different. The code is like this in the trunk of the main Mesa repo as well (workbench/libs/mesa/).

ROMmable

I would like to suggest the following changes to the genmodule tool, to better support ROMmable modules:

I have an alternative solution, also used in the ABI V1 branch. The only assumption made is that the LIBBASE of the library we are executing code in is in A6 (on i386 this is in *(%ebx)).

* LIBBASESIZE:
- Would have space for an additional BPTR for the library's SegList
- Space for an additional N struct Library * slots after the end of LIBBASETYPE, where N is the number of auto-opened libraries.
- These slots would hold the opened library handles for the auto-opened libraries.

I don't like this change but won't veto it. It can always be reverted when proper support for global variables is implemented in ROM linking and the ROM bootstrap code. To me write-only media and write-able global variables sound mutually exclusive. Well, actually *read*-only media and write-able global variables sound even more mutually exclusive ;-) Please outline how it would work for an EEPROM or alike.

EEPROM: very easy. In your linker script you declare the .bss and .data sections in RAM, whereas .rodata and .text are in ROM. At startup, the Kickstart copies the .data section stored somewhere in ROM into its proper location in RAM and zeroes the .bss section. And at the same time we could replace the ROMtag scanner with a list of modules that is generated at link time; for example using the AROS symbolsets.
I thought "binary compatibility" goes both ways: original rom modules work with AROS rom modules and vice versa.

IMHO this is the worst solution. Chip RAM is too valuable for that. Either modules are going to be properly, fully rommable, or _everything_ can be relocated to RAM by a tiny relocator and expansion RAM configuration code in ROM. Anyway, both solutions are bad ideas for basic A500/A1200 modes, which will kill my interest instantly.

* set_open_libraries():
- Would take LIBBASE as a parameter
- Would OpenLibrary() into the post-LIBBASETYPE Library * slots

* set_close_libraries():
- Would take LIBBASE as a parameter
- Would CloseLibrary() the post-LIBBASETYPE Library * slots

* An additional header would be '-include libname_libraries.h', which would be autogenerated and have lines like:

#define GM_SEGLIST_SLOT (BPTR(((APTR *)(((IPTR)LIBBASE)+LIBBASESIZE))[0])
#define UtilityBase ((APTR *)(((IPTR)LIBBASE)+LIBBASESIZE))[1]
#define GfxBase ((APTR *)(((IPTR)LIBBASE)+LIBBASESIZE))[2]

I don't want this; I made a big effort in the past to get rid of these '#define libbase' hacks. This code assumes that LIBBASE is available in all functions that call a function of an auto-opened library. This assumption is false. If you see no other way out, please only implement it when compiling the kobjs on m68k. I mean, genmodule could support two types of libraries (or just make a second genrommodule for the sake of it). Whatever assumption is made, it would only apply to code that is explicitly written/modified against that assumption...

To me there is no place in clean code for these #define hacks; they change the semantics of some code (struct IntuitionBase *, function pointers to functions in amiga shared libraries, ...). It is incompatible with how the C library is implemented in ABI V1. The stub code for stack-based functions in a library needs the global libbase, and a #define won't be noticed. The problem for the C library still remains in ABI V1.
Maybe a static C link library can be reintroduced in the arch/m68k-native tree, to allow me to have what in my mind is a proper and clean m68k branch.

Constraints:
(1) Don't want to put BSS in (precious) Chip RAM
(2) Some machines have Fast RAM, some don't, so I can't *at compile time* determine where to link the .BSS to.
(3) Due to (1) and (2), .text cannot have absolute references to anything in the BSS - i.e. FooBase, BarBase, etc.

The '#define hacks' allow me to satisfy all constraints, and I do not plan to add them to the source code itself - only genmodule, and only when it is targeting a read-only ROM. The end result of all of this is that genmodule would run in two (target-dependent) modes: .bss mode for disk-loadable modules, and .rela.bss mode for ROM modules. Hmm... .rela.bss.... (/me thinks) Also I would be able to *remove* a lot of the '#define hacks' that already exist in the codebase, since only genmodule would need to be concerned about where to place the library handles.

An alternative: make a custom link script for the ROM that links all kobjs together with the BSS sections linked to the first page(s) of chip RAM. Bootstrap code would then need to copy the initialization values to this page. In a later stage, on machines with an MMU, remap the ROM and the BSS section to fast RAM.

In the _start.c file you would have the following:

const IPTR UtilityBase_offset = LIBBASESIZE+1*sizeof(APTR);
const IPTR GfxBase_offset = LIBBASESIZE+2*sizeof(APTR);

Code should then do

#include <proto/utility_rel.h>
#include <proto/gfx_rel.h>

in place of

#include <proto/utility.h>
#include <proto/gfx.h>

gfx_rel.h would #include <inline/gfx_rel.h>, and in inline/gfx_rel.h you would have the following:

static inline void RectFill(...)
{
    extern const IPTR GfxBase_offset;
    struct GfxBase *GfxBase = (struct GfxBase *)((char *)(GET_A6)+GfxBase_offset);
    AROS_LC5NR(void, RectFill, ...)
}

This last code, without inline, can also be used as a stub in the static link library.
The link library would be called libgfx_rel.a, etc. If you look in the repository in the branch branches/ABI_V1/trunk-genmodule_pob/AROS/tools/genmodule you should find most infrastructure there. Only GET_A6 is called GET_LIBBASE and is implemented as inline asm in aros/$cpu/cpu.h (only i386 ATM). The problem is that it can't be introduced in the main trunk without breaking binary compatibility on i386/x86_64, because there is no register allocated for that purpose now.

This would move the auto-opened libraries to the LibBase, out of BSS, making the process of building ROMmable libraries easier. No code changes would need to be made to existing libraries, and no ABIs would need to change.

I would think you then need to pass LIBBASE as a parameter to the function explicitly. You can still do this and put it in A6.

The only quirk would be things like callbacks that are not listed in the .conf file, which would want to get handles to the library's opened libbases. How is that handled? Or should those callbacks be specified in the .conf? ReadStruct/WriteStruct are good examples. Their callback routines (in the Hook structure) will probably want to talk to both the LibBase class and DOSBase.

Basically, all code that currently uses libbases should also provide alternative code that uses the _offset variable. So in the ABI V1 branch, for each library there are now libxxx.a and libxxx_rel.a static link libraries generated. The former has stubs for the functions in the library using the global libbase, the latter stubs using _offset. If you link code with the latter lib but don't define the _offset variable, you get a linker error. The m68k ROM module would need to link -lxxx_rel (or uselibs=xxx_rel).

For Hooks, I think we need to provide support for saving A6/LIBBASE in the Hook data and restoring it by a special support HOOK function. I don't know if this is possible and whether it would have an impact on the number of arguments that can be passed.
For the hooks I was looking at (datatypes.library), the LibBase was already in the Hook's h_Data. So I just added struct DataTypesBase *DataTypesBase = hook->h_Data; at the start of the hooks manually, and everything was good. (I have a patch for datatypes.library that makes it ROMmable, using the '#define hack', putting the LibBases in the LIBBASETYPE explicitly, explicitly opening them on the library's init, and modifying the hook routines as described above.)

Could BHFormat be reworked so that it doesn't open muimaster.library if run from the CLI? The system could ask to insert different disks. A CLI version of format could be included. System/Format could be reworked to get rid of some dependencies, or at least work with the basic libraries (I don't think iffparse or datatypes should be required for formatting, but they are subdependencies of muimaster.library). How about reworking BHFormat to have a ./configure option to only use gadtools.library instead of MUI Master? I have to have gadtools in the ROM anyway for AmigaOS 3.x.

Which ones should I work on making ROMable?

C/Shell - Yes! Shell is a good one since it's in the original kickstart, and when booting without a Startup-Sequence it allows you to perform operations without the need of accessing the HD; it's a good candidate.

Libs/arosc.library - I suppose that since arosc.library may be used by various programs and system components, it would be quite interesting to have it in ROM to be able to launch various apps from the shell.

Libs/datatypes.library - This requires datatypes on disk, so I'm not sure having it in ROM would help much. It's nice to have because programs requiring it wouldn't fail to load, but since there wouldn't be any datatype loaded I don't think it's necessary.

Libs/muimaster.library - MUI uses external classes and I don't know if it allows loading apps without external classes present. If it does then it may be useful, but if it doesn't then skip it. Lots of dependencies.
Libs/asl.library - This may come in very handy with an updated Shell with auto-completion...

Libs/diskfont.library - Not a priority. If you want to load fonts from disk then you can also store diskfont.library on the same disk.

Libs/gadtools.library - This may be useful for some basic apps. I can't remember if it's present in the default KS; if it's not then it's not so high-priority. KS 3.0 seems to want it in ROM, along with If, EndIf, and some other commands.

System/Format - If you have a system capable of accessing writable media then you'll be able to access files outside rom too. It's a nice thing to have, but IMHO "Format" is not a priority.

The basic infrastructure for ROMmable shell commands. There seems to be an alignment issue, though, since sometimes it just works (adding 'shellcommands' to KRSRC), and sometimes I get a hang. Can someone review the 'pure' changes I needed to make to workbench/c/shellcommands/Shell? Test the built shell; it seems to work for trivial tasks.

At first, I found it a little difficult to program in the .bss- and .data-free Pure style, coming from an MMU OS (Linux) where shared executable pages are easy, and -fPIC compilers actually work. But I have come to really like the Pure style, and for AROS we should promote the style and its advantages more. Here's, just off the top of my head, some of the best parts of Pure. All Pure programs/libraries can:

- have their .text segments marked read-only
- be linked into a ROM
- have multiple executions of the same object, i.e. Shell[1], Shell[2], Shell[3] all share the same .text segment

Due to the above, for low-memory systems, Pure is a total win. In an ideal world compilers wouldn't support BSS or data sections. All these MMU tricks etc. are just work-arounds for programmers' laziness. And don't get me started on garbage collection ;-) Yes, it's a little more work to code in Pure, but I think it's worth it.

I can't disagree more.
In my mind the compiler and/or build infrastructure should make it possible to have pure programs using global variables, auto-opening libraries, etc.

Then this would require a '-fPIC data' style, correct? Where the .text segment could be linked to any location, and there is a 'task global' (either a member of struct Task and/or a register) that points to the task's "private" .bss/.data area. This is similar to the Amiga BCPL support, in fact, and the tc_GlobVec pointer would be just the field to use, since it would essentially be providing the same service. But I'd hate to be the one to implement the compiler changes for all the supported AROS archs to make this happen.

I'm more in favor of a register for storing the global pointer. It would be the same register also used for the libbase, as it has a similar function. On i386 ABI V1 I used %ebx for this; it is a stack pointer, and *(%ebx) is the place for storing the base. For m68k, A6 would be used.

-fPIC under ELF for most architectures wants to use a 'global offset table' and position-independent code (which takes ANOTHER register out of the available pool). To make '-fPIC-data' - I'm not sure how to do that easily. In adtools' gcc there is a --baserel option, but I don't know much more than that it exists. Also, I think that in the ABI V1 branch the base is there to make this possible.

Please try to limit the code you convert to this Pure way of doing things.

Can the linker merge identical strings in rom? I noticed the rom is full of dos.library, intuition.library etc. strings (and all kinds of debugging strings too). Wasn't that the purpose of exec.library's TaggedOpenLibrary() - it took a bunch of magic numbers and did the appropriate OpenLibrary("blah.library") for that number. That would solve the repeated string problem. Only do manual opening of libraries when really needed for backwards compatibility. Don't do it for fun. The main goal was to make the program pure.
This means it can now be made resident using C:Resident. The command is very small, and I thought it was a good idea.

It is not about whether it is a small or large program. The problem is that code in the repository is often taken as a starting point by programmers for new projects. This way bad habits get spread. In the past I have spent considerable time getting rid of this manual opening of libraries, often filled with bugs as the exception clauses are not in the common code path. And it really hurts me every time I see this code being re-added. In ABI V1 I do intend to implement a compile switch that makes programs residentable without needing to do anything. It is not for the near future, but I hope you can wait that long.

Dos Library

Autoconfig boot rom is now supported; UAE's uaehf.device gets initialized properly, but it crashes when it is time to init dos.library. AROS dos.library seems to be a normal autoinit library, but official roms do it differently. Well, what ACTUALLY seems to happen is that the bootblock returns both D0 (error code) and A0 (pointer to a Resident rt_Init structure), and the Boot Block loader is supposed to MakeLibrary() with the rt_Init passed in (assuming that the bootblock returned D0==0, it also returned a Resident InitTable in A0). All autobooting harddrives' boot rom code does the following at the end of the boot phase:

d0 = FindResident("dos.library");
move.l d0,a0
move.l RT_INIT(a0),a0
jsr (a0)

AROS dos rt_init contains init tables, not code -> crash. dos rt_init code initializes dos.library "manually"; InitResident() is not used for this. (dos is weird, even after BCPL was gone.)

The first thing to do is to add an option to genmodule so it is able to generate libraries not relying on the RTF_AUTOINIT flag. The rest is pretty straightforward. It will not harm other ports. Some time ago I implemented the reverse thing for resources (option resautoinit); it was used for battclock.resource. It's very easy in fact.
You can reuse genmodule's existing code for resources.

How to do this without breaking non-m68k builds? How to disable autoinit and put the init code pointer in rt_init?

Harddrive boot code includes the final jsr or jmp in the autoboot rom; it can't be patched. Floppy boot code does return the dos resident in A0, which means it can be fixed without touching dos. But nothing prevents a floppy boot block from also having jsr (a0); it isn't going to return, it was only designed this way to allow strap to free temporary resources (boot block buffer, trackdisk, etc.) before starting dos.

Note that replacing AOS dos.library with AROS dos.library will not work, because it is very incompatible in things which interface with filesystems, starting processes (like the Shell/CLI stuff), and probably other stuff. MorphOS dos.library is based on the AROS one but is much more compatible with AOS. You would have more luck with that one. That is ... until you want to use other AROS components like filesystems which use the FSA packet API, which will not work anymore.

Currently the latest problem is simply NewAddTask() getting or reading startup parameters (startup pc etc.) incorrectly and crashing because of a bad initial PC.

Couldn't the initial ROM be mostly a loader for disk-based modules?

Would love to, but afs.handler depends on Intuition, which depends on Graphics, Layers, oop, ... This could probably be changed so that afs.handler only opens and uses intuition if intuition is already loaded at the time the handler wants to show a requester. It already checks if the first intuition screen is open beforehand.

Shouldn't workbench/libs/partition be in rom/partition?

Yes, the entire source tree could do with a clean-up, and this has been discussed before on this dev list.

It looks like a component you'd want to have to be able to boot from a hard disk on m68k.

It is not needed; this is the autobooting harddisk driver's task (read partition tables, add partitions, add possible filesystems in RDB).
Of course we don't need to simulate this with future rom built-in drivers. In existing ports it's used by the strap module to mount partitions, so it needs to be part of the kernel (or, to be exact, loaded by the bootloader). Boot priority is already stored in the partition table. Only in the RDB one. It's not stored at all on CDs, USB flash, etc. The priorities of unpartitioned removable media such as CDs and floppies have traditionally depended on the priority assigned to the drive they're in. Bootable USB sticks use SFS in an RDB and therefore have a priority just like internal HDs. MBR partitions don't store priorities, but there are other reasons why their use for bootable volumes isn't recommended anyway (e.g. a custom device name can't be chosen).

All existing graphics HIDDs seem to include attrbases tables that are not rommable (writable data and bss). We actually have a BSS (ugh) on the m68k-amiga ROM. If you can deal with just having BSS, we're good.

A note/warning about performance loss: Hidd stubs do not generate an error/warning now if used in ROM-able code, but the performance will suffer drastically when they are in use. Instead of initializing the methodID only once, they will call oop.library/GetMethodID() upon every use of these stubs. This cannot be fast... ;) Understood. I'll need to find a (clean) way to AllocMem() an area for all of the stubs in a class to use to store their MIDs. (Just so everyone else knows, this slowdown should only impact m68k-amiga.)

Just committed changes to Intuition, Hyperlayers, and Graphics that remove the need for them to have .BSS and .DATA segments. This is required for the two-rom-split method of getting AROS into UAE.

Is there some kind of framework for compiling and linking ROM built-in resident commands? (At least AddResident and friends seem to be implemented.) About pure binaries, look at C:Dir and some commands which I backported from MorphOS; they are pure. In short, they don't use startup code or any global variables. That's all in fact. Most KS2.0+ bootable disks require built-in resident commands, and I guess the boot shell should be some kind of resident program too. Programs in workbench/c/shellcommands can be integrated into the shell binary. Look at the macros there. They are defined in shcommands_notembedded.h. And there is another file, shcommands_embedded.h. I don't know what is needed to make use of this feature. Looks like no-one has tried it for a while. Look under AROS/workbench/c/shellcommands/, you'll find programs that use a set of macros defined within AROS/compiler/include/aros/shcommands_notembedded.h. Those macros expose an interface that should be reimplemented in shcommands_embedded.h in order to produce programs that can be embedded in ROM. As of now, those macros let you produce reentrant code effortlessly, but unless you want to embed the produced files "as is" within the ROM and use LoadSeg() on them (which could be a viable option, depending on your needs) you need to write the embedded version of those macros. DOS/AddSegment() will be very useful as well. (Have to hook FindSegment() into the LDDaemon to complete the picture.) I'll make sure to have the FindSegment() in LDDaemon at a *lower* priority than on-disk executables, so that we don't lock in the ROM versions.

Amiga IDE support is also on my "implement soon" list. Do I add #ifdefs to the existing ata.device or do I need to do something else? (It seems to be too PC hardware specific and not exactly modular.) I would go for a separate module for Amiga.
The ata.device is a "touchy" ;) subject - it has been broken many times by people trying to improve it, and there never seem to be enough testers available after changes are made ;) Most of those breakages are through necessity, because the old code was severely handicapped/badly designed - personally I think it's working extremely well these days, since there are hardly any reports of it not working, if at all. The only thing (imho) still needing to be done is to correct the initial probing code to not add legacy ports that haven't been associated with an ata controller, if it has found pci controllers with devices attached. I'm not sure how different the Amiga's internal ide implementation is - but I would imagine it would be possible to move the bits that do differ (probing, programming the chipset directly) into separate files and only include those in your specific build, e.g. dma_amiga.c. I'm also not sure how the Elbox Fast ATA etc. controllers work, but it would be nice to have AROS able to support them also (I still have 2, as well as 2 Mediator PCI-bus boards).

I checked the NDK. I also think this change might break binary compatibility on existing ports (BOOL is 2 bytes AFAIK while LONG is 4 bytes). What do I do with dos.conf? Lock/Open (and others) aren't aliases anymore. (dos.conf not committed.) I think the easiest thing to do would be to make stub files in rom/dos/ for the alias functions; then you can check in a dos.conf that doesn't have the aliases.

DevCD NDK 2.0 and 3.1 dos_protos.h:

    LONG Examine( BPTR lock, struct FileInfoBlock *fileInfoBlock );
    LONG ExNext( BPTR lock, struct FileInfoBlock *fileInfoBlock );
    LONG Info( BPTR lock, struct InfoData *parameterBlock );

Same in SAS-C include/clib/dos_protos.h. In fact BOOL and ULONG return types are binary compatible. In any case the value is returned in a CPU register, so it occupies the whole register. The difference is only how it is evaluated.
For example, on m68k a WORD would be evaluated using tst.w and a ULONG using tst.l. The NDK 3.1 definition just guarantees that the upper half of the register will not contain trash. WB3.0 C:Assign seems to do this: if (AssignLock(whatever, NULL) != DOSTRUE) then print "can't cancel <whatever>". Failed. Unfortunately AssignLock's return type is BOOL. Replace it with LONG (for example) and Assign works correctly. It seems both are broken, as BOOL should be typedef'ed as "short" and AssignLock() really should be LONG according to the original AOS includes. Ah, AssignLock returns LONG but AssignLate, AssignPath and AssignAdd return BOOL. (A mistake that was fixed in a later version?)

Testing KS 3.1 CreateNewProc(): CreateNewProc()'s NP_Arguments tag does NOT add "\n". Input() DOES return the NP_Arguments argument stream. RunCommand(): same results. It does not add "\n", and Input()+FGetC() returns the command argument(s). There was one tiny difference (probably due to different input handles): with CreateNewProc(), FGetC() returned EOF when the argument stream ended; with RunCommand(), FGetC() waits for more characters (pressing return in the CLI returned one linefeed).

SystemTagList() test results: it seems to do everything (or is it the shell that does it?). All whitespace (spaces and tabs) between command and arguments is swallowed. Trailing whitespace is not swallowed. If "\n" is already in the argument string, that is the end of the argument string. If there is no "\n" in the argument string, one is added at the end. CreateNewProc()'s NP_Arguments tag does NOT add "\n". Input() DOES return the NP_Arguments argument stream. Also, without "\n" at the end of the argument string, RunCommand() FGets() does not return until return is pressed.

C:Run does this (maybe others too?):

    olddir = CurrentDir(toclone);
    cis = Open("", FMF_READ);
    CurrentDir(olddir);

This can't work in dos packets mode. CurrentDir() takes and returns FileLocks, Open returns FileHandles.
The dos packet way of duplicating console handles (and only console handles) is:

    old = SetConsoleTask(((struct FileHandle*)BADDR(toclone))->fh_Type);
    cis = Open("*", MODE_OLDFILE);
    SetConsoleTask(old);

Do we need a special function for duplicating console handles, or how do we solve this incompatibility (at least until everyone switches to dos packets)? Currently AROS C:Run hangs; the WB2.x/3.x versions work.

dos.library/MatchNext does not always return ERROR_NO_MORE_ENTRIES when it finishes. ("go out of for(;;) loop --> MakeResult" does not set ERROR_NO_MORE_ENTRIES.) It confuses WB3.x:C/Dir: "(null)" appears in the rightmost column if the number of directory entries is odd. Never mind, it is not a MatchNext bug (it works correctly, I was blind again). It is WB3.x:C/Dir misusing (?) VFPrintf (RawDoFmt), expecting %s with a NULL pointer (not a pointer to an empty string) to produce nothing in the output. KS 3.1 confirmed: RawDoFmt completely ignores %s if the string pointer is NULL. It does not attempt to access address zero either (confirmed with UAE memory watch points).

    jsr %a6@(76 * -6) /* Exec/DoIO() */

I've noticed this several times already. Please use a named constant instead of a number and a comment; that way, the compiler can check that the offset is correct and people know which function you're calling. First I have to say: gcc 680x0 assembly syntax is HORRIBLE (at least to anyone who started coding for Amiga in assembly in the 1980s). Sorry about that, but I don't know (or want to know, see above) how to make working constants. This is Jason's problem :D GCC 68k also understands Motorola 68k syntax automatically; you can write jsr -(76*6)(a6). Best is to write 68k asm code as a separate file. GCC distinguishes between lowercase .s and uppercase .S: if your asm file has an uppercase .S extension (for example myasm.S), the C preprocessor can be used in the asm code too:

    #define offset -76
    #ifdef ...
    jsr offset(a6)
    #endif

Open() FMF_x parameters are a problem under m68k-amiga.
(They are used in many places inside rom modules and AROS applications.) It breaks SnoopDos and probably all other programs that take over the dos/Open() vector. a) Get rid of them completely? (Are they actually even used?) Go with this one. The new modes were never even documented in the Open() autodoc AFAIK. They are documented, and also quite used. I meant "used" as in "actually does something different than only getting converted back to ACTION_FIND* in the packet handler, or most flags getting ignored in the original non-dos-packet version of afs.handler". I don't mind having some kind of NewOpen() with new, better/extended mode flags, but using the original Open() with a totally different and incompatible mode parameter can't be right. b) Add some wrapper that converts FMF_ stuff to the original MODE_xxxx parameters before calling Open()? (Instead of wrapping them inside Open(); this is how it works in dos packets mode currently.)

My suggestion...

1) dos.library should be free of extensions, to be compatible with AmigaOS 3.1 up to the binary level (the extension I implemented years ago was supposed to be binary compatible as well, but I didn't take into account programs that would SetFunction() some of dos.library's functions and take over them).

2) an arosdos.library could be implemented, or some hidds, or some other thing that would implement the extensions. The scheme would be as follows:

    dos.library -----> packet wrapper ---\
                                          > aros dos handler
    arosdos.library ---------------------/

There should be one packet handler wrapper per aros dos handler. In the case of a dos handler based solely upon packets (a port from AmigaOS), an aros-specific packet.handler would act as a wrapper around it, as it does now I think. The immediate result of getting rid of those flags right now is that we would lose functionality in the arosc.library (or however it's called in Staf's modifications), since things like file append are implemented system-wide at the level of dos.library.
And pipefs would not work anymore (but it would need a rework anyway, since it's quite antiquated and doesn't really work well). The idea would be to be "future proof", and not fossilize on the old amigados packet handlers. Without a clear "new dos" project to push forward, though, the only viable options are: either keep everything as it is, trying to find ways to be compatible with the AROS way *and* the AmigaOS way, or just go for the AmigaOS way. This is the mapping in use:

    #define FMF_MODE_OLDFILE   (FMF_AMIGADOS | FMF_WRITE | FMF_READ)
    #define FMF_MODE_READWRITE (FMF_MODE_OLDFILE | FMF_CREATE)
    #define FMF_MODE_NEWFILE   (FMF_MODE_READWRITE | FMF_LOCK | FMF_CLEAR)

As you see, MODE_NEWFILE would also create the file if it doesn't exist and clear it if it does, so you should not use it in that context. Use MODE_OLDFILE instead for everything.

What to do with AROS code that allocates struct FileInfoBlock from the stack? This does not guarantee the required LONG alignment (BPTRs and m68k-amiga). Replace them with AllocDosObject()? But that adds yet another allocation failure test and a free call; it turns simple code into something too complex and ugly. A new macro that allocates from the stack and hides the alignment hack? (Preferably a macro that also works with other DOS structures like InfoData.) Code like this (from c/dir.c) is imho too ugly:

    UBYTE _fib[sizeof(struct FileInfoBlock) + 3];
    struct FileInfoBlock *fib = (APTR) (((IPTR) _fib + 3) & ~3);

IMHO just use AllocDosObject(). Is that really an issue with GCC? There are alignment options covering a lot of this, from int to long - and FileInfoBlock starts with a LONG (-malign-int).

BCPL_Action[edit]

Anyway, all DOS structures introduced pre-2.0 should be allocated with AllocVec() because it was the BCPL way. I'm trying to boot a Workbench 1.3 ADF. I'm able to load the 'Shell-Seg' Shell DOS process, but it's using the (nearly impossible to google) dos.library *BCPL* interface.
I think it is not impossible to provide a 'thunk' from the 1.3 BCPL interfaces to Library interfaces. Just A Small Matter of Programming. IF.. we can find documentation.... Here's a dump, where I have stubbed out the BCPL Action routine (stored in reg A5) with a debugging call. Ie:

    BCPL_Action 600 (0x77f6c, 0x1f007, 0x1f018, 0x1f033)
                D0    D1       D2       D3       D4

D0 = BCPL routine to call, D1..D4 = arguments.

    [LoadSeg] Loading 'L:Shell-Seg'...
    Try Function 00f93158
    [ELF Loader] Not an ELF object
    [InternalLoadSeg] FAILED loading 0001c8ee as an ELF object.
    Try Function 00f92720
    read_block(file=116974, buffer=00069e6a, size=4, func[0]=00059bb2)
    buf=00069e6a, subsize = 4 (of 4)
    HUNK_HEADER:
      Hunk count: 1
      First hunk: 0
      Last hunk: 0
      Hunk 0 size: 0x001ba8 bytes in ANY memory
    HUNK_CODE(0):
      Length: 0x001ba8 bytes in ANY memory
    HUNK_END
    [InternalLoadSeg] Succeeded loading 0001c8ee as an AOS object.
    [LoadSeg] segs = 0001c905
    pr_GlobVec = 0007c660
    BCPL_Action 600 (0x77f6c, 0x1f007, 0x1f018, 0x1f033)
    BCPL_Action 608 (0x0, 0x1f007, 0x1f018, 0x1f033)
    BCPL_Action 696 (0x303782bc, 0x3ef0b3, 0x1f018, 0x1f033)
    BCPL_Action 700 (0x11215381, 0xf, 0x1f018, 0x1f033)
    BCPL_Action 700 (0x14, 0xd6847a03, 0x11215381, 0x1f033)
    BCPL_Action 700 (0x48e72022, 0xffffffff, 0x11215381, 0x1f033)
    BCPL_Action 712 (0x0, 0xffffffff, 0x11215381, 0x1f033)
    BCPL_Action 700 (0x48, 0xffffffff, 0x11215381, 0x1f033)
    BCPL_Action 700 (0xc, 0xd6847a03, 0x11215381, 0x1f033)
    BCPL_Action 708 (0x0, 0xffffffff, 0x44854e04, 0xffffffff)
    BCPL_Action 708 (0xe1898283, 0xffffffff, 0x44854e04, 0xffffffff)
    BCPL_Action 708 (0x2, 0xffffffff, 0x44854e04, 0xffffffff)
    BCPL_Action 708 (0x303782bd, 0xffffffff, 0x44854e04, 0xffffffff)
    BCPL_Action 708 (0x0, 0xffffffff, 0x44854e04, 0xffffffff)
    ...

The 'BCPL4Amiga.lha' archive on aminet.net is a *treasure trove* of information:

- How to call BCPL routines
- How the BCPL stack frame works (eww!!!)
- What D0/A1 is (it's NOT what I thought!)
- What almost all of the pr_GlobVec AmigaDOS offsets do!
I think I can make a BCPL thunking library from this, no problem! If I can add this support, it would give AROS m68k the ability to run all the Amiga 1.0-1.3 BCPL CLI commands! It'll give me something to do while I wait for Toni to finish the graphics drivers. Actually, in this case, I do believe the printed English version of the Amiga Guru Book would be the best source of information. Hm, actually the C= guys tried to remove all BCPL stuff from AOS and explicitly replaced all the related CLI commands with C equivalents which go via DOSBase instead of a GlobVec - if I read it correctly, with V37 only 4 BCPL handlers in L: were left. All remaining GlobVec legacy was re-implemented on top of dos.library - thus also removing the option of keeping local copies of a jump table with modified entries for jump vectors. Having a global (resp. local) non-library-based vector as a jump table with both public and private data entries, BSS, and the ability of overlayed loading of different modules of a program - well, it could be considered both: very flexible or utterly broken ;-) Chapter 16 of the Guru Book, "BCPL and the Global Vector", starts with a quote saying "If you understand it, it's obsolete." IMHO it would be more useful to have all the C-based standard CLI commands and filesystem drivers from V37 and beyond working than the old BCPL stuff. AOS 1.0-1.3 code/tools do not provide any missing functionality to a "Frankenrom". Of course a compatibility layer for other, non-OS BCPL code might be useful - but then again it may only be relevant for specific hardware drivers which are available neither for UAE nor for most Amiga models. Basically just for anything that is not V37-aware and requires a GlobVec other than -1. Is that worth the overhead? How (well) does MorphOS implement BCPL legacy? Or does one at most get a GlobVec of -1? I would agree, but one of my unstated goals is to have AROS m68k be able to run all the Workbench ADFs that come with Amiga Forever.
Also, a number of 'magazine disks' use L:Shell-Seg. I'm going to work on supporting L:Shell-Seg from AOS 1.3 only for now, as that will bring the maximum advantage to AROS m68k as a ROM replacement. The BCPL thunk code does not seem, at this time, to be taking up too much code space, and I'm putting it in arch/m68k-amiga/dos/, so it won't impact any other ports.

Do not enable 32-bit fast ram boards; for some unknown reason booting hangs if one is enabled. This is now fixed. It was the usual pointer-to-BCPL-pointer conversion error, this time in the NIL handler; it worked only accidentally. (It took ages to find this one.) I guess BCPL pointer bugs will be the most common ones, because no other port needs them. Perhaps it is possible to add some BCPL pointer debug check macros? For example, check that a pointer's 2 lowest bits are zero when converting to a BCPL pointer, and check that a BCPL pointer points to the expected memory range? (They nearly always point to non-existing ram if the conversion is done twice or is missing.) There aren't any BCPL programs other than the original Commodore BCPL WB 1.x programs and related software like handlers. (Did anyone even have a BCPL compiler outside of Commodore?) That most likely didn't prevent non-BCPL programs from using BCPL features. The problem is finding them (maybe very old PD disk series?). One should remember that BPTRs don't really exist on (all?) other ports.

It's the first font that is opened via diskfont.library/newfontcontents.c. I don't know which one that is exactly; the file name is 20.

    + ((BPTR*)BADDR(hunktab[last]))[0] = BNULL;

This should already be zeroed when hunktab was allocated. But it doesn't seem to be, or it's overwritten later. I've added some more debug messages, e.g. to KrnUnregisterModules; here comes a debug log that might help a bit. You can see from the addresses that it is on a 64-bit machine:

    [InternalLoadSeg] Succeeded loading 0000000040725b80 as an ELF object.
    FixFonts
    Loading 20
    Hunk count: 1
    allocmem 0000000040725bb0 24
    First hunk: 0
    Last hunk: 0
    Hunk 0 size: 0x001ff4 bytes in ANY memory
    allocmem 000000004091d3b0 8192 @000000004091d3b4
    HUNK_CODE(0): Length: 0x001ff4 bytes in ANY memory
    HUNK_RELOC32: Hunk #0:
    HUNK_END
    freemem 0000000040725bb0 24
    [InternalLoadSeg] Succeeded loading 0000000040728240 as an AOS object.
    [KRN] KrnUnregisterModule(0x0x4091d3b4)
    [KRN] Next segment pointer 0x0x4091d3b4
    [KRN] Next segment pointer 0x0x754eff7000
    [KRN] Trap signal 11, SysBase 0x406455b8, KernelBase 0x40646600
    [KRN] Process 0x4072d590 (FixFonts)
    RSP=000000004094cf30 RBP=000000004094cf50 RIP=00000000404aac0c
    RAX=000000754eff7000 RBX=00000000405202a8 RCX=00007f0e99a91770
    RDX=0000000000000000 RDI=00007f0e99d36860 RSI=0000000000000000
    RFLAGS=0000000000010246
    R8 =00007f0e9a140700 R9 =00000000404dc320 R10=0000000000000000
    R11=0000000000000246 R12=000000004071f9f0 R13=000000004071f910
    R14=000000004072d6f8 R15=0000000000000000

Many AROS programs have this:

    AROS_UFH3(__startup static ULONG, _start,
        AROS_UFHA(char *, argstr, A0),
        AROS_UFHA(ULONG, argsize, D0),
        AROS_UFHA(struct ExecBase *, sysbase, A6))
    {
        <stuff>
    }

A6 does not and cannot contain SysBase in the m68k-amiga port. It contains BCPL magic (actually most address registers contain undocumented BCPL stuff), and it is impossible to distinguish between normal and BCPL programs at runtime. (Some can have a normal C startup and then do BCPL stuff later.) Are there any ports that do *not* have a global SysBase that user space can get to? Is the only way to get SysBase at program startup and library init on m68k via the absolute address $4? I think there should be a way to pass SysBase to a new program or library without needing an absolute address. The latter is especially difficult on hosted platforms. As a last resort we can do it again during loadseg, so that a symbol named SysBase gets the right value from the loader. I do prefer the current solution though. What is wrong with the loadseg way of initializing sysbase? Nothing.
If you look at InternalLoadSeg_ELF, it already handles that case: rom/dos/internalloadseg_elf.c, lines 388-406. So only (a) HUNK architectures that (b) don't have a global SysBase need SysBase passed to them. I don't think there are any of those under AROS. The downside is that a wrong strip of an executable will make it unusable. This is linked with the fact that we plan to switch to ELF executables in the future, not relocatable objects. We still need the relocation info in the file.

Intuition Library[edit]

Unfortunately intuition and layers had other m68k build issues that made gfx HIDD testing impossible. OpenScreen() internals do not work. This is already partially fixed, but at least one more problem remains. A more important issue is that I can't seem to get the mouse working (software nor hardware). IntuitionBase->ActiveMonitor is always NULL, and intuition/misc.c only sets the mouse if it is non-NULL. MySetPointerPos() is called and the mouse coordinates change when I move the mouse. It looks like the graphics subsystem thinks the mouse is shared with other host windows. aoHidd_Gfx_IsWindowed is set to FALSE.

If the old modeid is not available, openworkbench() selects 640x200 resolution but still uses the original saved display dimensions. This causes a visual problem (and huge chip ram usage) in situations where some RTG mode has been saved (for example 1024x768) but we are now booting with the RTG board disabled (or removed); the result is a huge superbitmap native chipset screen in a normal 640x200 resolution. Which is quite annoying. It should fall back to the original nominal dimensions and depth if the mode is missing. Perhaps boot menu screen selection and openworkbench screen mode selection should be merged? Both need similar special code to select the best mode, especially when using Amiga hardware (PAL/NTSC/RTG). The Amiga port uses the following "default" modes:

- 640x256 (PAL) or x512 (interlaced)
- 640x200 (NTSC) or x400 (interlaced)

The above modes use either 2 or 4 planes.
Yes, the boot menu is fine, but the problem is the initial screen (shell) default resolution and the AROS boot image resolution. It is 640x200 on NTSC machines, 640x256 on PAL machines. (But both PAL and NTSC modes are added to the mode database if the chipset supports PAL/NTSC changes = ECS Agnus or newer.) AROS m68k also adds early Picasso96 driver "RTG" support, which means a resolution of 640x480. Not all Picasso96 RTG drivers support PAL or NTSC -like mode resolutions. Only the gfx driver knows which is the best "default" mode, unless some ugly checks are added to common code (like the current boot screen resolution selection does). And finally, the initial screen must be 2 planes only, due to bandwidth and chip memory usage reasons, but the AROS boot image is 16 colors and assumes square pixels (= interlace needed). Other supported platforms can work with a much simpler solution because they all support at least 256 color screens and there are no huge vram/blitter bandwidth bottlenecks. Interlace and 4 planes are only needed when opening the boot screen image (for correct aspect ratio). Use of 4 plane hires for the boot menu or initial shell is too slow on the OCS/ECS chipset. Perhaps some kind of function in the gfx hidd that can be asked something like "give me an array of modeids+depths for x", where x = boot menu, boot image or initial shell screen? (Remember that depth is required if it is a planar mode.) 640x480x8 (if RTG mode; 640x400 or 640x512 may not be supported).

Intuition/monitorclass.c/SetPointerPos() includes hotspot offsets in the HIDD_Gfx_SetCursorPos() coordinates, which does not work with the Amiga chipset if the mouse sprite resolution is not the same as the screen resolution. (The most common case is a lores sprite on a hires screen.) The hotspot should be in sprite resolution pixels, not in screen resolution pixels. For example, if the hotspot is -3, the screen is hires and the sprite is lores: the sprite is moved left -3 hires pixels, but it should be either -6 hires pixels or -3 lores pixels.
(The HIDD can easily convert this automatically; SetCursorShape already includes the hotspot offset data.) amiga-m68k native mouse positioning works correctly in all resolutions if SetPointerPos()'s "take HotSpot into account" is removed and the hotspot offset is handled in the HIDD. HIDD coordinates are the top-left corner of the sprite. The graphics driver has no clue about the hotspot (except hosted, but there it's just for input reporting only). Sprite coordinates should come in the sprite's resolution. It's just that previously there was no separate sprite resolution. Sprite shape resolution != sprite positioning resolution. Sprite shape is independent of positioning (at least on AGA; ECS and older have restrictions). Remember that sprites on AGA machines had a positioning resolution independent of the screen resolution, which meant the sprite moved with HiRes (or even SuperHiRes) resolution even on a LoRes screen. There's a flag that has to be set in order for that to happen though. The default setting is to have LowRes sprites on LowRes and HighRes screens, and HighRes sprites on SuperHighRes screens. The resolution is globally set for all sprites though. Normally a sprite has lores pixels but hires horizontal resolution -> the graphics driver needs to know the hotspot offset (or it needs to know the mouse pointer resolution so that it can add the correct offset). There's no "hotspot" in the sprite. It's just a sprite. The hotspot is Intuition's thing. Doesn't this mean that the sprite position should be specified in the bitmap's resolution? Well, in "positioning resolution". I guess positioning resolution == screen resolution. So, Intuition can adjust the offsets. In fact moving the mouse pointer is actually MoveSprite(). I added the monitorclass method only for convenience. I added these methods to monitorclass in order to remove the hotspot specification from ChangeExtSpriteA(). Previously the hotspot specification was a private extension there. See SVN history. It can't, unless Intuition knows the sprite resolution, which is the problem.
Sprite position is always in screen resolution; this is not the problem. (The hardware may not support it, but that is not Intuition's problem.) Yes, this is quite a backwards way to do it (move the mouse instead of moving the hotspot), but a correct fix would require an Intuition update so that it knows the mouse resolution, and at least I don't want to touch that stuff. This quick fix at least makes Amiga native modes usable (and afaik no one else uses hardware cursors, so this won't affect other platforms). Here is a visual explanation, because I am still not sure if my previous explanation was too confusing. Let's say we have the following lores sprite image (the standard mode in AOS):

    00X00000
    00XX0000
    00XXX000
    00XXXX00

It has a hotspot offset of (2,0). Now let's see how it looks on a hires screen at (0,0).

Without hotspot offset (wrong positioning):

    S000XX0000000000
    0000XXXX00000000
    0000XXXXXX000000
    0000XXXXXXXX0000

With a (2,0) _hires_ pixel hotspot offset (still wrong positioning; this is what happened previously):

    00S0XX0000000000
    0000XXXX00000000
    0000XXXXXX000000
    0000XXXXXXXX0000

With a (2,0) _lores_ pixel hotspot offset, or 4 pixels hires (finally we have correct hotspot positioning):

    0000SX0000000000
    0000XXXX00000000
    0000XXXXXX000000
    0000XXXXXXXX0000

(S = hotspot)

Icon Library[edit]

In workbench/libs/icon/diskobjio.c, is sdd_Stream a BPTR to a file, a pointer to a buffer, or both? Where does sdd_Stream come from? This stuff comes from the big endian struct reading/writing support in compiler/arossupport. The type of the stream can be whatever the hook function wants it to be (it is passed to functions like ReadStruct(), ...). icon.library uses dostreamhook() (in support.c), which needs the stream to be a BPTR (file handle). support.c/ReadIcon_WB() does the initial ReadStruct(), passing the "file" param (BPTR) as the stream value. Icon resizing only kicks in if an icon's Gadget->MutualExclude has (1 << 31) set, so all old icons should continue to be displayed pixel-for-pixel.
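The rescaling idea discussed here can be illustrated with a minimal nearest-neighbour rescaler in plain C. This is not the icon.library code (which scales planar BitMaps with BitMapScale() and ARGB data with its own Bresenham rescaler); it is a self-contained sketch, and all names in it are invented for the example.

```c
#include <stdint.h>

/* Illustrative nearest-neighbour rescale of a chunky 8-bit image.
 * For each destination pixel, pick the nearest source pixel; this is
 * the simple case of the integer stepping a Bresenham-style rescaler
 * performs incrementally. */
static void scale_nearest(const uint8_t *src, int sw, int sh,
                          uint8_t *dst, int dw, int dh)
{
    for (int dy = 0; dy < dh; dy++) {
        int sy = dy * sh / dh;          /* nearest source row    */
        for (int dx = 0; dx < dw; dx++) {
            int sx = dx * sw / dw;      /* nearest source column */
            dst[dy * dw + dx] = src[sy * sw + sx];
        }
    }
}
```

Scaling up simply replicates pixels and scaling down drops rows/columns, which is exactly why the text notes that the rescaler "needs to be extended to average pixels".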
ilbmtoicon now accepts a --dpi x:y parameter (the default is 72:72, the old Macintosh desktop publishing resolution) to allow you to specify the icon's default resolution in dots-per-inch. This is translated into Amiga Display Resolution Ticks, so that the icon.library logic doesn't need to do any unit conversion. The icon scaling is a bit rough, but it is fast for C code. (It could be faster if someone would implement the HIDD hooks for CyberGfx/ScalePixelArray.) BitMap icons are scaled with BitMapScale(), and ARGB icons are scaled using a Bresenham rescaler (which needs to be extended to average pixels when scaling up), since BitMapScale() appears to clobber the 'A' portion of ARGB bitmaps. As I noted in my commit message, I need to add an additional tag to IconControl to help Wanderer control icon scaling. We should also put in an override in Prefs/ScreenMode to allow the adjustment of screen DPI/DPC too, via the MonitorSpec ratioh/ratiov parameters.

From my understanding, the display width/height ratio defines (or has an impact on) the pixel width/height ratio. E.g. if a single pixel is square by default, running the display in a different resolution than the one which it physically provides may change that aspect. From my testing on AOS 3.9, MonitorSpec's ratioh/ratiov do not change (i.e. they are always RATIO_UNITY) regardless of the screen mode (even 1280x200 NTSC was ratioh=16, ratiov=16). Therefore, I believe those parameters do not reflect the pixel size, but the monitor size itself. As for DrawInfo Resolution, I believe that the equations I have posted are a good approximation of the AOS behavior, and allow extension to other display formats (widescreen, portrait mode, etc.).

MonitorSpec's ratioh and ratiov are fixed-point fractions, defined by the RATIO_FIXEDPART and RATIO_UNITY macros:

    #define RATIO_FIXEDPART 4
    #define RATIO_UNITY (1 << RATIO_FIXEDPART)

What they are a ratio *of* is still a bit of a mystery. Maybe a ratio of 'this monitor' versus a 1084S?
It's a little related. MonitorSpec's ratioh and ratiov always appear to be RATIO_UNITY (1.0) on AOS, so I'm going to have AROS calculate ratioh and ratiov like this:

    #define C_1084_HEIGHT 198 /* mm */
    #define C_1084_WIDTH  264 /* mm */

    if (GetMonitorEDIDSize(.., &MonitorEDIDWidthMM, &MonitorEDIDHeightMM)) {
        ms->ratiow = ((MonitorEDIDWidthMM) << RATIO_FIXEDPART) / 264;
        ms->ratioh = ((MonitorEDIDHeightMM) << RATIO_FIXEDPART) / 198;
    } else {
        ms->ratiow = RATIO_UNITY;
        ms->ratioh = RATIO_UNITY;
    }

Monitors that comply with the VESA EDID spec will have MonitorEDIDWidthMM and MonitorEDIDHeightMM set from the EDID information, and all others will assume a 4:3 13" monitor. A Screen's DrawInfo 'Resolution' parameter (in 'ticks') would be calculated as:

    res.x = 44*320/screen_pixel_width
    res.y = 44*256/screen_pixel_height

(res.y will be slightly different for Amiga NTSC screens, with a value of 56 for 200 rows instead of 52, but that should not be a big problem.) With this information, an application can calculate a display's DPI as:

    #define C_1084_WIDTH_FIN  0xa6 /* 10.4 " * 16 in hex */
    #define C_1084_HEIGHT_FIN 0x7c /* 7.8 " * 16 in hex */

    DPI.x = (screen_pixel_width  << (RATIO_FIXEDPART * 2)) / (ms->ratioh * C_1084_WIDTH_FIN);
    DPI.y = (screen_pixel_height << (RATIO_FIXEDPART * 2)) / (ms->ratiov * C_1084_HEIGHT_FIN);

... and then I will have enough information to dynamically rescale icons based upon the DPI and aspect ratio of the screen.

MonitorSpec ratioh, ratiov: the ratio between the size of a 1084S and the current monitor (larger fractions are larger monitors, smaller are smaller).

DrawInfo Resolution: since this is related to mouse ticks, it would make more sense for this calculation to be an inverse relationship to overall DPI, instead of to the number of pixels in the display.
Therefore, DrawInfo.Resolution would be calculated as:

res.x = (1280 * 11 * ratioh / pixel_width) >> RATIO_FIXEDPART
res.y = (1024 * 11 * ratiov / pixel_height) >> RATIO_FIXEDPART

Screen DPI can be directly calculated from DrawInfo Resolution as:

#define C_1084_WIDTH_CIN 104 /* 10.4 " in centi-inches */
#define C_1084_HEIGHT_CIN 78 /* 7.8 " in centi-inches */

dpi.x = (11 * 1280 * 10) / C_1084_WIDTH_CIN / res.x
dpi.y = (11 * 1024 * 10) / C_1084_HEIGHT_CIN / res.y

Screen DPC (dots per centimeter) is calculated as:

#define C_1084_WIDTH_MM 264 /* 10.4 " in mm */
#define C_1084_HEIGHT_MM 198 /* 7.8 " in mm */

dpc.x = 11 * 1280 * 10 / C_1084_WIDTH_MM / res.x
dpc.y = 11 * 1024 * 10 / C_1084_HEIGHT_MM / res.y

From my understanding, the display width/height ratio defines (or has an impact on) the pixel width/height ratio. E.g. if a single pixel is square by default, running the display in a different resolution than the one which it physically provides may change that aspect.

IMHO the xAspect/yAspect fields in the ILBM BMHD chunk are related, when it comes to interpreting similar values in graphics/monitors. See: "Typical values for aspect ratio are width : height = 10 : 11 (Amiga 320 x 200 display) and 1 : 1 (Macintosh*)." and from EA's ilbm.h:

"/* Aspect ratios: The proper fraction xAspect/yAspect represents the pixel
 * aspect ratio pixel_width/pixel_height.
 *
 * For the 4 Amiga display modes:
 *  320 x 200: 10/11 (these pixels are taller than they are wide)
 *  320 x 400: 20/11
 *  640 x 200:  5/11
 *  640 x 400: 10/11 */
#define x320x200Aspect 10L
#define y320x200Aspect 11L
#define x320x400Aspect 20L
#define y320x400Aspect 11L
#define x640x200Aspect 5L
#define y640x200Aspect 11L
#define x640x400Aspect 10L
#define y640x400Aspect 11L"

"-------------------------- getaspect -------------------------------
bmhd->xAspect = 0; /* So we can tell when we've got it */
if (GfxBase->lib_Version >= 36) {
    if (GetDisplayInfoData(NULL, (UBYTE *)&DI, sizeof(struct DisplayInfo), DTAG_DISP, modeid)) {
        bmhd->xAspect = DI.Resolution.x;
        bmhd->yAspect = DI.Resolution.y;
    }
}
/* If running under 1.3 or GetDisplayInfoData failed, use old method
 * of guessing aspect ratio */
if (!bmhd->xAspect) {
    bmhd->xAspect = 44;
    bmhd->yAspect = ((struct GfxBase *)GfxBase)->DisplayFlags & PAL ? 44 : 52;
    if (modeid & HIRES) bmhd->xAspect = bmhd->xAspect >> 1;
    if (modeid & LACE)  bmhd->yAspect = bmhd->yAspect >> 1;
}"

and here's another interesting remark:

"/* AmigaOS sees 4:3 modes as square in the DisplayInfo database,
 * so we correct 16:10 modes to square for widescreen displays. */
xres = (xres * 16) / 4;
yres = (yres * 10) / 3;"

(this overrides the DI.Resolution.x and DI.Resolution.y)

diskfont Library[edit]

Only fixed.font, or do all fonts have the same problem? Isn't it that Amiga fonts are actually hunk binaries? Yes, but I wanted to know if all hunk binaries fail or just some font on non-m68k ports.
Enable internalloadseg_aos.c debugging and include the related log messages; maybe it helps to find the problem (I still can't see anything wrong :() Here is the log up to the segfault in FixFonts:

[DOS] DosInit: InitCode(RTF_AFTERDOS)
[DOSBoot] __dosboot_BootProcess: Booting from device 'EMU:'
Hunk count: 1
First hunk: 0
Last hunk: 0
Hunk 0 size: 0x0010d8 bytes in ANY memory @011d5064
HUNK_CODE(0): Length: 0x0010d8 bytes in ANY memory
HUNK_RELOC32: Hunk #0:
HUNK_END
[KRN] Trap signal 11, SysBase 0xf522dc, KernelBase 0xf52f38
[KRN] Process 0x1025508 (FixFonts) SP=011f0d30 FP=011f0d48 PC=b762e84e
R0=4eff7000 R1=00000000 R2=b7631b6c R3=00f531a4 R4=011f0dd7 R5=011f0dd7

graphics library[edit]

The dynamic allocation of pixbuf for BltMaskBitMapRastPort() patch does the following:
- Eliminates the need for the pixbuf semaphores (improving concurrency)
- Dynamically allocates pixbuf as needed
- Moves NUMPIX to the bltmaskbitmaprastport.c file, and documents it
- We can now handle source images where width * sizeof(HIDDT_Pixel) exceeds NUMPIX, so long as we have enough memory available.

Small comment: please create a private memory pool for graphics and use the pooled memory allocation functions, i.e. AllocPooled/FreePooled. Very frequent calls of regular, non-pooled memory allocations are a good candidate for performance loss.

AROS should get a better memory allocator. The current allocator is a speed brake if you use frequent alloc/free, which C++ programs do. Or use arosc's malloc as a better memory handler? If so, Jason's patch should use malloc.

The memory allocations in the patch can be too large to work in a pool. So even if pooled memory is used, the allocation will mostly go straight to a non-pooled allocation anyway. Pools are great for collections of items of identical size, but that is not the case here. For this type of usage (variable size, unknown maximum size), pools are not very helpful.

Both of these statements are just theory unless proven by some benchmarks.
Personally I would think memory pools are advisable, if not for speed then for memory fragmentation, or to be able to deallocate a whole bunch of allocated memory with one DeletePool command. The latter is IMO most useful during cleanup.

malloc cannot currently be used in AROS libraries, as it is allocated on the calling task's context. And arosc malloc/free just uses a memory pool and AllocPooled/FreePooled anyway...

What is the minimum size for NUMPIX? The default (50000) leads to an allocation of 200000 bytes when graphics.library is initted. That's a little much for a buffer that is used by only a few functions. This gets you a 200x250 pixel bitmap, which is not that much TBH. Maybe you could change the implementation to allocate the buffer dynamically? The Root BitMap class does this for several calls (BM__Hidd_BitMap__PutAlphaImage, BM__Hidd_BitMap__PutTemplate, BM__Hidd_BitMap__PutAlphaTemplate, etc). Alternatively, please ifdef it for m68k - decreasing this value in general will mean more instances of smaller VRAM->RAM reads, which will probably slow down the operations.

workbench Library[edit]

Wrote Wanderer:Tools/Info; at line 58 there is:

#define USE_TEXTEDITOR 1

I suppose you could try with 0 there.

SYS:System/Wanderer/Tools/ExecuteStartup is missing an icon (deleted in revision 34053, shouldn't there be a replacement?). No icon -> OpenWorkbenchObjectA() opens a CLI output window because GetIconTags("ExecuteStartup") returns isDefaultIcon=TRUE. Just wondering why no one has noticed this previously, unless there is some other reason for the hidden console window.
(only tested using the m68k-amiga port)

Without icon:
20-901 [513 226x165]: [WBLIB] OpenWorkbenchObjectA: name = Wanderer:Tools/ExecuteStartup
20-905 [513 041x231]: [WBLIB] OpenWorkbenchObjectA: isDefaultIcon = 1
20-908 [513 060x281]: [WBLIB] OpenWorkbenchObjectA: it's a TOOL
20-912 [514 037x012]: [WBLIB] OpenWorkbenchObjectA: it's a CLI program
20-914 [514 008x061]: [Open] FH=1032b898 Process: 0x10100c08 "WANDERER:Wanderer", Window: 0x00000000, Name: "CON:////Output Window/CLOSE/AUTO/WAIT", Mode: 1005

Icon added:
10-571 [510 226x248]: [WBLIB] OpenWorkbenchObjectA: name = Wanderer:Tools/ExecuteStartup
10-576 [511 040x007]: [WBLIB] OpenWorkbenchObjectA: isDefaultIcon = 0
10-579 [511 220x055]: [WBLIB] OpenWorkbenchObjectA: it's a TOOL
10-581 [511 226x103]: [WBLIB] OpenWorkbenchObjectA: it's a WB program
10-585 [511 226x159]: [WBLIB] OpenWorkbenchObjectA: stack size: 32768 Bytes, priority 0
10-590 [511 226x231]: [WBLIB] WB_LaunchProgram: Success

Looking at the workbench.library/wanderer sources, I noticed the AppMenu and AppIcon support didn't seem to be quite implemented yet. It looks like wbhandler.h should have a type added for updating AppMenu after updating AppIcon, and the library code should send update (AppIcon|AppMenu) messages to the registered port for the event types after they're successful in adding/removing their list changes. Yes, workbench.library needs a bit of work.

Was RegisterWorkbench() an AROS-specific improvement, or is there an AOS equivalent? It seems AROS-specific; the old equivalent was AlohaWorkbench(), dating back a heck of a long time, but it's intentionally left undocumented, as its sole purpose is to register... workbench, as... workbench.

Appendix C, page 2: AlohaWorkbench() - This routine allows the Workbench tool to make its presence and departure known to Intuition. AlohaWorkbench() - In Hawaiian, "aloha" means both hello and goodbye.
The AlohaWorkbench() routine allows the Workbench program to inform Intuition that it has become active and that it is shutting down. This routine is called with one of two kinds of arguments: either a pointer to an initialized message port (which designates that Workbench is active and communications can take place), or NULL to designate that the Workbench tool is shutting down. When the message port is active, Intuition will send IntuiMessages to it. The messages will have the Class field set to WBENCHMESSAGE. The Code field will equal either WBENCHOPEN or WBENCHCLOSE, depending on whether the Workbench application should open or close its windows. Intuition assumes that Workbench will comply, so as soon as the message is replied to, Intuition proceeds with the expectation that the windows have been opened or closed accordingly. The procedure synopsis is:

AlohaWorkbench(WBPort)

WBPort - a pointer to an initialized MsgPort structure in which the special communications are to take place.

I'm not sure it's the same function, but something was written in an old AmigaMail or DevCon article about similar functionality being supported (I've lost my paper copies over the years). DOpus Magellan is probably the best possibility for a real commercial client. Ideally folks should be able to have multiple workbench.library clients running, as in a multi-screen workbench program. An app writer might want to implement workbench drawer navigation and file handling features for use on its own public screen, along with having its AppWindows (not being on the Workbench screen) for drag and drop, in place of file requesters. They might want to go as far as doing their own drawer window code equivalents, using PROGDIR: as a root window and showing only their project files, or only supported clipart file types by showing the actual clip and not the icon image.

mathffp.library[edit]

Building mathffp.library: fixed, I forgot to commit the header file modifications.
(btw, could someone fix the lh_SysBase == NULL problem?). Use BNULL for BPTR NULLs. You know what I am going to say... :)

Just add support for .bss in the ROMs by hard linking it to an absolute address. Not going to happen. Chip RAM is precious, slow, and the only RAM guaranteed to be on all Amiga models. If I hard linked the .bss, I would have to use Chip RAM, which would reduce the amount of RAM available for DMAable bitmaps, floppy tracks, and audio. Using space in the library's handle (which is allocated in Fast RAM if it is available) is a much better solution. Then do what I did previously with my drivers: store method IDs in the library base and do not use the stubs (they weren't meant for ROM code without .bss anyway) :-D

This is again m68k-amiga specific: we can't expect an FPU but still need to support floating point. Which means we need to do it the "Amiga way" and use the mathieeedoubxxx libraries. (The non-Amiga way would be trapping FPU exceptions; this is only allowed when emulating missing 040/060 FPU functions :))

Is the correct way to do this to replace the gcc math C library functions with stubs that call IEEE DP library functions (and put the library bases in aros_privdata)? The IEEE DP functions then either use software versions or inline asm FPU versions of the math functions (replace the library vectors with FPU functions when the library is opened).

The math libraries have a few known m68k-related problems. The Cmp() and Tst() functions are documented to return correctly set 68k flags, which is impossible in plain C: SetSR() sets the condition codes, but the final copy to D0 resets all flags (so all math library SetSR() calls are useless in m68k C code). Only the FFP Cmp() and Tst() have assembly fixes; other FFP functions still return at least a wrong N-flag (the FFP sign bit is bit 7, not 31); fortunately the IEEE sign bit is bit 31.
I'll try to fix IEEE Cmp() and Tst() later; Jason can attempt to fix the other return code problems if still interested :) Also the Autodocs are slightly incorrect: for example Cmp() and Tst() are documented as always returning V=0, which is impossible because the V flag is set/reset by m68k CMP and is required by bgt and other branch commands (which are documented as working with the math libs!)

Linking with -lm should solve the issue. If it's already linked with -lm then I wouldn't know why it's not working. It is not yet done because it isn't that simple; the proper m68k Amiga way is to redirect the compiler math libs to the math libraries in LIBS: (the LIBS: math libraries then use either software or FPU depending on the hardware config).

Is there something that needs to be done before LC_LIBDEFS_FILE can work in an overridden C file? I tried to override mathieeedoubbas/mathieeedoubbas_init.c (as arch/m68k-all/mathieeedoubbas/mathieeedoubbas_init.c) to support m68k FPU functions. The already-included ieeedpbas_fpu.c can't be used on m68k-amiga because the AOS C library math functions should call the math library functions internally, not the other way round.

/home/twilen/aros/arch/m68k-all/mathieeedoubbas/./mathieeedoubbas_init.c:11:10: error: #include expects "FILENAME" or <FILENAME>

This define is handled by the %build_module macro but not by the %build_archspecific macro. It will need to be implemented in config/make.tmpl if needed.

Oops, I forgot to reply that LC_LIBDEFS_FILE wasn't really needed. I overrode it because I didn't want to add yet another ifdef _m68000, and because the m68k FPU routines should always use FPU instructions, even if the m68k AROS setup is compiled to be 68000 compatible (which is true today). The init routine then patches the function pointers at runtime if an FPU is detected. btw, the IEEE Div and Mul software versions are not implemented; can anyone implement these?
The 68040 and 68060 don't implement in hardware all the instructions that older CPUs and FPUs supported (some rarely used instructions, and FPU instructions like sin and cos). Motorola distributed assembly code (including sources) that implements software emulation of the missing instructions; for example, Commodore and accelerator manufacturers included it with 68040.library or 68060.library. (I guess everyone already knew that?)

Motorola's emulation files come with the following licence; is this AROS compatible? It should be (for example it is included with NetBSD), but I'd like to have confirmation before it becomes part of m68k-all.

MOTOROLA MICROPROCESSOR & MEMORY TECHNOLOGY GROUP
M68000 Hi-Performance Microprocessor Division
M68060 Software Package Production Release
M68060 Software Package Copyright (C) 1993, 1994, 1995, 1996.

%build_module does not support asmfiles. Could anyone more familiar with this stuff add support? I need asmfiles support to build the 040/060 missing instruction emulation module. It is m68k-only, there is nothing to override, and it requires m68k assembly.

I assume r38541 and r38542 are those Motorola codes, right? What about the files in m68k-all/mathieeedoubbas, is this Motorola code as well? Actually, that's just the 'normal 68882 FPU' instructions. Toni's talking about a separate issue, which is the emulation code from Motorola to support partially implemented FPU instructions on the 68040 and 68060 (which had most, but not all, of the 68882 FPU in them). Yes, and some rarely used non-FPU instructions are missing too. They are also handled by emulation code.
<snip>
The unimplemented integer instructions are:
  64-bit divide
  64-bit multiply
  movep
  cmp2
  chk2
  cas (w/ a misaligned effective address)
  cas2
</snip>

lowlevel.library[edit]

trunk/AROS/rom/devs/keyboard/keyboard.c
trunk/AROS/rom/devs/keyboard/keyboard_intern.h

lowlevel.library is not a ROM module (it is in ROM on the CD32 only) and there is no Amiga hardware specific implementation yet, so the on-disk original version must be supported. Without this hack you only get random memory corruption that is really difficult to debug when lowlevel.library is expunged (for example WHDLoad does this at startup).

Better question: how does the original keyboard.device survive? Perhaps it does not use (or even need) opener-specific structures? Is there any test case/test application that can be used to compare results AROS vs OS3.1? I assume there is nothing about this in the RKMs/Guru Book, and we can't disassemble the KS binaries to check.

Multiple OpenDevice("keyboard.device") calls on KS3.1 return a static value in the io_Unit field (a pointer relative to the keyboard.device base address). I'd say there is no need for an opener-specific KBUnit structure. (Perhaps the original keyboard.device only works properly when there is a single opener?)

reqtools.library[edit]

The original m68k reqtools.library gadgets also have some problems: all button gadgets are invisible, but they do work when clicked (and they even become visible permanently after clicking them once). Perhaps this is related: Deluxe Paint 2 and 3 (others not tested; 2 and 3 are used for overlay tests) button gadgets have similar graphics glitches. The button text/label becomes permanently invisible after it has been clicked once (or if the gadget was originally drawn as active).

It's possible that fid->fid_EdgesOnly is 0; it should be 1 if it is drawn later. Keep in mind frameiclass is modified a lot for AFA. I'll send you the file privately, maybe you'll see the difference. Some time ago I posted the problems AFA has with AROS code; I think these two are also possibly related.
Have you checked whether these problems are not in AROS 68k? I mean, the text in the WB 3.9 About requester is shown correctly.

//RemoveClass(FindClass("itexticlass")); // text in WB 3.9 About is not written.
//BOOPSI_ITextIClass_Startup_lib();
//////
//RemoveClass(FindClass("frbuttonclass")); // gadget boxes are too large
//BOOPSI_FrButtonClass_Startup_lib();

but with the last functions I don't know what to fix, so I deactivated them.

I am quite sure reqtools uses standard intuition gadgets (it was 1.3 compatible for a long time), not BOOPSI. reqtools 68k calls this code; maybe you can add some debug code to check whether fid->fid_EdgesOnly is 0 or 1. I disabled the EdgesOnly test: no effect on the hidden button gadgets.

IPTR FrameIClass__IM_DRAW(Class *cl, struct Image *im, struct impDraw *msg)
{
    DEBUG_IFRAME(dprintf("dispatch_frameiclass: draw Width %ld Height %ld\n", im->Width, im->Height));
    //kprintf("%ld %ld\n", im->Width, im->Height);
    return draw_frameiclass(cl, im, msg, im->Width, im->Height);
}

I noticed RefreshBoolGadgetState() is called when the selection state changes, and it looks ok at that point, but something after this call clears the label text. Maybe you could try skipping draw_frameiclass(cl, im, msg, im->Width, im->Height). No effect either (on the hidden buttons; the gadgets started to look strange of course). I just noticed it is the backfill pattern that overwrites the gadget, because the gadtools demo part that shows a non-backfill window works correctly. It's not used by much software; if the text is then not overwritten, you can be sure something is wrong in frameiclass. I guess all AROS native programs use BOOPSI if no one has noticed this previously.

wbversion[edit]

The installer script does not accept AROS because of the version; I get a message that I should use Kick 3.1. So is it possible to make the AROS wbversion compatible with AOS, so it shows that AROS is at least Kick 3.1? It fails here ("Have the KS3.1"):

(if (< #wbversion 40)
    ((exit #msg-badkick (quiet)))
)

How is #wbversion set?
What version of Scalos is this and where can it be downloaded from? #wbversion is defined by looking at LIBS:version.library's version; from Scalos' installer script:

(set #wbversion (/ (getversion "Libs:version.library") 65536))

I don't think there is a version.library in AROS. I guess this explains the failure. It doesn't have any functions, does it? M68k software surely assumes it exists. It has no extra functions, but it can be expunged normally. I found this with codesearch:

const char *amigaGetOSVersion(void)
{
    static const char *osver[] = {
        "2.0","2.0x","2.1","3.0","3.1","3.2","3.3","3.4","3.5",
        "3.9","3.9","3.9","3.9","3.9","4.0pre","4.0pre","4.0"
    };
    int ver = SysBase->LibNode.lib_Version;
#ifndef __amigaos4__
    if (ver >= 40 && ver < 50) {
        // Detect OS 3.5/3.9
        struct Library *VersionBase;
        if ((VersionBase = OpenLibrary("version.library", 0))) {
            ver = VersionBase->lib_Version;
            CloseLibrary(VersionBase);
        }
    }
#endif
#ifdef __POWERUP__
    if (FindResident("MorphOS") && ver > 45)
        ver = 45;
#endif
    if (ver < 36)
        ver = 36;
    else if (ver > 51)
        ver = 51;
    return osver[ver - 36];
}

Looks like all we have to do is to set lib_Version to 40 for OS3.1.

Zune[edit]

global.prefs is incompatible with m68k-amiga. It has little-endian data included that crashes the m68k-amiga TextEditor MUI class. For example decimal 500 is read as F4010000 when TextEditor asks for MUIM_GetConfigItem / MUICFG_TextEditor_UndoSize. The class is fixed (this prevented it from initializing); the prefs problem is a different problem, and the only solution currently is to delete the prefs file. If it *is* supposed to be cross-platform, all data should be big-endian (since Classic AmigaOS is our lowest common denominator, and big-endian is easier to read by eye in a hex dump!)

I see a few of those where IPTR is reverted to ULONG (probably not good on x86_64?). Take a second look at the #ifdef __AROS__ lines added above that code. I believe the SDI_hook.h and SDI_compiler.h changes are stable, and ready to backport.
I would like to propose that we have a top-level workbench/libs/SDI that has the 'master' versions of the SDI headers for AROS, installed in :Development/Include/. Packages that currently use SDI should then pick those up, instead of the (probably stale) versions in their local includes. That does not make much sense to me when there is already compiler/include (and technically it isn't a library, so workbench/libs isn't the place for it). AROS should only have one version of SDI (hopefully the most current!) in its tree, not four or five copies of different revisions.

On a different note, I can't figure out how to make the SDI variadic VA_* macros work for the AROS architectures that use non-uniform variadic arrays (x86_64 specifically). Currently on AROS we have some quite elaborate macros (and helper functions) for the following cases:

IPTR func(LONG dummy, Tag tag1, ...)
{
    AROS_SLOWSTACKTAGS_PRE(tag1)
    retval = funcA(dummy, AROS_SLOWSTACKTAGS_ARG(tag1));
    AROS_SLOWSTACKTAGS_POST
}

and

IPTR func(Class *cl, Object *obj, ULONG MethodID, ...)
{
    AROS_SLOWSTACKMETHODS_PRE(MethodID)
    retval = funcA(cl, obj, AROS_SLOWSTACKMETHODS_ARG(MethodID));
    AROS_SLOWSTACKMETHODS_POST
}

Contribution[edit]

Package-Startup is executed from its parent directory, so it adds "system:extras/Zune/MCC_NList/Classes" to Libs:. This complicated part of the Startup-Sequence reads all variables in ENV:SYS/Packages; the variables contain the path to s/Package-Startup:

If EXISTS ENV:SYS/Packages
  List ENV:SYS/Packages NOHEAD FILES TO T:P LFORMAT="If EXISTS ${SYS/Packages/%N}*NCD ${SYS/Packages/%N}*NIf EXISTS S/Package-Startup*NExecute S/Package-Startup*NEndif*NEndif*N"
  Execute T:P
  Delete T:P QUIET
  CD SYS:
EndIf

Someone bored enough should help to fix the m68k compilation problems in contrib :)

What compiler errors do you get?
Maybe you could try to compile contrib with the old compiler (best to switch off the optimizer in GCC to avoid broken programs) and the old AROS 68k macro files in 68k cpu.h. I notice that the macros Mason made for GCC 4.5.0 do not work for compiling AFA (with GCC 3.4.0 or GCC 4.5.0); only the old AROS macros and GCC 3.4.0 work ok. But in theory it should work when I change libcall.h and cpu.h. Macro errors get more visible when you compile with the -E option, so that only the preprocessor stage is run and the ASCII output is saved. I sent this to Jason some time ago; he did not answer. Here are the mails I sent him:

This zune file, or arossupport.c: when I use the old libcall.h everything works. I have it attached (currently it is named libcall.h_ because I use your file). Here is the full log output of AmiDevCpp. I use the -E option to get preprocessor error output; without -E I only get a syntax error which is not precise. Strange that AROS works.

Compiler: m68k-AmigaOS
Building Makefile: "E:\amiga\AmiDevCpp\usr\local\amiga\Makefile.win"
Executing make...
mingw32-make.exe -f "E:\amiga\AmiDevCpp\usr\local\amiga\Makefile.win" all
m68k-amigaos-gcc-3.4.0 -c afa_os/arossupport.c -o aros/WORKBE~1/libs/MUIMAS~4/arossupport.o -I"E:/amiga/AmiDevCpp/usr/local/amiga/m68k-amigaos/sys-include" -I"E:/amiga/AmiDevCpp/usr/local/amiga/m68k-amigaos/include" -I"afa_os/include" -I"arosinclude" -I"Aros/workbench/libs/muimaster" -I"Aros/workbench/libs/muimaster/classes" -I"Aros/workbench/classes/zune/aboutwindow" -m68020 -m68881 -D__AROS__ -fno-strict-aliasing -E -w -O2
afa_os/arossupport.c:18:43: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:18:43: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:23:42: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:23:42: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:172:21: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:180:44: macro "__AROS_LCA" requires 3 arguments, but only 1 given
afa_os/arossupport.c:180:44: macro "__AROS_LCA" requires 3 arguments, but only 1 given

On line 18 there is just a simple call:

return MUI_NewObjectA(classname, &tag1);

but it happens also with the LCA macros. Currently the only working compiler is the latest one plus Jason's A6 gcc register patch (and this surely has nothing to do with possible compiler problems).

btw, there is something wrong with either AROS MUI or the libraries when also using native AmigaOS libraries. Scout crashes due to a corrupt region pointer when calling DisposeRegion (or was it ClearRegion?). (I was trying to use the AmigaOS NList because the AROS version does not compile.) The region pointer points to some strange string data. Debugging memory overflows/corruption is sooooo boring...

I think the problem here is that the SDI includes think it's 68k and use 68k compiler syntax. To see what's really wrong, use the compiler option -E and post the preprocessed file, because SDI uses so many nested macros.
If the output of -E in the function header shows something like this, with an asm command in it, that's wrong for the new compiler:

funcname(char *args __asm("a0"))

> /home/twilen/AROS/contrib/bgui/examples/FieldList.h:150:21: error: too
> many arguments to function 'SetFLAttr'

This is different; maybe a wrong prototype of SetFLAttr. If not, you can look (with the compiler option -E set in the makefile) at what the full source line is without macros in it.

> btw, there is something wrong with either aros mui or libraries when
> also using native amigaos libraries. Scout crashes due to corrupt
> region pointer when calling DisposeRegion (or was it ClearRegion).

As far as I remember, there is a different library slot usage in the Zune MUI API on AROS. I fixed the offsets for AFA, but I don't know what's different. I'll send you the AFA muimaster.h defines privately; attachments don't work here.

Because all of the OOP stuff on Amiga follows the black box concept, and much work is needed to add a feature, it is implemented in a very limited way and many features are missing; that's also a problem with MOS MUI4. So developers using MUI and the BOOPSI picture datatype began to hack and use undocumented features.

So I suggest that AROS 68k goes the same way as AFA does: the AROS MUI library is named zune.library, and all AROS programs are compiled to use zune.library on 68k. This gives you the best compatibility by using MUI 68k, and avoids the situation where new features can't be added to Zune because they would break some programs. The user can, as in AFA, use the Zune promoter to tell a program that it should use zune.library instead of muimaster.library. Because AFA has been out for a long time, I can say that only around 70-80% of MUI 68k programs work perfectly on Zune.

I also don't like the OO stuff, it's too hard to find problems, so I gave up on making Zune more compatible years ago. For the bgui-on-AROS stuff I currently have no other idea, as the problem seems to come from the AROS input device; this I cannot test on AFA.
arosc library[edit]

The librom.a library now has the old-style 'no %f/%g' printf family; arosc.library has the floating point stuff. So, if you want %f, use linklibs="arosc", and *not* linklibs="rom". It may be prudent to have the 'rom' version print out '???' or something if someone accidentally links with it and uses a '%f' format. Only Prefs for openurl.library needed %f.

FYI: it is not allowed to link modules (libraries, devices, etc) with arosc.library - this is because arosc.library stores its context in the calling task:

TASK [arosc.library context]
  |
  v
module
  |
  v
arosc.library

So when the task dies, the context dies, but the module had its memory allocated on that context. That's why modules need to be linked with librom.a (even if they are NOT in the ROM), and C library functions missing in librom.a must be implemented directly in the module (that's what I had to do with mesa.library, for example). "So, if you want %f, use linklibs="arosc", and *not* linklibs="rom"" does not apply to them. :)

PS. The macro for building modules explicitly disables linking with arosc.library - this is on purpose (that's why Kalamatee got linker errors).

What does RawDoFmt() under AOS print, then? Wasn't it something like just the '%' stripped, i.e. 'f'?

bgui library[edit]

All bgui.library gadgets are visible (some have small alignment bugs), but they can't be selected or activated with the mouse. Keyboard shortcuts work (if the gadget has a keyboard shortcut set). The bgui.library demos and FileMaster 3 ("for some reason" this is one of my test cases...) have the same BOOPSI problem. They stay in the normal non-pressed state. Also it is not just button gadgets; no bgui gadget type can be selected with the mouse. Everything else works (more or less) except mouse selection. For example, a string gadget that is selected when the window opens has the cursor visible, and even text editing works fine.
Pressing a key that is the shortcut for a button gadget works, and the gadget's text label disappears (which is wrong; it should show the pressed state). Keeping the shortcut key pressed while pressing ESC cancels the press event and the text label re-appears. So there are at least two different problems with bgui gadgets:
- mouse selection
- the button gadget (maybe others; can't really test until mouse selection works) pressed state is rendered incorrectly

I activate many AROS functions and classes for testing in AFA (the ones that cause problems with 68k code), but the bgui addbuttons test program (which I use only for testing) works ok. So it seems the problem is not in one of these AROS functions. buttonclass GM_HITTEST on AFA is the same as on AROS and works with everything. I also reverted some compatibility enhancements of the AFA code; bgui still works. I'll test on AROS 68k when I have time. What happens when they can't click - is there no pressed state shown when you click on it?

Here you can see what problems are caused by some AROS functions in some programs; normally I have deactivated all the functions with known bugs. So if you get such a problem, it may help to find which function causes it.
This is stable working AROS code with no known compatibility bugs:

RemoveClass(FindClass("gadgetclass"));
BOOPSI_GadgetClass_Startup_lib();
RemoveClass(FindClass("sysiclass"));
BOOPSI_SysIClass_Startup_lib();
BOOPSI_TbiClass_Startup_lib();
RemoveClass(FindClass("buttongclass"));
BOOPSI_ButtonGClass_Startup_lib();
RemoveClass(FindClass("imageclass"));
BOOPSI_ImageClass_Startup_lib();

SETFUNC(DoGadgetMethodA,Intuition,IntuitionBase);
// editpad scrollbar after load full size
SETFUNC(RefreshGadgets,Intuition,IntuitionBase);

if (replacevisualprefs) {
    SETFUNC(AddGadget,Intuition,IntuitionBase);
    SETFUNC(AddGList,Intuition,IntuitionBase);
}

SETFUNC(RemoveClass,Intuition,IntuitionBase);
SETFUNC(NextObject,Intuition,IntuitionBase);
SETFUNC(DisposeObject,Intuition,IntuitionBase);
SETFUNC(AddClass,Intuition,IntuitionBase);
SETFUNC(FreeClass,Intuition,IntuitionBase);
SETFUNC(MakeClass,Intuition,IntuitionBase);
SETFUNC(NewObjectA,Intuition,IntuitionBase);
SETFUNC(MakeClass,Intuition,IntuitionBase);
SETFUNC(SetGadgetAttrsA,Intuition,IntuitionBase);
SETFUNC(ObtainGIRPort,Intuition,IntuitionBase); // text is drawn only at top of screen
SETFUNC(ReleaseGIRPort,Intuition,IntuitionBase);
SETFUNC(DrawImage,Intuition,IntuitionBase); // redraw errors on old icons on workbench
SETFUNC(DrawImageState,Intuition,IntuitionBase); // redraw errors on old icons on workbench
SETFUNC(ModifyIDCMP,Intuition,IntuitionBase); // immediate redraw when scrolling does not work (MUI)

RemoveClass(FindClass("icclass")); // kingcon scrollbar redraw not immediate
BOOPSI_ICClass_Startup_lib();
RemoveClass(FindClass("rootclass"));
InitRootClass_lib();
RemoveClass(FindClass("groupgclass"));
BOOPSI_GroupGClass_Startup_lib();
RemoveClass(FindClass("modelclass"));
BOOPSI_ModelClass_Startup_lib();
RemoveClass(FindClass("itexticlass")); // text in WB 3.9 About is not written
BOOPSI_ITextIClass_Startup_lib();
//////
RemoveClass(FindClass("frbuttonclass")); // gadget boxes are too large
BOOPSI_FrButtonClass_Startup_lib();
// RemoveClass(FindClass("fillrectclass"));
BOOPSI_FillRectClass_Startup_lib();

prefs

WB3.1 Prefs/Palette apparently manually creates DrawInfo structure(s), causing a crash in sysiclass_aros.c because it assumes a valid dri_Screen. The Preferences program does not crash anymore if I set dri_version = DRI_VERSION + 1 in openscreen.c and only allow "new" DrawInfo versions in sysiclass_aros.c. Is this change OK, or is it better to have some kind of "extended DrawInfo" flag instead of a changed version number? Unfortunately this will break backwards compatibility with old AROS programs. (I'll put this inside an AROS_FLAVOUR_BINCOMPAT ifdef if there is no compatible solution.)

Prefs/Palette stops crashing and works, except that the color chooser circle is completely black: first all colors are drawn, and when that is finished, they are overwritten with black. WB3.1 Prefs/Palette calls AreaCircle (or AreaEllipse) twice (this is used to draw the narrow black border around the color wheel); both calls have the same center coordinates, one has a few pixels larger radius, and finally AreaEnd is called, which should fill only the narrow border area. The AROS version fills both circles, and the result is a completely black color wheel. (I am not going to touch this.) Apparently AreaEnd() always uses a blitter-like area fill.

wb

AOS Workbench-started programs also have a NULL pr_CurrentDir. For example, the very common SAS/C startup code (source comes with the compiler) does the following when started from WB:

    ; a0 = WB startup message
    move.l  wa_Lock(a0),d1
    callsys DupLock
    move.l  d0,__curdir(A4)
    move.l  d0,d1
    callsys CurrentDir

Unfortunately at exit it does not restore pr_CurrentDir; it only Unlock()'s __curdir(a4). Which means there is no other way to handle this than to have a NULL pr_CurrentDir.
(Currently it causes a double unlock, which at least the UAE filesystem ignores without crashing; not sure about others.) I have now confirmed the following Process variables when started from WB:

pr_CurrentDir = NULL
pr_HomeDir = set
pr_CIS = NULL
pr_COS = NULL
pr_ConsoleTask = NULL
pr_FileSystemTask = set

Commodities

Debugging this, and it is ScreenToFront("wb screen") that gets called by Wanderer (or something) after selecting Execute. I guess screendepth isn't checking (at least not correctly) that the requested screen is already in front. The call trace is: screendepth() -> RethinkDisplay() -> MrgCop()
https://en.wikibooks.org/wiki/Aros/Platforms/68k_support/Developer/Libraries
In machine learning, building a predictive model for classification or regression tasks involves many steps, from exploratory data analysis to visualization and transformation. Many transformation steps are performed to pre-process the data and get it ready for modelling, such as missing value treatment, encoding categorical data, or scaling/normalizing the data. We perform all these steps while building a machine learning model, and then, while making predictions on the testing data, we often have to repeat the same steps that were performed while preparing the training data. So many steps are followed, and while working on a big project in a team we can easily get confused about these transformations. To resolve this, we introduce pipelines, which hold every step that is performed from the start through fitting the data on the model. Through this article, we will explore pipelines in machine learning and will also see how to implement them for a better understanding of all the transformation steps.

What we will learn from this article?

- What are the pipelines in Machine learning?
- Advantages of building pipelines?
- How to implement a pipeline?

- What are the pipelines in Machine learning?

Pipelines are nothing but an object that holds all the processes that take place from data transformation to model building. Suppose while building a model we have done encoding for categorical data, followed by scaling/normalizing the data, and then finally fitting the training data to the model. If we design a pipeline for this task, the pipeline object will hold all these transformation steps; we just need to call the pipeline object, and every step that is defined will be done. This is very useful when a team is working on the same project. Defining the pipeline gives the team members a clear understanding of the different transformations taking place in the project.
There is a class named Pipeline present in sklearn that allows us to do exactly this. All the steps in a pipeline are executed sequentially. Every intermediate step in the pipeline must implement both fit and transform, whereas the last step only needs fit, which is usually fitting the data to the model for training. When we fit data on the pipeline, each intermediate step is fitted and then transforms the data before passing it to the next step. When making predictions using the pipeline, all the transform steps are repeated, and the final step performs prediction instead.

- How to implement a pipeline?

Implementation of a pipeline is very easy and mainly involves 4 different steps, listed below:

- First, we need to import Pipeline from sklearn.
- Define the pipeline object containing all the steps of transformation that are to be performed.
- Now call the fit function on the pipeline.
- Call the score function to check the score.

Let us now understand the pipeline practically and implement it on a data set. We will first import the required libraries and the data set. We will then split the data set into training and testing sets, followed by defining the pipeline and then calling the fit and score functions. Refer to the below code for the same.

import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv('pima.csv')
X = df.values[:, 0:8]   # all 8 feature columns
Y = df.values[:, 8]     # outcome column
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.30, random_state=7)

pipe = Pipeline([('sc', StandardScaler()), ('rfcl', RandomForestClassifier())])

We have defined the pipeline with the object name pipe, and this name can be changed by the programmer. We have defined the step name sc for StandardScaler and rfcl for Random Forest Classifier.
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))

If we do not want to define names for each step, like sc and rfcl for StandardScaler and Random Forest Classifier (there can sometimes be many different transformations), we can make use of make_pipeline, which can be imported from sklearn.pipeline. Refer to the below example for the same.

from sklearn.pipeline import make_pipeline

pipe = make_pipeline(StandardScaler(), RandomForestClassifier())

Here we have only passed the estimators and not names for the steps; make_pipeline names each step automatically after its class. Now let's see the steps present in this pipeline.

print(pipe.steps)
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))

Conclusion

Through this article, we discussed pipeline construction in machine learning: how pipelines can be helpful when different people work on the same project, avoiding confusion and giving a clear understanding of each step performed one after another. We then discussed the steps for building a pipeline with two stages, i.e. scaling and the model, and implemented the same on the Pima Indians Diabetes data set. At last, we explored another way of defining a pipeline, building it with make_pipeline.
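As a self-contained recap, the sketch below runs the same two-step pipeline end to end on synthetic data (generated with make_classification rather than the article's pima.csv, so it can run anywhere). It illustrates the key point of the article: once fitted, the pipeline re-applies the scaler, using statistics learned from the training split, before the final estimator predicts.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the Pima data: 8 features, binary outcome
X, y = make_classification(n_samples=200, n_features=8, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=7)

pipe = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=7))
pipe.fit(X_train, y_train)        # fits the scaler, then the model

preds = pipe.predict(X_test)      # scaling is re-applied automatically
print(len(preds) == len(y_test))  # True

# Auto-generated step names let us inspect any fitted stage
scaler = pipe.named_steps["standardscaler"]
print(scaler.mean_.shape)         # per-feature means learned from X_train
```

Because the scaler's statistics come only from X_train, the same pipeline object can be reused on any new batch of raw, unscaled data without repeating the pre-processing by hand.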
https://analyticsindiamag.com/everything-about-pipelines-in-machine-learning-and-how-are-they-used/
motifs and subgraphs

I'm counting the number of motifs (3-node isomorphic classes of connected subgraphs) in a random directed network. There are 13 of these. One is, for example, S1={1 -> 2, 2 -> 3} and another one is S2={1 -> 2, 2 -> 3, 1 -> 3}. They are two distinct motifs, and I wouldn't want to count an S1 when I actually find an S2. The problem is that S1 is contained in S2, hence subgraph_search() finds an S1 in each S2, and all related functions inherit the problem (wrong counting, wrong iterator...). Any idea how to resolve this issue? Similar things would happen for 4-node motifs and so on...

The code I used goes like:

import numpy
M1 = DiGraph(numpy.array([[0,1,0],[0,0,1],[0,0,0]]))  # first motif
M5 = DiGraph(numpy.array([[0,1,1],[0,0,1],[0,0,0]]))  # second motif
g = digraphs.RandomDirectedGNP(20, 0.1)               # a random network

l1 = []
for p in g.subgraph_search_iterator(M1):  # search first motif
    l1.append(p)                          # make a list of its occurrences

l5 = []
for p in g.subgraph_search_iterator(M5):  # the same for the second motif
    l5.append(p)
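One likely fix (not from the original post): recent Sage versions accept an induced=True keyword on subgraph_search, subgraph_search_count, and subgraph_search_iterator, which searches for induced copies only and therefore stops counting S1 inside each S2. The distinction can also be sketched in plain Python, outside Sage, with a brute-force helper (all names here are hypothetical):

```python
from itertools import permutations

def count_induced(graph_edges, graph_nodes, motif_edges, motif_nodes):
    """Count labeled *induced* occurrences of a directed motif."""
    edge_set = set(graph_edges)
    count = 0
    for image in permutations(graph_nodes, len(motif_nodes)):
        assign = dict(zip(motif_nodes, image))
        # every motif edge must map onto a graph edge...
        if not all((assign[u], assign[v]) in edge_set for u, v in motif_edges):
            continue
        # ...and no non-edge of the motif may map onto a graph edge
        extra = any(
            (assign[u], assign[v]) in edge_set
            for u in motif_nodes for v in motif_nodes
            if u != v and (u, v) not in motif_edges
        )
        if not extra:
            count += 1
    return count

S1 = [(1, 2), (2, 3)]          # path motif
S2 = [(1, 2), (2, 3), (1, 3)]  # transitive-triangle motif
g = [(0, 1), (1, 2), (0, 2)]   # this graph is exactly one S2

print(count_induced(g, [0, 1, 2], S1, [1, 2, 3]))  # 0: S1 no longer counted inside S2
print(count_induced(g, [0, 1, 2], S2, [1, 2, 3]))  # 1
```

With the induced check, a triangle contributes nothing to the path-motif count, which is exactly the separation the 13 motif classes require.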
https://ask.sagemath.org/question/10157/motifs-and-subgraphs/?comment=17639
Unable to enter text in a text area in Selenium (Python)

Please answer this; thanks in advance. The element is:

<textarea class="form-control text-muted ng-untouched ng-pristine ng-invalid" data-</textarea>

I tried locating it by name, CSS selector, and class; nothing worked.
https://www.edureka.co/community/67554/unable-to-enter-text-in-text-area-in-selenium-python
Currently PyPI allows a project name to contain basically any character except for a /. However, most of the installation tooling does not work with this wide a namespace. It also opens up several avenues for spoofing attacks, where you trick people into copying and pasting an install command that looks like you're installing one package but you are really installing a different one.

So I propose that, moving forward, all projects/distributions are required to have names using only urlsafe characters. Specifically letters, decimal digits, hyphen, period, and underscore.

Doing this would allow a better experience for people attempting to install packages, and it would allow tool authors to test and make sure they can install all valid packages, etc.
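As a hedged illustration (not part of the original post), the proposed rule is easy to express as a regular expression. Note that the rule later standardized for package names in PEP 508 is slightly stricter, additionally requiring names to start and end with a letter or digit:

```python
import re

# The post's proposed character set: letters, decimal digits,
# hyphen, period, and underscore.
VALID_NAME = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_project_name(name):
    return bool(VALID_NAME.match(name))

print(is_valid_project_name("requests"))        # True
print(is_valid_project_name("zope.interface"))  # True
print(is_valid_project_name("evil/pkg"))        # False (contains /)
print(is_valid_project_name("pkg name"))        # False (contains a space)
```

Restricting the alphabet this way means a name can always be embedded verbatim in a URL path, which is what makes the spoofing and tooling problems above go away.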
https://mail.python.org/pipermail/distutils-sig/2013-May/020701.html
ASP.NET MVC Tip #44 – Create a Pager HTML Helper

In this tip, I demonstrate how you can create a custom HTML Helper that you can use to generate a user interface for paging through a set of database records. I build on the work of Troy Goode and Martijn Boland. I also demonstrate how you can build unit tests for HTML Helpers by faking the HtmlHelper class.

This week, I discovered that I desperately needed a way of paging through the database records in two of the sample MVC applications that I am building. I needed a clean, flexible, and testable way to generate the user interface for paging through a set of database records.

There are several really good solutions to this problem that already exist. I recommend that you take a look at Troy Goode's discussion of his PagedList class at the following URL:

Also, see Martijn Boland's Pager HTML Helper at the following URL:

I used both of these solutions as a starting point. However, there were some additional features that I wanted that forced me to extend these existing solutions:

· I wanted complete flexibility in the way in which I formatted my pager.
· I wanted a way to easily build unit tests for my pager.

Walkthrough of the Pager HTML Helper

Let me provide a quick walkthrough of my Pager Helper solution. Let's start by creating a controller that returns a set of database records. The Home controller in Listing 1 uses a MovieRepository class to return a particular range of Movie database records.

Listing 1 – Controllers\HomeController.cs

using System.Web.Mvc;
using Tip44.Models;

namespace Tip44.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        private MovieRepository _repository;

        public HomeController()
        {
            _repository = new MovieRepository();
        }

        public ActionResult Index(int? id)
        {
            var pageIndex = id ?? 0;
            var range = _repository.SelectRange(pageIndex, 2);
            return View(range);
        }
    }
}

The Home controller in Listing 1 has one action named Index().
The Index() action accepts an Id parameter that represents a page index. The Index() action returns a set of database records that correspond to the page index. The Home controller takes advantage of the MovieRepository in Listing 2 to retrieve the database records.

Listing 2 – Models\MovieRepository.cs

using MvcPaging;

namespace Tip44.Models
{
    public class MovieRepository
    {
        private MovieDataContext _dataContext;

        public MovieRepository()
        {
            _dataContext = new MovieDataContext();
        }

        public IPageOfList<Movie> SelectRange(int pageIndex, int pageSize)
        {
            return _dataContext.Movies.ToPageOfList(pageIndex, pageSize);
        }
    }
}

A range of movie records is returned by the SelectRange() method. The ToPageOfList() extension method is called to generate an instance of the PageOfList class. The PageOfList class represents one page of database records. The code for this class is contained in Listing 3.

Listing 3 – Models\PageOfList.cs

using System;
using System.Collections.Generic;

namespace MvcPaging
{
    public class PageOfList<T> : List<T>, IPageOfList<T>
    {
        public PageOfList(IEnumerable<T> items, int pageIndex, int pageSize, int totalItemCount)
        {
            this.AddRange(items);
            this.PageIndex = pageIndex;
            this.PageSize = pageSize;
            this.TotalItemCount = totalItemCount;
            this.TotalPageCount = (int)Math.Ceiling(totalItemCount / (double)pageSize);
        }

        public int PageIndex { get; set; }
        public int PageSize { get; set; }
        public int TotalItemCount { get; set; }
        public int TotalPageCount { get; private set; }
    }
}

Finally, the movie database records are displayed in the view in Listing 4. The view in Listing 4 is a strongly typed view in which the ViewData.Model property is typed to an instance of the IPageOfList interface.

Listing 4 (excerpt)

<%= Html.Pager(ViewData.Model)%>
</div>
</body>
</html>

When the view in Listing 4 is displayed in a web browser, you see the paging user interface in Figure 1.

Figure 1 – Paging through movie records

Notice that the view in Listing 4 includes a Cascading Style Sheet.
By default, the Html.Pager() renders the list of page numbers in an unordered bulleted list (an XHTML <ul> tag). The CSS classes are used to format this list so that the list of page numbers appears in a single horizontal line. By default, the Html.Pager() renders three CSS classes: pageNumbers, pageNumber, and selectedPageNumber. For example, the bulleted list displayed in Figure 1 is rendered with the following XHTML:

<ul class='pageNumbers'>
  <li class='pageNumber'><a href='/Home/Index/2'><</a></li>
  <li class='pageNumber'><a href='/Home/Index/0'>1</a></li>
  <li class='pageNumber'><a href='/Home/Index/1'>2</a></li>
  <li class='pageNumber'><a href='/Home/Index/2'>3</a></li>
  <li class='selectedPageNumber'>4</li>
  <li class='pageNumber'><a href='/Home/Index/4'>5</a></li>
  <li class='pageNumber'><a href='/Home/Index/4'>></a></li>
</ul>

Notice that the 4th page number is rendered with the selectedPageNumber CSS class.

Setting Pager Options

There are several options that you can set when using the Pager HTML Helper. All of these options are represented by the PagerOptions class that you can pass as an additional parameter to the Html.Pager() method. The PagerOptions class supports the following properties:

· IndexParameterName – The name of the parameter used to pass the page index. This property defaults to the value Id.
· MaximumPageNumbers – The maximum number of page numbers to display. This property defaults to the value 5.
· PageNumberFormatString – A format string that you can apply to each unselected page number.
· SelectedPageNumberFormatString – A format string that you can apply to the selected page number.
· ShowPrevious – When true, a previous link is displayed.
· PreviousText – The text displayed for the previous link. Defaults to <.
· ShowNext – When true, a next link is displayed.
· NextText – The text displayed for the next link. Defaults to >.
· ShowNumbers – When true, page number links are displayed.
Completely Customizing the Pager User Interface

If you want to completely customize the appearance of the pager user interface then you can use the Html.PagerList() method instead of the Html.Pager() method. The Html.PagerList() method returns a list of PagerItem classes. You can render the list of PagerItems in a loop. For example, the revised Index view in Listing 5 uses the Html.PagerList() method to display the page numbers.

Listing 5 (excerpt)

<%-- Show Page Numbers --%>
<% foreach (PagerItem item in Html.PagerList(ViewData.Model)) { %>
  <a href='<%= item.Url %>' class='<%=item.IsSelected ? "selectedPageNumber" : "pageNumber" %>'><%= item.Text%></a>
<% } %>
</div>
</body>
</html>

Figure 2 – Custom pager user interface

The main reason that I added the Html.PagerList() method to the PagerHelper class is to improve the testability of the class. Testing the collection of PagerItems returned by Html.PagerList() is easier than testing the single gigantic string returned by Html.Pager(). Behind the scenes, the Html.Pager() method simply calls the Html.PagerList() method. Therefore, building unit tests for the Html.PagerList() method enables me to test the Html.Pager() method as well.

Testing the Pager HTML Helper

I created a separate test project for unit testing the PagerHelper class. The test project has one test class named PagerHelperTests that contains 10 unit tests. One issue that I ran into almost immediately when testing the PagerHelper class was the problem of faking the MVC HtmlHelper class. In order to unit test an extension method on the HtmlHelper class, you need a way of faking the HtmlHelper class. I decided to extend the MvcFakes project with a FakeHtmlHelper class. I used the FakeHtmlHelper class in all of the PagerHelper unit tests.
The FakeHtmlHelper is created in the following test class Initialize method:

[TestInitialize]
public void Initialize()
{
    // Create fake Html Helper
    var controller = new TestController();
    var routeData = new RouteData();
    routeData.Values["controller"] = "home";
    routeData.Values["action"] = "index";
    _helper = new FakeHtmlHelper(controller, routeData);

    // Create fake items to page through
    _items = new List<string>();
    for (var i = 0; i < 99; i++)
        _items.Add(String.Format("item{0}", i));
}

Notice that I need to pass both a controller and an instance of the RouteData class to the FakeHtmlHelper class. The RouteData class represents the controller and action that generated the current view. These values are used to generate the page number links.

The Initialize() method also creates a set of fake data records. These data records are used in the unit tests as a proxy for actual database records.

Here's a sample of one of the test methods from the PagerHelperTests class:

[TestMethod]
public void NoShowNext()
{
    // Arrange
    var page = _items.AsQueryable().ToPageOfList(0, 5);

    // Act
    var options = new PagerOptions { ShowNext = false };
    var results = PagerHelper.PagerList(_helper, page, options);

    // Assert
    foreach (PagerItem item in results)
    {
        Assert.AreNotEqual(item.Text, options.NextText);
    }
}

This test method verifies that setting the ShowNext PagerOption property to the value false causes the Next link to not be displayed. In the Arrange section, an instance of the PageOfList class is created that represents a range of records from the fake database records. Next, in the Act section, the PagerOptions class is created and the PagerHelper.PagerList() method is called. The PagerHelper.PagerList() method returns a collection of PagerItem objects. Finally, in the Assert section, a loop is used to iterate through each PagerItem object to verify that the Next link does not appear in the PagerItems. If the text for the Next link is found then the test fails.
Using the PagerHelper in Your Projects At the end of this blog entry, there is a link to download all of the code for the PagerHelper. All of the support classes for the PagerHelper are located in the MvcPaging project. If you want to use the PagerHelper in your MVC projects, you need to add a reference to the MvcPaging assembly. Summary Creating a good Pager HTML Helper is difficult. There are many ways that you might want to customize the user interface for paging through a set of database records. Attempting to accommodate all of the different ways that you might want to customize a paging user interface is close to impossible. Working on this project gave me a great deal of respect for the work that Troy Goode and Martijn Boland performed on their implementations of pagers. I hope that this tip can provide you with a useful starting point, and save you some time, when you need to implement a user interface for paging through database records.
https://weblogs.asp.net/stephenwalther/asp-net-mvc-tip-44-create-a-pager-html-helper
Welcome to my brief homepage for CS 352. Announcements, homework hints, etc. will appear here or in the discussion forum.

My office hours are scheduled for Monday 1-3 PM and Wednesday 3-4 PM. (I do not guarantee quick responses to emails, except that I have a goal to respond within 24 hours.)

The class website is:

To sign up for the discussion forum, visit. If you do not have a Google account, send an email to me, and I will send an email to you that will invite the account of your choosing to join. You do not need to join to read the discussion forum, just to post to it. That being said, we hope most of you will post at some point, so it is probably in your interest to sign up now, rather than later.

Instructions on how to join a Google group and set up a GMail filter. Since we filter the discussion group emails by list-alias, it is unnecessary to begin the subject line of discussion group emails with a "CS 352" prefix. That being said, placing the homework and question number in the subject line will make life easier for your peers.

Guide
Excel file
Examples

#include <stdio.h> at the top of your file.
http://www.cs.utexas.edu/users/ragerdl/cs352/
This is not a reference on the differences between the versions of Ruby; please consult your favorite reference after reviewing the tips you read here. But now that you have been sufficiently warned, we can move on to talking about how to keep things clean.

It is very tempting to run your test suite on one version of Ruby, check to make sure everything passes, then run it on the other version you want to support and see what breaks. After seeing failures, it might seem easy enough to just drop in code such as the following to make things go green again:

def my_method(string)
  lines = if RUBY_VERSION < "1.9"
    string.to_a
  else
    string.lines
  end

  do_something_with(lines)
end

Resist this temptation! If you aren't careful, this will result in a giant mess that will be difficult to refactor, and will make your code less readable. Instead, we can approach this in a more organized fashion.

Before duplicating any effort, it's important to check and see whether there is another reasonable way to write your code that will allow it to run on both Ruby 1.8 and 1.9 natively. Even if this means writing code that's a little more verbose, it's generally worth the effort, as it prevents the codebase from diverging. If this fails, however, it may make sense to simply backport the feature you need to Ruby 1.8. Because of Ruby's open classes, this is easy to do. We can even loosen up our changes so that they check for particular features rather than a specific version number, to improve our compatibility with other applications and Ruby implementations:

class String
  unless "".respond_to?(:lines)
    alias_method :lines, :to_a
  end
end

Doing this will allow you to rewrite your method so that it looks more natural:

def my_method(string)
  do_something_with(string.lines)
end

Although this implementation isn't exact, it is good enough for our needs and will work as expected in most cases.
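The same feature-detection pattern generalizes to any method missing on an older Ruby. As a hedged illustration (not from the original text), here is how one might backport String#start_with?, which only appeared in Ruby 1.8.7:

```ruby
# Guarded backport: define the method only when the running Ruby lacks it,
# so newer interpreters keep their native implementation.
class String
  unless "".respond_to?(:start_with?)
    def start_with?(*prefixes)
      prefixes.any? { |prefix| self[0, prefix.length] == prefix }
    end
  end
end

puts "prawn".start_with?("pr")         # true
puts "prawn".start_with?("pdf", "pr")  # true (any prefix may match)
puts "prawn".start_with?("pdf")        # false
```

Because the definition is wrapped in a respond_to? guard, loading this file on a modern Ruby is a no-op, which is exactly the property that makes such patches safe to distribute.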
However, if we wanted to be pedantic, we'd be sure to return an Enumerator instead of an Array:

class String
  unless "".respond_to?(:lines)
    require "enumerator"

    def lines
      to_a.enum_for(:each)
    end
  end
end

If you aren't redistributing your code, passing tests in your application and code that works as expected are a good enough indication that your backward-compatibility patches are working. However, in code that you plan to distribute, open source or otherwise, you need to be prepared to make things more robust when necessary. Any time you distribute code that modifies core Ruby, you have an implicit responsibility of not breaking third-party libraries or application code, so be sure to keep this in mind and clearly document exactly what you have changed.

In Prawn, we use a single file, prawn/compatibility.rb, to store all the core extensions used in the library that support backward compatibility. This helps make it easier for users to track down all the changes made by the library, which can help make subtle bugs that can arise from version incompatibilities easier to spot.

In general, this approach is a fairly solid way to keep your application code clean while supporting both Ruby 1.8 and 1.9. However, you should use it only to add new functionality to Ruby 1.8.6 that isn't present in 1.9.1, and not to modify existing behavior. Adding functions that don't exist in a standard version of Ruby is a relatively low-risk procedure, whereas changing core functionality is a far more controversial practice.

If you run into a situation where you really need two different approaches between the two major versions of Ruby, you can use a trick to make this a bit more attractive in your code.
if RUBY_VERSION < "1.9"
  def ruby_18
    yield
  end

  def ruby_19
    false
  end
else
  def ruby_18
    false
  end

  def ruby_19
    yield
  end
end

Here's an example of how you'd make use of these methods:

def open_file(file)
  ruby_18 { File.open("foo.txt","r") } ||
  ruby_19 { File.open("foo.txt", "r:UTF-8") }
end

Of course, because this approach creates a divergent codebase, it should be used as sparingly as possible. However, this looks a little nicer than a conditional statement and provides a centralized place for changes to minor version numbers if needed, so it is a nice way to go when it is actually necessary.

When you need to accomplish the same thing in two different ways, you can also consider adding a method to both versions of Ruby. Although Ruby 1.9.1 shipped with File.binread(), this method did not exist in the earlier developmental versions of Ruby 1.9.

Although a handful of ruby_18 and ruby_19 calls here and there aren't that bad, the need for opening binary files was pervasive, and it got tiring to see the following code popping up everywhere this feature was needed:

ruby_18 { File.open("foo.jpg", "rb") } ||
ruby_19 { File.open("foo.jpg", "rb:BINARY") }

To simplify things, we put together a simple File.read_binary method that worked on both Ruby 1.8 and 1.9. You can see this is nothing particularly exciting or surprising:

if RUBY_VERSION < "1.9"
  class File
    def self.read_binary(file)
      File.open(file,"rb") { |f| f.read }
    end
  end
else
  class File
    def self.read_binary(file)
      File.open(file,"rb:BINARY") { |f| f.read }
    end
  end
end

This cleaned up the rest of our code greatly, and reduced the number of version checks significantly. Of course, when File.binread() came along in Ruby 1.9.1, we went and used the techniques discussed earlier to backport it to 1.8.6, but prior to that, this represented a nice way to attack the same problem in two different ways.
Now that we’ve discussed all the relevant techniques, I can show you what prawn/compatibility.rb looks like. This file allows Prawn to run on both major versions of Ruby without any issues, and as you can see, it is quite compact: prawn/compatibility.rb class String #:nodoc: unless "".respond_to?(:lines) alias_method :lines, :to_a end end unless File.respond_to?(:binread) def File.binread(file) File.open(file,"rb") { |f| f.read } end end if RUBY_VERSION < "1.9" def ruby_18 yield end def ruby_19 false end else def ruby_18 false end def ruby_19 yield end end This code leaves Ruby 1.9.1 virtually untouched and adds only a couple of simple features to Ruby 1.8.6. These small modifications enable Prawn to have cross-compatibility between versions of Ruby without polluting its codebase with copious version checks and workarounds. Of course, there are a few areas that needed extra attention, and we’ll about the talk sorts of issues to look out for in just a moment, but for the most part, this little compatibility file gets the job done. Even if someone produced a Ruby 1.8/1.9 compatibility library that you could include into your projects, it might still be advisable to copy only what you need from it. The core philosophy here is that we want to do as much as we can to let each respective version of Ruby be what it is, to avoid confusing and painful debugging sessions. By taking a minimalist approach and making it as easy as possible to locate your platform-specific changes, we can help make things run more smoothly. Before we move on to some more specific details on particular incompatibilities and how to work around them, let’s recap the key points of this section: Try to support both Ruby 1.8 and 1.9 from the ground up. However, be sure to write your code against Ruby 1.9 first and then backport to 1.8 if you want prevent yourself from writing too much legacy code. 
Before writing any version-specific code or modifying core Ruby, attempt to find a way to write code that runs natively on both Ruby 1.8 and 1.9. Even if the solution turns out to be less beautiful than usual, it's better to have code that works without introducing redundant implementations or modifications to core Ruby.

For features that don't have a straightforward solution that works on both versions, consider backporting the necessary functionality to Ruby 1.8 by adding new methods to existing core classes.

If a feature is too complicated to backport or involves separate procedures across versions, consider adding a helper method that behaves the same on both versions.

If you need to do inline version checks, consider using the ruby_18 and ruby_19 blocks shown in this appendix. These centralize your version-checking logic and provide room for refactoring and future extension.

With these thoughts in mind, let's check out some incompatibilities you just can't work around, and how to avoid them.

There are some features in Ruby 1.9 that you simply cannot backport to 1.8 without modifying the interpreter itself. Here we'll talk about just a few of the more obvious ones, to serve as a reminder of what to avoid if you plan to have your code run on both versions. In no particular order, here's a fun list of things that'll cause a backport to grind to a halt if you're not careful.

Ruby 1.9 adds a cool feature that lets you write things like:

foo(a: 1, b: 2)

But on Ruby 1.8, we're stuck using the old key => value syntax:

foo(:a => 1, :b => 2)

Ruby 1.9.1 offers a downright insane amount of ways to process arguments to methods. But even the more simple ones, such as multiple splats in an argument list, are not backward compatible.
Here’s an example of something you can do on Ruby 1.9 that you can’t do on Ruby 1.8, which is something to be avoided in backward-compatible code:

    def add(a, b, c, d, e)
      a + b + c + d + e
    end

    add(*[1,2], 3, *[4,5]) #=> 15

The closest thing we can get to this on Ruby 1.8 would be something like this:

    add(*[[1,2], 3, [4,5]].flatten) #=> 15

Of course, this isn’t nearly as appealing. It doesn’t even handle the same edge cases that Ruby 1.9 does, as this would not work with any array arguments that are meant to be kept as an array. So it’s best to just not rely on this kind of interface in code that needs to run on both 1.8 and 1.9.

On Ruby 1.9, block variables will shadow outer local variables, resulting in the following behavior:

    >> a = 1
    => 1
    >> (1..10).each { |a| a }
    => 1..10
    >> a
    => 1

This is not the case on Ruby 1.8, where the variable will be modified even if not explicitly set:

    >> a = 1
    => 1
    >> (1..10).each { |a| a }
    => 1..10
    >> a
    => 10

This can be the source of a lot of subtle errors, so if you want to be safe on Ruby 1.8, be sure to use different names for your block-local variables so as to avoid accidentally overwriting outer local variables.

In Ruby 1.9, blocks can accept block arguments, which is most commonly seen in define_method:

    define_method(:answer) { |&b| b.call(42) }

However, this won’t work on Ruby 1.8 without some very ugly workarounds, so it might be best to rethink things and see whether you can do them in a different way if you’ve been relying on this functionality.

Both the stabby Proc and the .() call are new in 1.9, and aren’t parseable by the Ruby 1.8 interpreter.
This means that calls like this need to go:

    >> ->(a) { a*3 }.(4)
    => 12

Instead, use the trusty lambda keyword and Proc#call or Proc#[]:

    >> lambda { |a| a*3 }[4]
    => 12

Although it is possible to build the Oniguruma regular expression engine into Ruby 1.8, it is not distributed by default, and thus should not be used in backward-compatible code. This means that if you’re using named groups, you’ll need to ditch them. The following code uses named groups:

    >> "Gregory Brown".match(/(?<first_name>\w+) (?<last_name>\w+)/)
    => #<MatchData "Gregory Brown" first_name:"Gregory" last_name:"Brown">

We’d need to rewrite this as:

    >> "Gregory Brown".match(/(\w+) (\w+)/)
    => #<MatchData "Gregory Brown" 1:"Gregory" 2:"Brown">

More advanced regular expressions, including those that make use of positive or negative look-behind, will need to be completely rewritten so that they work on both Ruby 1.8’s regular expression engine and Oniguruma.

Though it may go without saying, Ruby 1.8 is not particularly well suited for working with character encodings. There are some workarounds for this, but things like magic comments that tell what encoding a file is in or String objects that are aware of their current encoding are completely missing from Ruby 1.8.

Although we could go on, I’ll leave the rest of the incompatibilities for you to research. Keeping an eye on the issues mentioned in this section will help you avoid some of the most common problems, and that might be enough to make things run smoothly for you, depending on your needs.

So far we’ve focused on the things you can’t work around, but there are lots of other issues that can be handled without too much effort, if you know how to approach them. We’ll take a look at a few of those now.

Although we have seen that some functionality is simply not portable between Ruby 1.8 and 1.9, there are many more areas in which Ruby 1.9 just does things a little differently or more conveniently.
In these cases, we can develop suitable workarounds that allow our code to run on both versions of Ruby. Let’s take a look at a few of these issues and how we can deal with them.

In Ruby 1.9, you can get back an Enumerator for pretty much every method that iterates over a collection:

    >> [1,2,3,4].map.with_index { |e,i| e + i }
    => [1, 3, 5, 7]

In Ruby 1.8, Enumerator is part of the standard library instead of core, and isn’t quite as feature-packed. However, we can still accomplish the same goals by being a bit more verbose:

    >> require "enumerator"
    => true
    >> [1,2,3,4].enum_for(:each_with_index).map { |e,i| e + i }
    => [1, 3, 5, 7]

Because Ruby 1.9’s implementation of Enumerator is mostly backward-compatible with Ruby 1.8, you can write your code in this legacy style without fear of breaking anything.

In Ruby 1.8, Strings are Enumerable, whereas in Ruby 1.9, they are not. Ruby 1.9 provides String#lines, String#each_line, String#each_char, and String#each_byte, all of which are not present in Ruby 1.8. The best bet here is to backport the features you need to Ruby 1.8, and avoid treating a String as an Enumerable sequence of lines. When you need that functionality, use String#lines followed by whatever enumerable method you need. The underlying point here is that it’s better to stick with Ruby 1.9’s functionality, because it’ll be less likely to confuse others who might be reading your code.

In Ruby 1.9, strings are generally character-aware, which means that you can index into them and get back a single character, regardless of encoding:

    >> "Foo"[0]
    => "F"

This is not the case in Ruby 1.8.6, as you can see:

    >> "Foo"[0]
    => 70

If you need to do character-aware operations in Ruby 1.8 and 1.9, you’ll need to process things using a regex trick that gets you back an array of characters.
After setting $KCODE="U",[18] you’ll need to do things like substitute calls to String#reverse with the following:

    >> "résumé".scan(/./m).reverse.join
    => "émusér"

Or as another example, you’ll replace String#chop with this:

    >> r = "résumé".scan(/./m); r.pop; r.join
    => "résum"

Depending on how many of these manipulations you’ll need to do, you might consider breaking out the Ruby 1.8-compatible code from the clearer Ruby 1.9 code using the techniques discussed earlier in this appendix. However, the thing to remember is that anywhere you’ve been enjoying Ruby 1.9’s m17n support, you’ll need to do some rework. The good news is that many of the techniques used on Ruby 1.8 still work on Ruby 1.9, but the bad news is that they can appear quite convoluted to those who have gotten used to the way things work in newer versions of Ruby.

Ruby 1.9 has built-in support for transcoding between various character encodings, whereas Ruby 1.8 is more limited. However, both versions support Iconv. If you know exactly what formats you want to translate between, you can simply replace your string.encode("ISO-8859-1") calls with something like this:

    Iconv.conv("ISO-8859-1", "UTF-8", string)

However, if you want to let Ruby 1.9 stay smart about its transcoding while still providing backward compatibility, you will just need to write code for each version. Here’s an example of how this was done in an early version of Prawn:

    if "".respond_to?(:encode!)
      def normalize_builtin_encoding(text)
        text.encode!("ISO-8859-1")
      end
    else
      require 'iconv'
      def normalize_builtin_encoding(text)
        text.replace Iconv.conv('ISO-8859-1//TRANSLIT', 'utf-8', text)
      end
    end

Although there is duplication of effort here, the Ruby 1.9-based code does not assume UTF-8-based input, whereas the Ruby 1.8-based code is forced to make this assumption. In cases where you want to support many encodings on Ruby 1.9, this may be the right way to go.
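To tie this back to the compatibility file shown earlier, here is a sketch of how the ruby_18/ruby_19 helpers keep this kind of character-aware branching readable. The first_char method is my own hypothetical example, not something from Prawn:

```ruby
# The version helpers as defined in prawn/compatibility.rb:
if RUBY_VERSION < "1.9"
  def ruby_18; yield; end
  def ruby_19; false; end
else
  def ruby_18; false; end
  def ruby_19; yield; end
end

# Hypothetical use: fetch the first character of a string on either
# version. 1.9 strings are character-aware, so string[0] works there;
# on 1.8 we fall back to the scan(/./m) trick shown above.
def first_char(string)
  ruby_19 { string[0] } || ruby_18 { string.scan(/./m).first }
end

puts first_char("résumé") #=> "r"
```

On 1.9 the ruby_18 branch is never evaluated, and vice versa, so each interpreter only ever runs the code path written for it.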
Although we’ve just scratched the surface, this handful of tricks should cover most of the common issues you’ll encounter. For everything else, consult your favorite language reference. Depending on the nature of your project, getting things running on both Ruby 1.8 and 1.9 can be either trivial or a major undertaking. The more string processing you are doing, and the greater your need for multilingualization support, the more complicated a backward-compatible port of your software to Ruby 1.8 will be. Additionally, if you’ve been digging into some of the fancy new features that ship with Ruby 1.9, you might find yourself doing some serious rewriting when the time comes to support older versions of Ruby. In light of all this, it’s best to start (if you can afford to) by supporting both versions from the ground up. By writing your code in a fairly backward-compatible subset of Ruby 1.9, you’ll minimize the amount of duplicated effort that is needed to support both versions. If you keep your compatibility hacks well organized and centralized, it’ll be easier to spot any problems that might crop up. If you find yourself writing the same workaround several times, think about extending the core with some helpers to make your code clearer. However, keep in mind that when you redistribute code, you have a responsibility not to break existing language features and that you should strive to avoid conflicts with third-party libraries. But don’t let all these caveats turn you away. Writing code that runs on both Ruby 1.8 and 1.9 is about the most friendly thing you can do in terms of open source Ruby, and will also be beneficial in other scenarios. Start by reviewing the guidelines in this appendix, then remember to keep testing your code on both versions of Ruby. As long as you keep things well organized and try as best as you can to minimize version-specific code, you should be able to get your project working on both Ruby 1.8 and 1.9 without conflicts. 
This gives you a great degree of flexibility, which is often worth the extra effort.

[18] This is necessary to work with UTF-8 on Ruby 1.8, but it has no effect on 1.9.

If you enjoyed this excerpt, buy a copy of Ruby Best Practices. © 2012, O’Reilly Media, Inc.
by Álex 2012-11-06 19:30
talks graphite carbon diamond monitoring python pygrunn

Two days ago was, sadly, my last PyGrunn monthly meeting. Thanks to Bram we now know a little bit more about how to monitor Python applications. Below you will find the notes that I took for the people that couldn't attend the talk. But they are only that, some notes; don't expect to find a really cool story in them. I am pretty sure that the original slides made by Bram will help you.

Some important companies are using it: Instagram, Etsy, Github, Kalooga (this is the company where Bram is working :). It's a project created by Orbitz.com.

Graphite: the tool that makes the graphs.
Carbon: collects the statistics.
Whisper: metrics RRD (Round Robin Database).
Diamond: the metrics collector; there are others: CollectD, Munin, Ganglia… Of course, you can develop your own.

If you want namespaces you can always use dots. Example: pygrunn.load [load] [now].

Why not use statsd? It's a layer for Graphite that you can use to keep your application running while sending the data to statsd: if it can write it, OK; if not, you have a problem. The original implementation of StatsD is in Node.js, but there are other projects that do it in C (StatsD-c) or Python (Bucky) or -write your preferred language here-.

You can plot data and do funny things with it (screenshots in the slides).

The metrics are stored in files; in case you run out of space the data will be stored in the Carbon cache until something happens to it.

You can filter results, for example with mostDeviant (take a look at the slides to see the screenshots that show its use).

Sentry: error-catching middleware that you can run with your WSGI application to check the exceptions and the stacktraces.
Shinken: also written in Python and compatible with Nagios. It could be a good complement to Graphite to show some alerts when things are really wrong.
New Relic: it's a web APM (Application Performance Management) tool.
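As a quick illustration of the dotted metric namespaces mentioned above, here is a sketch of feeding Carbon directly over its plaintext protocol. The metric name, host and port are my own assumptions (2003 is Carbon's usual plaintext listener), not something from the talk:

```python
import socket
import time


def carbon_line(path, value, timestamp=None):
    # Carbon's plaintext listener takes one metric per line:
    # "<dotted.path> <value> <unix-timestamp>\n"
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (path, value, timestamp)


def send_metric(path, value, host="localhost", port=2003):
    # Fire a single metric at a running carbon-cache instance.
    with socket.create_connection((host, port), timeout=1) as sock:
        sock.sendall(carbon_line(path, value).encode("ascii"))


if __name__ == "__main__":
    print(carbon_line("pygrunn.load", 0.42, 1352229000), end="")
```

In practice you would use a collector like Diamond or a statsd client instead of hand-rolling this, but it shows how little protocol there is underneath.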
http://agonzalezro.github.io/graphite-carbon-and-diamond.html
CC-MAIN-2018-17
refinedweb
364
74.29
Related questions and tutorials:

- How to Calculate Area of Rectangle: learn to calculate the area of a rectangle using a method which takes two parameters of type integer.
- How to Calculate Area of Circle: a program that computes the area of a circle and returns it as a double.
- Installing programs over a network using Java: "Hi, I want to write a Java program that will allow me to install programs from a server to a client machine. Any help will be appreciated."
- Java Network Bandwidth: how to calculate the network bandwidth using core Java.
- Method which returns area of circle: need a simple Java method example that returns the area of a circle given the radius.
- BlueJ (area of circle): how does one write a program to calculate the area of a circle in BlueJ? Note that the user will have to provide the radius.
- How can Java programs execute automatically when the machine connects to a network?
- How to Create Circle In Java: a simple AWT program that draws a circle diagram.
- How to execute Java programs: "I have JDK 1.6 installed in my PC. I want to execute Java programs in MS-DOS, with and without applets."
- Java circle-to-circle collision detection.
- How to draw a network (lines and nodes) in a Java applet: "I am doing my MSc project. I want to know how we can draw a network with lines and nodes (any number of them, say 5), and also how we can draw a graph."
- Display a circle in a rectangle using an applet.
- Java Programs: what are Java programs, and is there any tool to help Java programmers develop them?
- Calculate sum and average in Java: the given code accepts the numbers from the user using the BufferedReader class.
- Types of Java programs: standalone applications, web applications, enterprise applications, console applications, web services.
- Java conversion: how do I convert the String date "Aug 31, 2012 19:23:17.907339000 IST" to a long value?
- Java applet program to display an image using the drawImage method of the Graphics class.
- Form to JavaScript conversion: "I am using a form; how do I convert this piece of code to JavaScript so it functions properly?"
- Java lab programs: develop a Java package with a simple Stack; classes for complex numbers; a class hierarchy for Point, Shape, Rectangle, Square and Circle; multithreading with the Runnable interface.
- Java conversion question: how can I convert Java code to VB.NET, and which software is useful for it?
- Java with computer networks: "I feel it's very difficult to use Java for network programming; give me any simple example."
- Java GUI program for drawing a rectangle and circle, showing the message dialog "this is a red circle" on click.
- Calculate a company's sales using Java: a company sells 3 items at different rates and details of sales (total amount) of each item are stored on a weekly basis.
- How to Create Text Area In Java: a text area is created on a panel using the JTextArea class.
- Java programs using a GUI: read the first name, last name, hours worked and hourly rate for an employee, then print the name.
- Network-based program: "I need code for sending a message from one system to another using a Java program."
- Why the word "static" is used in Java programs.
- A program for rational numbers that represents 500/1000 as 1/2 (see the RationalClass example).
- Java conversion: convert a binary number into decimal, and a char into String data.
- Java networking program which displays the usage of the getName() and getAddress() methods.
- Design a Java interface for an ADT stack; develop two classes that implement it, one using an array and another using a linked list.
- Conversion from Java to .NET: Java applications can be converted to C# using the Microsoft Java Language Conversion Assistant (JLCA).
- Network project: "I am writing a Java program for a LAN which creates accounts for students and allows them to view assignments posted by lecturers; how would I create folders for each student on the server?"
- Type conversion: how to convert an integer value to a string value in a database operation.
- How to program Java on a Linux system.
- Calculate age using the current date and date of birth, both entered in a specified format.
- How to draw a network graph (game theory) with 3 nodes, each node displaying some information.
- A program to print a circle and fill a colour in it (using Graphics2D and Ellipse2D.Double).
- Help understanding 13 Java programs (Integer.parseInt on command-line arguments, etc.).
- Collection of Java sample programs and tutorials: for example, how to iterate a collection in Java.
- Java network library: what are the Java network libraries?
- Java array programs: how to create an array program in Java (see the OneDArray example).
- Conversion: how to convert .xls to .pdf (e.g. with the Lowagie library) and .mp4 to .mp3.
- Calculate size of array: is it possible to calculate the size of an array using pointers in Java?
- How to access the "Add/Remove Programs" list in the Control Panel from a Java program.
- How to prepare a railway reservation form using core Java concepts: fix the number of tickets allotted for online reservation, checked on each ticket booking.
- Java example to calculate the execution time of a method or program, by subtracting the starting time.
- Series programs: print the series 55555 55554 55543 55432 54321.
- A union B, transpose of a matrix, denomination of a given number (see the Transpose example).
- Calculate the sum of three numbers: define a class and compute the sum of three integers.
- How to compile programs: "javac is not recognised as a file name" - have you set your path and class name correctly?
- Image conversion tools: how many tools are available for image conversion in Java?
- OOPs concepts in Java: OOP is the technique to create programs based on the real world; in Java, a class is a structure containing data and code with behavior.
- Java programs: prompt the user for 2 different integers, then print the numbers between them (inclusive), their squares and the sum.
- Calculate the volume of a cube, cylinder and rectangular box using method overloading in Java.
- VoIP network management: an easy-to-use Java client GUI, as well as SNMP/CORBA and billing interfaces for integration into existing network management systems.
- How to calculate max and min in a loop over array values.
- Program in Java: decimal to binary, and a String reverse example using + concatenation.
- Writing Java programs: display the even numbers from 1 to 50 (see the EvenNumbers example).
- URL in terms of Java network programming: a URL is the address of a resource on the Internet.
- Calculate the addition of two distances in feet and inches using objects as function arguments.
- Calculate sales tax using a Java program: calculate the sales tax and print out the receipt details for the purchased items.
- Java date conversion: how do I convert 'Mon Mar 28 00:00:00 IST 1983' (a java.util.Date or String) to yyyy/MM/dd format?
- Object conversion in Java (e.g. Object to Double).
Hi,

I have committed my mutant proposal code. You won't have seen the CVS commit because it was pretty big. Some of you will no doubt sigh with relief at that. Anyway, in order to help people understand what mutant is and how it does what it does, this is a brief, albeit rambling description.

Mutant has many experimental ideas which may or may not prove useful. I'll try to describe what is there and let anyone who is interested comment. Mutant is still immature. You'll notice that there is at this time just one task, a hacked version of the echo task, which I have been using to test out ideas. Most tasks would end up being pretty similar to their Ant 1.x version.

OK, let me start with some of the motivating requirements. There are of course many Ant2 requirements but I want to focus on these for now. Mutant does address many of the others. BTW, I'll use the terms Ant and mutant somewhat interchangeably - just habit, not an assumption of any sort.

One of the things which is pretty difficult in Ant 1.x is the management of classpaths and classloaders. For example, today the antlr task requires the antlr classes in the classpath used to start ant. I'm talking here about the classpath built up in the ant.bat/ant script launchers. At the same time, my mate Oliver's checkstyle task, which uses antlr, won't run if the antlr classes are in the classpath, because then those classes cannot "see" the classes in the taskdef's classpath.

Another requirement I have is extensibility. In Ant 1.x this is difficult because whenever a new type is created, each task which needs to support this must be changed to provide the new addXXX method. The ejbjar task is one example with its concept of vendor specific tools. The zip/jar task with its support for different types of fileset is another - the addition of the classfileset would require a change to the zip task.

Mutant Init
=================

Mutant defines a classloader hierarchy somewhat similar to that used in Tomcat 4.
Tasks join into this hierarchy at a particular point to ensure they have visibility of the necessary interface classes and no visibility of the Ant core itself. There is nothing particularly novel about that, but they are able to request certain additional resources as we will see later.

Mutant starts with two jars. One is start.jar, which contains just one class, Main.java, which establishes the initial configuration and then runs the appropriate front end command line class. If a different front end was desired, a different launch class in its own jar would be used, which would configure the classloader hierarchy somewhat differently and start the appropriate GUI front end.

The second jar, init.jar, provides a number of initialisation utilities. These are used by Main.java to setup Ant and would also be used by any other front end to configure Ant. There are four classes here:

   ClassLocator  - used to locate from which jar a class comes
   LoaderUtils   - utils for building classloaders from dirs
   InitException - init time exception
   InitConfig    - config information sent into Ant at startup

The important class here is the InitConfig, which communicates the state of Ant at startup into the core of Ant when it starts up. Main determines the location of ANT_HOME based on the location of the start classes and then populates the InitConfig with both classloaders and information about the location of various jars and config files.

At the top of the classloader hierarchy (or bottom if you turn the page around) are the bootstrap and system classloaders. I won't really distinguish between these in mutant. Combined they provide the JDK classes, plus the classes from the init and start jars. One objective is to keep the footprint of the init and start jars small so they do not require any external classes which may then become visible lower in the hierarchy.
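The ANT_HOME detection above hinges on locating the jar a class was loaded from. A rough sketch of what a ClassLocator-style utility does (hypothetical code, not mutant's actual implementation):

```java
import java.net.URL;

public class ClassLocatorSketch {
    // Find the URL of the jar, directory or runtime image a class came from,
    // by asking the loader that defined it for the .class resource.
    public static URL locate(Class<?> c) {
        String resource = c.getName().replace('.', '/') + ".class";
        ClassLoader loader = c.getClassLoader();
        return loader == null
                ? ClassLoader.getSystemResource(resource)  // bootstrap-loaded
                : loader.getResource(resource);
    }

    public static void main(String[] args) {
        // Our own class is on the application classpath, so this is non-null:
        System.out.println(locate(ClassLocatorSketch.class) != null);
    }
}
```

From a jar: URL like this, a launcher can walk up to the enclosing lib directory and derive ANT_HOME.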
Main does not explicitly create these loaders, of course, but just adds a reference to the init config as the system class loader.

The next jar is for the common area. This provides interface definitions and utility classes for use by both the core and by tasks/types etc. It is loaded from ANT_HOME/lib/common/*.jar. Typically this is just lib/common/common.jar, but any other jars in here are loaded. This pattern is used for all of the classloaders.

Next up is the core loader. It includes the lib/antcore/antcore.jar plus any others, and the XML parser jars. Mutant's core does not assume that the project model will come from an XML description, but XML facilities are needed in the core for reading in ant library defs and config files. The parser jar locations are also stored in the init config. This lets the jars be added to any ant library that wants to use Ant's XML parser rather than providing its own. Similarly the tools.jar location is determined automatically and added to the config for use by tasks which request it. I'll get to that in discussing the antlib processing.

The final jar that is loaded is the jar for the frontend - cli.jar. This is not passed in the init config since these classes are not visible to the core and are not needed by it.

So the hierarchy is

   jdk classes
        |
   start/init
        |
     common
        |
    antcore
        |
      cli

Tasks generally will come in at common, hiding the core classes, front end and XML parser classes from tasks.

Once Main has set up the initConfig, it creates the front end commandline class and launches mutant proper, passing it the command line args and the init config. A GUI would typically replace start.jar and the cli.jar with its own versions which manage model construction from GUI processes rather than from XML files. It may be possible to move some of Main.java's processing into init.jar if it is useful to other front ends. I haven't looked at that balance.

Mutant Frontend
================
The front end is responsible for coordinating execution of Ant.
It manages command line arguments, builds a model of the project to be evaluated and coordinates the execution services of the core. cli.jar contains not only the front-end code but also the XML parsing code for building a project model from an XML description. Other front ends may choose to build project models in different ways.

Commandline is pretty similar to Ant 1.x's Main.java - it handles arguments, building loggers, listeners, defines, etc. - actually I haven't fully implemented defines in mutant yet, but it would be similar to Ant 1.x.

Commandline then moves to building a project model from the XML representation. I have just expanded the approach in Ant 1's ProjectHelper for XML parsing, moving away from a stack of inner classes. The classes in the front end XML parsing use some XML utility base classes from the core.

The XML parsing handles two elements at parse time. One is the <ref> element which is used for project references - that is, relationships between project files. The referenced project is parsed as well. The second is the <include> element which includes either another complete project or a project <fragment> directly into the project. All the other elements are used to build a project model which is later processed in the core.

The project model itself is organized like this:

A project contains
  - named references to other projects
  - targets
  - build elements (tasks, type instances)

A target contains
  - build elements (tasks, type instances)

A build element contains
  - build elements (nested elements)

So, for now the project model contains top level tasks and type instances. I'm still thinking about those, and about property scoping, especially in the face of project refs and property overrides. Anyway, the running of these tasks is currently disabled.

Once the model is built, the commandline creates an execution manager instance, passing it the initConfig built by Main.jar. It adds build listeners and then starts the build using the services of the ExecutionManager.
Ant Libraries
==============
Before we get into execution proper, I'll deal with the structure of an ant library and how it works. An ant library is a jar file with a library descriptor located in META-INF/antlib.xml. This defines what typedefs/taskdefs/converters the library makes available to Ant. The classes, or at least some of the classes, for the library will normally be available in the jar.

The descriptor looks like this (I'll provide two examples here):

<antlib libid="ant.io" isolated="true">
  <typedef name="thread" classname="java.lang.Thread"/>
  <taskdef name="echo" classname="org.apache.ant.taskdef.io.Echo"/>
  <converter classname="org.apache.ant.taskdef.io.FileConverter"/>
</antlib>

<antlib libid="ant.file" reqxml="true" reqtools="true"
        extends="ant.io" isolated="true">
  <taskdef name="copy" classname="org.apache.ant.file.copy"/>
</antlib>

The "libid" attribute is used to globally identify a library. It is used in Ant to pick which tasks you want to make available to a build file. As the number of tasks available goes up, this is to prevent name collisions, etc. The name is constructed similarly to a package name - i.e. reverse DNS order.

The "home" attribute is a bit of fluff, unused by mutant, to allow tools to manage libraries and update them, etc. More thought could go into this.

"reqxml" allows a library to say that it wants to use Ant's XML parser classes. Note that these will be coming from the library's classloader, so they will not, in fact, be the same runtime class, but it saves tasks packaging their own XML parsers.

"reqtools" allows a library to specify that it uses classes from Sun's tools.jar file. Again, if tools.jar is available it will be added to the list of classes in the library's classloader.

"extends" allows for a single "inheritance" style relationship between libraries. I'm not sure how useful this may be yet, but it seems important for accessing common custom types.
It basically translates into the class loader for this library using the one identified in extends as its parent.

"isolated" specifies that each task created from this library comes from its own classloader. This can be used with tasks derived from Java applications which have static initialisers. This used to be an issue with Anakia, for example. Similarly it could be used to ensure that tools.jar classes are unloaded to stop memory leaks. Again this is experimental, so may not prove ultimately useful.

The <typedef> creates a <thread> type. That is just a bit of fun which I'll use in an example later. It does show the typedefing of a type from outside the ant library, however. <taskdef> is pretty obvious. It identifies a taskname with a class from the library.

The import task, which I have not yet implemented, will allow this name to be aliased - something like

<import libid="ant.file" task="echo" alias="antecho"/>

Tasks are not made available automatically. The build file must state which tasks it wants to use using an <import> task. This is similar to Java's import statement. Similarly, libraries whose ids start with "ant." are fully imported at the start of execution.

Mutant Config
=================
When mutant starts execution, it reads in a config file. Actually it attempts to read two files, one from $ANT_HOME/conf/antconfig.xml and another from $HOME/.ant/antconfig.xml. Others could be added, or even specified on the command line. These config files are used to provide two things - libpaths and task dirs.

Taskdirs are locations to search for additional ant libraries. As people bundle Ant tasks and types with their products, it will not be practical to bundle all this into ANT_HOME/lib. These additional dirs are scanned for ant libraries. All .zip/.jar/.tsk files which contain the META-INF/antlib.xml file will be processed.

Sometimes, of course, the tasks and the libraries upon which they depend are not produced by the same people.
It is not feasible to go in and edit manifests to connect the ant library with its required support jars, so the libpath element in the config file is used to specify additional paths to be added to a library's classloader. An example config would be

<antconfig>
  <libpath libid="ant.file" path="fubar"/>
  <libpath libid="ant.file" url=""/>
</antconfig>

Obviously other information can be added to the config - standard property values, compiler prefs, etc. I haven't done that yet. User-level configs override system-level configs.

So when an ant library creates a classloader, it will take a number of URLs. One is the task library itself, plus the XML parser classes if requested, the tools.jar if requested, and any additional libraries specified in the <antconfig>. The parent loader is the common loader from the initconfig, unless this is an extending library.

Mutant Execution
==================
Execution of a build is provided by the core through two key classes. One is the ExecutionManager and the other is the ExecutionFrame. An execution frame is created for each project in the project model hierarchy. It represents the execution state of the project - data values, imported tasks, typedefs, taskdefs, etc.

The ExecutionManager begins by reading configs, searching for ant libraries, configuring and appending any additional paths, etc. It then creates a root ExecutionFrame which represents the root project. When a build is commenced, the project model is validated and then passed to the ExecutionFrame.

The ExecutionFrame is the main execution class. When it is created it imports all ant libraries with ids that start with ant.*. All others are available but must be explicitly imported with <import> tasks. When the project is passed in, ExecutionFrames are created for any referenced projects. This builds an ExecutionFrame hierarchy which parallels the project hierarchy. Each <ref> uses a name to identify the referenced project.
All property and target references use these reference names to identify the particular frame that holds the data. As an example, look at this build file:

<project default="test" basedir=".." doc:
  <ref project="test.ant" name="reftest"/>
  <target name="test" depends="reftest:test2">
    <echo message="hello"/>
  </target>
</project>

Notice the depends reference to the test2 target in the test.ant project file. I am still using the ":" as a separator for refs. It doesn't collide with XML namespaces, so that should be OK.

Execution proceeds by determining the targets in the various frames which need to be executed. The appropriate frame is requested to execute the target's tasks and type instances. The imports for the frame are consulted to determine what is the appropriate library and class from that library. A classloader is fetched, the class is instantiated, introspected and then configured from the corresponding part of the project model. Ant 1.x's IntrospectionHelper has been split into two - the ClassIntrospector and the Reflector. When the task is being configured, the context classloader is set. Similarly it is set when the task is being executed.

Types are handled similarly. When a type is instantiated or a task executed, and they support the appropriate interface, they will be passed a context through which they can access the services of the core. Currently the context is an interface, although I have wondered if an abstract class may be better to handle expansion of the services available over time.

Introspection and Polymorphism
================================
Introspection is not a lot different from Ant 1.x. After some thought I have dropped the createXXX method for polymorphic type support, discussed below. Basically setXXX methods, coupled with an appropriate string to type converter, are used for attributes. addXXX methods are used for nested elements. All of the value setting has been moved to a Reflector object.
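To make the introspection idea concrete, here is a rough sketch of how reflective addXXX dispatch can work. This is hypothetical illustration code - the class and method names are mine, not mutant's actual ClassIntrospector/Reflector:

```java
import java.lang.reflect.Method;

public class ReflectorSketch {
    // A stand-in for a task with an interface-typed add method.
    public static class Echo {
        public String added = null;
        public void addRun(Runnable runnable) {
            added = runnable.getClass().getName();
        }
    }

    // Map a nested element name ("run") to its addXXX method ("addRun").
    public static Method findAdder(Class<?> taskClass, String element) {
        String wanted = "add" + element.substring(0, 1).toUpperCase()
                + element.substring(1);
        for (Method m : taskClass.getMethods()) {
            if (m.getName().equals(wanted) && m.getParameterCount() == 1) {
                return m;
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Echo echo = new Echo();
        Method adder = findAdder(Echo.class, "run");
        // The method's parameter type tells us what the element expects.
        Class<?> expected = adder.getParameterTypes()[0]; // Runnable
        // A typedef table would resolve a type name to a class; here the
        // "thread" -> java.lang.Thread mapping is hard-coded for the demo.
        Object value = Thread.class.getDeclaredConstructor().newInstance();
        adder.invoke(echo, value);
        System.out.println("Added " + expected.getSimpleName()
                + " of type " + echo.added);
    }
}
```

Because the lookup is driven by the method signature rather than by concrete classes, any typedef'd class implementing the interface can be dropped in without changing the task.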
Object creation for addXXX methods is no longer provided in the reflector class, just the storage of the value. This allows support for add methods defined in terms of interfaces. For example, the hacked Echo task I am using has this definition:

/**
 * testing
 *
 * @param runnable testing
 */
public void addRun(Runnable runnable) {
    log("Adding runnable of type " + runnable.getClass().getName(),
        MessageLevel.MSG_WARN);
}

So when mutant encounters a nested element, it does the following checks:

- Is the value specified by reference: <run ant:
- Is it specified by polymorphic instances: <run ant:
- or is it just a normal run o' the mill nested element, which is instantiated by a zero arg constructor?

Note the use of the ant namespace for the metadata. In essence the nested element name <run> identifies the add method to be used, while the refId or type elements specify the actual instance or type to be used. The ant:type identifies an Ant datatype to be instantiated. If neither is specified, the type that is expected by the identified method, addRun in this case, is used to create an instance. In this case that would fail.

Polymorphism, coupled with typedefs, is one way, and a good way IMHO, of solving the extensibility of tasks such as ejbjar.

OK, that is about the size of it. Let me finish with two complete build files and the result of running mutant on them.

build.ant
=============
<project default="test" basedir=".." doc:
  <ref project="test.ant" name="reftest"/>
  <target name="test" depends="reftest:test2">
    <echo message="hello"/>
  </target>
</project>

test.ant
=========
<project default="test" basedir="."
doc:
  <target name="test2">
    <thread ant:
    <echo message="hello2">
      <run ant:
      </run>
    </echo>
    <echo message="hello3">
      <run ant:
      </run>
    </echo>
  </target>
</project>

If I run mutant via a simple script which has just one line

java -jar /home/conor/dev/mutant/dist/lib/start.jar $*

I get this:

test2:
  [echo] Adding runnable of type java.lang.Thread
  [echo] hello2
  [echo] Adding runnable of type java.lang.Thread
  [echo] hello3

test:
  [echo] hello

BUILD SUCCESSFUL
Total time: 0 seconds

Let's change the <run> definition to <run/> in test.ant, and the result becomes

test2:
  [echo] Adding runnable of type java.lang.Thread
  [echo] hello2

BUILD FAILED
/home/conor/dev/mutant/test/test.ant:10: No element can be created for nested element <run>. Please provide a value by reference or specify the value type

plus an ugly stacktrace at the moment :-)

That is it for me - good night

Conor

--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org>
http://mail-archives.eu.apache.org/mod_mbox/ant-dev/200201.mbox/%3C3C4D7886.8050907@cortexebusiness.com.au%3E
Created on 2000-07-31 21:14 by anonymous, last changed 2002-05-29 01:33 by nnorwitz. This issue is now closed.

Jitterbug-Id: 4
Submitted-By: MHammond@skippinet.com.au
Date: Mon, 12 Jul 1999 15:38:43 -0400 (EDT)
Version: 1.5.2
OS: Windows

[Resubmitted by GvR]

It is a problem that bugged me for _ages_. Since the years I first wrote the Pythonwin debugger I've learnt a lot about how it works :-)

The problem is simply: when the frame being debugged is self.botframe, it is impossible to continue - only "step" works. A "continue" command functions as a step until you start debugging a frame below self.botframe. It is less of a problem with pdb, but makes a GUI debugger clunky - if you start a debug session by stepping into a module, the "go" command seems broken.

The simplest way to demonstrate the problem is to create a module, and add a "pdb.set_trace()" statement at the top level (ie, at indent level 0). You will not be able to "continue" until you enter a function.

My solution was this: instead of run() calling "exec" directly, it calls another internal function. This internal function contains a single line - the "exec", and therefore never needs to be debugged directly. Then stop_here is modified accordingly. The end result is that "self.botframe" becomes an "intermediate" frame, and is never actually stopped at - ie, self.botframe effectively becomes one frame _below_ the bottom frame the user is interested in.

I'm not yet trying to propose a patch, just to discuss this and see if the "right thing" can be determined and put into pdb.

Thanks,

Mark.

====================================================================
Audit trail:
Mon Jul 12 15:39:35 1999  guido  moved from incoming to open

My common workaround is to always create a function called debug() that calls the function in the module I am debugging. Instead of doing a runcall for my function I do a runcall on debug.
Sorry I forgot to sign the comment for 2000-Oct-17 07:18

David Hurt
davehurt@flash.net

Logged In: YES user_id=31392

Is this really a bug? Or just a feature request? Perhaps we should move it to 42 and close the report.

Logged In: YES user_id=6380

Yes, it's really a bug -- it's an annoyance, you have to hit continue twice.

Logged In: YES user_id=105700

There appears to be a simple solution. I'm not used to pdb very much, so I cannot for sure say that my patch doesn't affect any extension of it, but it seems to work just fine.

Idea: Allow botframe not to be a frame at all, but also None. This makes it possible to use f_back in line 67:

    self.botframe = frame.f_back ##!!CT

In stop_here, we just omit the first two lines:

    def stop_here(self, frame):
        ##!!CT if self.stopframe is None:
        ##!!CT     return 1
        if frame is self.stopframe:
            return 1
        while frame is not None and frame is not self.stopframe:
            if frame is self.botframe:
                return 1
            frame = frame.f_back
        return 0

By this trick, botframe is allowed to be one level "on top" of the topmost frame, and we see the topmost frame behave as nicely as every other.

-- chris

Logged In: YES user_id=6380

You know, I cannot reproduce the problem! I created this module:

    import pdb

    def foo():
        x = 12
        y = 2
        z = x**y
        print z
        return

    pdb.set_trace()
    print 12
    print "hello world"
    foo()

When I run it I get the pdb prompt. When I hit "continue" at the prompt, the whole program executes. Before we start messing with this I'd like to be able to reproduce the problem so I can confirm that it goes away!

Logged In: YES user_id=14198

This appears to have been fixed magically in Python 2.2. Using Python 2.1 with the sample demonstrates the bug, while 2.2 and current CVS both work correctly. Haven't tried 2.1.1.

A scan of the pdb and bdb logs don't show an obvious candidate that fixed the bug, but to be quite honest, I don't care *how* it was fixed now that it is <wink>

Logged In: YES user_id=6380

OK, closing.

Logged In: YES user_id=105700

Ok, I didn't check it in, but I disagree to close it!
Do you think I would supply a patch if there weren't a problem? The problem was reported to me by an IronPort Python user who simply had the problem that pdb.runcall on a given function *does not* run, but always single steps. Your test doesn't get at the problem, since you don't set a breakpoint, which is necessary to make it show up! Here we go:

- write a simple program with some 10 lines
- start it with runcall
- set a breakpoint
- continue, and it will definitely step!

With my patch, it works as expected. Furthermore, Mark's F5 command is documented to "start the program in the debugger". It never did so. With the patch, it does. Let's bring it to the end it deserves.

regards - chris

Logged In: YES user_id=6380

Can you be more specific in your example? I don't understand how I start a 10-line program with runcall, since runcall requires a function. I also want to know exactly which syntax you use to set the breakpoint.

Logged In: YES user_id=105700

    # test program for bdb buglet.
    # usage:
    # import pdb, bdbtest
    # pdb.runcall(bdbtest.test)
    #
    # then, in the debugger, type "b 13":

    def test():
        a=0
        a=1
        a=2
        a=3
        a=4
        a=5
        a=6
        a=7
        a=8
        a=9

    # the breakpoint will be at "a=4"

    # now try to continue with "c", and you
    # will see it still single stepping.

Logged In: YES user_id=6380

OK, you convinced me. Do you want to check it in or should I do it? If you want to do it, go ahead, and mark it as a 2.1 and 2.2 bugfix candidate. And thanks for this solution! (I tried this in IDLE, and there's a benign effect there, too. :-)

Logged In: YES user_id=33168

Checked in as bdb.py 1.38/1.39 for current (fixed whitespace), 1.33.6.2 for 2.2, 1.31.2.1 for 2.1. Hopefully I got it right. Closing.
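For readers following along, the effect of Christian's change can be modelled outside of bdb with a tiny standalone sketch (hypothetical code - frames are faked with a minimal class, and stop_here is a free-function copy of the patched logic, not the actual bdb module):

```python
class Frame:
    """A stand-in for a real Python frame object."""
    def __init__(self, back=None):
        self.f_back = back  # the caller's frame, as in real frame objects

def stop_here(frame, stopframe, botframe):
    # Standalone copy of the patched bdb.Bdb.stop_here logic:
    # botframe may now sit one level *above* the topmost user frame.
    if frame is stopframe:
        return 1
    while frame is not None and frame is not stopframe:
        if frame is botframe:
            return 1
        frame = frame.f_back
    return 0

# Simulate "self.botframe = frame.f_back": the frame that *called* the
# topmost user frame becomes botframe.
caller = Frame()
top = Frame(back=caller)    # topmost frame the user cares about
inner = Frame(back=top)     # a frame entered from top

print(stop_here(top, None, caller))      # 1 - the top frame is now stoppable
print(stop_here(inner, None, caller))    # 1 - so are frames below it
print(stop_here(Frame(), None, caller))  # 0 - unrelated frames are not
```

With the old code, botframe *was* the topmost user frame, so "continue" behaved like "step" until execution moved at least one call deeper; pushing botframe one level up makes the top frame an ordinary, stoppable frame.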
https://bugs.python.org/issue210682
Jun 21

Computer Programming – A 40k Foot View

Filed Under Computers & Tech, Software Development on June 21, 2014 at 5:23 pm

This blog post is a companion document to two Chit Chat Across the Pond segments I will be recording with Allison Sheridan on the NosillaCast over the next two weeks. The first of the two shows is now out, and can be found here. Once the second show is out I'll add that link in too.

In episode 474, when Allison was chatting with Donal Burr about Apple's new Swift programming language, she said she didn't understand what a compiler was, so I thought it might be fun to try to address that! But rather than focus in on just that one very specific question, I thought it would be more useful to take a high-level look at computer programming in general, so that some of the conversations around various developer platforms will make more sense to the majority of NosillaCast listeners, who are non-programmers.

I find things always make more sense with examples, so I'm going to provide a number of them throughout this post, and if you want to play along, you'll need to have Apple's command line developer tools installed on your Mac (or, you'll need the GNU C Compiler, AKA gcc, installed on a Linux computer/VM). I find it's helpful to have the developer tools installed on any Mac, even if you don't program, because they add a lot of command line tools to OS X. If you don't have them installed, I suggest you have a read of this c|net article.

One of the examples uses Java. If you have Java installed, by all means play along, but if not, I wouldn't recommend installing Java just for this one example.

With the preliminaries out of the way, let's get stuck in. In true NosillaCast style, we'll start with a problem to be solved.

Why Do We Need Programming Languages?

When it comes to telling a computer what to do, we hit a major language barrier.
Computers only understand binary machine codes, and with the possible exception of a handful of uber uber uber nerds, humans just don't. To illustrate the magnitude of the problem, below is the actual binary code for a REALLY simple computer program (in a scrollable box so it doesn't take up the entire page):

It's pretty clear this is not a format in which humans can easily work! All those 1s and 0s represent the instructions to the CPU, as well as the data the instructions should work on/with. We can re-write that same information in a slightly more human-readable form by representing the instructions to the CPU, and the various registers contained on the CPU, with simple codes/names, and representing the data in ASCII form. This is what we call assembly language. The above binary code can be written in Intel x86 assembly language as:
Obviously, to get from this nice human readable form to the binary we saw above, we need some kind of converter, and this is where compilers and interpreters come in. Compiled Languages -v- Interpreted Languages There are two basic approaches different programming languages use to getting from their human-readable to binary codes the computer can execute, the can either use a compiler, or and interpreter. A compiler takes the human readable code and transforms it into the binary code for a specific computer architecture and OS, and then saves those 1s and 0s to a file which can then be executed or run over and over again with out the compiler’s involvement. This is how the vast vast majority of commercial software works. Every app you buy in the Mac, iOS, Windows or GooglePlay app stores has been compiled, as have all the big commercial apps you buy direct from the makers like Photoshop, Office, and so on. The same is true of the majority of open-source apps many of us use like FireFox, LibreOffice, and the GIMP. Compiling has many advantages, it is very efficient, you do it once, and then you can execute the code over and over again without any overhead, and, it means you can distribute your app without ever sharing any of your source code, which is very important to commercial software vendors. There is however another approach, and which it’s rarely used for large software products, it is used by every scripting language I have ever encountered. Scripting languages don’t have a traditional compiler, instead they have an interpreter. In many ways interpreters are very similar to compilers, they do the same basic translation between a human-readable language and computer code, but they don’t create an executable binary file that can be used over and over again. Instead, they translate the code on the fly. Each time you run a script, you are in effect recompiling it. 
This is obviously less efficient, because the translation happens each time the code is run, but it has advantages too. Firstly, compilation is slow, because all the resources needed by an app have to be bundled into the single executable file. It's not unusual for an app to take a few minutes to compile, and larger projects can even take hours! This can make tweaking and testing code painful. Interpreted languages run pretty much instantly, so you can tweak, test, and tweak very quickly.

Compiled code has another disadvantage: the binary codes are different for different CPU architectures, and for different operating systems, so you have to compile different versions of your code for each platform you support. This is why download pages often give you a lot of different downloads to choose from. The same script will run on any platform that has an interpreter for that given language, so they are much more portable.

Perl is an example of an interpreted language. I can write some Perl code on my Mac, run and test it, and then deploy it on a Linux server, or give it to a friend using Windows to run on their computer. Perl code can be run on any computer that has a Perl interpreter.

The Third Way – Compiled AND Interpreted Languages

With the rise of Java, a new approach gained real traction: using both a compiler AND an interpreter to get the portability of an interpreted language with much of the efficiency of a compiled language. Human-readable Java code is compiled to machine code by the Java Compiler, but it's not machine code for any real computer architecture; instead it gets compiled down to the machine code for the Java Virtual Machine. This compiled Java code is then run using an interpreter called the Java Runtime Environment (or JRE).
You might imagine that this kind of hybrid approach would give you the worst of both approaches, but it actually doesn't, because of one very important fact – it is much easier to translate from one type of machine code to another than from human-readable code to machine code. This means that the Java interpreter is only a little slower than running code compiled for the actual architecture, but it has all the portability of interpreted code. The idea is that you compile Java code once, then run it on any platform that has a JRE.

Microsoft have taken this idea to the next level with their .net framework. The basic model is the same – you compile your human-readable code down to a generic machine code, then interpret it as you execute – but they took it one step further by supporting multiple different human-readable languages that compile down to the same .net machine codes.

Getting Practical

Some Compiled Languages:

- C
- C++
- Objective C
- Swift

Some Interpreted Languages:

- Shell scripts (e.g. DOS Batch files, Unix/Linux shell scripts)
- BASIC
- Perl
- PHP
- Python
- Ruby
- JavaScript
- AppleScript
- VBScript

Some Hybrid Languages:

- Java
- Perl 6 (with the Parrot VM)
- C# (pronounced C-sharp)

Before we get stuck into some practical examples, create a folder and open it in both the Finder and the Terminal. We'll save all our sample files into this folder, and compile and run them all from there.

Programming always has to be done using a plain-text editor. You can use TextEdit.app, but only if you switch it to plain-text mode (Format → Make Plain Text). Alternatively you could use a command-line editor like nano or vi, or you could use a programming editor like the free TextWrangler, or the cheap Smultron 6 (my favourite for casual programming).

It could be argued that it's more instructive to copy and paste the code yourself, but, if you'd prefer not to, I've compiled all the sample code into a single ZIP file which you can download here.
Example 1 – Writing a Compiled Program (in C)

- Save the code below in a file called ex1-hello.c
- Compile it with the command: gcc -o ex1-hello ex1-hello.c
- Execute it with the command: ./ex1-hello

[sourcecode language="c"]
#include <stdio.h>

int main() {
  printf("Hello World!\n");
}
[/sourcecode]

Example 2 – Writing an Interpreted Program (in Perl)

- Save the code below in a file called ex2-hello.pl
- Execute it with the Perl interpreter: perl ex2-hello.pl
- OPTIONALLY – make the script self-executing (POSIX OSes allow scripts to specify the interpreter the OS should use to execute a script with, using the so-called shebang line, i.e. #! then the path to an interpreter as the first line in a script)
  - Make the file executable with: chmod 755 ex2-hello.pl
  - Execute the file: ./ex2-hello.pl

[sourcecode language="perl"]
#!/usr/bin/perl

print "Hello World!\n";
[/sourcecode]

OPTIONAL: Example 3 – Writing a Hybrid Program (in Java)

- Save the code below in a file called Ex3Hello.java (the different naming convention to the other examples is imposed by Java)
- Compile the human-readable code to Java machine code: javac Ex3Hello.java
- Execute the code with the Java interpreter: java Ex3Hello

[sourcecode language="java"]
public class Ex3Hello{
  public static void main(String args[]){
    System.out.println("Hello World!");
  }
}
[/sourcecode]

Loosely Typed -v- Strongly Typed Languages

Another major difference between different programming languages is in how they store information. Programs are data manipulators, so storing information is an absolutely essential part of every programming language. Programming languages all use variables to store information, but they all have their own rules for how variables work. The information stored within a variable can be just about anything: a boolean true or false value, a single character, a string of characters, a whole number, a decimal number, a date, a time, or a complex record describing a person's CV, and so on and so forth.
These different kinds of information are referred to as types. While every programming language has its own unique quirks when it comes to how they deal with variables, you can broadly group languages into two groups: those that have very strict typing rules, and those that have very loose typing rules – or, in programmer jargon, strongly typed and loosely typed languages.

In a strongly typed language, the programmer has to specify exactly what type of information a variable can store at the moment they create (or declare) that variable. In a loosely typed language you just declare a variable by giving it a name, and then put whatever data you want in there. To illustrate this point, let's see an example of each approach:

Example 4 – Declaring Variables in a Loosely Typed Language (Perl)

- Save the code below into a file called ex4-looseVariables.pl
- Execute the code with the Perl interpreter: perl ex4-looseVariables.pl

[sourcecode language="perl"]
#!/usr/bin/perl

my $a = 42;
my $b = 3.1415;
my $c = 'd';
my $d = 'Donkey';

print "$a, $b, $c, $d\n";
[/sourcecode]

Notice that we create variables of four different types (an integer, a decimal number, a character, and a string of characters), but we declare each one in exactly the same way.

Example 5 – Declaring Variables in a Strongly Typed Language (C)

- Save the code below in a file called ex5-strongVariables.c
- Compile it with: gcc -o ex5-strongVariables ex5-strongVariables.c
- Execute it with: ./ex5-strongVariables

[sourcecode language="c"]
#include <stdio.h>

int main() {
  int a = 42;
  float b = 3.1415;
  char c = 'd';
  char d[] = "Donkey";

  printf("%d, %f, %c, %s\n", a, b, c, d);
}
[/sourcecode]

In this example we declare the same four variables as we did above, but this time we have to explicitly give each variable a type as we create it (int for an integer, float for a decimal number, char for a character, and char[] for a string of characters).
We also have to tell the printf command what type of variable to expect at each insertion point in the string (%d for an int, %f for a float, %c for a char and %s for a char[]).

Some Loosely Typed Languages:

- BASH
- Perl
- PHP
- Python
- JavaScript
- Objective C

Some Strongly Typed Languages:

- C
- C++
- Java
- Swift

Just like with compiled versus interpreted languages, there are pros and cons to each approach. You'll notice that most of the languages that are loosely typed are scripting languages, and that's because it's much quicker and easier to program in a loosely typed language, so they are very well suited to small quick projects. But, they have a very big downside: the looseness prevents the compiler/interpreter from doing any type-checking, so a whole bunch of errors go un-caught until the program is running. This leads nicely to a very important point that really explains why Swift is causing such a stir – not all errors are equal, and language design choices can push some errors from run-time back to compile-time, which results in more stable programs. Let's look at that concept in more detail now.

Not All Errors Are Equal

There's a whole spectrum of types of error that us imperfect humans can introduce into programs as we write them, but they're not all equal! For our purposes today, good errors are those that are easy to track down, and bad errors are the sneaky kind that take time and effort to find and fix. The easiest errors to find and track down are syntax errors. Programming languages have very well defined grammar rules, just like human languages do, and if you break them, your code is said to be syntactically incorrect. When humans do be getting there grammar wrong, we have the intelligence to figure out what the speaker meant (as I just demonstrated), but computers have no intelligence, so when you make a syntax error in a programming language, the compiler compiling it, or the interpreter interpreting it, will quit with an error.
You just can't miss these kinds of errors, because while your code has them it simply won't run! To illustrate the point, let's intentionally break the code in the first two examples and see what happens.

Example 6 – a Syntax Error in a Compiled Language (C)

- Duplicate the file ex1-hello.c, and save it as ex6-syntaxerror.c
- Delete the last line from the file (the one that just has } on it), and save the file
- Try to compile the code with: gcc -o ex6-syntaxerror ex6-syntaxerror.c

You should get a compiler error, and no executable file will have been created:

Example 7 – a Syntax Error in an Interpreted Language (Perl)

- Duplicate the file ex2-hello.pl, and save it as ex7-syntaxerror.pl
- Edit the file and change print to pirnt on the second line (i.e. misspell print)
- Try to run the script with the Perl interpreter: perl ex7-syntaxerror.pl

Again, the script does not execute, and the interpreter exits with an error:

At the very other end of the spectrum are logic errors – the programmer has implemented exactly the algorithm he or she was asked to, but, there was a mistake in the spec, so it actually doesn't do what it was supposed to, even though it compiles and runs. No compiler or interpreter can ever come to your rescue here, no matter how well designed your language is!

Run-Time Errors Suck!

Every programmer's worst nightmare is an intermittent bug that only shows up when the code is in use, and only under certain conditions. These can happen in every language, no matter how well designed it is. To illustrate the point, let's intentionally create some compiled and interpreted code which suffers from the same simplistic intermittent run-time error. We'll write a C program, and a Perl script, which take two numbers as command line arguments, divide the first by the second, then print out the answer.
Example 8 – An Intentional Intermittent Run-Time Error (in C)

- Save the code below in a file called ex8-divide.c
- Compile it with: gcc -o ex8-divide ex8-divide.c
- Test that it works by using it to divide 100 by 4: ./ex8-divide 100 4
- Test some more combinations, say 9 and 3, 16 and 4, and 270 and 90
- Now trigger the intentionally planted bug: ./ex8-divide 22 0

[sourcecode language="c"]
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char *argv[] )
{
	/* Make sure two arguments were supplied, or whine */
	if( argc != 3 ) {
		printf("Invalid arguments - you must provide two integer numbers!\n");
		exit(1);
	}

	/* Convert our arguments to integers */
	int a = atoi(argv[1]);
	int b = atoi(argv[2]);

	/* Do the division */
	int ans = a/b;

	/* print the answer */
	printf("%d divided by %d equals %d\n", a, b, ans);
}
[/sourcecode]

The program works fine for many combinations of numbers, but, if you pass 0 as the second number, the program crashes!

Example 9 – An Intentional Intermittent Run-Time Error (in Perl)

- Save the code below in a file called ex9-divide.pl
- Test that it works by using it to divide 100 by 4: perl ex9-divide.pl 100 4
- Test some more combinations, say 9 and 3, 16 and 4, and 270 and 90
- Now trigger the intentionally planted bug: perl ex9-divide.pl 22 0

[sourcecode language="perl"]
#!/usr/bin/perl

# make sure we got two arguments, or whine
unless(scalar @ARGV == 2){
	print "Invalid arguments - you must provide two numbers!\n";
	exit 1;
}

# read the numbers from the arguments
(my $num1, my $num2) = @ARGV;

# do the division
my $ans = $num1/$num2;

# print the answer
print "$num1 divided by $num2 equals $ans\n";
[/sourcecode]

As with the C example, all works fine for most numbers, but again, if we pass 0 as the second number, the script crashes!

Compile-Time Errors -v- Run-Time Errors

As we have seen, some errors will always be picked up by the compiler or the interpreter, and some will never be, regardless of how you design your language.
However, between these two zones there's a very interesting grey area, where decisions made when designing a programming language can push some types of error from run-time, which is bad, to compile-time, which is good! As with everything else, these decisions come with compromises, so there are plenty of really good reasons to use languages that don't push as many types of error to compile-time as possible. There are lots and lots of different ways languages can push errors to compile-time, but we'll just pick one to illustrate the point, by returning to the concept of loosely typed languages as compared to strongly typed languages.

Pushing Type Errors to Compile-Time

A type error occurs when you try to do something to data of a particular kind that makes no sense. For example, you can't divide four by baboon, that's just arrant nonsense! If you try to force a computer to do something like that it will crash, just like it did when we tried to make it divide by zero. Because loosely typed languages let you store anything in any variable, the interpreter can't spot when you try to do something impossible until the code is running and you present it with the impossible operation, so type errors in loosely typed languages are always run-time errors. Let's intentionally create one!

Example 10 – An Intentional Type Error in a Loosely Typed Language (Perl)

This example is rather contrived, but it illustrates the point. We'll create a subroutine that accepts two arguments, divides one by the other, and returns the result. The subroutine can obviously only work when it's passed two numbers, so we'll intentionally pass it something else to trigger a type error.
- Save the code below into a file called ex10-typeerror.pl
- Run it to trigger the error: perl ex10-typeerror.pl

[sourcecode language="perl"]
#!/usr/bin/perl

# define our subroutine for dividing two numbers
sub divide{
	my $x = shift;
	my $y = shift;
	return $x/$y;
}

# print something to prove we are in runtime
print "I'm running!\n";

# now trigger the type error by dividing 4 by a baboon
my $a = 4;
my $b = 'baboon';
my $ans = divide($a, $b);
[/sourcecode]

Because Perl is loosely typed, it can't check if legal values will be passed to the subroutine, because the subroutine doesn't tell the interpreter what types it expects, and variables have no type when they're defined, so all Perl knows is that a call will be made which passes some values to a subroutine. The only way you can find out you have an error is at run-time. Now let's contrast this behaviour to what you get with a strongly typed language, C in this case.

Example 11 – An Intentional Type Error in a Strongly Typed Language (C)

- Save the code below in a file called ex11-typeerror.c
- Try to compile it (and watch it generate errors) with: gcc -o ex11-typeerror ex11-typeerror.c

[sourcecode language="c"]
#include <stdio.h>

/* Define a subroutine to divide two numbers */
int divide(int x, int y){
	return x/y;
}

int main()
{
	/* trigger a type error by trying to divide 4 by a baboon */
	int a = 4;
	char b[] = "baboon";
	int ans = divide(a, b);
}
[/sourcecode]

The C compiler was instantly able to detect that our code had a bug because the subroutine declaration explicitly stated that it needed two integers as input, and that it would return an integer as output. Each variable declaration also specified the type, so the compiler knew that divide expected two integers, but that b was not an integer, so it complained, and pushed what was a run-time error in Perl to a compile-time error in C.

Loose Typing is not All Bad

Like I said before, this is always about pros and cons.
Loosely typed languages tend to suffer from more run-time errors because type errors can't be detected up front. But, as we've already said, loosely typed languages tend to be easier to program quickly in, and there are other advantages too. To illustrate this point, let's re-visit our two programs for dividing numbers (ex8-divide & ex9-divide.pl). Because C is strongly typed, we had to define the types of all the variables involved, so our program explicitly, and ONLY, divides integers. And, it will always round the answer to an integer, even when the actual result is not a whole number. We can illustrate this problem by trying to divide 10 by 3, and then 5 by 0.25 (the original post shows the C program's terminal output here). Our loosely typed Perl script, on the other hand, has no problems switching back and forth between whole numbers and decimals as needed:

10 divided by 3 equals 3.33333333333333
bart-imac2013:CCATP140622 bart$ perl ex9-divide.pl 5 0.25
5 divided by 0.25 equals 20
bart-imac2013:CCATP140622 bart$

Conclusions

All programming languages aim to solve the same problem: to allow humans to tell computers what to do. But there are many different ways to approach this problem, and different languages do things differently. We've only looked at a few of these differences; there are many many many more. Probably the biggest differentiator we've ignored is the so-called programming paradigm the language follows – is it procedural, object oriented, imperative, or something else? The key point though is that the many many choices programming language designers make all have pros and cons. There is absolutely no such thing as the one perfect programming language; the most you can argue is that a given language is the best fit for a given task, or class of tasks (and you'll always find someone who's willing and able to argue that you're wrong!). This is where developers' joy at Apple's introduction of Swift comes in: it seems to be a language better suited to the development of desktop and mobile apps than Objective C.
It's compiled, like Objective C, but unlike Objective C, it's strongly typed, forcing developers to be much more explicit when defining variables and subroutines, and hence pushing lots of run-time errors in their large code-bases back to compile-time. Objective C also has a more forgiving syntax, allowing developers to take some shortcuts, whether they intended to or not. C and Objective C both allow the curly brackets on some control statements to be omitted if there's only one line of code affected by the control statement. It's this flexibility that famously led to the Goto Fail bug. Swift has a stricter syntax, enforcing braces on all control statements, even those only controlling a single line of code.

Hopefully this very very high-level overview of the massive sea of programming languages has armed you with enough knowledge to understand at least some of the conversations you'll encounter about programming languages, and, perhaps, even whetted your appetite enough to consider learning to program!

Back when the Commodore C64 was on the market, I was a young boy and happily programming away using Basic. And I had been typing Assembler code from magazines into my trusty computer. Oh, happy days of my childhood! Since then, I learned how to "program" using scripting languages like PHP and I am able to use HTML/CSS. That's what I know. Not much! 🙂 Since 2006, I am a Mac user at home and a Windows user at work. I have just listened to the first part of this Chit Chat Across The Pond segment and I loved it. Hopefully, I will find the time to step into some programming on the Mac – just for fun that is. Keep up the great work!

I'm glad you enjoyed the episode Christian – as you can probably tell, I love talking about programming, and since I no longer teach as part of my job, I don't get to do it very often anymore. I think I missed out growing up just a little later than you did.
When I started buying computer magazines, they didn’t come with programs to type, they came with floppy disks to just install. That meant that programming was total and utter voodoo to me until I started in university. It would have been nice to have those skills while I had hours and hours of time to kill during those long childhood school holidays! Also, don’t be too hard on yourself, PHP programming is still programming 🙂 I know I have quite strong negative opinions about PHP, but you don’t judge a site or an app by the language it was written in any more than you judge the quality of a table by the tools it was crafted with! Another complex topic covered with your usual well considered approach. I can’t wait to hear the error discussion next time – a problem space I am already far too familiar with. Just one comment on what you said re Java being “very verbose” with specific reference to System.out.println. I personally think that is in some ways *better* than a more succinct version because it’s very clear where this function comes from and what it operates on. Granted this particular call is used an awful lot so it does get old quickly, but it is absolutely not something I would count against the language because it is *logical*. I actually liked Java a lot, even though I never really practiced after learning it. C, on the other hand, gives me the willies. Powerful? Yes. Dangerous? Yes. Hi Allister, SOP is not at all the best example of Java’s verbosity, but it was all the code we had in front of us, so I hung my argument on that rather shaky hanger! Where Java really gets me is when you start interacting with databases, or parsing XML, or trying to use things like Regular expressions, you just end up doing soooooooooo much typing to achieve anything! Despite the verbosity, I love Java’s object model, and will always have a soft spot for the language. It was the first language I learned, and the first language I wrote a large code-base in (>100k lines). 
I also couldn’t agree more about C, as someone famously quipped – “a language that combines all the elegance and power of assembly language with all the readability and maintainability of assembly language”. The examples in this post are the first C I’ve written since I handed in the project work for the Computational Physics module in the third year of my degree back in 1996! I chose it because GCC is everywhere, and because so many languages are described as c-style. You mention dangerous – it’s true that the dangers of C are responsible for so many of the security bugs that plague us all the time (buffer overflows shouldn’t be possible in a good language), but it did remind me that my favourite language, Perl, can be very dangerous too, because the ‘safeties’ are optional – I once killed a Mac with a Perl script that didn’t use the Strict module, and hence took my typo as a valid variable definition, defaulted the variable to blank, and then used it to assemble the following command for shelling out: “rm -rf $badvariable/”! (that was the last day I ever wrote Perl without use Strict and use Warnings!)
https://www.bartbusschots.ie/s/2014/06/21/computer-programming-a-20k-foot-view/
React Native WebView is a component that renders a web page inside your mobile app. It is supported on both Android and iOS. WebView is very useful because you can open any web link within your app itself, so when anybody wants to browse a link you refer them to, they don't need to open any other app for that.

import { WebView } from 'react-native-webview'

<WebView source={{uri: ''}}/>

To use WebView you need to install the react-native-webview dependency.

npm install react-native-webview --save

Since the React Native 0.60 update introduced autolinking, we do not need to link the library manually, but we do need to install the iOS pods. To install the pods use

cd ios && pod install && cd ..
https://blog.codehunger.in/react-native-webview/
CC-MAIN-2021-43
refinedweb
180
56.05
On 2008-10-31 09:08, Tino Wildenhain wrote: > Hi, > > Steven D'Aprano wrote: >> On Fri, 31 Oct 2008 07:10:05 +0100, Tino Wildenhain wrote: >> >>> Also, locals() already returns a dict, no need for the exec trickery. >>> You can just modify it: >>> >>> >>> locals()["foo"]="bar" >>> >>> foo >>> 'bar' >>> >> >> That is incorrect. People often try modifying locals() in the global >> scope, and then get bitten when it doesn't work in a function or class. > >> >>>>> def foo(): >> ... x = 1 >> ... locals()['y'] = 2 >> ... y >> ... >>>>> foo() >> Traceback (most recent call last): >> File "<stdin>", line 1, in <module> >> File "<stdin>", line 4, in foo >> NameError: global name 'y' is not defined >> >> You cannot modify locals() and have it work. The fact that it happens >> to work when locals() == globals() is probably an accident. > > Ah thats interesting. I would not know because I usually avoid > such ugly hacks :-) It doesn't even work for already defined local variables: >>> def foo(): ... x = 1 ... locals()['x'] = 2 ... print x ... >>> foo() 1 The reason is that locals are copied in to a C array when entering a function. Manipulations are then done using the LOAD_FAST, STORE_FAST VM opcodes. The locals() dictionary only shadows these locals: it copies the current values from the C array into the frame's f_locals dictionary and then returns the dictionary. This also works the other way around, but only in very cases: * when running "from xyz import *" * when running code using "exec" globals() on the other hand usually refers to a module namespace dictionary, for which there are no such optimizations.. I don't know of any way to insert locals modified in a calling stack frame... but then again: why would you want to do this anyway ? -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Oct 31
https://mail.python.org/pipermail/python-list/2008-October/500610.html
CC-MAIN-2014-15
refinedweb
300
69.21
Well okay, actually I’m still using them, but I thought the absolute would make for a better headline. But I do not use them nearly as much as I used to. Almost exactly a year ago, I even described them as an integral part of my unit design. Nowadays, most units I write do not have an unnamed namespace at all. What’s so great about unnamed namespaces? Back when I still used them, my code would usually evolve gradually through a few different “stages of visibility”. The first of these stages was the unnamed-namespace. Later stages would either be a free-function or a private/public member-function. Lets say I identify a bit of code that I could reuse. I refactor it into a separate function. Since that bit of code is only used in that compile unit, it makes sense to put this function into an unnamed namespace that is only visible in the implementation of that unit. Okay great, now we have reusability within this one compile unit, and we didn’t even have to recompile any of the units clients. Also, we can just “Hack away” on this code. It’s very local and exists solely to provide for our implementation needs. We can cobble it together without worrying that anyone else might ever have to use it. This all feels pretty great at first. You are writing smaller functions and classes after all. Whole class hierarchies are defined this way. Invisible to all but yourself. Protected and sheltered from the ugly world of external clients. What’s so bad about unnamed namespaces? However, there are two sides to this coin. Over time, one of two things usually happens: 1. The code is never needed again outside of the unit. Forgotten by all but the compiler, it exists happily in its seclusion. 2. The code is needed elsewhere. Guess which one happens more often. The code is needed elsewhere. After all, that is usually the reason we refactored it into a function in the first place. Its reusability. When this is the case, one of these scenarios usually happes: 1. 
People forgot about it, and solve the problem again. 2. People never learned about it, and solve the problem again. 3. People know about it, and copy-and-paste the code to solve their problem. 4. People know about it and make the function more widely available to call it directly. Except for the last, that’s a pretty grim outlook. The first two cases are usually the result of the bad discoverability. If you haven’t worked with that code extensively, it is pretty certain that you do not even know that is exists. The third is often a consequence of the fact that this function was not initially written for reuse. This can mean that it cannot be called from the outside because it cannot be accessed. But often, there’s some small dependency to the exact place where it’s defined. People came to this function because they want to solve another problem, not to figure out how to make this function visible to them. Call it lazyness or pragmatism, but they now have a case for just copying it. It happens and shouldn’t be incentivised. A Bug? In my code? Now imagine you don’t care much about such noble long term code quality concerns as code duplication. After all, deduplication just increases coupling, right? But you do care about satisfied customers, possibly because your job depends on it. One of your customers provides you with a crash dump and the stacktrace clearly points to your hidden and protected function. Since you’re a good developer, you decide to reproduce the crash in a unit test. Only that does not work. The function is not accessible to your test. You first need to refactor the code to actually make it testable. That’s a terrible situation to be in. What to do instead. There’s really only two choices. Either make it a public function of your unit immediatly, or move it to another unit. For functional units, its usually not a problem to just make them public. At least as long as the function does not access any global data. 
For class units, there is a decision to make, but it is simple. Will using preserve all class invariants? If so, you can move it or make it a public function. But if not, you absolutely should move it to another unit. Often, this actually helps with deciding for what to create a new class! Note that private and protected functions suffer many of the same drawbacks as functions in unnamed-namespaces. Sometimes, either of these options is a valid shortcut. But if you can, please, avoid them. Shit: you once made me love the unnamed namespace as a handy “container” for my separated functions. And I don’t want to give up that love. But there is no denying that you gave a valid argument here to re-think my opinion. 😉
https://schneide.blog/2016/11/11/why-im-not-using-c-unnamed-namespaces-anymore/
CC-MAIN-2018-47
refinedweb
845
76.32
[SOLVED]Write QMAKE replace function Well, as I mentioned somewhere...I have got function: @defineReplace(getLibraryTarget) { NAME = $$1 return(quote($${PROJECT}.$$NAME)) }@ But it returns just $$NAME (not $$PROJECT), when I tested this variable, shows correct message, but when I compiled this code, it returns just $$NAME as target...I really don't know where could be a problem... So the correct name of that library should be: @Project.LibraryName.dll@ But it is just: @LibraryName.dll@ how to rewrite this function?... You'll have to show how you use it. @defineReplace(getLibraryName) { NAME = $$1 return ($$quote(${PROJECT}.$$NAME)) }@ works fine. The problem mainly is that PROJECT isn't something qmake sets. What do you expect PROJECT to return? The project name? If I have the following: @PROJECT = gnarl defineReplace(getLibraryName) { NAME = $$1 return ($$quote(${PROJECT}.$$NAME)) } message($$getLibraryName(huuhaa.dll))@ I get @Project MESSAGE: gnarl.huuhaa.dll@ as output. If I don't set PROJECT, I get @Project MESSAGE: .huuhaa.dll@ which indicates qmake doesn't do anything with the variable. He said he gets: @LibraryName.dll@ not @.LibraryName.dll@ which indicates to me that he is not using it properly. I am using it: @TARGET = $$getLibraryName(LibraryProject)@ Yes, the variable $$PROJECT is set up (in .pri file, where is also defined that function) Is PROJECT set before you define the function? yes, it is. See what you get with @message($$TARGET)@ well...nothing. It is empty... I tried: @message($$TARGET) message($$setLibraryTarget(Configuration)) @ Both are NULL (empty)... Before or after you use @TARGET = $$getLibraryName(LibraryProject)@ How is setLibraryTarget defined? 
Used in Library project: @include(../../../Project.pri) QT += xml xmlpatterns TARGET = $$setLibraryTarget(Configuration) @ Defined in project include file ( .pri ) @defineReplace(setLibraryTarget){ NAME = $$1 return($$quote(${SYDNEYSTUDIO_PROJECT_NAME}.$$NAME)) }@ Well, when I removed Makefile and wrote is on multi-lines, it works...thanks for help. So solution is, do not write QMAKE function as inline :D :D But, there is another problem with my QMAKE (new thread?)
https://forum.qt.io/topic/6956/solved-write-qmake-replace-function
CC-MAIN-2018-39
refinedweb
326
62.85
Windows Storage Server 2008 R2 has been released to our OEM manufacturers! In my opinion, this OEM platform is by far the best value for your money for a network-attached storage (NAS) appliance operating system. Our OEMs tell us that they prefer a solution that integrates flawlessly into an Active Directory, supports that latest storage management applications, plays nice with industry standards and has an easy to use interface that IT pros understand. Buying industry standard servers running Windows Storage Server is 4x less expensive than buying a proprietary appliance and the features people want most are included without paying more later. In addition to the cost savings, the new release scales very well to accommodate many more users accessing files than those proprietary appliances. In today’s IT environment, people expect great features like fast file protocols with distributed namespaces, file replication, data deduplication, iSCSI Targets, volume snapshots and support for the latest security, anti-virus, and backup applications. They want the ability to do storage reporting and enact automated policies based on the business value of their data. If something goes wrong, they want on-site support staff with teams of people available to resolve issues and answer questions. Windows Storage Server 2008 R2 is built on the award-winning Windows Server 2008 R2 codebase and contains some awesome features for NAS solutions. OEMs will be offering great storage solutions and best in class support for these mission-critical systems over the next decade. Windows Storage Server 2008 R2. Three New Editions to Savor: - Windows Storage Server 2008 R2 Enterprise - Windows Storage Server 2008 R2 Standard - Windows Storage Server 2008 R2 Workgroup There are a ton of new things you can do with Windows Storage Server appliances. Let’s look at the key scenarios and uses. Key Scenarios: - File Server – Access files over the network using SMB and NFS protocols. 
SMB 2.1 is super fast and Windows 7 clients can speak it natively. SMB 2.1 combined with the new networking and storage stacks in Windows has proven to almost double the SMB file-protocol performance on identical hardware by moving from Windows Server 2008 to Windows Server 2008 R2-based file servers. You also get all the benefits of the File Server Resource Manager (FSRM) with quotas, file screens and storage reporting. FSRM includes the incredible File Classification Infrastructure (FCI) that enables you to classify every file in your organization and perform specific actions you choose. For example, you can scan all files for credit card or social security numbers and automatically protect them. You could prevent deletion of time-bound data, expire, delete, RMS protect, or move files to SATA drives when they get old. Just about anything you can dream up can be done with FCI; you could build a simple little HSM solution in just a few minutes! - Branch Office Server – Windows Storage Server is the ultimate branch office OS. Take advantage of Read-Only Domain Controller (RODC) to authenticate the branch users. Use Distributed File System (DFS) to publish a company-wide namespace and DFS Replication to do two-way synchronization of branch office servers to the home office. You can make DFS replicas read-only and take advantage of SIS to deduplicate files in the branch. You can sync each user’s data to the corporate site and offer them offline files and folder redirection so they can work on multiple computers or while they are mobile/offline. Standard and Enterprise storage servers can also run DNS and DHCP so you can consolidate infrastructure in the branch. In a large branch, or if downtime isn’t cool, then use a pair of storage servers in a 2-node cluster. Windows Storage Server 2008 R2 Enterprise adds BranchCache to save massive amounts of bandwidth over the WAN back to your corporate headquarters. 
- Block Storage Server –iSCSI storage for application servers like SQL Servers, Exchange Servers or Hyper-V. The Microsoft iSCSI Software Target supports SCSI-2 and SCSI-3 persistent reservation commands so you can use it for shared cluster storage. - Unified Storage Server – Serve blocks and files from the same storage server. - iSCSI Boot Server – Boot support for diskless servers and clients. - iSCSI Boot for HPC Clusters – Deploy and boot hundreds of diskless HPC cluster nodes in minutes using differencing virtual hard disks (VHDs) that build off a common “golden master” image. We have been able to simultaneously boot hundreds of HPC compute nodes off of a single iSCSI Target in just a few minutes. - Gateway to a SAN – Front-end your storage area network (SAN) storage so you can leverage those disks for storing files. Deployment Modes: - Standalone Storage Server – Windows scales up well with fast drives, networking cards, and processors. - Highly Available Storage Server – Use Failover Clustering to create iSCSI or file server solutions with no single points of failure. Multipath I/O (MPIO) can be set up on the network paths as active-active (round-robin load balance for maximum throughput and redundancy) or active-standby (only used in case of a failover). iSCSI Storage Topologies Tested at Microsoft: - Hyper-V host (iSCSI initiator) using an iSCSI LUN as a volume for the virtual machines. In this topology, the Hyper-V host uses an iSCSI initiator to connect to an iSCSI target. The host formats the LUNs with its own file system (NTFS) and creates .vhd files to be used by the virtual machines. - Hyper-V host (iSCSI initiator) using an iSCSI LUN as a pass-through disk to the virtual machines. The Hyper-V host uses the iSCSI initiator to connect to an iSCSI target. The host doesn’t format the LUN and just passes it through to the virtual machines. - Hyper-V virtual machine using the iSCSI initiator. 
The virtual machine uses the iSCSI initiator to connect directly to an iSCSI LUN hosted by an iSCSI target.
- Boot and data disks for a Hyper-V host. The iSCSI target can provide a LUN to a Hyper-V host for boot disks or data disks for the Hyper-V virtual machines in all three of the above topologies. If you boot from the disk described in the "Hyper-V virtual machine using the iSCSI initiator" topology, it enables diskless iSCSI booting.
- Clustered Application Servers. This configuration uses clustered physical servers or clustered Hyper-V guest virtual machines that use the iSCSI target for their shared storage. Many application servers like Microsoft SQL Server, Microsoft Exchange Server, and Microsoft Office SharePoint Server are clustered to provide maximum uptime.
- Clustered Hyper-V host for Live Migration and CSV. This configuration creates a failover cluster with two Hyper-V host machines using the iSCSI target. A Cluster Shared Volume (CSV) is created on the storage provided by the iSCSI target, and the guest virtual machine configuration and disk files are created on the CSV. Hyper-V live migration enables the entire virtual machine to quickly move from one host to another.

Components Unique to Windows Storage Server 2008 R2:
- New! Cluster-Ready OOBE – This customizable Out-Of-Box Experience (OOBE) includes the Windows Welcome and Initial Configuration Tasks (ICT) applications. The OOBE can be used in standalone or clustered configurations, and it enables OEMs to brand, customize and pre-load storage appliances with a custom image for their server hardware and attached storage. Users can enjoy a two-node failover cluster setup without ever going to the second node. Imagine setting up highly available iSCSI targets or file services in just 15 minutes!
- Web RDP Management – Get full-screen management from any Windows system (ActiveX) or any non-Windows client (Java RDP) for complete full-screen UI management in any IT environment, including a Linux system running Firefox. Windows-based storage servers support NFS and iSCSI protocols for a complete remote-storage solution on application servers running just about any platform.
- iSCSI Software Target 3.3 – This new version of the Microsoft iSCSI Software Target for Windows Storage Server 2008 R2 includes Windows PowerShell cmdlets and differencing virtual hard disk support for HPC boot scenarios. See the Six Uses for iSCSI blog for an outline of the many ways to use an iSCSI target. The test and development scenarios are especially awesome when you can't afford an expensive SAN for each developer.
- iSCSI Software Target 3.3 Hardware Providers – New versions of the iSCSI Software Target VDS and VSS hardware providers for Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 (where iSCSI initiators are running), and a new HPC hardware provider (x64 only, for Windows HPC Server 2008 R2) to automatically set up hundreds of differencing virtual hard disks to support booting hundreds of diskless HPC nodes.
- Branding and Licensing Packages – Used by OEMs to build a Windows Storage Server appliance.

The EULA for Windows Storage Server 2008 R2 doesn't allow you to run regular line-of-business or server applications on Windows Storage Server, but it does offer a complete storage solution for just about any storage workload.

Edition Breakdown:

I am really looking forward to seeing the new hardware that will come out with WSS in the next few months. Check back here for a new series of blogs on WSS we have planned.

Cheers,
Scott M. Johnson
Program Manager
Windows Storage Server

Comments:

Hello all, I am thinking of getting the WSS R2 OS, but I would like to know if I can use it in an Active Directory server role. I have only one Active Directory, and since I will get this for backup of my servers, I am thinking of making it one. Let me know please if anyone uses it already. Sincerely

Andrew: Yes! WSS 2008 R2 is R2 to begin with.

I second that Hyper-V question: if I want to run a cluster for both floating my Hyper-V VMs and serving up my DFS namespaces for my LAN users, can I just buy 2-3 WSS2k8R2 Ent servers and configure them for all these services? Or do I have to get 2 WSS2k8R2 Ent for all file services and hosting Cluster Shared Volumes (to save on CALs), plus 2 WS2k8R2 Ent/DC to host Hyper-V and everything else (domain services etc)? (Storage would be a LeftHand/EqualLogic-type of unit, no need for RAID or anything fancy at OS level.)

How do you set up the MS iSCSI Software Target (target side) for the HPC iSCSI provider to support booting hundreds of diskless HPC nodes?

When will WSS 2008 R2 be available to download from TechNet? Thanks, Eric

It might take a week or so, but it should be pretty quick.

Will a person be able to upgrade from WSS 2008 to WSS 2008 R2 and migrate their existing volumes without a rebuild? In other words, if I have WSS 2008 and volumes now, can I go to R2 with just an upgrade?

You can do a fresh install and continue to use the same volumes, but you cannot do an in-place upgrade to maintain applications and settings. Some OEMs will support migrating a server to a newer version.

Another question. Will TRIM for SSDs be supported (as it is in Windows 7)? I believe it isn't in WSS 2008.

Yeah, TRIM for SSDs is supported. These editions share the Windows Server 2008 R2 codebase when it comes to disk subsystems and device support.

Will the user limit in the Workgroup Edition be changed from R1 to R2? The current limit is 50 users, and the table comparing the R2 editions mentions a 25-user limit for the Workgroup Edition. The increase in the disk limit would be good, but the user limit change would be a shame =/

So with a Windows Storage Server 2008 R2 NAS appliance you could then store the data for Hyper-V VMs? I thought I read that Hyper-V does not support storing VMs on NAS appliances; is this where the iSCSI initiator comes into play?

What level of domain (2003, 2008, 2008 R2) does RODC require?

I just checked with my OEM license distributor and they don't have it yet. She said it's scheduled for November… Is there a different release schedule for those of us who get our WSS licenses through distributors?

Does this require vendor-supplied hardware, or can this be installed on an existing server or, in our case, a VMware cluster as a virtual machine?

We are also interested in WSS 2008 R2 and tested the new version. Now we found out that it is possible to activate Single Instance Storage on a 2008 R2 server. Is this supported? Or is it only supported on WSS OEM-distributed hardware?

Hi all. It is written "Leave a Comment". So don't ask questions. There will be no answers… Sad so.

Does anyone know how it will handle bitrot within files?

Why is SIS not available in regular Windows Server, or WSS not available in a virtual edition??? This is so annoying! Everything is virtual these days. I want to take advantage of SIS on a NAS virtual machine.

Can I upgrade to Windows Storage Server 2008 R2 from Windows Storage Server 2003 R2?

Bob – According to my Dell rep, you can't upgrade to 2008 WSS; it's only available to manufacturers. As yet I have not been able to confirm this, which is frustrating, as we're upgrading to Exchange 2010 and need DPM 2010 to support it, but I can't put DPM 2010 on a 2003 box. Server 2008 won't upgrade over my 2003 WSS, and now I'm finding out I have to buy a completely new NAS just to upgrade the OS. Seriously, if anyone finds this is not the case, I'd LOVE to hear about it.

I've looked and haven't found an answer to what I thought would be a basic question: is there a cost and/or licensing issue with upgrading from 2008 WSS Standard to 2008 WSS R2 Standard?

Is it possible to perform an in-place upgrade from Windows Server 2008 R2 to Windows Storage Server 2008 R2?

Can Storage Server R2 use a raw disk instead of VHD files?

Scott – learn to read. Andrew – No.

Andrew, Steve is right. Machines in the field cannot be upgraded by end users. WSS is sold as an appliance by OEMs.

JJ, yes, the OS supports many disk controllers that will attach to raw disks and be able to format them.

I am a little steamed. I read the Microsoft FAQ and wanted WSS 2008 R2. I then ordered a Dell PowerVault NX300 and NX3000, and they came with WSS 2008 SP2, not R2. Dell tells me they aren't building WSS 2008 R2 boxes yet. While it is my fault for assuming it was R2, it kinda sucks that I can't do read-only DC and other R2 stuff.

Who can give me a serial no? Send to my email: pyy123@163.com, thanks very much.

What are the considerations for UPGRADING Windows Storage Server 2003 to Windows Storage Server 2008? I need details of doing this upgrade, hardware and software issues and considerations.

I need the computer backup feature from WSS Essentials 2008, but I have WSS Standard 2008. Can I use this function? And how?

Anyone got an OEM part number for the products? Having massive issues finding this in Australia.

Pingback from Problem setting up Windows Storage Server 2008 as a domain controller – Active Directory

How do I reset the Storage Administrator password of Windows Storage Server 2008 R2 Standard without reformatting the OS? Is it possible?

Hi folks – It's great to see all the new storage appliances that are running Windows Storage Server.

Need help on a 2008 storage server: one of the disks (RAID 6 configured) shows mode=missing and status=failed
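A note on the File Classification Infrastructure (FCI) mentioned in the post: FCI is configured through FSRM classification rules and property-based actions, but the kind of content scan those rules automate — looking for sensitive patterns such as social security or credit card numbers, then tagging the files that match — can be sketched in a few lines. The following Python toy is only an illustration of the idea, not the FSRM API; the regexes, the tag names, and the Luhn filter are all assumptions of this sketch:

```python
import re

# Illustrative patterns only; production FCI rules are far more careful.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # US social security number
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")      # candidate payment card number

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to weed out random digit runs that merely look like cards."""
    total, double = 0, False
    for ch in reversed(digits):
        d = int(ch)
        if double:
            d *= 2
            if d > 9:
                d -= 9
        total += d
        double = not double
    return total % 10 == 0

def classify(text: str) -> set[str]:
    """Return the set of classification tags a document's content would receive."""
    tags = set()
    if SSN.search(text):
        tags.add("has_ssn")
    for m in CARD.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        # Accept only plausible card lengths that also pass the Luhn check.
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            tags.add("has_card_number")
    return tags

if __name__ == "__main__":
    doc = "Order 4539 1488 0343 6467 shipped; contact 123-45-6789."
    print(sorted(classify(doc)))  # → ['has_card_number', 'has_ssn']
```

In real FCI, the equivalent of `classify` runs as a scheduled classification job, and a separate file-management task applies the chosen action (RMS-protect, expire, move to cheaper storage) to files carrying a given property.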
https://blogs.technet.microsoft.com/storageserver/2010/09/22/windows-storage-server-2008-r2-is-now-available/
wiradikusuma ("ku 'kan terbang tinggi bagai rajawali", "I will fly high like an eagle"), by Thomas Wiradikusuma

Kickass.ID

I love watching movies, especially Hollywood ones. In between my busy schedule, I often make time to go to the cinema with my family. For those of you who share the hobby, I built a simple site to check whether a given movie is already available on the Internet. It's called Kickass.ID.

Does the name sound familiar? Kickass is one of the well-known torrent sites that carries lots of pirated movies, software, music, and other pirated content. The Kickass name has gone up and down; the site is often shut down by the authorities and then reappears at a similar Internet address.

Kickass.ID has nothing to do with that torrent site. My site only lists what content is available on the Internet. How you find the content itself is your own business :)

What is it for? If you often look for content on the Internet, you have to keep checking torrent sites. Sites like that are usually hard to open on a phone, especially over mobile data. My site is designed to be mobile-friendly, light on data, quick to load, and secure (green padlock).

So what are you waiting for? Go check out and bookmark Kickass.ID!

in Review

It's been more than a year since my last post. During that period, I quit my job and launched Homework Hero, an Android app to help students in Indonesia with their homework. I also got married, became a father, went through an accelerator (MaGIC), and learned Python and Machine Learning. Phew!

[...]

Time became a precious commodity when I started a family and became a father. No more hackathons (also, I'm getting old for that), and the free time I usually spent with code editors should now probably be spent with my family.

[...]

So, by quitting my job, suddenly I have 100% of my time for doing a startup? Not quite. For the first few months, I was still adjusting. Not to mention MaGIC required us to attend mandatory full-day sessions twice a week.

[...]

I will keep doing side projects, as they allow me to escape from routine. Just a few days ago I built kalender2017.id, which at least scratches my own itch.

Happy new year!

from the Founder Institute

It was just last night that I emailed my list announcing my graduation from the Founder Institute Kuala Lumpur. Khailee, Managing Partner of 500 Startups and a mentor at the Founder Institute, gave the final mentoring session last week. It's amazing how fast four months can go by.

[...]

Then they opened in Jakarta, my hometown. I was very tempted to quit my job to enrol there, but then again it kind of defeats the purpose of joining; might as well join a full accelerator (which I didn't do because I wasn't ready).

Then they opened in Kuala Lumpur. I missed the 1st cohort, and I kind of lost interest in the 2nd. But when they opened registration again, a few months ago, I decided to join. It's now or never, I told myself.

[...]

I've learned a lot of new things first-hand, from mentors who are industry players, many of whom are now in my list. I know them, and, more importantly, they know me. I've also made friends with fellow founders. I found my crowd.

[...]

If you're thinking of starting your own company, the Founder Institute might be a good place to start.

setting goals

So, I have broken my promise again to write more often. I read somewhere that I'm not unique in this "over promise, under deliver" problem when it comes to writing. Still, I feel like I've failed myself and my readers.

Writing is a Good Thing™; somebody even compiled 17 reasons for doing it.

[image]

Fortunately, I learned something new today about goal setting. We shouldn't set our goals in binary: either we achieve them or not. Instead, set them in three categories: what we can definitely do, what we want to achieve, and what would be awesome to get. Let's use my writing problem as an example.

Looking at my recent writing frequency, 12 posts per year seems like something I can definitely do. Of course, that amount is lame for a blog, so my target is to write 52. It seems ambitious, but in 2007 and 2008 I wrote more than that, so it's a good target. Always challenge yourself, reasonably.

52 is the number of weeks in a year, but I didn't mention the interval. I could slack for 11 months and rush in December. Now, if I can write consistently every week, that's awesome!

[...]

Now it's your turn.

I'm now 30!

Happy birthday to me! I can't believe I'm now 30 years old. Gosh! I have promised myself to blog every day starting today, until (at least) my next birthday. The topic does not matter; it's the process that I seek. Since I just returned from a 5-day trip to Boracay, the Philippines, I'll spare myself and consider this short paragraph a first :)

living in Kuala Lumpur

Of all the posts here, my post about food in Malaysia turns out to be the most popular. Some people have even contacted me on Facebook to ask for more details.
Now I've been living in Kuala Lumpur for almost three years, so it's time to share more.

[...]

First home: Shah Alam.

[...]

Lessons from living in Shah Alam:
- Some big companies in Malaysia outsource their call centers and give the staff housing (one condo shared by three) and an employee bus. Because they work in shifts, the bus comes every hour.
- [...]

About trains ("kereta" means "car" in Malaysia, but in this post "kereta" means "a means of transport that runs on rails"):
- Kuala Lumpur has three rail-based mass transit systems: the monorail, KTM, and the LRT. KTM is much like Jakarta's air-conditioned economy(?) commuter trains: equally unpunctual and slow.
- Tickets can be bought at the counter or from machines (though most of the machines are broken, heh), or with a Touch 'n Go card (a debit card).

About other public transport:
- [...]
- There are no ojek (motorcycle taxis), bajaj, shared rides, or angkot (minibuses) here. Apart from rail, there are only buses and taxis.
- [...]
- If you take the bus, you generally have to get off at a bus stop; you can't stop just anywhere.
- Taxis here are very picky. A bit of traffic and they refuse the fare, even though they'd still get paid!
- In some places (especially tourist areas and clubbing spots), taxis won't use the meter, or they ask for extra, for example +RM2 (~Rp6,000).
- [...]
- Taxis here are in much worse shape than in Jakarta.
- Taking a taxi here tends to be cheaper, perhaps because traffic isn't as bad and you don't need to tip.

About roads, private vehicles, pedestrians and parking:
- Main roads in Malaysia are wider.
- Roads here are generally congested only before and after working hours, and even then the jams are more "reasonable" than Jakarta's.
- Some toll roads are fitted with speed traps: cameras that catch you if you drive too fast.
- [...]
- In Kuala Lumpur, there are more cars than motorcycles.
- Motorcycles may enter toll roads without paying.
- Sometimes there are dedicated lanes for motorcycles.
- Nobody rides a motorcycle unless they have to. Even street vendors and roadside pirated-DVD sellers drive cars.
- [...]
- Malaysians are slightly more orderly when queueing, including at red lights (on this point, car drivers are more compliant than pedestrians).
- There are no parking attendants here, let alone "pak ogah" (informal traffic wardens).

Anything to add? The next post will be based on my first experience renting a room ("ngekost") in Kuala Lumpur.

update, midyear 2012

How fast time flies. My last post was on the 1st of January this year, and that's more than half a year ago!

I was dormant here, but I kept on writing (at least for a couple of weeks) in #anakkos, a blog inside Neytap.com, a classifieds site for room rentals, for SEO purposes. So far, the website receives almost 100 page views daily. Not bad for a $0 marketing effort.

[...]

TL;DR: Neytap is designed to be speedy, at the (unforeseen) expense of search engine discoverability. Lucky that blogging helps, although not by much. This explains why engineers suck at selling consumer products :D

Next, I co-founded Cabara.co.id, a curated marketplace for domestic workers. At the moment we're focusing on maid service in Jakarta. We pitched at Startup Asia Jakarta 2012; you can watch my pitch on YouTube (you might want to skip the first few minutes).

[embedded video]

We didn't win (we'd be surprised if we did), but it was a great opportunity to pitch there. We were covered in Tech In Asia and some other publications, and approached by a number of VCs. Everything is new to me (and to my co-founders as well, apparently), so it was quite an experience.

Last but not least, I recently created temanmudik.com, a (social network?) website to connect Indonesians who are going homecoming this year. It's a tradition in Indonesia (and probably other Muslim countries; I know they have it in Malaysia) for people to go back to their hometown to celebrate Eid ul-Fitr (in Indonesia it's called Idul Fitri or Lebaran).

Following Minimum Viable Product.

New Year 2012!

Can't believe it's 2012 and we're still breathing, LOL! Anyway, just in case some big meteor hits earth in the next minute... this is my first post in 2012!

released!

I've been spending my free time developing a website to facilitate room rentals. The website is an attempt to scratch my own itch. Today, nine months after I registered the domain, I'm releasing Neytap to the general public.

Neytap is a classifieds site for room rentals inside Facebook.
It's simple, because I lack design skills. It's fast, because my internet is crappy. And it's easy, because I'm too lazy to explain how it works :)

For the curious, the word neytap originates from the Indonesian word "menetap", which means "to stay" or "to settle".

me·ne·tap v to reside permanently (in); to settle in: many foreigners settle in that trading city; some return to their hometowns, while others settle in the cities (Kamus Besar Bahasa Indonesia)

"menetap" is a mouthful, so I trimmed it to "netap". To make the pronunciation similar for Indonesian-speaking and English-speaking tongues, I added a "y" in the middle.

"If you are not embarrassed by your first release, you've launched too late" (Reid Hoffman, Founder of LinkedIn).

I'm embarrassed indeed. The website is so simple, too simple in fact. There are some features that I decided to exclude from this release, including multi-language support and a mobile version, mainly because they're still crappy.

This is the first public release of Neytap, but certainly not the last. I'm going to update the website iteratively. Meanwhile, please take a look at Neytap and tell me what you think!

If it weren't for the pretty date (20-11-2011, in Indonesian format), I would certainly have delayed the release. But then again, "waiting for the perfect time" is just an excuse, and it may never come. Soli Deo gloria.

convertible debt burden me?

As I've written before, I'm currently working on two side projects in my free time. This post is an update about their progress.

[...]

[...]

I cold-emailed some people (only two, actually: a CEO of a company I worked for and an acquaintance I met at the airport), asking whether they knew anyone interested in angel investing in my project.

The latter forwarded my email to her business partner, who then asked for a pitch. To my surprise, he showed deep interest at our first meeting, and immediately showed intent to invest after reading my financial projection (which I think is quite conservative on the numbers).

In case you're curious, here are my proposed terms:
- 8% interest p/a
- 25% discount
- with cap
- maturity at 1 year

I think it's pretty much standard. But then the guy said that he "doesn't like the idea of giving a loan" because he "doesn't want to burden me". He wanted some shares instead. This is his offer (more or less):
- Start the company in Malaysia so we can get government grants and such, but it must be majority owned by a local, so he proposed...
- 60% for himself (being a Malaysian). And because his business partner (my friend) introduced us...
- She'll get 20%.
- Maybe he doesn't really trust me, so the money will be dispensed monthly, and...
- I must get his permission for any expenses.

To summarize, the company would get USD 23.7k spread over a 6-month period, I would get a 20% share and less than half my current salary, and I must report everything. I've never dealt with any investor before, but it doesn't feel right.

My friend jokingly said the money I need is around the price of a car; I might as well borrow it from a bank and keep 100% of the shares for myself.

I emailed the investor, politely rejecting his offer. Here's a snippet of the letter:

> You've been very gracious with your time and I'm thankful for that. After careful consideration, however, I have decided not to take your current offer. I have asked around and done some research; convertible debt is still the term I want.
> There's a great article on the benefits of convertible debt, where the two main points (for me) are Suitability (point 2) and Control (point 3). It is also investor-friendly when complemented with a discount and a cap.

I guess I'll keep doing this as a side project until I can stand on my own or find a more sensible investment.

your idea

Last week a guy added me on Facebook because he said we have a similar interest in startups and found my blog (the one you're reading now) quite amusing.

I usually only approve cute chicks and people I know, because in my experience random guys who added me happened to be gay and thought I am too (I am NOT). Since he introduced himself (and his intent), I was more than happy to approve his friend request. He happens to be a smart person, and has a female wife :)

We exchanged messages, and I thought one of my replies was worth posting on the blog, so I asked his permission and he agreed. Here it goes:

Hi bro, thanks for your lengthy reply. I think you should stop reading and start jumping to action. In startups, first-hand experience is much more useful.

The simplest thing is to create a landing page to validate your idea. If it doesn't get satisfactory traction (i.e. low signups), either the idea sucks or the landing page needs refining.

It might hurt to know the truth (since your idea is most likely your ambition), but it will save you the time of building a product nobody wants, and you can move on to another idea.

A couple of days ago I read Ash Maurya's Running Lean, which mentioned Eric Ries' Lean Startup. I didn't finish the reading, but it made me realize that I've been taking the wrong approach.

I've been spending too long developing the product and also getting sidetracked by "research", but doing nothing towards validating my idea early. I'm afraid that when my product is complete, it won't get the traction I expected, and I will feel demotivated. I've been in this situation.

[...]

So, to answer your question on how I am doing with my journey: now I'm doing a "temporary pivot". I'm focusing on building a landing page.

My co-founder is currently in Jakarta for his first baby's birth and also to pitch to some VCs. My project with him requires significant capital and network investment, so it's essential to raise some money (and make friends).

As for my other project, Neytap, I'm now thinking about how to solve the chicken-and-egg problem. As you know, it's a classifieds site for room rentals, a marketplace of buyers (tenants) and sellers (landlords). I need to figure out how to grow both sides in balance. Do you have a suggestion?

Anyway, regarding the US as your target market, I think you're right: aim for the ones you're most familiar with. But isn't the US already saturated?

I'm so happy to meet like-minded people :)

much should you pay developers?

Two days ago I started a discussion in the StartUpLokal group, "How much should you pay developers?" Basically, I shared a link I found regarding the compensation plan at StackOverflow.

[...]

My answer is: "It depends." Let me elaborate.

I'm a programmer myself.
If I were to outsource or delegate programming tasks for any of my startups, it must be because (1) I'm not good enough to do it myself, or (2) the tasks are so boring I'd rather do something else, like sleeping.

So, how much?

[...]

Note: change "him" to "her" for your convenience.

of the same feather flock together

You are who you surround yourself with. You tend to be more productive when you hang out with like-minded people.

I usually work solo on my personal projects, and it's damn tiring (on the plus side, I can do whatever I want, haha). I do have a partner in one of my startups, but since my internet is crappy and my co-founder lives on a different continent, communication is hard.

So it was a very refreshing experience when last month I attended the Google App Engine Hackathon. It was a very productive weekend indeed.

From Wikipedia: [...]. Translation: a party for geeks.

I didn't manage to launch anything there, but I made significant progress; made some friends too. Looking forward to other similar events. So guys, if you have the chance to attend such an event, don't miss it :)

using new technology and setting up infrastructure

It's very hard to build a startup; it's even harder to build two at the same time, especially since I have a full-time job. For the past few months, most of my free time was spent on coding. Let me share some progress.

My first project is Neytap, a classifieds site for room rentals, which in Bahasa Indonesia is called kos (the correct term is indekos, although sometimes people write it as kost or kos-kosan). I'm aware that there are some similar websites already, but competition is always good :) It has just reached Private Alpha and is now under development towards Private Beta.

My second project is a location-based service. I can't tell you much about it since it needs to stay in stealth until a certain stage of development. I can only say that, for this project, I have a co-founder.

On using new technology

Both projects use JVM-based languages: Java, Groovy and Scala. I'm currently learning Scala, so I try to use it as much as possible. Groovy is used for scripting (like one-off code) and Java when I'm stuck with Scala :D

There's an important lesson that I want to share with technical founders who, like me, like to tinker with new technology: building startups with technology you're not familiar with is a bad idea.

I'm not talking about quality (since you're not familiar, you might develop a sub-par solution); it's all about time allocation. The point is, every time you want to use some fancy stuff in your project, ask yourself, "How much will it distract me from achieving my target (delivering the project)? Will it add significant value (i.e. is it 'worth the time')?"

I spent significant time learning new stuff instead of working on the actual product for these two projects. Knowledge-wise, it's not a waste. Goal-wise, it is. I decided to fall back to technologies I'm familiar with, adding just a bit of new stuff that I'm sure will improve my productivity.

Infrastructure setup

You can code immediately without any documentation (URS, diagrams, etc), which is exactly what I did. But at some point you will realize that you need some order. You need at least an issue tracker.

Important lesson: use what you're familiar with and don't spend too much time setting it up.
I'm familiar with <a href="">Trac</a>, <a href="">Redmine</a> and <a href="">JIRA</a>, but all of them are not trivial to setup for me (<a href="">YMMV</a>). I end up using <a href="">YouTrack</a>. It's free for 10 users, no installation. Just download the JAR file and run from command line: <code>java -jar youtrack-3.0.jar 9999</code>. This assume you have Java runtime installed,<br /><br />The next thing in mind is a <a href="">version control system (VCS)</a>. You must use VCS. Use the one you're most familiar with (if you're familiar with none, then stop coding and learn one, <a href="">Git</a> is good).<br /><br /><b>Important lesson:</b> If you don't like to wait, make sure your infrastructure is fast. Get a good computer with enough CPU and RAM. If you work alone, setup issue tracker and VCS in your workstation (localhost). If working with team, use the fastest server-based solution. I use <a href="">Unfuddle</a> for Git and installed YouTrack in an <a href="">EC2</a> located in Singapore. If submitting an issue or comitting code take too long, you'll be tempted to open <a href="">Hacker News</a> and not working :D<br /><br /><br />In conclusion, remember not to spend too much in either "research" or setting up infrastructure. In the end, it's your code that matters.Thomas Wiradikusuma it a nameThe first thing you need in building a startup is an idea. Some people think that idea is worthless, but for me it is equally important as the execution that follows it and the team behind it. It sets your target so that you know where to focus. But don't fall in love with your idea, it can evolve and even change radically. You just need it to get started.<br /><br />After having an idea, the next thing to come up with is a name. Having a name upfront is not required (you can use random name, e.g. "myproject", and change it later), but it simplifies a lot of things.<br /><br />Name is important for:<br /><ul><li><b>Presence</b>: domain name, Twitter handle, etc. 
We'll get into this in a moment.</li><li><b>Development artifacts</b>: project directory, namespace (e.g., in Java, "com.myproject"), Redmine project, <a href="">Basecamp</a> account, etc.</li></ul><br /><h2>Online presence that you need to secure</h2><br /><h3>Domain name</h3>Buy a domain from <a href="">Google Apps</a>; it's easier. You will get GMail-backed @myproject.com without setting up anything.<br /><br /><h3>Handle in your target deployment</h3>For example, if you use <a href="">Google App Engine</a>, you might want to secure myproject.appspot.com. This is optional, as it will usually be masked by your domain name (e.g. myproject.com will be forwarded to myproject.appspot.com), but it's always nice to have some consistency.<br /><br /><h3>Facebook Page</h3><a href="">Facebook</a> requires that you have at least 25 fans before you're eligible for a username (that is, facebook.com/myproject). Ask your friends to Like your page to secure it.<br /><br /><h3>Twitter handle</h3>This is obvious.<br /><br /><h3>Blog (e.g. myproject.blogspot.com)</h3>Not necessary if you want to use your domain, e.g. blog.myproject.com.<br /><br />Remember that choosing a name is not urgent, but the sooner the better.Thomas Wiradikusuma A journey of a thousand miles begins beneath one's feet<a href="">As promised</a>, I'm going to start blogging again. Here we go.<br /><br />It's almost a year since my last post and I told you that lots of things have happened. Ironically, I don't know what to write. Maybe I'll start with how I feel these days.<br /><br />I don't feel happy.<br /><br /.<br /><br />That's one.<br /><br />The other thing: I always want to have my own business. Not the grand thing like the next Microsoft or Google (although that would be nice).
I just want to have a simple small business, like a restaurant or <a href="">massage parlor</a> (with hot chicks under my employment, yay!).<br /><br />That's two.<br /><br /.<br /><br />That's three.<br /><br />Because I miss architecting and hacking stuff, and I want to have my own business, and I want to join the startup wave, today I'm announcing that I'm opening a restaurant.<br /><br />Oh, wait. You know I can't be serious. I barely know how to cook.<br /><br />Actually, I'm in a very early stage of building a startup. Well, two, actually. What? Why? Shouldn't I focus on one first until it's launched?<br /><br /.<br /><br /.<br /><br /.<br /><br /><em.</em>Thomas Wiradikusuma I'm still alive<p>Hi guys, just to let you know that I'm still alive. Lots of things have changed since my last post. I promise I'm going to blog more often. Stay tuned :)</p>Thomas Wiradikusuma better, necessary?Nobody is perfect. Everyone has their own weaknesses: pasts, habits, limitations. That's not to say you don't need to do anything about them; that would be a lame excuse. Everyone has to make themselves better.<br /><br />But sometimes you can't change some of your weaknesses. No matter how hard you try.<br /><br />In a relationship, this is the Stop/Go factor. If your partner's OK with you and all of your "remaining" weaknesses, then it's a Go. This is what being "compatible" is.<br /><br />Otherwise, it's a Stop and Bye. And there's nothing wrong with that. If everyone always accepted their partner's weaknesses, there would be no break-ups in this world, and your first boy/girlfriend would always be your only boy/girlfriend (and consequently, your forever husband/wife).<br /><br />So, knowing that shit happens, should you make yourself better? Of course. It's for your own good.Thomas Wiradikusuma some men treat womenContrary to popular belief, men do think using their logic when it comes to relationships.
Some men, including me, divide women they want to be "in a relationship with" (I use this term loosely) into essentially two groups: <b>for-fun</b> and <b>serious</b> (sometimes even <b>wife material</b>). "Serious" doesn't mean there's no fun in it. It just means he's willing to go further.<br /><br />Some men take a <i>positive-negative approach</i> (have hope for the relationship, but will fall back to for-fun if it doesn't work, or simply dump her) or a <i>negative-positive approach</i> (never intend to be serious, but might change over time). <b>The important thing is: they can change their mind.</b> I'll get back to this later.<br /><br /.<br /><br /.<br /><br /.<br /><br /, <b>it is stupid for me if I force her to quit</b>: she'll be upset and hate me, and I'll lose the fun she gave me (e.g. sex). Simply no benefit for me.<br /><br /.<br /><br />If I'm even more serious about her, instead of telling her to quit and leaving her to fight the trouble alone, I will help her with research (on how to effectively remove the addiction), accompany her to the care center, support her and be with her.<br /><br /?Thomas Wiradikusuma case for IndonesiaMy <a href="">shift</a> as a nurse.<br /><br /!).<br /><br /!<br /><br /.<br /><br /.<br /><br />I may go back and spend a couple of months when my contract ends. But that's because of the important things I mentioned above (family and stuff). I don't plan to pursue my career there, except maybe when I start my own business (a restaurant?).Thomas Wiradikusuma interesting weekI'm getting lame at making <a href="">blog</a> <a href="">titles</a>. Anyway, here's a quick round-up of this week. I'll obliviously think I'm a celebrity to justify the importance of this post. Readers be warned.<br /><br />I.<br /><br /.<br /><br />Shameless plug: <a href="">The club I'm a member of</a>.<br /><br />Back to Khairul. He's a quiet guy most of the time during the meeting.
But just now I checked <a href="">his website</a>.<br /><br /><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="200" /></a>The thing is, I'm a real Web 2.0 geek, developer type. That's why I'm enthusiastic when encountering fellow geeks. I don't just use Facebook, Twitter, LinkedIn, FourSquare, and numerous others; I'm also longing to develop something like those. I'm currently developing a super-awesome-caffeine-induced social thingy in my secret underground lair. I'll keep you updated on that.<br /><br />Apparently one French person a day was not enough. After the meeting, Penny said she wanted to pick up this French guy who was supposed to stay at her house. She joined <a href="">a website</a> where people can host foreigners. I thought the idea was dangerous and crazy. We followed Penny to a nice Mamak near KLCC and met the guy. He's nice and polite. Maybe the idea wasn't that crazy.<br /><br / <a href="">Working Holiday Visa</a>.<br /><br /.<br /><br /.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div>Thomas Wiradikusuma two weeks.<br /><br /".<br /><br /.<br /><br />I also bought a Sennheiser, model <a href="">HD 280 Pro</a>. For RM580, the headphone performs well, but not beyond my expectations. It's a bit weak on the bass (e.g. not "punchy"), but superior on percussion.<br /><br / <a href="">deviantArt</a> for a higher resolution version. Here's a sample:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />A day after returning to KL, I went to <a href="">DBKL</a> with some fellow Toastmasters to watch the Yamato Drum Concert.
The concert was a wow; you can watch some of their performances on <a href="">YouTube</a>. After the show they took me to <a href="">SkyBar</a> in Traders Hotel. Finally! I had been longing to go there since before New Year 2010 because my friend told me they have a good view of the <a href="">Twin Towers KLCC</a>. Indeed they do:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br />And tonight I'll fly to Jakarta.Thomas Wiradikusuma in front of my eyes.<br /><br /.<br /><br /! <br /><br /!<br /><br />The guy had a small injury, hopefully no broken bones. But after I left the scene, I felt regret for not knowing what to do. I have a high IQ and high dreams, but I acted stupid and did nothing. I feel ashamed of that.Thomas Wiradikusuma Toastmasters and 7 Habits<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="177" src="" width="200" /></a></div>Ever heard of <a href="">Toastmasters</a>? Toastmasters is a <em>social club</em> for developing communication and leadership skills. I've been a member there for a few months now.<br /><br />At first I was skeptical when invited: "This is going to end with them making me sell something." Especially since the one inviting me was a girl I met on the Internet, and she works in MLM :D<br /><br />Back in college I was once tricked by a friend (<em>if it's you who're reading this, yes you sucked</em>) into attending a "seminar about networks" during my Computer Networks course. My suspicion arose the moment I saw elderly ladies in the room: "Wait, is she interested in <a href="">TCP/UDP</a> too?" It turned out to be an MLM seminar.<br /><br />Back to Toastmasters. There I learned a lot about <em>communication and presentation skills</em>, and I also made many new friends. 
There are usually guests (non-members) at every meeting, and yesterday my friend invited someone from the <a href="">Covey Leadership Center</a>. <br /><br />As soon as I found out he was from Covey, I got excited. I immediately blurted out that the 7 Habits book had truly changed my life. I told him that I used to slack off, play <em>games</em> all the time, and my life was going nowhere. After reading 7 Habits, I started changing into who I am now: still slacking off and playing <em>games</em>, but better :D<br /><br /><em>Anyway</em>, if you have some time, do check out the Toastmasters <em>website</em> and come to one of their meetings (guests attend for free). They have chapters in 106 countries, including Indonesia. Also check out the 7 Habits book if you haven't read it; it's an old book but <em>worth reading</em>. <em>Cheers!</em> :)Thomas Wiradikusuma Hang your dreams as high as the wallSome people say, "Hang your dreams as high as the sky." The problem is, besides being practically impossible, putting them too high makes them hard for me to see. <br /><br />As an <em>objective statement</em>, we should put our dreams right in front of our eyes so they're always visible. <em>So you know where you're supposedly heading.</em> That's why I wrote my wishes on paper and stuck them on the wall, right in front of my eyes.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="" width="267" /></a></div><br />I put up the photo above not to show off, nor as a joke. Some items are very sensible ("Buy drybox"), some are unrelated to my profession ("Learn Salsa"). Some even look ridiculous, but there's nothing wrong with wanting a Jaguar, right?<br /><br />Not all of my dreams are listed there. Not because the rest are unimportant, but because I got tired of writing them :D Later, as time goes by, I will add or remove papers there. 
Of everything written there, the easiest to fulfill are all the ones starting with "Buy..." because their <em>fulfillment factor</em> is just one: money. The hardest are ongoing objectives, like eating vegetables.<br /><br />Do I want all of them to come true? Of course. Will I be disappointed if some don't come true? Of course. Will I stop doing this if most of them don't come true? Of course not. <br /><br />Not all of our goals always have to be achieved, but living without goals is like shooting without a target. How can you tell whether you're getting better or worse without a benchmark?<br /><br />Note: The photo can be clicked to show its actual size. Some words in the photo were intentionally blurred because they're <em>private and confidential</em> :)Thomas Wiradikusuma
This article is the first one of two covering a basic bilingual website in ASP.NET MVC 3. I have split the article into two parts because I feel it will become too long winded and complicated in one step. The two parts are: This article is a beginner’s introduction to using internationalized resources. The code is written with the intention of being clear, rather than production quality. I will use Arabic as the second language, this is because the website I did the original work for uses Arabic, and it drives out some left-to-right and right-to-left support that would otherwise be missed. If you speak Arabic, apologies for any mistakes, I do speak a little, but have used Google Translate for most of the text! The second article (available here) will cover route-based internationalization and will be more advanced. Both articles will assume a basic knowledge of how MVC works (especially the relevant components: View, Controller, and RouteHandler) and how .NET handles localisation and culture. The examples will focus on the default ASPX view engine rather than Razor, but will provide information where the Razor markup is different. The mechanism can be extended to allow multi-lingual websites with a little effort. RouteHandler To run the code, you must have VS2010 installed (and .NET 4 ) as well as the MVC 3 framework. This article springs from a proof of concept website I wrote for my current employer using MVC 3. Prior to this, I had little to no experience of ASP.NET MVC 3. My employer has a requirement for their website to be available in English and Arabic, this article documents and shares what was achieved during that process. There are a few articles covering localisation in MVC 3 on the web, but none I could find that achieved exactly what I wanted, which was a bilingual website with a routing strategy similar to that of MSDN (as explained in part 2). 
That said, I must acknowledge a heavy debt to these articles: When implementing localisation in my application, I found the process to be far less polished and intuitive than plain old ASP.NET. There is no way of generating the RESX files automatically, and extra steps must be taken to make the contents available to the view. I also could not find Microsoft best practices for localisation and globalization in MVC 3 available on the web. In principle, the process of getting resources to work with MVC3 is fairly straightforward, but less so than for a normal ASP.NET application: In this explanatory article and code, I have used the master page (or Layout for the Razor version) as an example. The same methodology applies to the child views, I have included a simple example of this in the download projects not reflected in the article. I will start by assuming the application has a master page (or a Layout for Razor), and I will describe the process of getting it ready for localisation. Normal Views work in the same manner, so everything that is applied here can be applied to a View. Let’s take the initial mark-up in the Master Page "view" to be something simple like: <body> <h1>Welcome</h1> <div> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> </div> </body> An equivalent Razor Layout could look like: <body> <h1>Welcome</h1> <div> @RenderBody() </div> </body> We need to transfer the heading text (“Welcome”) to a resource file. Unlike ASP.NET, we cannot just create the resource from the context menu. My advice is to mirror your view folder; my master page is in a folder called Views/Shared, so I will create a directory structure Resources/Shared. 
The MasterPage is called “MasterPage.Master” (Razor users, please see the note in red at the end of this section), so right-click the resources folder and follow these options: Select the General tab, Resources File, and change the filename: Once you have done this, you need to move the text into the resource file, giving it a sensible name, in this case “Heading”, and enter the text ("Welcome"). Now set the access modifier to public, without this step the view will be unable to access it: Typically, you would repeat this for each section of text to be localized, but I have only entered one value to keep the example deliberately simple. The next step is to copy the RESX file into the same directory as itself (remember to save beforehand!), and rename it for the culture you wish to support. In my case, I am using Arabic, so “MasterPage.resx” has a companion MasterPage.ar.resx. Note that the copied file carries over the public access modifier so there is no need to re-set it. Now all we need to do is to repeat the process creating the RESX files for the remaining views. Once finished, we should have “mirrored” Views and Resources folders, with one RESX file for each language per view: Getting the text from the RESX file to the view through the mark-up is reasonably easy, we are now effectively getting a property on a class. 
Getting the text from the RESX file to the view through the mark-up is reasonably easy: we are now effectively getting a property on a class. In the Master Page, we replace the word "Welcome" with the magic incantation: <body> <h1><%= InternationalizedMvcApplication.Resources.Shared.MasterPage.Heading %></h1> <div> <asp:ContentPlaceHolder </asp:ContentPlaceHolder> </div> </body> For Razor, the markup is: <body> <h1>@InternationalizedMvcApplicationRazor.Resources.Shared.Layout.Heading</h1> <div> @RenderBody() </div> </body> Notice that the value is accessed like a property on the class (in fact, the resources are compiled down to a DLL): InternationalizedMvcApplication.Resources.Shared is a namespace (following the directory structure), MasterPage (or Layout for Razor) is the class name (following the RESX file name), and Heading is the property name (following the name key we gave it). As the resources are publicly available to classes in the application, you can access them programmatically too: InternationalizedMvcApplication.Resources.Shared MasterPage Layout Heading using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using InternationalizedMvcApplication.Resources.Shared; //using InternationalizedMvcApplicationRazor.Resources.Shared; //For Razor namespace InternationalizedMvcApplication.Controllers { public class HomeController : Controller { public ActionResult Index() { //Note using InternationalizedMvcApplication.Resources.Shared; for brevity ViewBag.InternationalizedHeading = MasterPage.Heading; //Layout.Heading for razor return View(); } } } Note that in the examples, the View does not access the ViewBag, but it can be accessed in the normal ways if wanted. ViewBag
Once the culture is defined, it works in much the same way as standard ASP: if the culture specific RESX is available, it uses values from that; if no RESX is available, it falls back to the default culture (remember that I specified .ar.resx as the file extension for Arabic, but English only requires .resx as the default will be English). Just like standard ASP.NET, if an individual entry is missing, it falls back to the default. You can also extend the name to provide variants for British English (as opposed to the default US) or Jordanian Arabic (as opposed to the default Saudi). Unlike ASP.NET, we cannot override the page’s InitializeCulture method: there is no code-behind, which is the preferred method for many developers. We can add this to global.asax: InitializeCulture protected void Application_BeginRequest(object sender, EventArgs e) { //Note everything hardcoded, for simplicity! if (Request.UserLanguages.Length == 0) return; string language = Request.UserLanguages[0]; if (language.Substring(0, 2).ToLower() == "ar") Thread.CurrentThread.CurrentUICulture = CultureInfo.CreateSpecificCulture("ar"); } There is, however, a neater mechanism, I have left this code block commented in the sample code as I will not continue to use it. The mechanism I have adopted is to add the following into the web.config, under system.web: system.web <globalization culture="en-GB" uiCulture="auto:en-GB" /> Now the UI Culture will be set to the default culture from the browser, falling back on British English. In both the programmatic example and the configuration based one, I have not set the main application culture, you may need to do this. Now we can test what we have done. Running the application yields: Now we swap the default language; to do this in IE 9: Now, the browser's default language is Arabic, a refresh on our page shows: This has been a good test! Although we have Arabic text, the page is still working left to right. 
This is easily fixed, the steps are similar to creating the original RESX files: Now we can place the following in any tag of the view HTML: dir="<%= InternationalizedMvcApplication.Resources.Common.TextDirection %>" Similarly for the Razor engine: dir="@InternationalizedMvcApplicationRazor.Resources.Common.TextDirection" In the sample code, this has been placed into the root HTML tag, making the whole page either right to left or left to right. Sadly, the intellisense does not work here, so you will have to rely upon memory. Running once more provides: Remember to swap the language back to English, and check the text is running left to right. We now have a basic English/Arabic bilingual website. We have created a simple, globalized MVC 3 application, this is not production quality (e.g., we have no override mechanism), but what we have can be easily adapted for production. The principles are similar to, but less smooth than standard ASP.NET practices: In the next article, we will keep using the browser’s default culture so the page will display initially in that language, but we will make it overridable by clicking a link. The language can then be selected via its URL. For example, English will be: (default) or whereas Arabic will be. The URL will totally ignore subdivisions of the language (e.g., Saudi Arabic or Jordanian Arabic). URL based selection fits the MVC pattern better than parameterised URLs or cookies, it will also help search engines rank your site on a per-language basis. For those interested: part 2 is available here. If you have any comments or feedback, please feel free to ask. I hope that there will be suggestions for neater mechanisms, though I think the one here is pretty light-weight. If you edit this article, please keep a running update of any changes or improvements you've made here. 
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
mbind - set memory policy for a memory range

Synopsis

#include <numaif.h>

int mbind(void *start, unsigned long len, int policy, unsigned long *nodemask, unsigned long maxnode, unsigned flags)

Description

mbind sets the NUMA memory policy for the memory range starting at start with length len. The memory of a NUMA machine is divided into multiple nodes; the memory policy defines the node on which memory is allocated. mbind only has an effect on new allocations: when the pages inside the range have already been touched before the policy is set, the policy has no effect.

Available policies are MPOL_DEFAULT, MPOL_BIND, MPOL_INTERLEAVE and MPOL_PREFERRED. All policies except MPOL_DEFAULT require you to specify the nodes they apply to in the nodemask parameter. nodemask is a bit field of nodes that contains up to maxnode bits. The bit field size is rounded up to the next multiple of sizeof(unsigned long), but the kernel will only use bits up to maxnode. When MPOL_MF_STRICT is passed in the flags parameter, EIO is returned when the existing pages in the mapping don't follow the policy.

Return Value

mbind returns -1 when an error occurred, otherwise 0.

Notes

For a higher level interface it is recommended to use the functions in numa(3). Until glibc supports these system calls, you can link with -lnuma to get the system call definitions. MPOL_MF_STRICT is ignored on huge page mappings right now. For preferred and interleave mappings, huge pages will only accept the first choice node. For MPOL_INTERLEAVE mode the interleaving is changed at fault time; the final layout of the pages depends on the order in which they were first faulted in.

See Also

numa(3), numactl(8), set_mempolicy(2), get_mempolicy(2), mmap(2)
On Windows machines, the Python installation is usually placed in C:\Python26, though you can change this when you're running the installer. To add this directory to your path, you can type the following command into the command prompt in a DOS box: set path=%path%;C:\python26. Note that there is a difference between python file and python <file. In the latter case, input requests from the program, such as calls to input() and raw_input(), are satisfied from file. Since this file has already been read until the end by the parser before the program starts executing, the program will encounter end-of-file immediately. In the former case (which is usually what you want) they are satisfied from whatever file or device is connected to standard input of the Python interpreter. The script name and any additional arguments are passed to the script in the variable sys.argv, which is a list of strings. Its length is at least one; when no script and no arguments are given, sys.argv[0] is an empty string. [1] Typing an interrupt while a command is executing raises the KeyboardInterrupt exception, which may be handled by a try statement. On BSD'ish Unix systems, Python scripts can be made directly executable, like shell scripts, by putting a #! line at the beginning. If a source code encoding is declared, all characters in the source file will be treated as having that encoding, and it will be possible to directly write Unicode string literals in the selected encoding. If the file starts with a UTF-8 byte order mark (aka BOM), you can use that instead of an encoding declaration. IDLE supports this capability if Options/General/Default Source Encoding/UTF-8 is set. Notice that this signature is not understood in older Python releases (2.2 and earlier), and also not understood by the operating system for script files with #! lines (only used on Unix systems). If you want to read an additional startup file from the current directory, you can program this in the global startup file using code like execfile('.pythonrc.py'). If you want to use the startup file in a script, you must do this explicitly in the script: import os filename = os.environ.get('PYTHONSTARTUP') if filename and os.path.isfile(filename): execfile(filename)
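The way arguments end up in sys.argv can be observed directly. This small sketch uses Python 3 syntax (the tutorial text above is for Python 2): it launches a child interpreter with -c plus two extra arguments and prints the sys.argv the child reports; with -c, the script-name slot holds '-c' itself.

```python
import subprocess
import sys

# A one-line child script that reports its own sys.argv.
child = 'import sys; print(sys.argv)'
out = subprocess.run(
    [sys.executable, '-c', child, 'alpha', 'beta'],
    capture_output=True, text=True, check=True,
).stdout.strip()

# The "script name" slot is '-c'; the extra arguments follow it.
print(out)  # ['-c', 'alpha', 'beta']
```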
I have a list of dictionaries like so:

[{'price': 99, 'barcode': '2342355'}, {'price': 88, 'barcode': '2345566'}]

I want to find the min() and max() prices. Now, I can sort this easily enough using a key with a lambda expression (as found in another SO article), so if there is no other way I'm not stuck. However, from what I've seen there is almost always a direct way in Python, so this is an opportunity for me to learn a bit more.

There are several options. Here is a straightforward one:

seq = [x['the_key'] for x in dict_list]
min(seq)
max(seq)

Alternatively, pass the key function directly to min() and max():

lst = [{'price': 99, 'barcode': '2342355'}, {'price': 88, 'barcode': '2345566'}]
maxPricedItem = max(lst, key=lambda x:x['price'])
minPricedItem = min(lst, key=lambda x:x['price'])
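The two answers above can be combined: operator.itemgetter replaces the lambda, and keying min()/max() on the dict field returns the whole item rather than just the price. A quick sketch using the question's own data:

```python
from operator import itemgetter

items = [{'price': 99, 'barcode': '2342355'},
         {'price': 88, 'barcode': '2345566'}]

# Key the comparison on the 'price' field; the whole dict is returned,
# so both the price and the barcode of the extreme item are available.
cheapest = min(items, key=itemgetter('price'))
priciest = max(items, key=itemgetter('price'))

print(cheapest['barcode'])  # 2345566
print(priciest['price'])    # 99
```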
In message <20010524233356.A13206@yellowpig> Bill Allombert <allomber@math.u-bordeaux.fr> wrote: > the relevant code look like > > #if PARI_BYTE_ORDER == LITTLE_ENDIAN > # define INDEX0 1 > # define INDEX1 0 > #elif PARI_BYTE_ORDER == BIG_ENDIAN > # define INDEX0 0 > # define INDEX1 1 > #else > error... unknown machine > #endif [snip] > solution is correct. Despite ARM systems usually being small endian, the word ordering of double floating point values is as you'd expect on a big endian machine. This typically affects things like compilers, which need a fix similar to what you have here. Peter -- ------------------------------------------------------------------------ Peter Naulls - peter@erble.freeserve.co.uk RISC OS Projects Initiative - Java for RISC OS and ARM - Debian Linux on RiscPCs - ------------------------------------------------------------------------
ANTS Memory Profiler 8 Find memory leaks and optimize memory usage in your .NET application Walkthrough Using ANTS Memory Profiler to track down a memory leak in a WinForms application This: Figure 1. The ANTS Memory Profiler startup screen. Here, there’s a list of your recent profiling sessions so you can re-run them easily. For this example, we’ll start a new session by clicking New profiling session. The New profiling session screen is displayed: Figure 2. It's easy to configure and start a new profiling session. All we need to do is point it at QueryBee, choose our performance counters, and click Start profiling. The profiler starts up QueryBee and begins collecting performance counter data: Figure 3. Whilst profiling, ANTS Memory Profiler collects performance counter data. The profiler is telling us that it's profiling our application. There are also some useful instructions on this screen telling us to take and compare snapshots. Taking and comparing memory snapshots is a key activity when looking for memory leaks, so our approach will be as follows: - Wait for QueryBee to open. - Take a first snapshot without using the application; this first snapshot will be used as a baseline. - Within QueryBee, perform the actions that we think cause the memory leak. - Take a second snapshot. - Examine the comparison that the profiler shows us after it has finished taking and analyzing the second snapshot.. Figure 4. Results from our first snapshot – Summary screen. Now, we go back to QueryBee and perform the tasks which we think cause the memory leak. We open up QueryBee and connect to a database. Figure 5. QueryBee – Database connection dialog. Figure 6. QueryBee – The query window. The query window opens up and we enter and execute a SQL query. We obtain some results and close the query window. Figure 7. QueryBee – The results are displayed in a grid. We close the query form. At this point, the window is gone. 
We expect the memory usage to fall back to where it was in the first snapshot, but that is not the case. Figure 8. Despite closing our query window, the memory usage has not fallen. So what's happening here? We take a second snapshot and get the results. Figure 9. The summary pane compares the results of the two snapshots. A number of problems are highlighted by the summary screen. - We can see a large memory increase between snapshots, which we noticed on the timeline (top left). - The Large Object Heap appears to be fragmented, which could cause problems (top right). - The Generation 2 heap accounts for a large proportion of memory usage - often indicating objects are being held onto for longer than necessary (bottom left).. Figure 10. The class list allows you to compare memory usage in both snapshots in more detail. The String class has been placed at the top of the list, with over 300,000 new instances. We want to understand why there is such a large increase so load the Instance Categorizer for the String class by clicking the icon. Figure 11. The Instance Categorizer shows chains of instances sorted by their shortest path to GC Root.. Figure 12. The instance list view shows us a set of strings which we recognize as coming from our SQL Database. The Instance List is showing us data which QueryBee had retrieved from the SQL Database, but that data should have been destroyed when QueryForm was closed. We select one of the instances and click the icon to generate an Instance Retention Graph. Figure 13.. Figure 14. Foregrounded event in the ConnectForm source code. The profiler automatically jumps to the Foregrounded event. We check where it is being used by right-clicking on Find All References. Figure 15. The Foregrounded event is used in three places.. Figure 16. We rebuild our QueryBee application. Back in the profiler, we start up a new profiling session. We want to find out whether the reference to the QueryForm has disappeared. 
Note that it remembered our settings from last time, so all we need to do is click Start Profiling.

Figure 17. The settings dialog remembers settings from last time.

Figure 18. Summary screen comparing snapshots 1 and 3.

We can see there is now only a small memory increase between the snapshots, which is promising. Let's see if there's a QueryForm still in the class list. We switch to the class list view and search only for classes in the QueryBee namespace.

Figure 19. Class list.
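The leak in the walkthrough is a classic event-subscription leak: the long-lived ConnectForm's Foregrounded event holds a reference to each closed QueryForm through its still-attached handler, so the garbage collector can never reclaim the form. The same pattern can be sketched in Python as an analogy (the class names mirror the walkthrough's forms, but this is illustrative code, not QueryBee's actual C# source):

```python
import gc
import weakref

class ConnectForm:
    """Long-lived publisher: keeps strong references to every handler."""
    def __init__(self):
        self._subscribers = []   # this list is what keeps dead forms alive

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def unsubscribe(self, callback):
        self._subscribers.remove(callback)

class QueryForm:
    """Short-lived window that listens to the publisher's event."""
    def __init__(self, parent):
        self._parent = parent
        parent.subscribe(self.on_foregrounded)   # bound method holds self

    def on_foregrounded(self):
        pass

    def close(self, unsubscribe=True):
        # The fix found in the walkthrough: detach the handler on close.
        if unsubscribe:
            self._parent.unsubscribe(self.on_foregrounded)

parent = ConnectForm()

# Leaky version: close without unsubscribing, as QueryBee originally did.
leaky = QueryForm(parent)
ref_leaky = weakref.ref(leaky)
leaky.close(unsubscribe=False)
del leaky
gc.collect()
print(ref_leaky() is not None)  # True: still reachable via the event list

# Fixed version: unsubscribing on close makes the form collectable.
fixed = QueryForm(parent)
ref_fixed = weakref.ref(fixed)
fixed.close()
del fixed
gc.collect()
print(ref_fixed() is None)  # True: no path from a GC root, so it is freed
```

In C# the equivalent fix is unsubscribing (`connectForm.Foregrounded -= handler;`) when the query window closes, which is what removes the retention path seen in the Instance Retention Graph.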
Games on Facebook are hosted as a portal; the actual game content is hosted from your own web server. By configuring your Facebook Web Games URL, you can make your game available on Facebook.com, where it will appear within an iframe. On Facebook, you can make the most of App Center and game recommendations to provide discoverability for your content, and use the social features of the Facebook platform to make your game more social.

In your app settings, there's a field for Facebook Web Games URL. This field configures the iframe that loads when a player loads your game. This puts you in complete control of your game, and you're free to update versions and content at your own release cycle. See your App Settings here.

When a player on Facebook.com loads your game, Facebook will make an HTTP POST request to the Facebook Web Games URL provided under App Settings. The response to this request should be a full HTML response that contains your game client. You can use the Facebook SDK for JavaScript to authenticate users, interact with the frame, and to access dialogs in-game, so be sure to include that in your game's HTML. See here for more information on Login for Games On Facebook.

The HTTP POST request made to your Facebook Web Games URL will contain additional parameters, including a signed_request parameter that contains the player's Facebook identity if they've granted basic permissions to your app. If a player is new, the signed_request parameter value will be useful to validate that this request did indeed come from Facebook. Read more about signed requests in the Login for Games on Facebook guide.

HTTPS is required when browsing Facebook.com, and this requirement also applies to game content. Therefore a valid SSL certificate is required when serving your game content. When you configure your web server to host your game, you'll need to make sure it's on a valid domain that you own, and that you have a valid SSL certificate for this domain.
It's possible to pass your own custom parameters to the game launch query. This is useful for tracking the performance of OG Stories, referral sites, or for tracking shared link performance. There are two ways to accomplish this: The URL for your Facebook game will always be{namespace}/ . When provide promotion links, either from your App Page or other places on the internet, you can append query params here. For example{namespace}/?source=mysourceid These query params will be preserved on game launch, and passed to your server in addition to the signed_request. You can also share links that take players directly to portions of your game. If you are using PHP or have launch scripts, this can be helpful to start players into areas of the game outside of the standard flows. The full path will be preserved in the request to your server. For example, if you share a link to{namespace}/special_launch.php Facebook will make a request to https://{your_web_games_url}/special_launch.php when loading the iframe for your game. When players launch your game on Facebook, a query parameter signed_request is added to the HTTP request to your server. This signed_request can be decoded to provide user information, and a signature to verify the security and authenticity of this data. You can parse this parameter like this: 238fsdfsd.oijdoifjsidf899) If no user_id field is present, then the player has not given public_profile permissions to your game yet. If you parse the signed_request and discover the player is new and has not granted basic permissions to your game, you can ask for these permissions on load, via the Javascript SDK. FB.login(function(response){ // Handle the response }); You can optionally ask for more permissions, such as FB.login(function(response) { // Handle the response }, {scope: 'email'}); Other important information in this payload will be age settings and locale preferences for the player. See Login for Games on Facebook for more information. 
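The original code sample for parsing the signed_request did not survive in this copy. A server-side parser can be sketched in Python as follows — the `app_secret_placeholder` value is made up for the demo; in production the secret comes from your app's settings page and must never be shipped to the client:

```python
import base64
import hashlib
import hmac
import json

def base64_url_decode(data):
    # Facebook strips the '=' padding from base64url strings,
    # so restore it before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def parse_signed_request(signed_request, app_secret):
    encoded_sig, payload = signed_request.split(".", 1)
    data = json.loads(base64_url_decode(payload))

    # Recompute the HMAC-SHA256 signature over the raw payload string
    # and compare it (constant-time) to the one Facebook sent.
    expected = hmac.new(app_secret.encode(), payload.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(base64_url_decode(encoded_sig), expected):
        raise ValueError("Bad signed_request signature")
    return data

# Round-trip demo with a placeholder secret: build a payload the way
# Facebook would, then parse and verify it.
secret = "app_secret_placeholder"
body = base64.urlsafe_b64encode(json.dumps(
    {"algorithm": "HMAC-SHA256", "user_id": "42"}
).encode()).rstrip(b"=").decode()
sig = base64.urlsafe_b64encode(hmac.new(
    secret.encode(), body.encode(), hashlib.sha256
).digest()).rstrip(b"=").decode()

print(parse_signed_request(sig + "." + body, secret)["user_id"])  # 42
```

Checking the signature before trusting the payload is the step that lets you treat user_id as authentic rather than attacker-supplied.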
While you're developing your game, you'll probably want to host it on a web server running on your local machine to speed up your edit-compile-test cycle. The most common way to do that is to set up a server stack like XAMPP. You will also need to create and install a local SSL certificate so that this server supports HTTPS.

Once you're ready to take your game live to the world, you'll have to arrange for hosting on a public-facing web server. As your traffic grows, you may want to consider using a content delivery network (CDN) such as Akamai or CDNetworks to reduce your hosting costs and improve performance. A CDN works by caching your game's content at various locations on the internet. This means players will have game assets delivered to their client from a closer location. Your players get a quicker-loading game, and your server is protected from excessive traffic.
Start by making a directory to work in, using the mkdir command. Use cd to change to that directory:

[cs188-tf@solar ~]$ mkdir tutorial
[cs188-tf@solar ~]$ cd tutorial
[cs188-tf@solar ~/tutorial]$

Copy the tutorial files from the class account, ~cs188:

[cs188-tf@solar ~/tutorial]$ cp ~cs188/projects/tutorial/*.py .
[cs188-tf@solar ~/tutorial]$ ls
buyLotsOfFruits.py foreach.py

You can edit Python files in any text editor (vi, pico, or joe on Unix; or Notepad on Windows; or TextWrangler on Macs). To run Emacs, type emacs at a command prompt:

[cs188-tf@solar ~/tutorial]$ emacs test.py &
[1] 3262

The trailing & runs Emacs in the background so you can keep using the shell, i.e. multi-tasking. Passing the filename test.py will either open that file for editing if it exists, or create it otherwise. Emacs notices that test.py is a Python source file and enters Python-mode, which is supposed to help you write code. When editing this file you may notice that some text becomes automatically colored: this is syntax highlighting. For advanced debugging, you may want to use an IDE like Eclipse. In that case, you should refer to PyDev.

Start the Python interpreter by typing python at the Unix command prompt:

[cs188-tf@solar ~/tutorial]$ python
Python 2.4.2 (#1, Jan 11 2006, 12:45:36)
[GCC 3.4.3] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>>

The Python interpreter can be used to evaluate expressions, for example simple arithmetic expressions. If you enter such expressions at the prompt (>>>), they will be evaluated and the result will be returned on the next line:

>>> 1 + 1
2
>>> 2 * 3
6
>>> 2 ** 3
8

The ** operator in the last example corresponds to exponentiation.

The + operator is overloaded to do string concatenation on string values:

>>> 'artificial' + "intelligence"
'artificialintelligence'
>>> 'artificial'.upper()
'ARTIFICIAL'
>>> 'HELP'.lower()
'help'
>>> len('Help')
4

Strings can be surrounded by either single quotes ' ' or double quotes " ".
Strings can be stored in variables like any other value:

>>> s = 'hello world'
>>> print s
hello world
>>> s.upper()
'HELLO WORLD'
>>> len(s.upper())
11
>>> num = 8.0
>>> num += 2.5
>>> print num
10.5

Learn more about the methods available on strings with the dir and help commands. Exercise: Try out some of the string functions listed by dir.

The items stored in lists can be any Python data type, so for instance we can have lists of lists.

Tuples look similar to lists but are immutable:

>>> pair = (3, 5)
>>> pair[0]
3
>>> x, y = pair
>>> x
3
>>> y
5
>>> pair[1] = 6
TypeError: object does not support item assignment

The attempt to modify an immutable structure raised an exception. This is how many errors will manifest: index out of bounds errors, type errors, and so on will all report exceptions in this way.

Dictionaries store a mapping from keys to values:

>>> studentIds = {'aria': 42.0, 'arlo': 56.0, 'john': 92.0}
>>> studentIds['arlo']
56.0
>>> studentIds['john'] = 'ninety-two'
>>> studentIds
{'aria': 42.0, 'arlo': 56.0, 'john': 'ninety-two'}
>>> del studentIds['aria']
>>> studentIds
{'arlo': 56.0, 'john': 'ninety-two'}
>>> studentIds['aria'] = [42.0, 'forty-two']
>>> studentIds
{'aria': [42.0, 'forty-two'], 'arlo': 56.0, 'john': 'ninety-two'}
>>> studentIds.keys()
['aria', 'arlo', 'john']
>>> studentIds.values()
[[42.0, 'forty-two'], 56.0, 'ninety-two']
>>> studentIds.items()
[('aria', [42.0, 'forty-two']), ('arlo', 56.0), ('john', 'ninety-two')]
>>> len(studentIds)
3

As with nested lists, you can also create dictionaries of dictionaries.

Exercise: Use dir and help to learn about the functions you can call on dictionaries.

Exercise: How would you use the dictionary type in order to represent a set (rather than a list) of unique items?

You can iterate over the items of a list with a for loop. Open the file called foreach.py and update it with the following code. At the command line, use the following command in the directory containing foreach.py:

[cs188-tf@solar ~/tutorial]$ python foreach.py
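One way to answer the set-representation exercise above is to use the dictionary's keys as the set members, which can be sketched like this (the function names make_set and contains are illustrative, not part of the tutorial's handout):

```python
# Represent a set as a dictionary whose keys are the members.
# The values are unused; True is a common placeholder.
def make_set(items):
    s = {}
    for item in items:
        s[item] = True   # duplicate items collapse onto the same key
    return s

def contains(s, item):
    # Key lookup is how dictionaries give fast membership tests.
    return item in s

fruit_set = make_set(['apple', 'pear', 'apple', 'banana'])
print(len(fruit_set))               # 3 -- the duplicate 'apple' was dropped
print(contains(fruit_set, 'pear'))  # True
print(contains(fruit_set, 'kiwi'))  # False
```

This works because dictionary keys are unique by construction, which is exactly the property a set needs. (Later versions of Python also ship a built-in set type.)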
Put this code into a file called listcomp.py and run the script. Those of you familiar with Scheme will recognize that the list comprehension is similar to the map operation.

[cs188-tf@solar ~/tutorial]$ python listcomp.py

Python uses indentation to mark the body of loops and other blocks; if the loop body were not indented under the for line, there would be no output. The moral of the story: be careful how you indent! It's best to use a single tab for indentation.

Save this script as fruit.py and run it:

[cs188-tf@solar ~/tutorial]$ python fruit.py
That'll be 4.800000 please
Sorry we don't have coconuts

Exercise: Add some more fruit to the fruitPrices dictionary and add a buyLotsOfFruit function which takes a list of (fruit, pound) tuples and returns the cost of your list. If there is some fruit in the list which doesn't appear in fruitPrices, it should print an error message and return None (which is like nil in Scheme). Solution

Exercise: Write a quickSort function in Python using list comprehensions. Use the first element as the pivot. The solution should be very short. Solution

The FruitShop class has some data, the name of the shop and the prices per pound of some fruit, and it provides functions, or methods, on this data. What advantage is there to wrapping this data in a class? There are two reasons: 1) Encapsulating the data prevents it from being altered or used inappropriately, and 2) The abstraction that objects provide makes it easier to write general-purpose code.

So how do we make an object and use it? Save the class code above into a file called shop.py.
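The linked Solution pages are not reproduced in this copy. Possible solutions consistent with the two exercise statements above might look like this (the fruitPrices values are illustrative):

```python
fruitPrices = {'apples': 2.00, 'oranges': 1.50, 'pears': 1.75}

def buyLotsOfFruit(orderList):
    """orderList: list of (fruit, pounds) tuples.
    Returns the total cost, or None if some fruit is not in fruitPrices."""
    total = 0.0
    for fruit, pounds in orderList:
        if fruit not in fruitPrices:
            print('Sorry, we do not have %s' % fruit)
            return None
        total += fruitPrices[fruit] * pounds
    return total

def quickSort(lst):
    """Quicksort built from list comprehensions; the first element is the pivot."""
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    smaller = quickSort([x for x in lst[1:] if x < pivot])
    larger = quickSort([x for x in lst[1:] if x >= pivot])
    return smaller + [pivot] + larger

print(buyLotsOfFruit([('apples', 2.0), ('pears', 3.0)]))  # 9.25
print(quickSort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Note how each recursive quickSort call partitions the tail of the list with a comprehension, which is why the whole function stays so short.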
We can use the FruitShop as follows. Copy the code above into a file called shopTest.py (in the same directory as shop.py) and run it:

import shop

name = 'CS 188'
fruitPrices = {'apples': 2.00, 'oranges': 1.50, 'pears': 1.75}
myFruitShop = shop.FruitShop(name, fruitPrices)
print myFruitShop.getCostPerPound('apples')

otherName = 'CS 170'
otherFruitPrices = {'kiwis': 1.00, 'bananas': 1.50, 'peaches': 2.75}
otherFruitShop = shop.FruitShop(otherName, otherFruitPrices)
print otherFruitShop.getCostPerPound('bananas')

[cs188-tf@solar ~/tutorial]$ python shopTest.py
Welcome to the CS 188 fruit shop
2.0
Welcome to the CS 170 fruit shop
1.5

So what just happened? The import shop statement told Python to load all of the functions and classes in shop.py. These import statements are used more generally to load code modules. The line myFruitShop = shop.FruitShop(name, fruitPrices) constructs an instance of the FruitShop class. The self argument is filled in automatically by the interpreter; when calling a method, you only supply the remaining arguments. The self variable contains all the data (name and fruitPrices) for the current specific instance, similar to this in Java.

Exercise: Write a function shopSmart, which takes an orderList (like the kind passed in to FruitShop.getCostOfOrder) and a list of FruitShops, and returns the FruitShop where your order costs the least amount in total. Solution

You can use range to generate a sequence of integers, useful for generating traditional indexed for loops:

for index in range(3):
    print lst[index]

If you change a module after importing it, pick up the changes with the reload command:

>>> reload(shop)
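The FruitShop class listing itself did not survive in this copy of the tutorial. A reconstruction consistent with the shopTest.py output above, together with one possible shopSmart solution, might look like this (method bodies are inferred from how they are used, so treat them as a sketch rather than the course's official shop.py):

```python
class FruitShop:
    def __init__(self, name, fruitPrices):
        """name: shop name; fruitPrices: dict mapping fruit to price per pound."""
        self.name = name
        self.fruitPrices = fruitPrices
        print('Welcome to the %s fruit shop' % name)

    def getCostPerPound(self, fruit):
        if fruit not in self.fruitPrices:
            return None
        return self.fruitPrices[fruit]

    def getCostOfOrder(self, orderList):
        """orderList: list of (fruit, pounds) tuples."""
        return sum(self.getCostPerPound(f) * p for f, p in orderList)

def shopSmart(orderList, shops):
    """Return the shop where orderList is cheapest in total."""
    return min(shops, key=lambda shop: shop.getCostOfOrder(orderList))

shopA = FruitShop('A', {'apples': 2.00, 'oranges': 1.00})
shopB = FruitShop('B', {'apples': 1.00, 'oranges': 5.00})
order = [('apples', 1.0), ('oranges', 3.0)]
print(shopSmart(order, [shopA, shopB]).name)  # A: order costs 5.0 vs 16.0
```

shopSmart falls out of the abstraction directly: because every shop answers getCostOfOrder the same way, one min over the list is enough — this is the "general-purpose code" advantage the tutorial mentions.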
Forum:We need more "front pages!" From Uncyclopedia, the content-free encyclopedia This just appeared to me The site could be organi(z)sed so that the front page pointed at several semi-featured articles. I don't mean Yesterday's featured, the one featured a year ago, or anything like that. Now there is the UnNews, the Featured, and practically the rest (For instance, recent articles is not very prominent, and it can contain unfunny stuff as well, as long as the thing has been written well enough). I don't know if my idea would make any big difference, but it might. The layout could be just about the same, but with a few more places with some highlight value. Those would get changed when people feel like it. For a practical example, now "Random page" brings up anything. I think it should bring up something good, and it should be more prominently visible - or there could be a "Random good page" or something - of course there are kazillions[1] of possibilities. What would be different - These highlights would be voted on, or maybe better yet, decided by admins or another selected (voted on?) group that would change from time to time if need be. The pace of the changes wouldn't have to be fast - or it could be whimsical, like for instance the second highlight could change in two days or more. There would be, say, 2nd, 3rd and 4th page, so to speak. The group to select the stuff for 2nd etc pages... ...should consist of people with good taste and markedly different sense of humerrrr from each other. They shouldn't be dicks but they could be if need be: if a variety of humo(u)r were guaranteed by the 2-day rule, it wouldn't matter much if they were dicks or not. So long as they weren't identical dicks. WHY THE FUCK THIS? 
Simple: more incentive to try to write good articles, even if not feature-worthy, with different sense of humour, getting a chance for a second place, to see what is funny to others without being voted on (or down because the current voters happen to have a different sense of hum). I have seen at least 37,000 (US punctuation) good articles getting voted down and totally unfunny articles featured (OK, not totally unfunny - but to me, less funny than some others) - and I know it's just a matter of taste. It really is. I have nominated maybe 3 or 4 articles to be featured, at least one of them got through, so I'm not bitter so far. I've even managed a front page with Mhaille so that's not it either - or maybe it is. I want some second places too, and I'm sure there are plenty of others who want them as well. But the main reason is, I feel the appearance of the site would improve if there were a couple of ways more to hit variable good stuff easily. Obviously bad articles would simply get a bit more under wraps by the system. Also, I think it would feel far more positive to select stuff for 2nd and 3rd page than vote for deletions. Yes but we already have this I know there are secondary highlights. Actually I don't know it, because if there are, I don't see them as such. If I'm right or wrong, I don't see how more could be bad in this case. I'm pretty sure there are plenty of good users to go through 5 minutes every few days to pick an article on 3rd or 4th page. Setting it up is probably the only actual job in it. Eggsample of how to do it Let's say we have a 10-member group (even less might be enough) for each 2nd, 3rd and 4th page, which would all have links to maybe 4 or 5 articles. Those articles would get changed every week for instance. Or, we could highlight the compartements a bit more, picking a highlight for science, sports, people, a few others. 
No, I don't want to be the one to do all the selecting I will if others want me to but it fucking well isn't the main reason, so banish the thought. I have a life too, but I happen to like Uncyclopedia and want to see it improve. Also... ...there would be added icentive for rewriting stuff. This might make it. If you're in a group to just bluntly select, not vote, stuff for second page, wouldn't you rewrite a half-ready article you see as promising? Huh? Huh? Obvious rules - The members of the group cannot highlight their own or each others' articles, at least not often. Because of this the groups must change every once in a while. - promote articles by the same writer too many (3?) times in a row at the cost of others and you are out - submit rewrites you want to promote for others to judge Democracy I know Americans are not for representational democracy but here it might work nevertheless - so please give it a chance. Now that I think of it, the group(s) of editors should be voted on, like VFH. For example, each group would stay in power for a month or so. The system could be given a year or half a year to see if it's good at all. -- Style Guide 06:58, 16 May 2009 (UTC) - The frontpage has enough stuff already. In fact, some stuff could go away. It would, of course, be replaced by more me. Woo! - Making another group to vote on another thing is problematic as we have enough trouble keeping people who vote voting on the votey pages that we already have. - There's already a "not featured" section on the frontpage, Template:Recent. - None of my comments here were in any measurable way humorous. Thus, my UnBroken streak continues! - Oh, wait. That first one has a "Woo!", indicating that what preceded it was comedy gold. It was a comment about me, probably. Sir Modusoperandi Boinc! 09:17, 16 May 2009 (UTC) - Modusoperandi - Template:Recent was addressed in the initial text. - I agree something could go away but I don't know what, and why. 
If you read everything the front page links to, you don't have much yet. What would be wrong in a sports page, science page, etc.? - there wouldn't be much voting in this one. If you measure the trouble for voters, in this one the vote would have more value. This is why I mention representational democracy. - building up your series of against-comments by adding two redundant ones seems to give me the right to do the same. This is the first such comment... - ...and this is the second. WOO-HOO! -- Style Guide 09:26, 16 May 2009 (UTC) - Wups. It is, however, prominent enough. The idea, I think, is that the Features are our best, and enjoy the highest priority on the frontpage. - Everything that didn't reference me and my awesomeness. - When VFH, VFP, Top3 & the monthly votey thingies have consistent, sexy traffic, then I can see adding another thing. Adding more just dilutes what we have, when what we have isn't all that popular. That said, yes we can, if enough others are consistently committed to it. Notice the key words, "enough", "consistently" and "committed". - You've put some thought into this. Try not to take my devil's advocate position personally. I'm a hopeless crank. That's why I'm here. It's the worst. Sir Modusoperandi Boinc! 09:35, 16 May 2009 (UTC) - My initial idea probably was born because I can read good articles from years past - some far better than what's up front, some not, of course. A new user will have a hell of a job trying to dig out the gems. If I have, say, 20 good articles properly advertised in front of me, I will probably read at least five of them. If I see links that point to different, non-defined directions, I click one or two of them, and stop clicking when I hit the first one that totally sucks. If not earlier. 
If I have a culture section, sports section, people section, and so forth, and each of them has an article as the main news, chances are I read all of those, even if they're not the very cream - and come back later to see if they have improved. I think the "second prize" idea is dead, but the compartments thingy should work for the benefit of the site. -- Style Guide 09:48, 16 May 2009 (UTC) - I initially had high hopes for the star-rating thingy, as it (cross-referenced with categories) would automatically (with the appropriate coding) generate a "good but not featured" thing. Maximum goodness with minimum effort equals happy naked dancing. Sadly, it didn't work out. I danced anyway. Sir Modusoperandi Boinc! 09:57, 16 May 2009 (UTC) - I take one example just to show I mean business. Look at the article Free_lunch on VFH. I see it clearly as front page material, as opposed to, for instance, Iron Maiden, which I found mostly tedious: too long for the idea (I give you comparison just to show I can make them). Free Lunch has 12/6 now and at least three of the against-votes are for reasons I'd never consider relevant on a humour site. Most or all of those are from people I otherwise respect, which goes to show my sense of humour is not shareware, and neither is theirs. What will become of Free Lunch if it gets voted down and the author(s) are not interested enough to improve it to some imaginary length/sense/image quality standard? Nothing much after it has passed the "recent articles" -stage, where it has no more attention than an article that's rife with typos and so forth. It would make a great supplement as a Food/Nutrition main story. Plus, the person promoting it into that would probably add something to it, or at least proofread it. Which might give someone else the idea to improve it further. Nothing lost if it got featured later, nothing lost if it didn't, but the author would get some sense of accomplishment and would go on writing. 
Sorry but I didn't check if the author(s) are such that would write in any case - so if this example is bad because of that, you can easily imagine some other article in its place. -- 10:12, 16 May 2009 (UTC) - Whether a page is being voted on VFH or it's just being read after hitting "random page", it's exactly the same; "Great", "Meh" or "Sucks". VFH is just the random button vetted by random people who decided those pages were great. VFH votes are just the opinions of semi-random eyes (as it would be after typing something into the search box). Lastly, I've completely lost my train of thought and I'm planning on falling asleep in three, two... Sir Modusoperandi Boinc! 10:33, 16 May 2009 (UTC) - While I feel it's vaguely wrong to argue with someone already asleep, I must point out that I'm not random enough to like a randomly written, inconsistent, typo-ridden article as well as any of those that have appeared on front page. But this is beside the:01, 16 May 2009 (UTC) You might be thinking of portals? A whole section dedicated to a certain subject/area of interest (sports, science, Spang, etc), which could be linked from the main page and/or sidebar? Each portal could have its own editor(s) and so each would feature articles that appeal to people with certain tastes, something that stops a lot of articles from passing VFH. And of course it wouldn't have to have the same layout as Wikipedia's main page, so you'd be more free with the layout. • Spang • ☃ • talk • 00:37, 17 May 2009 - I made a rough Sonic. It has some formatting issues, most notably, the wall-to-wall divisions, but it gives the basic idea. --Mnb'z 03:18, 17 May 2009 (UTC) - Yes, that's what I was thinking of, only I didn't know the:29, 17 May 2009 (UTC) We Already have an informal Secondary VFH System of Sorts (edit conflict) Its called Vote for Good Article. Unfortunately, it stalled out after about a month. It was basically started as a way to find funny, but not feature worthy articles. 
Although it was sometimes used to recognize good articles that were on VFH but didn't make quasi-featured status, its primary purpose was to funny articles that would never be nominated on VFH. For example, Proto-badger or Japanese Stomping Fish. --Mnb'z 00:43, 17 May 2009 (UTC) - Once more: - there wouldn't be much voting in this one. If you measure the trouble for voters, in this one the vote would have more value. This is why I mention representational democracy. - Also, voting on an article to maybe see it get somewhere is far different from voting for someone, in the hopes that the votes fall for you the next time. - of course the Cabal (which we all know doesn't exist) wouldn't share power (just like the only one Master of the Ring didn't) - it could kick anyone and everyone out of any editing group whenever it felt like doing so. - The editing part of being an editor of those portals wouldn't be a lot of work: Just to find, proofread and type the name of a good article in the right spot. The links to the portals might be above recent articles, for instance. Or maybe a bit more prominently somewhere. Above "recent" would work for most of those who already know Uncyclopedia, I suppose. - anyway, clear categories (with leading stories, the point) would make it superficially more interesting to people, since categories are fed to people all the time. - anybody applying for any such editing group would have to make a solemn promise not to cause any stupid arguments (stupid defined by Cabal I guess) at the risk of being thrown shit at, getting banned for precious minutes (they feel like eternity when you're banned), and being shown a huge penis - really huge. - Proto-Badger is exactly what I have in:34, 17 May 2009 (UTC) Of course... ...the Cabal could just go and name the groups instead of any voting - if it doesn't cause horribleness for some reason I cannot foresee. 
Since the Cabal sees all, they must naturally know who are up to the tasks without causing fuck all around. My idea with all this is just to have something I see as an improvement with little more work for the Cabal. The place is full of enthusiasts who would like some deciding power, I'm sure. But I think the voting bit would make it more variable and at least it wouldn't cause arguments among the nonexistent Cabal. Well, I can think of two. One is about starting the thing, the other is about ending it before it stinks the place:55, 17 May 2009 (UTC) Yet another idea... - The articles for those portals could just as well be picked out of VFH: those that have failed non-spectacularly, controversial ones, those that fail because of length, jokes that some cannot approve of and others can, stuff like that. The 50/50 articles and so on. If nobody wants the vote thing, this could work. If nobody wants the category portals with a main page each, I'm fresh out of ideas. - Modusoperandi: note that this just might get some sexy traffic for VFH as well. I for one haven't nominated anything for some time since I know my sense of humidity is a bit bust. I do vote but don't always want to because it feels like a waste of time to vote for something you really like and see it go down. This is just paranoia of course - but I figure many have the same one. So. Powder to the:24, 17 May 2009 (UTC) Uncyclopedian Purgartoy Or you could have a cross ref link area where articles that weren't quite good enough for VFH but showed a lot of promise can go to 'purgatory' where there is a chance they can be fast tracked back but only after modest rewrites/ spelling corrections etc. --Romartus 10:43, 17 May 2009 (UTC) On Finding Good Articles I would suggest having one person in charge of each portal at the beginning, to pick the initial featured article of the portal, the list of good articles, et cetera. Then, we could add some sort of voting system for new portal features and the like. 
--Mnb'z 16:39, 17 May 2009 (UTC) That sounds like a good idea. I had to check the front page to see where the portals were. Shouldn't they be more 'standy-outy' (a technical term) so that potential new recruits can have a look at what subjects have already been covered and those that are not ? --Romartus 17:59, 17 May 2009 (UTC) - I'm with both of you there. One person is quite enough. There could be a spare or two, though, just to keep it going in case the main man gets lazy or something. I also think they should be more standy-outish because they're new: added interest to the:49, 17 May 2009 (UTC) Whatever you do Don't include more vote pages or more award pages. Let each portal to be run by a single person. If you get more votes, you'll be killing all the official voting pages, we have too many as it is anyway. Let the guy running the portal decide. Also, as with people running the portals, I'd like to see writers rather than maintenance jockeys running the portals. But that's just me. ~ 19:06, 17 May 2009 (UTC) - A writer might be more prone to pick stuff that hits his literary taste. I say this as a writer, not as a maintenance j:29, 17 May 2009 (UTC) Suggested front page layout; Mnbvcxz's sample portalMight this be an acceptable layout for the main page? Don't make me point out I made it in 2 minutes and that I don't mean it to look exactly like this. Wait - I made myself point it out! Fuck me! Anyway, the yellow rectangles are there just to show where the portals could (and I think should) be, they are not the deadly yellow snow. --)}" > - I got a sample version Here for Sonic the Hedgehog. I looks like it shouldn't be too much work to maintain, as long as writers properly whoreinform the portal operators of good articles they make. Also, I don't plan on being the Sonic portal operator, that is just a sample setup. --Mnb'z 04:13, 18 May 2009 (UTC) - All right, that's pretty much what I had in mind. 
The (still nonexistent) Cabal will decide on what happens with front page, right? Mordillo says there "Whatever you do..." (oops he didn't say so, or if he did, he changed it - whichever way, it's not there now {Yes it is, as a header, you bat}) which seems to mean he doesn't have much against the general idea. How about the rest of them, and/or who's the bossman:28, 18 May 2009 (UTC) - There is no boss of the frontpage. It's like the Wild West, but with pale kids with bad skin and allergies. Sir Modusoperandi Boinc! 04:29, 18 May 2009 (UTC) Header needed, what next? What happens next if anything? Who decides the categories to be promoted? Then some admins draws his gun and either acts or violently doesn't act according to the suggestion, I guess? It seems to me there must be a few portals ready, at least 4, before they hit front page. Which means I, the prominent revolutionary, should get an OK from someone who can decide about the front page, and fish for people to work those portals, and submit their names for someone to decide. Or would some admin ask around and just pick? That seems a lot quicker. --:39, 18 May 2009 (UTC) - Spang says in the other forum the frontpage should match Wikipedia's. OK, that's a reason not to change it much. Could someone please tell me now if this thing I suggest will not happen (because of that or something else) so I can stop wasting my time with it? It's been weekend so I understand why most haven't commented. I have to clarify to those who don't want to read all my crap: the layout is not important to me but the accessibility of non-featurable but otherwise good content in easy categories, selected by someone. I would just as well like to see the categories in the upper right corner get their own front pages, with no change to front page layout. I also repeat this for the benefit of those who don't want to read all the above argument: I would like to change the front page to something a bit less random. 
There would be a "second prize" set of GOOD articles, categorized, so that people would get quick access to next-to-top stuff according to what they want to read about. Also, it would be a positive thing for writers to get their almost-featured articles appear on one of the category portal pages. --:51, 18 May 2009 (UTC) - This is wiki, and it operates on the general principle of {{sofixit}}. I would suggest you create a sample portal, even if its just to create a "template" for the layout. I also don't think the cabal is going to create a list subjects that get to have a portal, it would probably work like the articles and categories. I.e. everything gets created haphazardly by whoever gets around to doing it. Even if the portals don't get approved to the mainpage, they still would have a function. - Also, editing this rapidly will not get you a response sooner. This post has only been up for a while, and the larger a post gets, the less likely someone is to respond. (I know this from experience.) --Mnb'z 05:15, 18 May 2009 (UTC) - That's not why I edit it rapidly, but because I get fresh ideas and questions all the time. I made the short version in my previous edit because of what you just told me. Also, you just made a sample portal which looks fine to me. I'll make another if you think it has any use without a link:47, 18 May 2009 (UTC) - I can see the top box on the frontpage going from, say, "*Politics" to "*Politics Portal" (perhaps with the option thingy to randomisate them). That would both not mess with the formatting too much and attract people looking for the gravity gun. As far as the Cabal goes, though, my opinion doesn't mean much, 'cause I'm #149. Spang is #7. Mordillo is some weird squiggly thing. He says it means "#1". I say he's full of shit. Sir Modusoperandi Boinc! 
06:06, 18 May 2009 (UTC)

Perhaps like this:

- leave existing category texts as they are
- add some Portals logo or whatnot in front of them to point out there has been a change
- make the existing texts link to said portal front pages, which then would have a clearly visible link to the existing category page in addition to the top stories.
- probably each portal front page should also have a link to the Categories

:26, 18 May 2009 (UTC)

At this point, I doubt we actually will change the layout of the main page; I for one am very much against changing the current layout (except for perhaps trimming it a bit). A suggestion I have is: chop banners for each of the portals, to be small in size so we can fit them into the current category list, and change the individual categories with portals, and then keep a general "category" link as is the case now. This way, we make them prominent enough without messing up the front page. ~ 11:37, 18 May 2009 (UTC)

- Yes, that's almost the same thing I meant. It's also easier to work it out if they're only sideroads - no need to set them all up at once, I suppose, but add the banners at the rate the portals get finished. How many portals would you say is the maximum without clogging the space too much? 5:57, 18 May 2009 (UTC)

- Correct me if I'm wrong, but isn't this a bit of a waste of time? We barely have enough 'specialist pages' on a topic to justify portal-type sections of the site (with vast amounts being patchy even in very full categories). We have namespace article sections (Unbooks, HowTo etc) which is a similar idea; however, considering the activity going on those, starting up new sections/portals seems a little pointless and unnecessary (does this add anything to our articles, making them funnier or better?). Wikipedia has a similar thing to my understanding, but they have lots of really nerdy types who will write articles on every facet of the topic.
We don't, we write shorter, more humour-orientated articles to parody Wikipedia, not really become it. At a time when site unity and morale could be better, as well as having more targeted writing for the site's purpose, this does not seem to be a great avenue. It will take time and effort of users to become half decent, and even if completed to some standard, it will lead the site in a bad direction (assuming there's the kind of numbers that could write enough to even make it worthwhile). --Sycamore (Talk) 13:22, 18 May 2009 (UTC)

- I do think you're wrong for some of the reasons I mention at different spots in the forum. Well, one main reason actually. There are plenty of good articles that go relatively unnoticed because of general unworthiness to front page (I often feel it's only a matter of taste). I want them to get promoted above the "recent article" and "news", which don't have much control. At least "recent" doesn't. I don't want anyone to start writing new articles for the portals, just to promote stuff that is good but doesn't make main page. Summing up some of my thoughts:

- categorized good articles for the readers (doesn't make them funnier, just easier to find among the bad ones. The editor does that for the reader.)
- some prize for the writer whose sense of humour is good but deviates from the one of those who happened to vote this time. Makes subsequent articles better and funnier because of added incentive. It's all there ^ somewhere.
- I think the people to run the portals must be some who understand what makes a good article: at the very least, proofreading - and making funnier if at all possible - yes, I think this would also improve the articles if done right.

What do you mean "barely enough..."? 24,000 articles, I think it's possible to slap up a few decent portals every few weeks. They don't even need to stay rigid: if supply for one category runs too low, dump the portal and start another one, or dump it for good.
Note that I'm not trying to spread all of the site all over the place - just, I repeat, promote what already is good.

- define "waste:40, 18 May 2009 (UTC)

Sample Portal

- Art - Literature - Film - Games - Bloodbath - Uncyclopedia Games - Video Games - Sonic - Mario - History - Music - People - Comedians - Politics - Religion - Science - Pregnancy - Society - Television - Theatre

I've re-worked this a bit so it should take less effort to update, and it will work more like a navigation tool than as a backup award system. Oh, and I whore VFH a lot in that article. Right now, the closest thing to voting would be suggestions for "good but not yet quasi/featured" articles, and suggestions for the random image in the top left. (I haven't gotten around to randomizing it yet, but it shows what it would look like.) What we need is a process to find good articles without going through the trouble of having 10 people read each article, but I digress... Anyway, I probably could stick that, or something like that, in the categories. Also, all the "portal operator" needs to do is pick the "good" articles. (And less importantly, pick the images for the random images on the top right.) Unfortunately, we would either be stuck with a "Good Article Pope" or moar voting. --Mnb'z 16:33, 18 May 2009 (UTC) (had a couple transclusion-induced formatting problems, that should be ok now --Mnb'z 16:36, 18 May 2009 (UTC))

My intended idea...

...was about somewhat wider categories than just one game/character/thingy. Of course Sonic is just as useful as my ideas if people like it, etc. and if Cabal says it's OK. Mnbvcxz - where else are these portals useful if they don't get front page space? I'll make one on science one of these days if there's some use. Do you mean Sonic to be a portal to any good game articles?
As such, it would serve a better purpose - more content has been 16:54, 18 May 2009 (UTC) Creating Lists Due to my bot powers, I have discovered that I (or anyone else with a bot) can rather easily generate lists of featured articles for categories (and their subcategories.) However, for this to work right, it requires that the featured category be properly maintained. (Which it generally is.) Any article incorrectly placed in the feature category will create a false positive. Additionally, transclusions of featured pages into other pages will also create false positives. However, most false positives are easy to spot; anything in userspace is probably a false positive. As an example, the featured articles in Category:Politics and Government and its 1st order subcategories include: - "Paul is dead" hoax - Adolf Hitler - Al Gore - Alien vs. Predator - All Your Base Are Belong To Us - Argos - Axis of Evil Hot Dog Eating Competition - Battle of the Sexes - Battleship Potemkin - Biggles - Bloodbath - Certificate of Hitlertude - Civil War - Colin Powell - Colonel - Conservapedia - Democrazy - Diplomacy - DOHS Anti-Terrorism Regulations - Down with this sort of thing! - Dwight D. Eisenhower - Education - Fascist - Ferdinand von Zeppelin - French Revolution - Future ☭f tomorrow, today! - George Dubya Bush - George W. Bush - George W. Bush (featured) - Gerrymander (politics) - Golf War - Google Middle Earth - Grand Conspiracy - Grand Theft Audio - Gratuitous Anime Panty Shot - Harry S. Truman - Horatio Hornblower - HowTo:Bluff Your Way to Political Power - Illegal aliens from outer space! 
- Inanimate Sponge - Intelligent Design - IPod yocto - J'accuzzi - Jaws did WTC - League of Nations - Letter to the Isle - List of people you do not want to come face to face with in a narrow hallway - Lord Sauron - Mae Zedong - Maginot Line - Mahmoud Ahmadinejad - Make Poverty History - Manhattan Engineering District - Maozilla - Martin Van Buren - Missing milk - Moon hoax - Napoleon Bonaparte - NASA - National Try To Assassinate The President Day - Nazi - New Hampshire Merchant Cat - Newmath - Nike Revolution of 2006 - No More Room In Hell Act - North Korea - Orange construction barrels - Pacific War - Political advertising - Pot v. Kettle - Private Eye - Redundancy - Robert Mugabe - Rock, Paper, Airstrike - Royal Pointless Military Things Tournament - Russian reversal (phenomenon) - Scotland - Self-toasting bread - Senator - Social Commentary - Society for the Intervention and Rehabilitation of Supervillains - Stereotype Reassignment Surgery - Taft Punk - Teabag everything that moves - The Color Problem - The defense rests, your honor - The Free World - The Pun Invasion of Uncyclopedia - The Siege of Bordeaux - Titshugger Penishead McFucknutter - Torture - UnBooks:Practical Lessons on Communism - UnBooks:The Digital Divide - UnCameron - UnScripts:Song of the South - Vietnam War Hoax - Vote Fish Penis - Voynich Manuscript - Walgreens Drug Store - War on Terra - Why?:Cancer is Great - Why?:Does Christopher Meloni not have an emmy yet? 
- Wikipedia - Wikipedia/old - William Gladstone - Witless Protection Program - WMD (Donuts) - Women's Suffrage - World War IV - You should talk about ponies - A letter to Bill Richardson

Known false positives in that list (all userspace):

- User:Ethereal/Wikipedia - archived copy, probably shouldn't have the featured tag on it
- User:Galvatron - false positive, template spammage, can't find why
- User:MrCleveland - false positive due to article transclusion

I went back to the direct subcategories of Category:Politics and Government only. Going to the sub-cats of the sub-cats adds too many unrelated articles, at least for this category. Anyway, it should be easy to get featured and quasi-featured lists of articles for the portals on broad subjects. --Mnb'z 17:44, 18 May 2009 (UTC)

All right - I'm not much of a spelunker myself. I studied 3D software instead of other computer skills. Anyway - good to know. I guess this means that you can find (probably) good articles easily. Also, I had a look at the main page source. It's clear that if I want to make a portal I'll either need to copy and edit something that's already there, or get a lot of help, or shoot myself. --:57, 18 May 2009 (UTC)

- I can find featured and quasi-featured articles through category comparisons. Unfortunately, some of the more specific subjects have only a couple of these, if that. --Mnb'z 19:06, 18 May 2009 (UTC)

Right, this forum is getting out of hand

Multi, this has turned into a conversation between you and Mnb, no one else is looking at it and no one will bother reading it, and you will not be carrying out any changes on the front page without some sort of consent from someone. My proposal: kill this forum and put a very, very short version of it on a new one. I've lost count of what you're trying to achieve here, and I've been reading it more or less from the start. ~ 20:19, 18 May 2009 (UTC)

- Hey, Mordillo! I've been clogging up this forum from the very beginning! So there!
Sir Modusoperandi Boinc! 20:25, 18 May 2009 (UTC)

- Yes, I did notice earlier this is unreadable. I would have put it short in the very beginning if the idea had been ready - but there was no chance of that being the case. Will do as you:56, 19 May 2009 (UTC)

- I am reading it. Also, wouldn't you have to read through this one to understand the next one? Anyway, Modusoperandi's earlier suggestion of turning the top right section links into portal page links is the best one. Which has the bonus of being exactly what Wikipedia do now. Just copy one of their portals and have at it. • Spang • ☃ • talk • 05:13, 19 May 2009

Better ramble on here than somewhere else

Ok, if portals are a no-go, what about adding templates with a list of featured(/quasi) articles in a given category. As an example, here is one from the list above for Category:Politics and Government. This includes features in that category and its immediate subcats, but not in the subcats of its subcats. Anyway, here is what it would look like:
http://uncyclopedia.wikia.com/wiki/Forum:We_need_more_%22front_pages!%22?oldid=5325318
Button inherits stylesheet for no reason

The button "last_button" for some reason takes on the same style sheet as "the_button". This did not happen before I replaced QPushButton with QClickAgnostic.

MainWindow.h

    private slots:
        void on_btn_left_click();
        void on_btn_right_click();

    private:
        QClickAgnostic * last_button = qobject_cast<QClickAgnostic *> ( sender() );

MainWindow.cpp

    void MainWindow::on_btn_left_click()
    {
        // Styles the previously selected button to look like a normal button.
        last_button->setStyleSheet( /* ... */ );

        // Retrieves the pressed button.
        QClickAgnostic * the_button = qobject_cast<QClickAgnostic *> ( sender() );

        if ( the_button ) {
            // code...

            // Styles the current button to look like a clicked button.
            the_button->setStyleSheet( /* ... */ );
        }

        // Makes last_button this button, since this button will be last_button
        // the next time a click event occurs.
        // Has never, until QClickAgnostic, inherited the style sheet of the_button.
        last_button = the_button;
    }

qclickagnostic.h

    #ifndef QCLICKAGNOSTIC_H
    #define QCLICKAGNOSTIC_H

    #include <QPushButton>
    #include <QMouseEvent>

    class QClickAgnostic : public QPushButton
    {
        Q_OBJECT
    public:
        explicit QClickAgnostic(QWidget *parent = nullptr);

    private slots:
        void mouseReleaseEvent(QMouseEvent *e);

    signals:
        void rightClicked();
        void leftClicked();

    public slots:
    };

    #endif // QCLICKAGNOSTIC_H

qclickagnostic.cpp

    #include "qclickagnostic.h"

    QClickAgnostic::QClickAgnostic(QWidget *parent) : QPushButton(parent)
    {
        // empty
    }

    void QClickAgnostic::mouseReleaseEvent(QMouseEvent *e)
    {
        if(e->button() == Qt::LeftButton) {
            emit leftClicked();
        }
        else if(e->button() == Qt::RightButton) {
            emit rightClicked();
        }
    }

Hi. But it seems you just set a pointer:

    last_button = the_button;

so last_button just points to the_button, and hence it will look exactly the same.

Yes, it seems as if you're right. How do I solve this?
I haven't had this issue before when I declared both the_button and last_button as QPushButton instead of my custom QPushButton class.

@legitnameyo I'm not an expert C++-er, but I don't get how your code works.

    private:
        QClickAgnostic * last_button = qobject_cast<QClickAgnostic *> ( sender() );

What is sender() at the point this is initialized?

    void MainWindow::on_btn_left_click()
    {
        last_button->setStyleSheet( /* ... */ );

What is the value of last_button the very first time this function gets called?

- mrjj Lifetime Qt Champion

How do I solve this?

By doing it the Qt way. I assume you have a grid of buttons and only one to be selected at a given time, so if you select another it should be drawn as normal and the new one as pressed. At least that's how I read your code. You can do that very easily with a QButtonGroup and the Checkable property on each button. Then Qt does the housekeeping for you. You can style the look when "selected" with QPushButton:checked. That way you don't need last_button, and you can apply the stylesheet to the parent and it will just work with setting / resetting the style.

result

What is sender() at the point this is initialized? As it's a qobject_cast and doesn't crash on start up, most likely a nullptr?

What is the value of last_button the very first time this function gets called? As it's not checked against 0, it should fail. Maybe luck?

Using sender() is bad design anyway. One shouldn't use it if possible. @mrjj posted a nice Qt alternative, I would suggest the OP to use that ;)

- legitnameyo

How do I implement this the code way? I do my UI in code since that's what I prefer. This is what I've got so far and it isn't working...

    QButtonGroup *g_btns = new QButtonGroup(this);

    /* all buttons are created and I add the buttons in about this way */
    btn1->setCheckable(true); // etc...
    g_btns->addButton(btn1);
    g_btns->addButton(btn2);
    g_btns->addButton(btn3);
    // etc ...

    connect(g_btns, SIGNAL(buttonClicked(int)), this, SLOT(onGroupButtonClicked(int)));

    void MainWindow::onGroupButtonClicked(int btn_id)
    {
        qDebug() << "clicked!";
    }

And this gives me nothing. The onGroupButtonClicked does not activate.

@legitnameyo I never used QButtonGroup, but a quick look into the documentation tells me you never gave your added buttons an ID:

    void QButtonGroup::addButton(QAbstractButton *button, int id = -1)

I'm guessing here and say that buttons with the ID of -1 do not emit the buttonClicked(int) signal.

- jsulm Qt Champions 2018

@legitnameyo It should work. Do you get any warnings at runtime (related to connect)? Is g_btns in connect() really the one you're creating at the beginning of the code snippet you provided?

@J.Hilk Was a good guess, but it seems it will use negative values for auto numbering and does, in fact, emit them.

    QButtonGroup *buttonGroup = new QButtonGroup(this);
    buttonGroup->addButton( ui->pushButton_2);
    buttonGroup->addButton( ui->pushButton_3);
    buttonGroup->addButton( ui->pushButton_4);
    buttonGroup->addButton( ui->pushButton_5);

    connect(buttonGroup, static_cast<void(QButtonGroup::*)(int)>(&QButtonGroup::buttonClicked),
        [ = ](int id)
    {
        qDebug() << id;
    });

outputs:

    -2
    -2
    -3
    -5
    -4

@mrjj I find this behavior actually not very intuitive. That the id increases when none is specified, maybe... But a higher negative number!?

@J.Hilk Yep. I had to test it, as it sounded odd it would auto number with negative values. But it seems the design wants to allow having some auto numbered and some manual, even if I don't see why.

@mrjj Negative-decrementing auto-numbers are usually used for "anonymous" items while allowing positive numbers for explicit use. Then the numbers won't clash.

There is NO error such as this one below:

    QMetaObject::connectSlotsByName: No matching signal for onGroupButtonClicked(int)

And yes, g_btns is defined only ONCE. Everything is in the order I got it in my code.
However, one thing that might change things up is that I use a custom class for my buttons called QClickAgnostic, which inherits from QPushButton as shown above, and the buttons are created like this:

    QClickAgnostic *btn = new QClickAgnostic(ui->frame_of_btns);

- KillerSmath

@legitnameyo You need to notify the ButtonGroup about the click event. I took a look at the source code of the abstract button and noticed the button directly notifies the group that it was clicked. But you broke this step when you reimplemented the MouseReleaseEvent function. See a snippet of this step:

    void QAbstractButton::mouseReleaseEvent(QMouseEvent *e)
    {
        ...
        if (hitButton(e->pos())) {
            d->repeatTimer.stop();
            d->click(); // call click function
            e->accept();
        } else {
            setDown(false);
            e->ignore();
        }
    }

    void QAbstractButtonPrivate::click()
    {
        ...
        if (guard)
            emitReleased();
        if (guard)
            emitClicked(); // call emitClicked
    }

    void QAbstractButtonPrivate::emitClicked()
    {
        ...
    #if QT_CONFIG(buttongroup)
        if (guard && group) {
            emit group->buttonClicked(group->id(q)); // emit the signal
            if (guard && group)
                emit group->buttonClicked(q); // emit the signal
        }
    #endif
    }

So, you can solve this problem by calling the click() function inside MouseReleaseEvent:

    void QClickAgnostic::mouseReleaseEvent(QMouseEvent *e)
    {
        if(e->button() == Qt::LeftButton) {
            click(); // called
            emit leftClicked();
        }
        else if(e->button() == Qt::RightButton) {
            click(); // called
            emit rightClicked();
        }
    }

@KillerSmath Good digging!

- legitnameyo

@KillerSmath's fix did the work, combined with @mrjj's QButtonGroup! I added this code to style the pressed button:

    void MainWindow::onGroupButtonClicked(int btn_id)
    {
        g_btns->button(btn_id)->setStyleSheet( /* ... */ );
    }

Thanks a lot guys!
https://forum.qt.io/topic/101998/button-inherit-stylesheet-for-no-reason
Deploying an application on the cloud doesn't mean just deploying code. It means deploying all of the infrastructure - servers, storage, and other services - that your application requires to run. Two of the most popular tools for deploying infrastructure-as-code on AWS are AWS CloudFormation and the Cloud Development Kit (CDK). But which one should you use? Let's look at the capabilities as well as the pros and cons of each.

What is CloudFormation?

AWS CloudFormation is the foundational technology for deploying infrastructure on the cloud service. CloudFormation is a declarative language for defining infrastructure components such as virtual networks, virtual machines, and virtually (see what I did there?) any other AWS resource to boot. Generally, if you can create an AWS resource in the console or via a programmatic SDK, you can create it via CloudFormation.

Using CloudFormation, your team can forge its application into a template that you can deploy in a repeatable fashion. You can even parameterize your templates so you can use them to deploy your application in multiple stages.

CloudFormation architecture basics

A CloudFormation deployment consists, at a minimum, of either a JSON or a YAML file - called a template - that defines AWS resources. Developers use templates to create a set of resources called a stack.

A template is broken up into several major sections:

- Parameters: A series of variables whose values are supplied to the template
- Resources: The resources to create in your AWS account.
- Outputs: Values resulting from resources created by the template.

When you create a stack from a template - called launching a stack - you can supply parameters either as arguments in the Management Console or via a separate parameters file. A stack is complete when all of its resources finish creating. You can see the progress, as well as other values such as outputs, in the AWS Management Console.
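A minimal JSON-format template showing the three sections side by side — sketched here as a JavaScript object literal for readability. The parameter and resource names ("BucketNamePrefix", "AppBucket") are invented for illustration:

```javascript
// A minimal CloudFormation template with the three major sections.
// Names such as "BucketNamePrefix" and "AppBucket" are illustrative only.
const template = {
  Parameters: {
    // A value supplied when the stack is launched.
    BucketNamePrefix: { Type: 'String', Default: 'myapp' }
  },
  Resources: {
    // The AWS resources the stack creates.
    AppBucket: {
      Type: 'AWS::S3::Bucket',
      Properties: { VersioningConfiguration: { Status: 'Enabled' } }
    }
  },
  Outputs: {
    // Values produced by the created resources.
    BucketName: { Value: { Ref: 'AppBucket' } }
  }
};

console.log(JSON.stringify(template, null, 2));
```

At launch time, CloudFormation substitutes the supplied parameter values into the Resources section and publishes the Outputs once the stack completes.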
Once a stack is created, you can create change sets to alter the resources in your stack. You can also delete a stack, which will reclaim all associated AWS resources.

In addition to creating base infrastructure, you can use CloudFormation to ship your application code. CloudFormation supports all of the major code deployment mechanisms in AWS, including Lambda, CodeDeploy, and Elastic Container Service (ECS).

CloudFormation has multiple methods that encourage code reuse. Instead of putting your entire infrastructure into a single template, you can spread it across multiple templates and chain their creation. This enables you, for example, to deploy a Virtual Private Cloud (VPC) in one template that you then populate with resources created in other templates.

You can further divide CloudFormation templates up into reusable modules. To take the previous example, you could create a module that defines a VPC that other developers can import into their own templates. You register modules in the CloudFormation registry in your AWS account, where other developers on your team can find and reuse them.

You can also use CloudFormation templates to deploy AWS resources across multiple accounts using CloudFormation StackSets. This is useful for companies that deploy independent stacks on behalf of customers.

What is the AWS CDK?

The AWS Cloud Development Kit (CDK) provides some of the same benefits of CloudFormation but with a few key differences. The CDK is an infrastructure-as-code solution that you can use with several popular programming languages. In other words, it's like CloudFormation, but using a language you already know.

The CDK also contains command line tools to create infrastructure-as-code templates and to instantiate, update, and tear down stacks. Under the hood, the CDK generates CloudFormation templates for its deployments. It's essentially a way to generate CloudFormation using higher-level constructs.
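The idea of "higher-level constructs" can be pictured with a toy model in plain JavaScript before getting into the details. This is not the real aws-cdk-lib API — every class and property name below is invented — but it shows the layering: a mid-level construct is a raw resource with sensible defaults baked in, and a pattern composes several constructs into one solution.

```javascript
// Toy model of CDK construct layering -- NOT the real aws-cdk-lib API.

// A bare resource: every property must be spelled out by hand.
class RawBucket {
  constructor(props) {
    this.type = 'AWS::S3::Bucket';
    this.props = props;
  }
}

// The same resource with opinions baked in: public access is
// blocked by default unless the caller explicitly overrides it.
class SafeBucket extends RawBucket {
  constructor(props = {}) {
    super({ publicAccessBlocked: true, ...props });
  }
}

// A pattern: several constructs wired together into one solution.
class StaticSite {
  constructor() {
    this.bucket = new SafeBucket({ websiteEnabled: true });
    this.cdn = { type: 'AWS::CloudFront::Distribution', origin: this.bucket };
  }
}

const site = new StaticSite();
console.log(site.bucket.props); // defaults merged with overrides
```

Swap the invented classes for real CDK constructs and the shape stays the same: lower-level pieces expose everything, higher-level pieces bake in intent.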
AWS CDK architecture basics

The building blocks of the CDK are constructs. Constructs function like resources in CloudFormation but with a twist. A construct can exist at one of three levels:

- L1: A basic AWS resource.
- L2: An AWS resource with intent - e.g., an Amazon S3 bucket deployed with specific settings, such as disabling public access.
- L3: A pattern - i.e., a collection of L1 and L2 constructs knitted together into a single solution meant to address a specific real-world scenario.

To use the CDK, you create an app with a command-line call:

    cdk init app --language typescript

You add resources to your app by creating constructs with specific values. For example, the following TypeScript code creates an Amazon S3 bucket:

    import * as cdk from 'aws-cdk-lib';
    import { aws_s3 as s3 } from 'aws-cdk-lib';

    export class HelloCdkStack extends cdk.Stack {
      constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
        super(scope, id, props);

        new s3.Bucket(this, 'MyFirstBucket', {
          versioned: true
        });
      }
    }

Once you're finished writing your code, you can deploy it using the cdk deploy command. To deploy it, you need to specify an environment, which consists of an AWS account and an AWS region.

Comparing CloudFormation and the CDK

Both CloudFormation and the CDK share a number of benefits. They both enable you to:

- Define your infrastructure as code, check it into source control, and make it part of your CI/CD deployment pipeline
- Automate all aspects of an AWS application deployment
- Create repeatable deployments across multiple stages

The way they each go about this, however, is different. And each has its strengths and weaknesses.

CloudFormation Pros

A plus in CloudFormation's column is its completeness and ubiquity. It's AWS's foundational infrastructure-as-code tech and is integrated into multiple AWS DevOps features, including CodeBuild and CodePipeline. It's been around for years and you can find a wealth of sample code online.
(AWS themselves publish a pretty thorough repo of samples.)

Another benefit of CloudFormation is that the service handles parallelization and sequencing for you. If two resources are independent, CloudFormation can initiate creation simultaneously. It can also detect dependencies between two resources and create them in the proper order. Template developers can further specify a DependsOn relationship to specify explicit dependencies.

CloudFormation Cons

The major downside of CloudFormation is that it's another thing to learn. Rather than work in a language and programming model you're familiar with, you have to learn either the JSON or YAML format that CloudFormation defines. This can be an impediment to getting your development team more closely involved in building out your app's infrastructure.

Because CloudFormation is a declarative syntax and not a programming language, it lacks some helpful constructs that make authoring easier. The lack of type checking, for example, means you won't catch some obvious errors until runtime.

CDK Pros

Many developers end up embracing the CDK because it allows them to utilize their favorite programming language. Instead of learning an entirely new syntax, developers can create infrastructure using the same language they use to code their applications.

Additionally, the CDK provides a more structured reuse format than CloudFormation. The three-tiered reuse level of components, intents, and patterns means you can build up a library of reusable components and patterns your entire organization can use to build infrastructure and ship applications more quickly.

Finally, CDK code is more testable than CloudFormation. The CDK contains a testing framework you can use to test both the validity of the values it generates in AWS CloudFormation and to generate "snapshot" diffs against previous versions.

CDK Cons

The biggest downside with the AWS CDK is the level of experience required to use it.
AWS itself recommends that CDK users be "moderately to highly experienced" in AWS and CloudFormation already. In other words, it's not a technology you'll want to use if your team is new to the cloud.

Another downside is the limited language support. If you develop applications in any of the six languages (TypeScript, JavaScript, Python, Java, C#, Go) that the CDK supports, then you're good to go. If you program in something else (e.g., Rust), and no one on your team is proficient in one of the supported languages, then the benefits of using the CDK drop dramatically.

Additionally, since the CDK generates CloudFormation, it can be difficult to debug on occasion. If the generated CloudFormation generates a runtime error, it may take some time to figure out how that error maps to changes in your codebase.

Which is the best for you?

Whether you use CloudFormation or the CDK will depend on a few factors:

- Your team's experience level with AWS and CloudFormation
- What programming languages your team knows
- Your plans for reusability

If you're just starting out on your AWS journey, CloudFormation is the way to go. It's highly tested, well-supported, and well-documented. Start by finding sample templates that are a close fit to your app and modifying them to suit your needs. As your project evolves, you can grow your suite of templates and gradually refactor reusable components into modules.

For teams that have been using AWS and CloudFormation for a while and want to up their game, the CDK is a logical next progression. Identify what you're currently shipping in CloudFormation and break down how you would refactor that into CDK components.

Other considerations

If your team already has considerable experience with infrastructure-as-code on another platform, you may feel comfortable jumping into the CDK straightaway. For example, if you've used ARM templates or Bicep on Azure, CloudFormation will already feel familiar.
You can go straight to the CDK and reap some of the advanced benefits it offers, such as type safety and enhanced testing.

The TinyStacks approach to CDK and CloudFormation

At TinyStacks, we've built an infrastructure-as-code platform that enables application developers to ship their code in a day instead of weeks. There's no need for everyone to be an expert in CloudFormation or the CDK. Define your deployment frameworks in a centralized manner. To see how TinyStacks can revolutionize your DevOps deployments, sign up and check it out today.
https://www.tinystacks.com/blog-post/aws-cdk-vs-cloudformation/
Code style CKEditor 5 development environment has ESLint enabled both as a pre-commit hook and on CI. This means that code style issues are detected automatically. Additionally, .editorconfig files are present in every repository to automatically adjust your IDEs settings (if it is configured to read them). Here comes a quick summary of these rules. # General LF for line endings. Never use CRLF. The recommended maximum line length is 120 characters. It cannot exceed 140 characters. # Whitespace No trailing spaces. Empty lines should not contain any spaces. Whitespace inside parenthesis and before and after operators: function foo( a, b, c, d, e ) { if ( a > b ) { c = ( d + e ) * 2; } } foo( bar() ); No whitespace for an empty parenthesis: const a = () => { // Statements... }; a(); No whitespace before colon and semicolon: let a, b; a( 1, 2, 3 ); for ( const i = 0; i < 100; i++ ) { // Statements... } # Indentation Indentation with tab, for both code and comments. Never use spaces. If you want to have the code readable, set tab to 4 spaces in your IDE. class Bar { a() { while ( b in a ) { if ( b == c ) { // Statements... } } } } Multiple lines condition. Use one tab for each line: if ( some != really.complex && condition || with && ( multiple == lines ) ) { // Statements... } while ( some != really.complex && condition || with && ( multiple == lines ) ) { // Statements... } We do our best to avoid complex conditions. As a rule of thumb, we first recommend finding a way to move the complexity out of the condition, for example, to a separate function with early returns for each “sentence” in such a condition. However, overdoing things is not good as well and sometimes such a condition can be perfectly readable (which is the ultimate goal here). # Braces Braces start at the same line as the head statement and end aligned with it: function a() { // Statements... } if ( a ) { // Statements... } else if ( b ) { // Statements... } else { // Statements... 
} try { // Statements... } catch ( e ) { // Statements... } # Blank lines The code should read like a book, so put blank lines between “paragraphs” of code. This is an open and contextual rule, but some recommendations would be to separate the following sections: - variable, class and function declarations, if(), for()and similar blocks, - steps of an algorithm, returnstatements, - comment sections (comments should be preceded with a blank line, but if they are the “paragraph” themselves, they should also be followed with one), - etc. Example: class Foo extends Plugin { constructor( editor ) { super( editor ); /** * Some documentation... */ this.foo = new Foo(); /** * Some documentation... */ this.isBar = false; } method( bar ) { const editor = this.editor; const selection = editor.model.document.selection; for ( const range of selection.getRanges() ) { const position = range.start; if ( !position ) { return false; } // At this stage this and this need to happen. // We considered doing this differently, but it has its shortcomings. // Refer to the tests and issue #3456 to learn more. const result = editor.model.checkSomething( position ); if ( result ) { return true; } } return true; } performAlgorithm() { // 1. Do things A and B. this.a(); this.b(); // 2. Check C. if ( c() ) { d(); } // 3. Check whether we are fine. const areWeFine = 1 || 2 || 3; this.check( areWeFine ); // 4. Finalize. this.finalize( areWeFine ); return areWeFine; } } # Multi-line statements and calls Whenever there is a multi-line function call: - Put the first parameter in a new line. - Put every parameter in a separate line indented by one tab. - Put the last closing parenthesis in a new line, at the same indendation level as the beginning of the call. Examples: const myObj = new MyClass( 'Some long parameters', 'To make this', 'Multi line' ); fooBar( () => { // Statements... 
} );

fooBar( new MyClass(
	'Some long parameters',
	'To make this',
	'Multi line'
) );

fooBar(
	'A very long string',
	() => {
		// ... some kind
		// ... of a
		// ... callback
	},
	5,
	new MyClass(
		'It looks well',
		paramA,
		paramB,
		new ShortClass( 2, 3, 5 ),
		'Even when nested'
	)
);

Note that the examples above are just showcasing how such function calls can be structured. However, it is best to avoid them. It is generally recommended to avoid having functions that accept more than 3 arguments. Instead, it is better to wrap them in an object so all parameters can be named. It is also recommended to split such long statements into multiple shorter ones, for example, by extracting some longer parameters to separate variables.

# Strings

Use single quotes:

const a = 'I\'m an example for quotes';

Long strings can be concatenated with plus ( + ):

const html = 'Line 1\n' +
	'Line 2\n' +
	'Line 3';

or template strings can be used (note that the 2nd and 3rd line will be indented in this case):

const html = `Line 1
Line 2
Line 3`;

Strings of HTML should use indentation for readability:

const html = `<p>
	<span>${ a }</span>
</p>`;

# Comments

- Comments start with a capital first letter and require a period at the end (since they are sentences).
- There must be a single space at the start of the text, right after the comment token.

Block comments ( /** ... */ ) are used for documentation only. Asterisks are aligned with space:

/**
 * Documentation for the following method.
 *
 * @returns {Object} Something.
 */
someMethod() {
	// Statements...
}

All other comments use line comments ( // ):

// A comment about the following statement.
foo();

// Multiple line comments
// go through several
// line comments as well.

Comments related to tickets or issues should not describe the whole issue fully. A short description should be used instead, together with the ticket number in parenthesis:

// Do this otherwise because of a Safari bug. (#123)
foo();

# Linting

CKEditor 5 development environment uses ESLint and stylelint.
A couple of useful links:

- Disabling ESLint with inline comments.
- CKEditor 5 ESLint preset (npm: eslint-config-ckeditor5).
- CKEditor 5 stylelint preset (npm: stylelint-config-ckeditor5).

Avoid using automatic code formatters on existing code. It is fine to automatically format code that you are editing, but you should not change the formatting of code that is already written, so as not to pollute your PRs. You should also not rely solely on automatic corrections.

# Visibility levels

Each class property (including methods, symbols, getters or setters) can be public, protected or private. The default visibility is public, so you should not document that a property is public — there is no need to do this.

Additional rules apply to private properties:

- The names of private and protected properties that are exposed in a class prototype (or in any other way) should be prefixed with an underscore.
- When documenting a private variable that is not added to a class prototype (or exposed in any other way), @private is not necessary.
- A symbol property (e.g. this[ Symbol( 'symbolName' ) ]) should be documented as @property {Type} _symbolName.

Example:

class Foo {
	/**
	 * The constructor (public, as its visibility isn't defined).
	 */
	constructor() {
		/**
		 * Public property.
		 */
		this.foo = 1;

		/**
		 * Protected property.
		 *
		 * @protected
		 */
		this._bar = 1;

		/**
		 * @private
		 * @property {Number} _bom
		 */
		this[ Symbol( 'bom' ) ] = 1;
	}

	/**
	 * @private
	 */
	_somePrivateMethod() {}
}

// Some private helper.
//
// @returns {Number}
function doSomething() {
	return 1;
}

# Accessibility

The table below shows the accessibility of properties: (yes – accessible, no – not accessible)

For instance, a protected property is accessible from its own class in which it was defined, from its whole package, and from its subclasses (even if they are not in the same package).

Protected properties and methods are often used for testability.
Since tests are located in the same package as the code, they can access these properties.

# Getters

You can use ES6 getters to simplify class API:

class Position {
	// ...

	get offset() {
		return this.path[ this.path.length - 1 ];
	}
}

A getter should feel like a natural property. There are several recommendations to follow when creating getters:

- They should be fast.
- They should not throw.
- They should not change the object state.
- They should not return new instances of an object every time (so foo.bar == foo.bar is true). It is OK to create a new instance for the first call and cache it if it is possible.

# Order within class definition

Within a class definition the methods and properties should be ordered as follows:

- Constructor.
- Getters and setters.
- Iterators.
- Public instance methods.
- Public static methods.
- Protected instance methods.
- Protected static methods.
- Private instance methods.
- Private static methods.

The order within each group is left to the implementor.

# Tests

There are some special rules and tips for tests.

# Test organization

Always use an outer describe() in a test file. Do not allow any globals, especially hooks ( beforeEach(), after(), etc.) outside the outermost describe().

The outermost describe() calls should create meaningful groups, so when all tests are run together a failing TC can be identified within the code base. For example:

describe( 'Editor', () => {
	describe( 'constructor()', () => {
		it( ... );
	} );

	// ...
} );

Using titles like “utils” is not fine as there are multiple utils in the entire project. “Table utils” would be better.

Test descriptions ( it() ) should be written like documentation (what you do and what should happen), e.g. “the foo dialog closes when the X button is clicked”. Also, “…case 1”, “…case 2” in test descriptions are not helpful.

Avoid test descriptions like “does not crash when two ranges get merged” — instead explain what is actually expected to happen.
For instance: “leaves 1 range when two ranges get merged”.

Most often, using words like “correctly” or “works fine” is a code smell. Think about the requirements — when writing them you do not say that feature X should “work fine”. You document how it should work. Ideally, it should be possible to recreate an algorithm just by reading the test descriptions.

Avoid covering multiple cases under one it(). It is OK to have multiple assertions in one test, but not to test e.g. how method foo() works when it is called with 1, then with 2, then with 3, etc. There should be a separate test for each case.

Every test should clean up after itself, including destroying all editors and removing all elements that have been added.

# Test implementation

Avoid using real timeouts. Use fake timers instead when possible. Timeouts make tests really slow.

However — do not overoptimize (especially since performance is not a priority in tests). In most cases it is completely fine (and hence recommended) to create a separate editor for every it().

We aim at having 100% coverage of all distinctive scenarios. Covering 100% of branches in the code is not the goal here — it is a byproduct of covering real scenarios.

Think about this — when you fix a bug by adding a parameter to an existing function call you do not affect code coverage (that line was called anyway). However, you had a bug, meaning that your test suite did not cover it. Therefore, a test must be created for that code change.

Use expect( x ).to.equal( y ), NOT expect( x ).to.be.equal( y ).

When using Sinon spies, pay attention to the readability of assertions and failure messages.
Use named spies, for example:

const someCallbackSpy = sinon.spy().named( 'someCallback' );
const myMethodSpy = sinon.spy( obj, 'myMethod' );

Use sinon-chai assertions:

expect( myMethodSpy ).to.be.calledOnce;
// expected myMethod to be called once but was called twice

# Naming

# JavaScript code names

Variables, functions, namespaces, parameters and all undocumented cases must be named in lowerCamelCase:

let a;
let myLongNamedVariable = true;

function foo() {}

function longNamedFunction( example, longNamedParameter ) {}

Classes must be named in UpperCamelCase:

class MyClass {}

const a = new MyClass();

Mixins must be named in UpperCamelCase, postfixed with “Mixin”:

const SomeMixin = {
	method1: ...,
	method2: ...
};

Global namespacing variables must be named in ALLCAPS:

const CKEDITOR_TRANSLATIONS = {};

# Private properties and methods

Private properties and methods are prefixed with an underscore:

CKEDITOR._counter;
something._doInternalTask();

# Methods and functions

Methods and functions are almost always verbs or actions:

// Good
execute();
this.getNextNumber();

// Bad
this.editable();
this.status();

# Properties and variables

Properties and variables are almost always nouns:

const editor = this.editor;
this.name;
this.editable;

Boolean properties and variables are always prefixed by an auxiliary verb:

this.isDirty;
this.hasChildren;
this.canObserve;
this.mustRefresh;

# Shortcuts

For local variables, commonly accepted short versions of long names are fine:

const attr, doc, el, fn, deps, err, id, args, uid, evt, env;

The following are the only short versions accepted for property names:

this.lang;
this.config;
this.id;
this.env;

# Acronyms and proper names

Acronyms and, partially, proper names are naturally written in uppercase. This may stand against the code style rules described above — especially when there is a need to include an acronym or a proper name in a variable or class name.
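As a quick self-contained illustration of the naming rules above, here is a sketch; the class and members are invented for this example, not taken from the CKEditor 5 code base:

```javascript
// Illustrative sketch only: invented names that follow the conventions above.
class FileRepository {            // class: UpperCamelCase
	constructor() {
		this.isDirty = false;     // boolean: auxiliary-verb prefix
		this.hasChildren = false;
		this._counter = 0;        // private property: underscore prefix
	}

	// Methods are verbs or actions.
	getNextNumber() {
		this._counter += 1;
		return this._counter;
	}

	markDirty() {
		this.isDirty = true;
	}
}

const fileRepository = new FileRepository(); // instance: lowerCamelCase

fileRepository.markDirty();
console.log( fileRepository.getNextNumber(), fileRepository.isDirty );
```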
In such cases, one should follow these rules:

- Acronyms:
  - All lowercase if at the beginning of the variable name: let domError.
  - Default camel case at the beginning of the class name: class DomError.
  - Default camel case inside the variable or class name: function getDomError().
- Proper names:
  - All lowercase if at the beginning of the variable: let ckeditorError.
  - Original case if at the beginning of the class name: class CKEditorError.
  - Original case inside the variable or class name: function getCKEditorError().

However, two-letter acronyms and proper names (if originally written uppercase) should be uppercase. So e.g. getUI (not getUi).

The two most frequently used acronyms which cause problems:

- DOM – it should be e.g. getDomNode().
- HTML – it should be e.g. toHtml().

# CSS classes

The CSS class naming pattern is based on the BEM methodology and code style. All names are in lowercase with an optional dash ( - ) between the words.

Top-level building blocks begin with a mandatory ck- prefix:

.ck-dialog
.ck-toolbar
.ck-dropdown-menu

Elements belonging to the block namespace are delimited by a double underscore ( __ ):

.ck-dialog__header
.ck-toolbar__group
.ck-dropdown-menu__button-collapser

Modifiers are delimited by a single underscore ( _ ). Key-value modifiers follow the block-or-element_key_value naming pattern:

/* Block modifier */
.ck-dialog_hidden

/* Element modifier */
.ck-toolbar__group_collapsed

/* Block modifier as key_value */
.ck-dropdown-menu_theme_some-theme

In HTML:

<div class="ck-reset_all ck-dialog ck-dialog_theme_lark ck-dialog_visible">
	<div class="ck-dialog__top ck-dialog__top_small">
		<h1 class="ck-dialog__top-title">Title of the dialog</h1>
		...
	</div>
	...
</div>

# ID attributes

HTML ID attribute naming pattern follows the CSS class naming guidelines.
Each ID must begin with the ck- prefix and consist of dash-separated ( - ) words in lowercase:

<div id="ck">...</div>
<div id="ck-unique-div">...</div>
<div id="ck-a-very-long-unique-identifier">...</div>

# File names

File and directory names must follow a standard that makes their syntax easy to predict:

- All lowercase.
- Only alphanumeric characters are accepted.
- Words are separated by dashes ( - ) (kebab-case).
- Code entities are considered single words, so the DataProcessor class is defined in the dataprocessor.js file.
- However, a test file covering “mutations in multi-root editors” would be: mutations-in-multi-root-editors.js.
- HTML files have the .html extension.

# Examples

ckeditor.js
tools.js
editor.js
dataprocessor.js
build-all.js and build-min.js
test-core-style-system.html

# Standard files

Widely used standard files do not obey the above rules:

- README.md, LICENSE.md, CONTRIBUTING.md, CHANGES.md
- .gitignore and all standard “dot-files”
- node_modules
https://ckeditor.com/docs/ckeditor5/latest/framework/guides/contributing/code-style.html
Solving a power series equation

I'd like to solve the equation f(t) == 1 where f is a power series:

var('n,t')
a1 = float(4 + sqrt(8))
b1 = float(3 + sqrt(8))
c1 = b1
an = 4*n*(n-1)+a1*n
an1 = 4*n*(n+1) + a1*(n+1)
anr = b1 + 2*n*(4*n+a1)
bn = 1+2*n*(n-1)+n*(b1-1)
bn1 = 1+2*n*(n+1)+(n+1)*(b1-1)
bnr = c1 + 2*n*(2*n+b1-1)

def f(t):
    return (1+b1)^-t + 3/2*a1^-t + 1/2*(2*b1)^-t + b1^-t + 2*sum(an^-t + an1^-t + anr^-t + 2*bn^-t + 2*bn1^-t + bnr^-t, n, 1, 1000).n()

I've tried simply plugging in values for t, and I estimate that if f(t) = 1 then t is approximately 1.51. I'd like to do something like solve(f(t) == 1, t), but this appears to lock Sage up and I have to interrupt the process. The true f function is actually an infinite series, but I am truncating it to the first 1000 terms. I know there are methods of solving such equations numerically, such as the method of regula falsi. Does Sage have anything like this? Thanks!
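As an aside, the same idea is easy to see outside of Sage. Below is a hedged sketch in plain Python (standard library only) that rebuilds the truncated series from the question and then solves f(t) == 1 by bisection, a close relative of regula falsi; both only need a sign change between two bracket endpoints. The bracket [1, 2] is an assumption based on the asker's estimate of roughly 1.51:

```python
import math

# Constants follow the question: a1 = 4 + sqrt(8), b1 = 3 + sqrt(8).
a1 = 4 + math.sqrt(8)
b1 = 3 + math.sqrt(8)
c1 = b1

def f(t, terms=1000):
    # Head terms plus the series truncated to `terms` summands, as in the post.
    total = (1 + b1) ** -t + 1.5 * a1 ** -t + 0.5 * (2 * b1) ** -t + b1 ** -t
    s = 0.0
    for n in range(1, terms + 1):
        an = 4 * n * (n - 1) + a1 * n
        an1 = 4 * n * (n + 1) + a1 * (n + 1)
        anr = b1 + 2 * n * (4 * n + a1)
        bn = 1 + 2 * n * (n - 1) + n * (b1 - 1)
        bn1 = 1 + 2 * n * (n + 1) + (n + 1) * (b1 - 1)
        bnr = c1 + 2 * n * (2 * n + b1 - 1)
        s += an ** -t + an1 ** -t + anr ** -t + 2 * bn ** -t + 2 * bn1 ** -t + bnr ** -t
    return total + 2 * s

def bisect(g, lo, hi, tol=1e-12):
    # Assumes g(lo) and g(hi) have opposite signs.
    glo = g(lo)
    for _ in range(200):
        mid = (lo + hi) / 2
        gmid = g(mid)
        if gmid == 0 or hi - lo < tol:
            return mid
        if (glo > 0) == (gmid > 0):
            lo, glo = mid, gmid
        else:
            hi = mid
    return (lo + hi) / 2

# f is strictly decreasing in t (all bases exceed 1), f(1) > 1 and f(2) < 1,
# so there is exactly one root in (1, 2).
root = bisect(lambda t: f(t) - 1, 1.0, 2.0)
print(root)  # roughly 1.5, consistent with the estimate in the question
```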
https://ask.sagemath.org/question/42778/solving-a-power-series-equation/
Take a look at the program below. We have a void function named favorite_animal() and main() with a few statements inside.

#include <iostream>

std::string sea_animal = "manatee";

void favorite_animal(std::string best_animal) {
  std::string animal = best_animal;
  std::cout << "Best animal: " << animal << "\n";
}

int main() {
  favorite_animal("jaguar");
  std::cout << sea_animal << "\n";
  std::cout << animal << "\n";
}

When this program is compiled and executed, sea_animal will print, but animal won't. Why do you think that's the case?

Scope is the region of code that can access or view a given element.

- Variables defined in global scope are accessible throughout the program.
- Variables defined in a function have local scope and are only accessible inside the function.

sea_animal was defined in global scope at the top of the program, outside of main(). So sea_animal is defined everywhere in the program. Because animal was only defined within favorite_animal() and not returned, it is not accessible to the rest of the program.

Instructions

If you run the code, you can print secret_knowledge right in main() without entering the passcode. Yikes! Only people who enter the correct passcode should have access to that knowledge.

Move secret_knowledge into local scope so that it only prints from the function call when the correct code is entered.

Nice work! Now it's time to get rid of that error. Delete the line in main() that prints secret_knowledge directly without doing any math and keep the enter_code(0310);.
https://production.codecademy.com/courses/learn-c-plus-plus/lessons/cpp-functions-scope-flexiblity/exercises/cpp-functions-scope
Opening a Widget created in Designer, from a Main Window

I've created 2 .ui files, one is a main window, the other a widget. Designer builds the 2 header files, each with QT_BEGIN_NAMESPACE around the class declaration. The problem is, what works in opening my main window does not work in opening the second widget window!

To display my main window, I created a class that inherits from my .ui file:

class myWindow: public QMainWindow, private Ui::uiClassWindow

setupUi(this);

That opens fine, so then to open the second widget window, I declare a generic widget object and then set it up with a pointer to my widget Ui header file:

QWidget newWidget;
setupUi(newWidget)

But setupUi resolves to my main window header file... How do I tell it to use the widget's setupUi? I'm not sure how to use namespaces, so is that part of the problem? I'm kinda stumbling around enough to get things to work, but would love some feedback if there's a better or more typical way of doing all of this. Thanks

- dheerendra

Definitely it is not a namespace issue. You must be passing the wrong class to your pointers somewhere. Please do the following and see how it goes. Unless we have code, it is difficult to see what is happening.

- Open a new project and use QMainWindow with a form.
- Compile and launch it.
- For the same project add "New File->Qt->QtDesignerFormClass". Use Widget this time. This will generate the entire template based on the widget. Call this TestWidget.
- Add TestWidget.h to your main.cpp.
- Create the instance of TestWidget in main.cpp and show it.

This should launch two windows for you. Now you can look at how things work and fix your issue.
https://forum.qt.io/topic/45858/opening-a-widget-created-in-designer-from-a-main-window
Your First Maven Project

In this tutorial I will guide you through creating your first Maven project. A Maven Hello World project, so to speak. Creating a Maven project means creating a POM file and the standard directory layout. Actually, the project is not really a "Maven project". It is just a Java project which is built with Maven. The very same project could also be built with an Ant or Gradle build script.

Installing Maven

To use Maven you must first make sure that you have installed Maven on your computer. The first page in this Maven tutorial covers how to install Maven.

Creating the Project Directory

Once you have assured that Maven is installed, create a new directory somewhere on your hard disk. This directory will be the root directory for your first Maven project.

Creating the POM File

Once you have created the project root directory, create a file called pom.xml inside the directory. Inside the pom.xml file you put the following XML:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>com.jenkov</groupId>
    <artifactId>hello-world</artifactId>
    <version>1.0.0</version>

</project>

This is a minimal pom.xml file. The groupId identifies your organization. The artifactId identifies the project. More specifically, it identifies the artifact built from the project, like for instance a JAR file. The version identifies the version of the artifact which the POM file builds. When you evolve the project and you are ready to release, remember to update the version number. Other projects that need to use your artifact will refer to it using the groupId, artifactId and version, so make sure to set these to some sensible values.

Testing the POM File

When you have created the pom.xml file inside the project root directory it is a good idea to just test that Maven works, and that Maven understands the pom.xml file. To test the pom.xml file, open a command prompt and change directory ( cd ) into the project root directory.
Then execute this command:

mvn clean

The mvn clean command will clean the project directory for any previous temporary build files. Since the project is all new, there will be no previous build files to delete. The command will thus succeed. You will see that Maven writes what project it has found. It will output that to the command prompt. This is a sign that Maven understands your POM. Here is an example of what Maven could output:

D:\data\projects\my-first-maven-project>mvn clean
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building hello-world 1.0.0
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hello-world ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.873 s
[INFO] Finished at: 2015-07-05T14:57:00+02:00
[INFO] Final Memory: 4M/15M
[INFO] ------------------------------------------------------------------------

Creating a Java Source Directory

Once you have tested that the POM file works, create a Java source directory. The Java source directory should be located inside the standard directory layout. Basically, that means that you should create the following directory structure:

src
  main
    java

That means, a src directory inside the project root directory. Inside the src directory you create a main directory. Inside the main directory you create a java directory. The java directory is the root directory for your Java source code.

Creating a Java Source File

Inside the Java root source directory ( src/main/java ) create a new directory (Java package) called helloworld. Inside the helloworld directory (Java package) insert a file named HelloWorld.java.
Inside the HelloWorld.java file you put the following Java code:

package helloworld;

public class HelloWorld {

    public static void main(String args[]){
        System.out.println("Hello World, Maven");
    }
}

Save the file.

Building the Project

When you have created the Java source file, open a command prompt and change directory into the project root directory. Then execute this command:

mvn package

The mvn package command instructs Maven to run the package build phase, which is part of the default build life cycle. Maven should now run. Maven will compile the Java source file and create a JAR file containing the compiled Java class. Maven creates a target subdirectory inside the project root directory. Inside the target directory you will find the finished JAR file, as well as lots of temporary files (e.g. a classes directory containing all the compiled classes). The finished JAR file will be named after this pattern:

artifactId-version

So, based on the POM shown earlier in this tutorial, the JAR file will be named:

hello-world-1.0.0.jar

You have now built your first Maven project! Congratulations!
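As a follow-up, if another Maven project wanted to use the JAR you just built (after installing it into your local repository, for example with mvn install), it would reference the artifact by the same coordinates. A sketch of such a dependency declaration, using the groupId, artifactId and version from the POM above:

```xml
<dependency>
    <groupId>com.jenkov</groupId>
    <artifactId>hello-world</artifactId>
    <version>1.0.0</version>
</dependency>
```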
http://tutorials.jenkov.com/maven/your-first-maven-project.html
Dependency injection is a software design pattern that removes hard-coded dependencies and makes it possible to change them, whether at run time or compile time.

Dependency injection involves three kinds of software components:

1. Dependent (or consumer): describes which software components it depends on.
2. Dependencies: the components on which the consumer depends.
3. Injector (also known as provider): decides which concrete classes satisfy the requirements of the dependent object.

What is new in DI: in conventional development the dependent object decides for itself what concrete class it will use. In the DI pattern this decision is delegated to the injector, which can substitute a different concrete class at run time.

Before plunging into the actual implementation of DI in Scala, I would like to discuss when to use this pattern and what its advantages are. As object-oriented programmers we are all familiar with inheritance, but here we will focus on composition.

1. Inheritance: it generally provides two main abstractions.

a. Dynamic binding – the JVM decides which method implementation will be invoked at runtime.
b. Polymorphism – a variable of the superclass type can be used to hold a reference to any of its subclasses.

Let's explain this concept using a simple example. Class Vehicle is the base class of Car. Vehicle has a method fuelType:

def fuelType = "Diesel"

Now Car extends the Vehicle class and overrides its method as follows:

override def fuelType = "Petrol"

Now we have two implementations of fuelType; at run time it is the responsibility of the JVM to decide which implementation will be invoked. This is known as dynamic binding.

Class Bike also extends the class Vehicle. We can use a Vehicle-typed variable to hold a Car as well as a Bike:

val vehicle: Vehicle = new Car
// or
val vehicle: Vehicle = new Bike

This phenomenon is known as polymorphism.
Problem with the inheritance relationship

In an inheritance relationship, superclasses are often said to be “fragile,” because one little change to a superclass can ripple out and require changes in many other places in the application's code. Here the coupling between base class and subclass is very strong. Changes to the superclass's interface can ripple out and break any code that uses the superclass or any of its subclasses. What's more, a change in the superclass interface can break the code that defines any of its subclasses.

In the above example, if we add another subclass Bus there would not be any problem, but if you change the return type of the fuelType method from String to Integer, you can break the code that invokes that method on any reference of type Vehicle. In addition, you break the code that defines any subclass of Vehicle that overrides the method. Such subclasses won't compile until you go and change the return value of the overridden method to match the changed method in the superclass Vehicle.

Inheritance is also sometimes said to provide “weak encapsulation,” because if you have code that directly uses a subclass, such as Car, that code can be broken by changes to a superclass, such as Vehicle.

2. Composition: given that the inheritance relationship makes it hard to change the interface of a superclass, it is worth looking at an alternative approach provided by composition. It turns out that when your goal is code reuse, composition provides an approach that yields easier-to-change code.

What to choose: inheritance or composition?

If you find an “is a” relationship, go with inheritance:

Car “is a” Vehicle
Bus “is a” Vehicle

But if the relationship is of the “has a” type, go with composition:

Car “has a” Engine
Car “has a” fuel tank

Self-type annotation: in Scala there are various ways to achieve composition, but here I will discuss how to use self-type annotations for composition. There is a class Engine.
Class Vehicle needs this class to complete its execution. Here we are not going to inherit Engine as a superclass of Vehicle. We are looking to achieve code reuse using composition. “this: Engine =>” is known as a self-type annotation. Using it, it is possible to use the start method of class Engine from class Car. Suppose Car has more dependencies like PowerWindow, FuelTank etc.; then we can use an annotation like:

this: Engine with PowerWindow with FuelTank =>

Now we know what composition is and how and where to use it. It's time to implement DI in Scala.

Here we have three classes: Engine, Tank and Car. Class Car depends on Tank and Engine to complete its execution. Engine and Tank are the dependencies of Car.

Now encapsulate the classes in component traits. This simply creates a component namespace for our classes. Dependencies can then be injected into Car using a self-type annotation. One of the beauties here is that all wiring is statically typed. For example, if a dependency declaration is missing, misspelled or otherwise screwed up, we get a compilation error. This also makes it very fast.

But still there is a problem: we have strong coupling between the service implementation and its creation, and the wiring configuration is scattered all over our code base; utterly inflexible. Let's fix it. Instead of instantiating the services in their enclosing component trait, let's change them to abstract member fields. By doing this switch we have now abstracted away the actual component instantiation, as well as the wiring, into a single “configuration” object.

1 thought on “Dependency Injection In Scala using Self Type Annotations”

val tank = new Tank and val engine = new Engine in the last variant could be removed
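The listings the article points to with "as follows" appear to have been lost in extraction, so here is a hedged reconstruction of the cake-pattern shape the prose describes: component traits, self-type wiring, and the final variant with abstract members gathered in one configuration object. The class names follow the article; the method bodies are invented:

```scala
// Reconstructed sketch; the article's original listings were not preserved.
trait EngineComponent {
  val engine: Engine                       // abstract member, wired later

  class Engine {
    def start(): Unit = println( "engine started" )
  }
}

trait TankComponent {
  val tank: Tank

  class Tank {
    def fill(): Unit = println( "tank filled" )
  }
}

// Car declares its dependencies with a self-type annotation.
trait CarComponent { this: EngineComponent with TankComponent =>
  class Car {
    def drive(): Unit = {
      tank.fill()
      engine.start()
    }
  }
}

// All instantiation and wiring lives in a single, statically checked
// "configuration" object; a missing or misspelled dependency fails to compile.
object ComponentRegistry extends CarComponent with EngineComponent with TankComponent {
  val engine = new Engine
  val tank = new Tank
  val car = new Car
}

// Usage: ComponentRegistry.car.drive()
```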
https://blog.knoldus.com/dependency-injection-in-scala-using-self-type-annotations/
Edit: Ok, so I should have been using g++ instead of gcc for the .cpp file. That's settled. New question though, how do I link files that I have included? Say if I write a program and I want to include curses.h, how do I compile that? Everything below this is the original question.

Hi all. Up until this point I've always written and compiled my programs in Windows. I got a Macbook Pro a couple of months ago, and figure it's time to start programming in Unix. I'm not very advanced in terminal, so it took me a bit of time to try to even get to the proper directory. I finally got the commands correct to compile but I got errors on everything, including a hello world program! Every time I try to compile something I get this:

Code:
/usr/bin/ld: Undefined symbols:

And a bunch of stuff depending on what I'm trying to compile. For

Code:
#include <iostream>

using namespace std;

int main( int argc, char *argv[] )
{
    cout << "Hello World" << endl;
    return 0;
}

I get this in the terminal:

Code:
current directory$ gcc -o helloworld helloworld.cpp
/usr/bin/ld: Undefined symbols:
std::basic_ostream<char, std::char_traits<char> >::operator<<(std::basic_ostream<char, std::char_traits<char> >& (*)(std::basic_ostream<char, std::char_traits<char> >&))
std::ios_base::Init::Init()
std::ios_base::Init::~Init()
std::cout
std::basic_ostream<char, std::char_traits<char> >& std::endl<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&)
std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)
___gxx_personality_v0
collect2: ld returned 1 exit status

I have no idea what most of this means. Is there a reason why such a simple program won't compile? I know it's probably something stupidly simple, but as I've never worked in this environment before, it's totally beyond me.

If anybody can offer some advice, or show me a good starters guide to writing/compiling C/C++ in *nix, that'd be great. Thanks so much.

Ed
http://cboard.cprogramming.com/linux-programming/93513-first-program-%2Anix.html
Problem

You want to retrieve the current date and time from the user's computer, either as a local time or as a Coordinated Universal Time (UTC).

Solution

Call the time function from the <ctime> header, passing a value of 0 as the parameter. The result will be a time_t value. You can use the gmtime function to convert the time_t value to a tm structure representing the current UTC time (a.k.a. Greenwich Mean Time or GMT); or, you can use the localtime function to convert the time_t value to a tm structure representing the local time. The program in Example 5-1 obtains the current date/time, and then converts it to local time and outputs it. Next, the program converts the current date/time to a UTC date/time and outputs that.

Example 5-1. Getting the local and UTC times

#include <iostream>
#include <ctime>
#include <cstdlib>

using namespace std;

int main( )
{
    // Current date/time based on current system
    time_t now = time(0);

    // Convert now to tm struct for local timezone
    tm* localtm = localtime(&now);
    cout << "The local date and time is: " << asctime(localtm) << endl;

    // Convert now to tm struct for UTC
    tm* gmtm = gmtime(&now);
    if (gmtm != NULL) {
        cout << "The UTC date and time is: " << asctime(gmtm) << endl;
    }
    else {
        cerr << "Failed to get the UTC date and time" << endl;
        return EXIT_FAILURE;
    }
}

Discussion

The time function returns a time_t type, which is an implementation-defined arithmetic type for representing a time period (a.k.a. a time interval) with at least a resolution of one second. The largest time interval that can be portably represented using a time_t is 2,147,483,648 seconds, or approximately 68 years. A call to time(0) returns a time_t representing the time interval from an implementation-defined base time (commonly 0:00:00 January 1, 1970) to the current moment. A more workable representation of the current date and time is achieved by converting to a tm struct using the localtime or gmtime functions. A tm struct has the integer fields shown in Example 5-2.

Example 5-2.
Layout of a tm struct struct tm { int tm_sec; // seconds of minutes from 0 to 61 (60 and 61 are leap seconds) int tm_min; // minutes of hour from 0 to 59 int tm_hour; // hours of day from 0 to 24 int tm_mday; // day of month from 0 to 23 int tm_mon; // month of year from 0 to 11 int tm_year; // year since 1900 int tm_wday; // days since sunday int tm_yday; // days since January 1st int tm_isdst; // hours of daylight savings time } When using the gmtime function, be sure to check its return value. If the computer running the code doesn't have a local time zone defined, the gmtime function will be unable to compute the UTC time, and will return 0. If you pass 0 to the asctime function, undefined behavior will result. The localtime, gmtime, and asctime functions all return pointers to statically allocated objects. This is more efficient for the library, but it means that subsequent calls will change the value of those objects. The code in Example 5-3 shows how this can have surprising effects. Example 5-3. Pitfalls of using asctime void f( ) { char* x = asctime(localtime(time(0))); wait_for_15_seconds( ); // do some long processing task asctime(localtime(time(0))); cout << x << endl; // prints out the current time, not fifteen seconds ago . }
http://flylib.com/books/en/2.131.1/obtaining_the_current_date_and_time.html
On Thu, Jan 1, 2009 at 5:17 AM, Ryan Ingram <ryani.spam at gmail.com> wrote:
> It might be possible to build a "lazy-ify" monad transformer which
> would make this work. In particular, I think Oleg's LogicT
> transformer[1] does something of the sort, only applying side effects
> where they are required in order to get "the next result" in a
> nondeterministic computation.

LogicT is continuation-based, so it's strict in the first argument to (>>=).

    observeT $ undefined >>= \_ -> return ()

will throw an exception. On the other hand, LogicT is non-strict in the second argument to mplus if the transformed monad is non-strict in (>>=).

    take 4 . runIdentity . observeAllT . msum $ map return [0..]

should return [0,1,2,3]. (My implementation does. I assume the LogicT on Hackage is the same.)

--
Dave Menendez <dave at zednenem.com>
http://www.haskell.org/pipermail/haskell-cafe/2009-January/052709.html
Major Australian Retailer Accused of Selling Infected Hard Drives 128 skegg writes "Dick Smith, a major Australian electronics retailer, is being accused of regularly selling used hard drives as new. Particularly disturbing is the claim that at least one drive contained malware-infested pirated movies, causing the unlucky buyer significant data loss. Apparently the Fair Trading Commissioner will be conducting an investigation." Standard Practice (Score:5, Interesting) Seems standard practice with a lot of stores. Someone takes something back because they don't want or need it for whatever reason, the shop will just shrinkwrap it up again and the next buyer is none the wiser. I'm surprised that it hasn't happened sooner. On another note, so how exactly can a video file (pirated movie or not) be 'malware infested'? Re: (Score:1) [google.co.uk] Re:Standard Practice (Score:4, Informative) If you accept the licence agreement, it then downloads malware to your PC. So all the "malware infested" media does is get the unsuspecting (or credulous, it's a fine line) user to download their own malware. It's not the video that contains the bad software and you'd expect any AV software to pick up on this old, old (the article is dated 2006) attack vector. Sasquatch and the Queen playing Beach Volleyball (Score:2) Yep. Was trying to download a "White Christmas" wmv for Xmas family listening off eMule. Every single file was a redirect to a malware codec. Sheesh... not even Mr. Crosby's classic is safe! Isn't that one of those cases where a malware peddler on P2P notes what you're searching for and returns lots of fake results "customised" to your search term that are all basically the same piece of malware if you try to download them? For example, if you searched for "sasquatch and queen elizabeth ii playing beach volleyball" (i.e. 
the most unlikely term to get *any* match, let alone exact match), you'd get quite a number of "results" such as "sasquatch-and-queen-elizabeth-ii-playing-beach-volleyball.wmv Re: (Score:2) Re: (Score:3, Informative) The same way jpegs can be. Re:Standard Practice (Score:5, Informative) The parent couldn't be more correct. People discount regular data files as being malicious simply because they're not labelled executables. What they don't think is that those files are opened by executables. These executables are often trusted programs which makes this an even bigger threat to a system as the malicious code can run hidden under the legitimate process and do its work. There's anything from buffer overruns to file parsing mistakes in the programs that can open them up to become a conduit for abuse. An example of this is Adobe Reader's countless exploits with the PDF file format. Re:Standard Practice (Score:4, Insightful) Which is also why SQL injection attacks exist, everything you send to the server is data. If you take that data and execute it as code, well duh you've just created an exploit. Never, ever trust anything coming from the user. Re: (Score:1) You are the most ignorant AC I've come across in quite some time. Re:Standard Practice (Score:5, Informative) This is an incorrect assertion, an assertion my previous post debunked, but I suppose I'll re-explain: You could have a drive full of PDFs, you could have it full of PNGs, whatever file format you'd like. You could mount the drive as noexec, however when it comes down to it, a trusted program (NOT ON THAT DRIVE) can interact with those files and since file formats can be complex AND since the programs opening them are also complex, there's a chance that the program will be vulnerable to a crafted file that tricks the program to do something that a "regular movie" or whatever wouldn't do and may not have been tested for. 
If you've written a file parser of any kind, you'll see how complicated it gets in having your program code check the file for abnormalities before interacting with it. This complexity is a steep curve and all it takes is not checking an array boundary for your program to mistakenly leak data memory into its executable memory space. The old addage plays correct here: Never trust user inputs. Re: (Score:2) Wrong. Every modern OS has this same problem. The only way to fix it is to switch to a CPU that uses Harvard architecture instead of Von Neumann architecture. As long as as there's no separation of code and data memory, this will be a problem. Re: (Score:1) One of the things that the author of "Godel, Escher, Bach" mentions, is that Godel's Incompleteness Theorem pretty much states that ANY "complete" system can be broken. The only way to avoid this, is to design an incomplete system (aka, one that is lacking features) that is small enough that all possible interactions are predictable (aka Java's original limited sandpit design). Re: (Score:1) Re: (Score:1) On another note, so how exactly can a video file (pirated movie or not) be 'malware infested'? By containing code that exploitable video players load into memory, and somehow manages to change that info into an executable status, and then somehow executes the code. But that's only one possibility. Re: (Score:2, Informative) While not 'containing' the malware, some media files have a field that specifies where the codec for them can be downloaded, and some players respond to this by downloading and installing the 'codec'. Needless to say, the 'codec' installer contains the malware. Re: (Score:3). Re: (Score:2). True... if you get a dodgy MKV and open it up in VLC, it doesn't attempt to load a fake codec; it just uses exploits in VLC to gain VLC-level access to your system. You never have the option to back out before the malware is downloaded. That doesn't really make MKV containers safer than WMV containers. 
The big issue here is that a lot of people look at WMV/MKV/PDF/DOCX/etc. as "file formats". In fact, these are all "container formats" that interact with a specific API, and can contain multiple documents tha Re:Standard Practice (Score:5, Interesting) Basically any file type that can have a link to a webpage embedded, I believe both .MPG and .WMV are capable of this and a player that will launch the link without asking which WMP 9 was the last WMP I believe that would launch a weblink without asking but I'm sure there were others. Basically how it works is like this: You try to play infected video, video launches default browser to embedded website and then if the browser is unpatched or has any known vulnerabilities you get hit with a driveby. I used to see this trick often here at the shop in the era of fastrack and Limewire, people would look for the latest blockbuster and not think about formatting and get screwed. As for TFA? Frankly don't surprise me as I've seen the same thing from Best Buy in my area which just reshrinkwraps returned items and will just put them back on the shelf. Funny part is I found out when a local preacher went there and bought an external drive and when first plugged into Windows it asked if he wanted it to play the videos. Well the old guy thought it must be some "Welcome to your new drive" kind of thing and launched it only to be looking at a gangbang vid. Needless to say he freaked and brought it to me thinking his PC must have been hacked! Frankly anything these big box retailers do anymore really doesn't surprise me which is why i tell folks to ask around and see if the people that have bought from them before were happy. I'm happy to point any potential customers towards previous customers if they want to ask, because i'm proud of my work, but I've seen some of these places...wow is all I got to say. 
Hell i know so many horror stories from some of these places it ain't even funny, parts ending up "missing" from the PC when they took it to get cleaned, a PC going in for an OS upgrade only to come out with a cheaper graphics card than what it went in with, and stolen RAM is practically SOP in some places. Finally just like in TFA I've seen parts so obviously used sold to customers as new, hell some they didn't even bother blowing dust out the fan or like with the preacher even emptying the drive first. So I hope they get seriously busted for this and get hit with MASSIVE fines, otherwise they'll just consider it the cost of doing business and continue. I just couldn't do it myself, I take pride in the things I sell and build and try to get the customer the best deal I can. If something is used I tell them upfront and tell them the price difference and let them decide. Of course all drives going through my place are wiped first! Re:Standard Practice (Score:4, Informative) Basically any file type that can have a link to a webpage embedded, I believe both .MPG and .WMV are capable of this No, just WMV. But "intelligent" players like Windows Media Player would "helpfully" realize that a WMV file renamed to MPG, AVI etc. was actually a WMV file and play it as such anyway. There's no reason for a movie format to contain such a link, it's for DRM'd WMV files that are supposed to take you to a page explaining how to buy access to it. Whoever came up with that scheme was stupid and I don't know any other player than WMP that ever supported it, since it was 99.99% used for malware and 0.01% for legitimate uses. Re: (Score:2) Whoever came up with that scheme was stupid and I don't know any other player than WMP that ever supported it, since it was 99.99% used for malware and 0.01% for legitimate uses. It had legitimate uses??? The same problem exists with WiMP and MP3s. MP3s don't support DRM, WMAs do. 
So you can imbed a trojan link in the WMA file, rename it MP3, and WiMP will play the song AND the malware. Like you say, no other media player does that, and I see no legit use for it EXCEPT malware. Maybe Norton or McAfee paid MS f Re:Standard Practice (Score:4, Informative) It used to not even prompt back in the day, it just automatically opened the link. Perhaps that was only the case if you had the "download license automatically" checkbox ticked in the preferences? At any rate, you can turn this "helpful" feature off, and I always have. Though of course, this doesn't excuse MS's crappy implementation and presentation of a feature that most people won't realise is dangeous. Re: (Score:2) Perhaps that was only the case if you had the "download license automatically" checkbox ticked in the preferences? At least in some version of WMP this was the default. This lead to pages like this [spyany.com]. It says so on the page too: By default, Windows Media Player will attempt to acquire a license when you try to play the secure content if one was not issued to you by the content provider when you downloaded the content. Re: (Score:2) Problem was that was the default behavior on WMP 7-9. It doesn't do this anymore but you'd be surprised how many "XP Pirate Edition" boxes are out there with updates disabled so they don't get WGA'd. Frankly I wouldn't be surprised if a good 70%+ of the zombies out there are pirate Windows, which is why i say MSFT's answer to piracy was brain dead. The correct move would have been to have a $50 special for Windows home which until they stupidly got rid of it Win 7 HP was replacing pirated Windows left and Re: (Score:2) You can insert some microsoft-bashing here if you want, but to be fair, every OS bundles a ton of helpful programs now for web-browsing and media-playing. 
Re:Standard Practice (Score:4, Interesting) I don't know if they will get with fines (most of the time, playing the three monkey game will be enough to avoid civil/criminal charges.) However, this is a lesson to everyone: After buying any new storage media, completely erase it first. This is something I try to keep the habit of doing, be it a USB flash drive, a SD card for my phone, external hard disks, or an internal HDD of a new PC. The best utility, hands down, is HDDErase because it tells the drive controller to do the dirty work and erase everything, including the host protected area, sector relocation table, etc. I then follow it up by a DBAN, or at least a dd if=/dev/zero of=/dev/sdwhatever. If one can't do an ATA erase, then zeroing it out with a couple passes is the next best thing. If only on Windows, encrypting the disk with BitLocker, then running the format command will help. The format command in Vista and newer checks to see if the previous data was a BitLocker volume, and if so, scrub away the remnants of the old volume keys. You can use TrueCrypt and create a dummy volume for the same result. I erase data before using a drive for three reasons: First, to exercise the drive and all accessible sectors, so the drive relocates marginal stuff immediately. In the old days, you could periodically low level format a HDD which would shrink the drive's capacity, but extend the life of the drive by cleaning out the relocation table and making it ready for handling new defects encountered. However, new drives don't have this, so the next best thing is to test all sectors before use. Second, there have been cases of people facing criminal and civil charges for data on their storage media that wasn't theirs... it came with the device. Whether this is true or not can be debated, but it is best to not let it happen in the first place. Third, there is always the chance of malware be installed somewhere along the supply chain. 
By completely zeroing it out from the MBR to the last sectors, this threat is mitigated for the most part. This also shows another sad fact. There are a number of "computer repair" places that are pretty shady. I'm sure most readers of /. can likely do better than a lot of repair joints. Re: (Score:2) or at least a dd if=/dev/zero of=/dev/sdwhatever I'm not sure if this method can be recommended. At least for me it has always been very slow (maybe 10MB/s), I still wonder why. It seems that the disk keeps seeking all the time (not going track-by-track). DBAN does it right. I've been trying tweaking the dd parameters (such as adding bs=512), but no bonus. Re: (Score:2) If you haven't tried it Hiram's Boot CD [hiren.info] is like a Swiss Army knife for repairmen and anybody else that needs to work on a PC and it includes DBAN as well as...hell look at the list, the better question would be "What DON'T it have?" . But I personally use the Diskwipe utility as it completely erases the drive with random ones and zeroes so when its gone its gone. pretty quick too i might add. But you need to work on a box just boot Hiram either off CD or stick and there you go, just about every tool you co Re: (Score:2) Flip side of this, which you ignore is willingness to accept returns and provide a full or partial refund. Obviously to provide a full refund, that item that was returned has to go somewhere, can't bin it and, can't sell it as second hand and loose money. People get really annoyed when companies won't accept returns and provide refunds, people get really annoyed when they end up buying someone elses return, hmm, I believe it's what's called a 'catch 22' [wikipedia.org]. So w Re: (Score:2) Actually there is a simple way they can have a generous return policy, sell the returns as used AND end up making more money on the deal and it is this: Don't give them cash, give them a gift card for that amount. 
I had an old teacher that spent many years managing an entire chain of stores down in FLA and he said 'if you give them cash you're an idiot' because if you give them a card they always spend over the card because they will NEVER find something the exact amount and they won't be able to stand leavi Re: (Score:3) It's not standard practice by most retailers, just a few dodgy ones and quite frowned upon by the ACCC. JB Hi-Fi have been caught [accc.gov.au]doing it with mobile phones. Re: (Score:2) Nothing new (Score:5, Interesting) Re: (Score:2) Which company would return harddisks without properly erasing them first? Obviously the shop that sold the parts as new isn't particularly bright, but the company who owned the disks prior has some significant security issues. Re: (Score:1) Re: (Score:1) Often companies have a contract with a supplier to do maintenance... in these cases, it'd be a case where the computer went in for maintenance or replacement, the data got copied onto the new PC, but the local tech forgot to wipe the old components before putting them back up for sale. Since it wasn't their company, and "nobody's going to notice", they didn't bother with the extra effort involved. This usually happens when something blows on the motherboard and the fix is a complete replacement of the syste Re:What? (Score:5, Informative) He is embellishing for the media or trying to claim the dog ate his homework (or dingo ate his baby? ). Re:What? (Score:5, Interesting) Selling used stuff as new aside for a second Umm. No. The media blowup is being fuelled by "I bought a hard disk and it had hard core porn on it!" sensationalism but seem to be ignoring this deeper issue - Dick Smith Electronics, Harvey Norman, JB-HiFi and the rest have been getting away with it for years but the fact is selling used goods (no matter how good a condition it's in) as new is illegal. 
They can ask the same price for it if the return is in great condition but they can't just seal it back up and pop it back on the shelf next to the new unopened boxes. Re: (Score:1) Indeed. They should be listed as "refurbished" at least -- or "open box". Techxperts? (Score:1) Re: (Score:2) There really hasn't been any technical knowledge in these stores for more than a decade Re: (Score:1) Maybe not infected (Score:4, Interesting) I recall from the article that the disk was definitely second hand because it had a whole lot of movies on it (free!) but the guy who reported it to the media made a big song and dance about how the files "appeared corrupt" and "could have infected his system". None of which impresses me much. He could use a secure OS. Other retailers sell stuff which has been returned by customers. DSE should have formatted the disk, and they are at fault for that reason. IIRC the reason he went to the media was that he is promoting an album or something and this was a golden opportunity to get his face and T shirt on TV. Re: (Score:1) >DSE should have formatted the disk, and they are at fault for that reason. Not quite. The core problem is that DSE (and others) are passing off used returned goods as new. That's illegal. Customers are finding out and it's become a media storm because they're finding the previous owner's stuff on the phone or hard disk. Re: (Score:2) You know, he could have just plugged the drive and tried to boot from it. A boot virus could easily wipe out every available drive before prompting a "system not found" error. You could even hide it on a brand-new formatted drive, since the bootsector is the first sector and usually the first cylinder (currently usually sectors 0-63) is reserved. How will your "secure OS" protect you against that? Re: (Score:2) he could have just plugged the drive and tried to boot from it. He didn't. He was pissed because he tried to play a movie file and it didn't work. 
Re: (Score:2) Re: (Score:2) Preformatting the device would erase any malware which might have been on it. A secure OS would prevent the installation of any malware infected files which it might load. Obviously the secure OS doesn't help you if it is not running. Re: (Score:2) Re: (Score:2) No. Just no. It's not just "wrong" to sell a used item as new. It's I-L-L-E-G-A-L. Period. And that's what they did. New vs used is not subjective. If you sell an item and it comes back with the shrink wrap opened, whether an hour has gone by or a year, it has to be presumed "used". That's what honest, law-abiding businesses do. They don't put un-shrinkwrapped packages on the shelf without clearly marking them as used, and they certainly don't re-shrinkwrap them and pass them off as new. Not even if the cus Re: (Score:2) In this case, I don't care how old the damn part was. It was sold to someone else before I got it, therefore the Doctrine of First Sale no longer applies, thus the part is used not new. rather easy going return policy. (Score:1) Not to defend the stores' oversight, but this particular store, had a rather generous return policy of 14 days no questions asked pretty much. Therefore, many people where purchasing TV sets, cameras, and whatever other good they sold, to use over a sport final weekend, or holiday, then return the item for a full refund. No intention of actually keeping the good they purchased. Re: (Score:2) It is the retailer's choice to offer a "no questions asked" return policy. It is irrelevant that many customers abuse such a policy. When the store offers such a policy, it assumes the all risks involved because of "no questions asked". It is unethical (and also illegal) for them to pawn off that risk on unsuspecting customers who are paying full retail price and expecting new products. 
What they should have done is to refurbish the goods (add new shrink-wrap, reformat memory sticks and hard drives, reset ph Woolworth's: ADVERSE affects on DickSmith stores (Score:1) (AU retail giant) Woolworths-owned Dick Smith Electronics has - in our experience - several times shelved and sold "repaired" returned items (usually on a "take it or leave it basis" when stocks run low after an advertised "sale" (or did they -only- have such used gear on-hand from the start of the "sale"). Items we've seen & rejected out-of-hand: - ASUS netbooks (in this case, shown as non-functioning "demos" & their boxes had NO indication of any repair or refurbishment by the maker; ONLY after bein DSE distributing pirated media? (Score:4, Insightful) DSE distributing pirated media? I'm sure the recording industry will be very interested to hear about this... Re: (Score:1) Re: (Score:1) two things to remember though 1: this was when Dick Smith actually owned the business, the current DSE has nothing to do with him and Say Whom? (Score:2) ?malware-infested pirated movies? ! Really? Isn't that why we use VLC instead of media player? Re: (Score:1) [secunia.com] Your media player choice doesn't really matter if an exploit exists in the version you're running. 14 days return (Score:3)... Re:14 days return (Score:5, Informative)... In Australia it is illegal to re-sell used returned goods as new. The goods can be re-sold but must clearly be marked as returned items, and usually a discount is offered for accepting the goods in this condition. (The discount might not be offered if the item is in high demand). What's more if goods have been returned and the item registered or activated online or similar they are not suppose to sell the item. That is the secondary reason that computer software isn't returnable at most stores (though there are exceptions like EB games). Re: (Score:2) Same in the UK. 
You can resell it, you have to marked it as returned, and basically the seller has to take the loss of whatever they get returned. It works on the basis that returns are such a small percentage of items, of little value to someone wishing to scam them, and represent such a small fraction of their costs, and *STILL* can be resold for even the same price so long as they are clearly marked that it's not an issue. Go read any EU trading law. It's all in there. Re: (Score:2) Of course it gets sold again. But under no circumstances should it then be advertised as 'new', ie. fresh from the factory and never been used as that is blatantly false advertising in bad faith. Re: (Score:2) Bigger Fish (Score:1) Lets not forget that the company that owns and manages DSE is Australia's third largest employer Woolworths LTD. shortcut shortcut (Score:1) People can't even take short-cuts properly! I guess the kind of person who takes shortcuts can't be bothered to do it properly - short-cutting the short-cut. But I suppose that those who can take short-cuts properly don't get spotted.... DSE = Radio Shack (Score:5, Interesting) Re: (Score:2) Most DSE stores do still carry a few components, including resistors. It's just that you have to look quite hard. Down the back. In the dark corner. Behind the door on the right. Marked beware of the leopard. Just keep looking, they are there somewhere. Jaycar seems to be doing quite well here in Christchurch, they just moved into a much larger store, same stuff, just more of it. Re: (Score:2) Jaycar / Soanar / Electus seem to be getting bigger and bigger. Farnell is also a good choice in Australia. They were recently bought out by element14 who now offer free express shipping to major cities. Minimum order is $10 though. Re: (Score:1) Just go to Jaycar (Or buy online @ jaycar.com.au) for all of your electronics needs, going to Dick Smiths is like going to K-Mart for a "big screen" TV or a name brand appliance. As a former employee... 
(Score:5, Insightful) ...this kind of thing was prevalent throughout the company. We would frequently be expected to sell used and returned stock without being given any real freedom in regards to marking it down. This led to a culture of lying to customers, especially in cases where it was not evident that the stock had been used. Of course, used stock would be sold as new to customers all the time. It even extended to returns on products that were in sealed packaging, despite having a clearly posted 14 day no questions asked refund policy we would be expected to tell customers that we wouldn't provide a refund, even if it was something that wasn't functioning as the customer expected (although within manufacturers specs). Had this happen to me (Score:5, Interesting) Well, a friend. Their HDD had died and they asked me what to do. "Buy a new one" says I. Turns out they had no back-ups of pictures etc, so I offered to try a recovery (no promises and I warned them everything could be lost). Anyhoo, the recovery worked with the failed HDD working as a slave to the new one. I picks up loads of deleted pictures and felt rather chuffed with my little self. "You seems to have made loads of friends on that Egypt trip." I say. "Never been to Egypt." they reply. It takes 5 seconds for me to twig that donkey-boy here had done the recovery on the wrong HDD and more stuff was still being found. School reports, banking spreadsheets, tonnes of stuff. Not really what one expects to find on a "new" HDD. Once I had the pictures recovered from the correct drive (and backed-up) my friend took the "new" HDD back to the shop for a bit of a word. Selling hooky equipment to a police officer? Not one of the storekeeper's greatest ideas. And for the previous owner, there was enough information on there for someone to do them serious ill. Luckily for them, my friend made the storekeeper physically destroy the drive (and got a full refund). 
There's no issue with selling 2nd hand kit, just advertise it as such and make sure it's properly wiped first. People are clueless (Score:2) Re: (Score:2) Depends on what you mean by secure erase. sudo dd if=/dev/zero of=/dev/sda With respect, and as long as there is no disk error during the operation (as evidenced by "<correct # bytes> copied" at the end), if you don't think that's a secure erase, you're in la-la land. Definitely secure enough for warez, and probably even secure enough if they were money-and-resources-no-object state or military secrets. Obviously I mean secure enough in terms of function, if not meeting bure Re: (Score:2) I doubt that much of the ID fraud comes from old hard drives. Phishing, cracking web sites, especiall dumpster diving for paper records, are where the ID theft is happening. I'd say that 99.999% of the time, a high level format would be sufficient. Returning Hardware (Score:2) This what they get for useing sales guys as tech's (Score:2) And not have the techs be techs like how geek squad used to be. Now days way to be come a tech or keep the job at a store is to get your numbers of Extended Warranties (some times even having to lie about what it covers), high cost cables , other ad ons, rip off software and more. [consumerist.com] [blogspot.com] [consumerist.com] been there done that. (Score:1) Infected movies? (Score:2) malware-infested pirated movies, causing the unlucky buyer significant data loss Why the hell would you want to execute a movie? The data loss is due to the device being bad, if it has been returned it was likely because of a reason. Re: (Score:2) Because these are computer n00bs. You really expect them to know these things? 
:( Radioshack does this also (Score:1) Nothing New (Score:1) Re:Dick Smith (Score:5, Informative) Re: (Score:1) This is why you don't make a company with your name in it :) Re: (Score:3) Re: (Score:1) Their marketing head thought that meant fridges and things Bravo on their part, because "fridges and things" are what I see every single time I walk past them. Fridges that happen to be extremely thin and small and have moving pictures displayed on their front... Re: (Score:2) Re: (Score:1) Stage 1 - Open up a new chain, supplying electronic parts/books/kits because other stores don't stock such basic materials for the hobby electrician. Stage 2 - start to supplement income with consumer electronics because, hey, a basic part only costs 20c at best. Stage 3 - reduce parts/kits catalogue to one tiny rotary shelf of resistors/transistors/capacitors because "Well that leaves mo Re: (Score:2) Re: (Score:2) What is worse is that it isn't hard to wipe the drives. HDDErase can gnaw through a terabyte drive in 15 minutes to an hour [1], and DBAN might take a long time, but the computer can be set aside while that is going on. Even operating systems like OS X come with very easy to use HDD wiping tools. [1]: HDDErase tells the HDD controller to zero everything out, so because the drive isn't waiting for oodles of zeros from the interface, it can write at its fastest speed.
http://hardware.slashdot.org/story/11/12/23/0257244/major-australian-retailer-accused-of-selling-infected-hard-drives?sdsrc=next
Arduino Internet of Things Part 4: Connecting Bluetooth Nodes to the Raspberry Pi Using Python's Bluepy Library

In the previous three installments of the Arduino Internet of Things series, I walked through building an inexpensive $3 breadboard Arduino (Part 1); I demonstrated how to record temperature and send the data via Bluetooth using the Arduino (Part 2); and I showed how to use interrupt techniques with a motion sensor to create a Bluetooth node that will last for months on a single LiPo battery charge (Part 3). In this tutorial I explain how to connect all of these devices via Bluetooth Low Energy (BLE) and communicate with a Raspberry Pi to prepare the system for integration into the Internet of Things (IoT). At the end of this tutorial, the user's Raspberry Pi will be able to listen for multiple Bluetooth devices and record the data transmitted by each node. I will start by implementing the Python-enabled Bluetooth library, Bluepy, and showing the user how to scan for BLE devices.

Downloading and Testing Bluepy for Python on Raspberry Pi

Bluepy is a Bluetooth Low Energy interface for Python that runs on the Raspberry Pi. The GitHub page can be found here. I have used Bluepy for many different Bluetooth LE projects ranging from iBeacons to multi-node IoT frameworks. It's great for IoT applications because of its integration with the Raspberry Pi and Python. Below is the outline of the install process for Python 3.x via the command window. As of writing, this method worked for the Raspberry Pi 3 Model B+.
sudo apt-get install python3-pip libglib2.0-dev
sudo pip3 install bluepy

from bluepy.btle import Scanner, DefaultDelegate

class ScanDelegate(DefaultDelegate):
    def __init__(self):
        DefaultDelegate.__init__(self)

    def handleDiscovery(self, dev, isNewDev, isNewData):
        if isNewDev:
            print("Discovered device", dev.addr)
        elif isNewData:
            print("Received new data from", dev.addr)

scanner = Scanner().withDelegate(ScanDelegate())
devices = scanner.scan(10.0)

for dev in devices:
    print("Device %s (%s), RSSI=%d dB" % (dev.addr, dev.addrType, dev.rssi))
    for (adtype, desc, value) in dev.getScanData():
        print("  %s = %s" % (desc, value))

After saving the code above, open the command window and navigate to the file's directory. Then, start the file from the command window. The script needs to be run as the superuser (hence the sudo) for permissions reasons.

sudo python3 ble_scanner.py

You should be seeing results that look something like this: The printout above shows each Bluetooth Low Energy device address that the Raspberry Pi was able to read during its scan. The six hex values in 'Device xx:xx:xx:xx:xx:xx' are the unique address for each BLE device. You can also see information about received signal strength (RSSI) and other BLE information. This step is only to test that Bluepy is working correctly on your RPi. In the next step, we will investigate how to read data from specific devices.

Finding BLE Addresses with Arduino Uno

In order to connect the BLE nodes to the Raspberry Pi, we need to ensure that we have the BLE addresses for each node. Another alternative would be to name each device and then conduct a scan for specific keywords, but that is a much more complicated problem that can be explored once the user has a basic understanding of the Bluepy protocol and BLE behavior. To acquire the Bluetooth address of each BLE device, you will need to connect the TX/RX pins of the HM-10 to the TX/RX of the Arduino Uno board (as shown in Figure 1 below, taken from my HM-10 iBeacon tutorial).
Figure 1: Wiring for obtaining the Bluetooth address from CC25xx/HM-10 modules.

In order to communicate with the HM-10 the user will need to utilize the AT command set. AT commands are a specific protocol for instructing the BLE module. First, the letters AT are used, then a + sign, then the specific command. For example, to instruct the module to print its Bluetooth address, you would type into the Arduino Serial window: 'AT+ADDR?' and the response should be 'AT+ADDR:XXXXXXXXXXXX' where the Xs are 12 unique numbers and letters that designate the Bluetooth address of the HM-10 (six hex values). To start, it's always a good idea to type in 'AT', to which you should receive 'OK' as a response. This tells you that the communication line is established between the Arduino and the HM-10. If you receive nothing, try switching the TX/RX wires. If there's still nothing, then try going to the bottom of the window, clicking where it says 'No line ending', and selecting 'Both NL & CR' - this will put a newline and carriage return at the end of each command. This setting works for some CC254x modules that do not have the full HM-10 firmware installed (with the crystal). The 'Both NL & CR' option is selected in my example in Figure 2.

Figure 2: Screenshot showing the result of 'AT+ADDR?' in the Arduino serial monitor.

You should do this for each Bluetooth module and store the values in a list, either in a Python script or elsewhere.

Listening for and Reading from a Specific BLE Module

Now that we have installed Bluepy and acquired the address of each BLE module, we can create a script in Python to read data from any node. Bluepy's official documentation is great for getting started (find it here). Essentially, Bluepy handles a lot of the Bluetooth protocols, but we still need to tell it what to listen for and how to handle the data.
Below is a (slightly) modified version from Bluepy's documentation:

from bluepy import btle
import struct

class MyDelegate(btle.DefaultDelegate):
    def __init__(self, params):
        btle.DefaultDelegate.__init__(self)

    def handleNotification(self, cHandle, data):
        print("handling notification...")
        # print(self)
        # print(cHandle)
        print(struct.unpack("b", data))

p = btle.Peripheral('00:15:87:00:4e:d4')
p.setDelegate(MyDelegate(0))

while True:
    if p.waitForNotifications(1.0):
        continue
    print("waiting...")

The code above tells Python to look for the device at address '00:15:87:00:4E:D4' and wait in intervals of 1 second for data to be received. Until the BLE device wakes and transmits, the Python program should print 'waiting...'. The program will also exit with an error stating that the device disconnected. This is okay in our case because we actually ARE disconnecting the device on the Arduino end. But for testing purposes, this code functions just fine. You should receive data (assuming you're using temperature data from the Arduino IoT node) that looks like the following:

waiting...
waiting...
waiting...
waiting...
waiting...
handling notification...
(23,)
handling notification...
(50,)
waiting...
waiting...
waiting...

Looping and Listening for Multiple Devices

The full coop code can be found here. In short, it uses Python's concurrent.futures library (see here) to parallelize listening and reading for multiple Bluetooth Low Energy devices that are transmitting data. The loop also sends a byte of data back to the BLE module to be read on the device's end. The BLE devices can be programmed to read on the Arduino end and react accordingly. If you're using the Arduino IDE serial port, you can also listen for the transmitted data and verify the two-way communication using only an HM-10/CC2541 module (this is the only one I tested).

Conclusion

At this point, the Raspberry Pi can read multiple BLE nodes and record data.
This means that within the loop, the data can be stored or uploaded to the internet to complete the integration into the Internet of Things. In the final installment of this tutorial series, I will cover one method of connecting to the internet using Python and Google Drive. This will allow the user to send data to the Raspberry Pi via several IoT nodes and then upload the data to the internet.
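The "Looping and Listening for Multiple Devices" step described above can be sketched with concurrent.futures as follows. This is only a rough sketch, not the actual coop code: the node addresses, the listen/worker structure, and the single-signed-byte notification format (matching the struct.unpack("b", ...) call above) are my assumptions.

```python
import concurrent.futures
import struct

# Hypothetical node addresses; replace with the ones found via 'AT+ADDR?'
NODE_ADDRESSES = ["00:15:87:00:4e:d4", "00:15:87:00:4e:d5"]

def decode_reading(data):
    # One signed byte per notification, as in struct.unpack("b", data) above
    return struct.unpack("b", data)[0]

def listen(addr, timeout=1.0, max_waits=10):
    # bluepy is imported inside the function so the rest of the module
    # can be read and exercised on a machine without the library installed
    from bluepy import btle

    readings = []

    class Delegate(btle.DefaultDelegate):
        def handleNotification(self, cHandle, data):
            readings.append(decode_reading(data))

    p = btle.Peripheral(addr)
    p.setDelegate(Delegate())
    for _ in range(max_waits):
        p.waitForNotifications(timeout)
    p.disconnect()
    return addr, readings

def listen_to_all(addresses, worker=listen):
    # One thread per node; each worker blocks on its own peripheral.
    # The worker is injectable so the fan-out can be tested without hardware.
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(addresses)) as ex:
        return dict(ex.map(worker, addresses))
```

Calling listen_to_all(NODE_ADDRESSES) would return a dictionary mapping each node's address to the readings collected during its window; storing or uploading that dictionary is then a plain Python problem, which is where the next installment picks up.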
https://makersportal.com/blog/2018/3/25/arduino-internet-of-things-part-4-connecting-bluetooth-nodes-to-the-raspberry-pi-using-pythons-bluepy-library
The WebAPI framework is full of abstractions. There are controllers, filter providers, model validators, and many other components that form the plumbing of the framework. However, there are only three simple abstractions you need to know on a daily basis to build reasonable software with the framework. - HttpRequestMessage - HttpResponseMessage - HttpMessageHandler The names are self explanatory. I was tempted to say HttpRequestMessage represents an incoming HTTP message, but saying so would mislead someone into thinking the WebAPI is all about server-side programming. I think of HttpRequestMessage as an incoming message on the server, but the HttpClient uses all the same abstractions, and when writing a client the request message is an outgoing message. There is a symmetric beauty in how these 3 core abstractions work on both the server and the client. There are 4 abstractions in the above class diagram because while an HttpMessageHandler takes an HTTP request and returns an HTTP response, it doesn't know how to work in a pipeline. Multiple handlers can form a pipeline, or a chain of responsibility, which is useful for processing a network request. When you host the WebAPI in IIS with ASP.NET, you'll have a pipeline in IIS feeding requests to a pipeline in ASP.NET, which in turn gives control to a WebAPI pipeline. It's pipelines all the way down (to the final handler and back). It's the WebAPI pipeline that ultimately calls an ApiController in a project. You might never need to write a custom message handler, but if you want to work in the pipeline to inspect requests, check cookies, enforce SSL connections, modify headers, or log responses, then those types of scenarios are great for message handlers. The 4th abstraction, DelegatingHandler, is a message handler that already knows how to work in a pipeline, thanks to an intelligent base class and an InnerHandler property. 
The code in a DelegatingHandler derived class doesn't need to do anything special to work in a pipeline except call the base class implementation of the SendAsync method. Here is where things can get slightly confusing because of the name (SendAsync), and the traditional notion of a pipeline. First, let's take a look at a simple message handler built on top of DelegatingHandler (it doesn't do anything useful, except to demonstrate later points):

public class DateObsessedHandler : DelegatingHandler
{
    protected async override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requestDate = request.Headers.Date;
        // do something with the date ...

        var response = await base.SendAsync(request, cancellationToken);

        var responseDate = response.Headers.Date;
        // do something with the date ...

        return response;
    }
}

First, there is nothing pipeline-specific in the code except for deriving from the DelegatingHandler base class and calling base.SendAsync (in other words, you never have to worry about using InnerHandler). Secondly, we tend to think of a message pipeline as unidirectional. A message arrives at one end of the pipeline and passes through various components until it reaches a terminal handler. The pipeline for WebAPI message handlers is bidirectional. Request messages arrive and pass through the pipeline until a handler generates a response, at which point the response message flows back out of the pipeline. The purpose of SendAsync then, isn't just about sending. The purpose is to take a request message, and send a response message. All the code in the method before the call to base.SendAsync is code to process the request message. You can check for required cookies, enforce an SSL connection, or change properties of the request for handlers further down the pipeline (like changing the HTTP method when a particular HTTP header is present).
When the call to base.SendAsync happens, the message continues to flow through the pipeline until a handler generates a response, and the response comes back in the other direction. Symmetry. All the code in the method after the call to base.SendAsync is code to process the response message. You can add additional headers to the response, change the status code, and perform other post-processing activities. Most message handlers will probably only inspect the request or the response, not both. My sample code shows both to illustrate the two-part nature of SendAsync. Once I started thinking of SendAsync as being divided into two parts, and how the messages flow up and down the pipeline, working with the WebAPI became easier. In a future post we'll look at a more realistic message handler that can inspect the request and short-circuit the response. Finally, it's easy to add a DelegatingHandler into the global message pipeline (this is for server code hosted in ASP.NET):

GlobalConfiguration.Configuration
    .MessageHandlers
    .Add(new DateObsessedHandler());

But remember, you can also use the same message handlers on the client with HttpClient and a client pipeline, which is another reason the WebAPI is symmetrically beautiful.
http://odetocode.com/blogs/scott/archive/2013/04/04/webapi-tip-7-beautiful-message-handlers.aspx
24 out of 25 Dentists Say Their Patients Use Relax NG

Eric Promislow, February 23, 2006.

I have too much SGML experience to be much of a fan of XML Schema as a data-definition language. In most cases, the description files are harder for humans to create and understand than those old DTDs, and even the argument that it's easier for machines to process them doesn't hold much water. To process a DTD you needed a true lexical analyzer and parser to handle gotchas like parameter entities, comments, and strings. XML Schema instances might be easier to lex and parse, but you need a symbol table to handle its type system. Many of us at ActiveState like Relax NG, and we use the XML-based "full syntax" form for ease of processing in some of our products and systems. These documents are straightforward to create, read, maintain, and process.

So I was asked which was more popular, and I resorted to that standby, the Google search. I figured it would be sufficient to treat the respective namespace URIs for RelaxNG and XML Schema as markers in published custom schemas. In other words, if someone had developed their own schema and was using it to publish XML documents on the web, they would need to publish the schema as well, and Google would pick it up. Neither of these strings was needed in the instances, just in the schema definition files, so I wouldn't be counting the number of documents created against a certain schema, just the actual schemas.

I had to repeat the queries several times, as the results were fully unexpected, but I always got this answer:

Results 1 - 10 of about 13,900 for ''
Results 1 - 10 of about 586 for ''

Even without trying to detect instances of the compact, equivalent, non-XML syntax for RelaxNG, this suggests that people creating their own XML schemas for documents (which are more likely to be published via the web than database rows) are almost 25 times more likely to use Relax NG than XML Schema. My guess is that the people who used to write DTDs for SGML documents have happily adopted Relax NG, while XML Schema has found a home for more data-intensive applications, such as the data-description component of SOAP's WSDL.

6 comments for 24 out of 25 Dentists Say Their Patients Use Relax NG

I repeated the search using Yahoo! You need to remove the URL punctuation to prevent Yahoo! from returning just a single hit (the page with that URL), so I searched for "www w3 org 2001 XMLSchema" and "www relaxng org ns structure 1.0", with the quotation marks. I then did exactly the same searches with Google, in order to be sure of comparing apples to apples. There were about 1M yhits and 1.7M ghits for the XML Schema string, and 47 yhits (sic!) and 39.4K ghits for the RELAX NG string. Although these results do not strike me as a whole lot more realistic than your original Google search, it does indicate that mere comparative counts aren't something to draw large-scale conclusions from. Submitted by Michael Fuller (not verified) on Mon, 2006-02-27 18:55.

Curious. I just repeated your search on Google and while I got the same result for the RelaxNG query, the W3C schema query numbers were completely different:

Results 1 - 10 of about 13,900 for ''.
Results 1 - 10 of about 71,600 for ''.

Now that ratio is more what (sadly) I would expect.

Apart from wondering what the results would be with that filter on, I used the namespace URLs in my search rather than the terms themselves, thinking they would find public documents that *used* the technologies, as opposed to ones that just talked about it. Obviously XML Schema is discussed and mentioned more. It has the weight of the W3C and a dozen large corporations behind it. RelaxNG is the work of a handful of dedicated individuals who typically let their work speak for itself. But there was also wishful thinking behind my post.
CC-MAIN-2015-18
refinedweb
746
66.67
Asked by: LCD/VFD Display addin

Question - I am in the process of developing a service to run on my WHS which uses a VFD display (a HD44780-based 2x16 line display via the parallel port) to show "status" of my server. Currently it shows a rotation of information:
* Current Date/Time
* Time since last restart
* Network Health (as on the WHS console)
* Storage Space Free
* Backup Status (i.e. Idle, Running, Restoring etc)
* Server name and IP
It's not quite ready to release publicly yet, but I'm posting here for two reasons - one, to gauge interest, and two, to ask if there's any other information you think might be worth adding. (I'm already looking to see if it's possible to retrieve the system temperature without any 3rd party software) Thursday, July 12, 2007 8:58 AM

All replies

I'd certainly be interested; the only things I can think people might want are number of logged-on users and processor load. Cheers, Gordon Thursday, July 12, 2007 9:16 AM

Sounds good. Here are a couple of suggestions:
- Include the name of the computer being backed up if a backup is in progress
- Cycle through each computer name and give health status/date of last backup
- Select which information to display, cycle time, etc (i.e. customize the time to show each info, a check-box style list so if the user didn't want server IP to come up, etc)
Would this add-in require a specific VFD or would it work on different ones? It's not that I have a specific one already (I've been considering getting one for my WHS so I could do what you are doing - albeit with a smaller list than what you have already achieved), but I'd like to know which ones it would work with so I may get one to work with it (I live in Australia, so getting the exact same hardware that is available in the US is sometimes touch and go). If I can get a VFD that's compatible, I'd definitely be interested in your add-in. Hopefully my suggestions give you some ideas, Davo. Thursday, July 12, 2007 9:26 AM

The VFD I'm using is a Silverstone SST-FP54 unit and from some quick websearching, it does appear to be available in Australia - however any 2-line HD44780-based LCD or VFD display should work fine. (I only have the one panel, so can only be absolutely sure it works with that one!) Panels which aren't HD44780 and connected via the parallel port (in SPP mode) won't work - and as I don't have access to other panels, that is unlikely to change. Thanks for the suggestions - I think a control panel for the display is a good idea, and is something I was planning to add (once I've figured out how to pass the settings through to the service), as it'll be necessary to specify what size of panel you're using anyway. Regards, Tom Thursday, July 12, 2007 9:40 AM

Thanks for the reply and info on the VFD. Looks like I might have to get one of them; the info you have already got it to display would be handy, and if you can add stuff like processor loading as well... Cheers, Davo Thursday, July 12, 2007 9:52 AM

Sounds great! Good idea! Thursday, July 12, 2007 11:47 AM

Can I recommend an *off the shelf* USB panel that I've used in the past... something to consider for future compatibility... and easy enough that it can be installed by a non-tech user... Not overly expensive these days either: Pertelian External LCD Display (at least in the US you can find it), July 12, 2007 1:01 PM

Hi Tom, that sounds like a good add-in. I'll give it a go when it's ready. Thursday, July 12, 2007 3:38 PM

This is a good idea and I would be interested in having such a device. Thursday, July 12, 2007 3:47 PM

The box I'm currently using for my WHS is a Silverstone case with a built-in VFD display also, so I would be very interested in your add-in. The only suggestion I can think of is if a critical notification is received, to display that. Thanks. Rob Saturday, July 14, 2007 1:40 AM

Both MBM 5 and LCD Smartie will work with WHS and they are both free. They have multiple settings that monitor a large variety of things. Smitty Saturday, July 14, 2007 1:07 PM

MotherBoard Monitor is no longer maintained. The author found it increasingly frustrating to try to pry information out of motherboard manufacturers to keep it up to date. It may still work on your current motherboard, but don't count on it working on the next one you buy. You would be better off with something like SpeedFan, which has a smaller list of things you can monitor, but which is still maintained by the author. Saturday, July 14, 2007 1:14 PM (Moderator)

Thanks, I will have to look into that program. My bad, you are correct that almost all new motherboards (i.e. Intel 915 and above) are probably not supported by MBM 5, but most below that should work OK. LCD Smartie though is pretty current, with new LCD displays being added all the time; case in point is the VLSystems line - they released a driver for it whereas the manufacturer pretty much dropped it after Jan of last year. Smitty Saturday, July 14, 2007 1:18 PM

Indeed, LCDSmartie does appear to work on WHS - I've used it in the past (when my VFD was in my "main" desktop machine). However... I've not figured out how to run it as a service, so it'll work without a user logged on to the WHS. Any pointers to this, and I may not continue development if I'm just reinventing what LCDSmartie can do. If anyone can find out how to run LCDSmartie as a service on a WHS box (and provide some instructions!), then I'll happily try to port the work I've already done into an LCDSmartie plugin DLL - as there doesn't appear to be a DLL for WHS yet. Saturday, July 14, 2007 2:13 PM

Marcel's "anyserviceinstaller" is able to run LCDSmartie as a service. I'll start porting my LCD service (well, the bits of it that aren't done far better already by Smartie) to hopefully be able to display things like server health, backup status, and disk space from managed volumes. I'll post again when I've got something to show for my efforts! Saturday, July 14, 2007 2:57 PM

The initial version of my LCD_Smartie plugin is now available - at the moment it is only capable of outputting the current backup status - but I plan to enhance it in the future... It can be downloaded from my website:, July 22, 2007 9:49 AM

Saturday, July 28, 2007 6:35 AM

CPU Load is something LCDSmartie can already do on its own. As I've said, given that LCD Smartie can be installed, I'm simply developing a small plugin for that to provide some WHS-specific information (currently only the simple backup status, but I'm intending to add server-health too). Saturday, July 28, 2007 1:31 PM

Well, it seems that my LCDSmartie plugin works fine with LCD Smartie running on the console, but for some reason when running as a service it can't open the plugin. I have no idea why this is; I've tried permissions etc. Sorry folks, looks like this is a non-starter too. If anyone has a solution to get LCDSmartie working properly as a service, please let me know - but for now, I don't have time to try to figure this out. Sunday, July 29, 2007 2:46 PM

Forgive the delay in answering this - I've not been able to get the Matrix display out of another computer to test... To do this you need the 2003 resource kit from MS for two programs: INSTSRV.EXE and SRVANY.EXE. For this I assume they are in C:\TEST. The only issue I have found is running the application from a normal user reports the serial port in use, as the service has it - but you can change the config. Note I am using version 4.2.
- Open up an MS-DOS command prompt.
- CD \TEST
- Type the following command: INSTSRV LCDSmartie C:\TEST\SRVANY.EXE
- Open up the Registry Editor (Click on the Start Button, select Run, and type REGEDIT) Sunday, August 19, 2007 9:56 PM
- Close the Registry Editor

Hi, cool idea. I am interested in this project too... though this might help to read sys temps etc. It works on my laptop, mini-ITX board and desktop computers so I think it will work on most. Hope it helps. Rich Tuesday, August 21, 2007 7:50 AM

Thanks for the suggestion as to how to get LCDSmartie running as a service - when I get a chance I'll give it a try... Do you know if that'll solve the problem with LCDSmartie not handling plugins properly? Tuesday, August 21, 2007 5:54 PM

I've now had a chance to test this, and it doesn't appear to have solved the problem. LCD Smartie still fails to load the plugin when it's running as a service - but it's fine when it's on the console. I've not got time to do much tinkering now anyway... Monday, September 3, 2007 8:05 PM

Did you copy the interop and other required DLLs to the plugins directory? Rob Tuesday, September 4, 2007 6:40 PM

Yeah, the problem was that LCD Smartie couldn't run *any* third-party addins when running as a service - the problem was not specific to my addin. However, I've now installed the server with the RTM build of WHS and all seems OK. Now to find some time to resume development of this plugin... Tom Saturday, November 3, 2007 3:12 PM

I'm interested in hooking up a CF display but would rather avoid using LCD Smartie and keep things tidier. I did have a go at programming for an LCD recently, and controlling the display is pretty simple these days with the System.IO.Ports namespace, roughly ripping the basic approach from, November 7, 2007 11:34 AM

Your project sounds great. For users of the normal variety it wouldn't be that much help, but to someone like myself that monitors everything it is a great idea. Another item you could add to it might be a hard-drive monitor - something that displays which hard drive can be removed, drive activity, and even when a drive is becoming full. Monday, January 14, 2008 1:19 AM

Unfortunately, due to work commitments, this project is currently on hold for the foreseeable future. Sorry guys. Tom Monday, January 14, 2008 7:56 AM

Hi, do you know if the source code is available for this utility? Cheers Monday, January 14, 2008 11:48 AM

RBowdenJr, I don't have the source code to my original version (lost in a HDD crash) but I can probably find the source code for the current vb.net LCDSmartie plugin if you want to continue development on it. If you let me know your email address (use the contact form on my website:) I'll happily let you have the "work in progress" code that I've produced so far (to be honest, there's not that much of it!) Tom Monday, January 14, 2008 11:53 AM

Hi, yeah that would be good; it's something I am interested in and looks like a good little project. Cheers Rich Monday, January 14, 2008 12:07 PM

I am very interested in this project. I have an old Matrix Orbital 2x20 serial LCD. I have LCDSmartie installed as a service. It runs at startup and displays Internal IP, CPU, Disk, Memory, and a few other things. I can't get your plugin to work on the service, but it works great interactively. Hopefully you can get this fixed, as I have a few other plug-ins that seem to be working; the service runs on the administrator account, not the SYSTEM account. I have also played with some plugins to get the External (WEB) IP although those links seem to be broken. So I can't figure out a way to CHECK for internet connectivity; if I could, I'd flash/blink the backlight on the LCD as a warning. I would also like to be able to access the NETWORK STATUS text that all the PCs have on the tray icon. Maybe flash the backlight if it goes critical. Let me know how progress is coming. Monday, March 3, 2008 4:50 PM

Sorry, there's been no further progress - and work is such that I simply don't have time to look into it. I had the same problems you've found: could get it working interactively, but couldn't get LCDSmartie to load *any* plugins when running as a service. I'm willing to provide the source code as-it-stands (it's very rough) to anyone who wants to try, but I'm afraid I'm not in a position to continue development of this in the foreseeable future. Sorry everyone. Tom Monday, March 3, 2008 8:51 PM
CC-MAIN-2020-50
refinedweb
2,277
65.66
How to Read CSV file in Java In past articles, we studied how to read an excel file in Java. In this article, we will discuss how to read a CSV file in Java using different methods. The data is multiplying with time; all such data is kept in these CSV files. There is a need to understand the techniques to read these files using Java. We will learn all such methods in this article. So, let’s start discussing this topic. Keeping you updated with latest technology trends, Join TechVidvan on Telegram What is CSV file? The CSV file stands for the Comma-Separated Values file. It is a simple plain-text file format that stores tabular data in columns in simple text forms, such as a spreadsheet or database, and splits it by a separator. The separator used to split the data usually is commas (,). We can import files in the CSV format and export them using programs like Microsoft Office and Excel, which store data in tables. The CSV file uses a delimiter to identify and separate different data tokens in a file. The use of the CSV file format is when we move tabular data between programs that natively operate on incompatible formats. Creating CSV File We can create a CSV file in the following two ways: - Using Microsoft Excel - Using Notepad editor 1. Using Microsoft Excel 1: Open Microsoft Excel. 2: Write the data into the excel file: 3: Save the excel file with .csv extension as shown below: 2. Using Notepad We can also create a CSV file using Notepad with the following steps: Step 1: Open Notepad editor. Step 2: Write some data into file separated by comma (,). For example: Rajat, Jain, 26, 999967439, Delhi Step 3: Save the file with .csv extension. We have created the following file. Ways to read CSV file in Java There are the following four ways to read a CSV file in Java: 1. Java Scanner class The Scanner class of Java provides various methods by which we can read a CSV file. It provides a constructor that produces values scanned from the specified CSV file. 
This class also breaks the data in the form of tokens. There is a delimiter pattern, which, by default, matches white space. Then, using different types of next() methods, we can convert the resulting tokens. Code to read a CSV file using a Scanner class: import java.io. * ; import java.util.Scanner; public class CSVReaderDemo { public static void main(String[] args) throws Exception { Scanner sc = new Scanner(new File("C:\\Users\\Dell\\Desktop\\csvDemo.csv")); //parsing a CSV file into the constructor of Scanner class sc.useDelimiter(","); //setting comma as delimiter pattern while (sc.hasNext()) { System.out.print(sc.next()); } sc.close(); //closes the scanner } } 2. Java String.split() method The String.split() of Java identifies the delimiter and split the rows into tokens. The Syntax of this method is: public String[] split(String regex) Code to read a CSV file using String.split() method: import java.io. * ; public class CSVReader { public static final String delimiter = ","; public static void read(String csvFile) { try { File file = new File(csvFile); FileReader fr = new FileReader(file); BufferedReader br = new BufferedReader(fr); String line = " "; String[] tempArr; while ((line = br.readLine()) != null) { tempArr = line.split(delimiter); for (String tempStr: tempArr) { System.out.print(tempStr + " "); } System.out.println(); } br.close(); } catch(IOException ioe) { ioe.printStackTrace(); } } public static void main(String[] args) { //csv file to read String csvFile = "C:\\Users\\Dell\\Desktop\\csvDemo.csv"; CSVReader.read(csvFile); } } 3. 
Using the BufferedReader class in Java

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CSV {
    public static void main(String[] args) {
        String line = "";
        String splitBy = ",";
        try {
            // parsing a CSV file into BufferedReader class constructor
            BufferedReader br = new BufferedReader(new FileReader("C:\\Users\\Dell\\Desktop\\csvDemo.csv"));
            while ((line = br.readLine()) != null) { // returns a Boolean value
                String[] employee = line.split(splitBy); // use comma as separator
                System.out.println("Emp[First Name=" + employee[1] + ", Last Name=" + employee[2]
                        + ", Contact=" + employee[3] + ", City= " + employee[4] + "]");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output:

Emp[First Name=Joseph, Last Name=Patil, Contact= 4645968519, City= Hoogly]
Emp[First Name=Andrew, Last Name=Mukherjee, Contact= 9067215139, City= Burmingham]
Emp[First Name=Varun, Last Name=Patel, Contact= 2503595381, City= Sindh]
Emp[First Name=Michael, Last Name=Baldwin, Contact= 7631068844, City= Kentucky]
Emp[First Name=Emmanuel, Last Name=Agarwal, Contact= 3538037535, City= Nice]
Emp[First Name=Sumeet, Last Name=Patil, Contact= 6871075256, City= Aukland]
Emp[First Name=Pranab, Last Name=Kulkarni, Contact= 7982264359, City= Hubli]
Emp[First Name=Rajeev, Last Name=Singh, Contact= 3258837884, City= Patiala]
Emp[First Name=Sujay, Last Name=Kapoor, Contact= 5127263160, City= Mumbai]

4. Using OpenCSV API in Java

OpenCSV API is a third-party API. This API provides standard libraries to read various versions of the CSV file. The OpenCSV API also offers better control to handle CSV files. This library can also read the Tab-Delimited File or TDF file format.

Features of Java OpenCSV API

- Can read any number of values per line.
- Avoids commas in quoted elements.
- Can handle entries that span multiple lines.

We use the CSVReader class to read a CSV file. The class CSVReader provides a constructor to parse a CSV file.
Steps to read a Java CSV file in Eclipse:

Step 1: Create a class file with the name CSVReaderDemo and write the following code.
Step 2: Create a lib folder in the project.
Step 3: Download opencsv-3.8.jar.
Step 4: Copy the opencsv-3.8.jar and paste it into the lib folder.
Step 5: Run the program.

Code to read a CSV file using the OpenCSV API:

import java.io.FileReader;
import java.io.IOException;
import com.opencsv.CSVReader;

public class CSVReaderDemo {
    public static void main(String[] args) {
        CSVReader reader = null;
        try {
            // parsing a CSV file into CSVReader class constructor
            reader = new CSVReader(new FileReader("C:\\Users\\Dell\\Desktop\\csvDemo.csv"));
            String[] nextLine;
            // reads one line at a time
            while ((nextLine = reader.readNext()) != null) {
                for (String token : nextLine) {
                    System.out.print(token + " ");
                }
                System.out.println();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Reading a Java CSV file with a different separator

We can also read a file using a different delimiter. In the following CSV file, we have created a CSV file using a semicolon (;) to separate tokens.

Code to read a CSV file with a different delimiter:

import java.io.FileReader;
import java.io.IOException;
import com.opencsv.CSVReader;

public class CSV {
    public static void main(String[] args) {
        CSVReader reader = null;
        try {
            // pass the semicolon as the separator character
            reader = new CSVReader(new FileReader("C:\\Users\\Dell\\Desktop\\csvDemo.csv"), ';');
            String[] nextLine;
            // read one line at a time
            while ((nextLine = reader.readNext()) != null) {
                for (String token : nextLine) {
                    System.out.println(token);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Conclusion

In this Java tutorial, we have learned different ways to read a CSV file in Java. We can use any one of these methods to read a Java CSV file. The first three approaches use only simple core Java components, while the OpenCSV API is a third-party library that offers better control over CSV handling.
https://techvidvan.com/tutorials/read-csv-file-in-java/
Introduction to AIO

Linux asynchronous I/O is a relatively recent addition to the Linux kernel. It's a standard feature of the 2.6 kernel, but you can find patches for 2.4.

I/O models

Before digging into the AIO API, let's explore the different I/O models that are available under Linux. This isn't intended as an exhaustive review, but rather aims to cover the most common models to illustrate their differences from asynchronous I/O. Figure 1 shows synchronous and asynchronous models, as well as blocking and non-blocking models.

Figure 1. Simplified matrix of basic Linux I/O models

Each of these I/O models has usage patterns that are advantageous for particular applications. This section briefly explores each one.

Synchronous blocking I/O

One of the most common models is the synchronous blocking I/O model. In this model, the user-space application performs a system call that results in the application blocking. This means that the application blocks until the system call is complete (data transferred or error). The calling application is in a state where it consumes no CPU and simply awaits the response, so it is efficient from a processing perspective.

Figure 2 illustrates the traditional blocking I/O model, which is also the most common model used in applications today. Its behaviors are well understood, and its usage is efficient for typical applications. When the read system call is invoked, the application blocks and the context switches to the kernel. The read is then initiated, and when the response returns (from the device from which you're reading), the data is moved to the user-space buffer. Then the application is unblocked (and the read call returns).

Figure 2. Typical flow of the synchronous blocking I/O model

From the application's perspective, the read call spans a long duration. But, in fact, the application is actually blocked while the read is multiplexed with other work in the kernel.
Synchronous non-blocking I/O A less efficient variant of synchronous blocking is synchronous non-blocking I/O. In this model, a device is opened as non-blocking. This means that instead of completing an I/O immediately, a read may return an error code indicating that the command could not be immediately satisfied ( EAGAIN or EWOULDBLOCK), as shown in Figure 3. Figure 3. Typical flow of the synchronous non-blocking I/O model The implication of non-blocking is that an I/O command may not be satisfied immediately, requiring that the application make numerous calls to await completion. This can be extremely inefficient because in many cases the application must busy-wait until the data is available or attempt to do other work while the command is performed in the kernel. As also shown in Figure 3, this method can introduce latency in the I/O because any gap between the data becoming available in the kernel and the user calling read to return it can reduce the overall data throughput. Asynchronous blocking I/O Another blocking paradigm is non-blocking I/O with blocking notifications. In this model, non-blocking I/O is configured, and then the blocking select system call is used to determine when there's any activity for an I/O descriptor. What makes the select call interesting is that it can be used to provide notification for not just one descriptor, but many. For each descriptor, you can request notification of the descriptor's ability to write data, availability of read data, and also whether an error has occurred. Figure 4. Typical flow of the asynchronous blocking I/O model (select) The primary issue with the select call is that it's not very efficient. While it's a convenient model for asynchronous notification, its use for high-performance I/O is not advised. Asynchronous non-blocking I/O (AIO) Finally, the asynchronous non-blocking I/O model is one of overlapping processing with I/O. 
The read request returns immediately, indicating that the read was successfully initiated. The application can then perform other processing while the background read operation completes. When the read response arrives, a signal or a thread-based callback can be generated to complete the I/O transaction. Figure 5. Typical flow of the asynchronous non-blocking I/O model The ability to overlap computation and I/O processing in a single process for potentially multiple I/O requests exploits the gap between processing speed and I/O speed. While one or more slow I/O requests are pending, the CPU can perform other tasks or, more commonly, operate on already completed I/Os while other I/Os are initiated. The next section examines this model further, explores the API, and then demonstrates a number of the commands. Motivation for asynchronous I/O From the previous taxonomy of I/O models, you can see the motivation for AIO. The blocking models require the initiating application to block when the I/O has started. This means that it isn't possible to overlap processing and I/O at the same time. The synchronous non-blocking model allows overlap of processing and I/O, but it requires that the application check the status of the I/O on a recurring basis. This leaves asynchronous non-blocking I/O, which permits overlap of processing and I/O, including notification of I/O completion. The functionality provided by the select function (asynchronous blocking I/O) is similar to AIO, except that it still blocks. However, it blocks on notifications instead of the I/O call. Introduction to AIO for Linux This section explores the asynchronous I/O model for Linux to help you understand how to apply it in your applications. In a traditional I/O model, there is an I/O channel that is identified by a unique handle. In UNIX®, these are file descriptors (which are the same for files, pipes, sockets, and so on). 
In blocking I/O, you initiate a transfer and the system call returns when it's complete or an error has occurred. In asynchronous non-blocking I/O, you have the ability to initiate multiple transfers at the same time. This requires a unique context for each transfer so you can identify it when it completes. In AIO, this is an aiocb (AIO I/O Control Block) structure. This structure contains all of the information about a transfer, including a user buffer for data. When notification for an I/O occurs (called a completion), the aiocb structure is provided to uniquely identify the completed I/O. The API demonstration shows how to do this.

AIO API

The AIO interface API is quite simple, but it provides the necessary functions for data transfer with a couple of different notification models. Table 1 shows the AIO interface functions, which are further explained later in this section.

Table 1. AIO interface APIs

Each of these API functions uses the aiocb structure for initiating or checking. This structure has a number of elements, but Listing 1 shows only the ones that you'll need to (or can) use.

Listing 1. The aiocb structure showing the relevant fields

struct aiocb {

  int aio_fildes;               // File Descriptor
  int aio_lio_opcode;           // Valid only for lio_listio (r/w/nop)
  volatile void *aio_buf;       // Data Buffer
  size_t aio_nbytes;            // Number of Bytes in Data Buffer
  struct sigevent aio_sigevent; // Notification Structure

  /* Internal fields */
  ...

};

The sigevent structure tells AIO what to do when the I/O completes. You'll explore this structure in the AIO demonstration. Now I'll show you how the individual API functions for AIO work and how you can use them.

aio_read

The aio_read function requests an asynchronous read operation for a valid file descriptor. The file descriptor can represent a file, a socket, or even a pipe.
The aio_read function has the following prototype:

int aio_read( struct aiocb *aiocbp );

The aio_read function returns immediately after the request has been queued. The return value is zero on success or -1 on error, where errno is defined. To perform a read, the application must initialize the aiocb structure. The following short example illustrates filling in the aiocb request structure and using aio_read to perform an asynchronous read request (ignore notification for now). It also shows use of the aio_error function, but I'll explain that later.

Listing 2. Sample code for an asynchronous read with aio_read

#include <aio.h>

...

  int fd, ret;
  struct aiocb my_aiocb;

  fd = open( "file.txt", O_RDONLY );
  if (fd < 0) perror("open");

  /* Zero out the aiocb structure (recommended) */
  bzero( (char *)&my_aiocb, sizeof(struct aiocb) );

  /* Allocate a data buffer for the aiocb request */
  my_aiocb.aio_buf = malloc(BUFSIZE+1);
  if (!my_aiocb.aio_buf) perror("malloc");

  /* Initialize the necessary fields in the aiocb */
  my_aiocb.aio_fildes = fd;
  my_aiocb.aio_nbytes = BUFSIZE;
  my_aiocb.aio_offset = 0;

  ret = aio_read( &my_aiocb );
  if (ret < 0) perror("aio_read");

  while ( aio_error( &my_aiocb ) == EINPROGRESS ) ;

  if ((ret = aio_return( &my_aiocb )) > 0) {
    /* got ret bytes on the read */
  } else {
    /* read failed, consult errno */
  }

In Listing 2, after the file from which you're reading data is opened, you zero out your aiocb structure, and then allocate a data buffer. The reference to the data buffer is placed into aio_buf. Subsequently, you initialize the size of the buffer into aio_nbytes. The aio_offset is set to zero (the first offset in the file). You set the file descriptor from which you're reading into aio_fildes. After these fields are set, you call aio_read to request the read. You can then make a call to aio_error to determine the status of the aio_read. As long as the status is EINPROGRESS, you busy-wait until the status changes.
At this point, your request has either succeeded or failed. Note the similarities to reading from the file with the standard library functions. In addition to the asynchronous nature of aio_read, another difference is setting the offset for the read. In a typical read call, the offset is maintained for you in the file descriptor context. For each read, the offset is updated so that subsequent reads address the next block of data. This isn't possible with asynchronous I/O because you can perform many read requests simultaneously, so you must specify the offset for each particular read request.

aio_error

The aio_error function is used to determine the status of a request. Its prototype is:

int aio_error( struct aiocb *aiocbp );

This function can return the following:

- EINPROGRESS, indicating the request has not yet completed
- ECANCELED, indicating the request was cancelled by the application
- -1, indicating that an error occurred, for which you can consult errno

aio_return

Another difference between asynchronous I/O and standard blocking I/O is that you don't have immediate access to the return status of your function because you're not blocking on the read call. In a standard read call, the return status is provided upon return of the function. With asynchronous I/O, you use the aio_return function. This function has the following prototype:

ssize_t aio_return( struct aiocb *aiocbp );

This function is called only after the aio_error call has determined that your request has completed (either successfully or in error). The return value of aio_return is identical to that of the read or write system call in a synchronous context (number of bytes transferred or -1 for error).

aio_write

The aio_write function is used to request an asynchronous write.
Its function prototype is:

int aio_write( struct aiocb *aiocbp );

The aio_write function returns immediately, indicating that the request has been enqueued (with a return of 0 on success and -1 on failure, with errno properly set). This is similar to the read system call, but one behavior difference is worth noting. Recall that the offset to be used is important with the read call. However, with write, the offset is important only if used in a file context where the O_APPEND option is not set. If O_APPEND is set, then the offset is ignored and the data is appended to the end of the file. Otherwise, the aio_offset field determines the offset at which the data is written to the file.

aio_suspend

You can use the aio_suspend function to suspend (or block) the calling process until an asynchronous I/O request has completed, a signal is raised, or an optional timeout occurs. The caller provides a list of aiocb references for which the completion of at least one will cause aio_suspend to return. The function prototype for aio_suspend is:

int aio_suspend( const struct aiocb *const cblist[],
                 int n, const struct timespec *timeout );

Using aio_suspend is quite simple. A list of aiocb references is provided. If any of them complete, the call returns with 0. Otherwise, -1 is returned, indicating an error occurred. See Listing 3.

Listing 3. Using the aio_suspend function to block on asynchronous I/Os

struct aiocb *cblist[MAX_LIST];

/* Clear the list. */
bzero( (char *)cblist, sizeof(cblist) );

/* Load one or more references into the list */
cblist[0] = &my_aiocb;

ret = aio_read( &my_aiocb );

ret = aio_suspend( cblist, MAX_LIST, NULL );

Note that the second argument of aio_suspend is the number of elements in cblist, not the number of aiocb references. Any NULL element in the cblist is ignored by aio_suspend. If a timeout is provided to aio_suspend and the timeout occurs, then -1 is returned and errno contains EAGAIN.
aio_cancel

The aio_cancel function allows you to cancel one or all outstanding I/O requests for a given file descriptor. Its prototype is:

int aio_cancel( int fd, struct aiocb *aiocbp );

To cancel a single request, provide the file descriptor and the aiocb reference. If the request is successfully cancelled, the function returns AIO_CANCELED. If the request completes, the function returns AIO_NOTCANCELED.

To cancel all requests for a given file descriptor, provide that file descriptor and a NULL reference for aiocbp. The function returns AIO_CANCELED if all requests are canceled, AIO_NOTCANCELED if at least one request couldn't be canceled, and AIO_ALLDONE if all of the requests had already completed. You can then evaluate each individual AIO request using aio_error. If the request was canceled, aio_error returns -1, and errno is set to ECANCELED.

lio_listio

Finally, AIO provides a way to initiate multiple transfers at the same time using the lio_listio API function. This function is important because it means you can start lots of I/Os in the context of a single system call (meaning one kernel context switch). This is great from a performance perspective, so it's worth exploring. The lio_listio API function has the following prototype:

int lio_listio( int mode, struct aiocb *list[], int nent,
                struct sigevent *sig );

The mode argument can be LIO_WAIT or LIO_NOWAIT. LIO_WAIT blocks the call until all I/O has completed. LIO_NOWAIT returns after the operations have been queued. The list is a list of aiocb references, with the maximum number of elements defined by nent. Note that elements of list may be NULL, which lio_listio ignores. The sigevent reference defines the method for signal notification when all I/O is complete.

The request for lio_listio is slightly different than the typical read or write request in that the operation must be specified. This is illustrated in Listing 4.

Listing 4.
Using the lio_listio function to initiate a list of requests

struct aiocb aiocb1, aiocb2;
struct aiocb *list[MAX_LIST];

...

/* Prepare the first aiocb */
aiocb1.aio_fildes = fd;
aiocb1.aio_buf = malloc( BUFSIZE+1 );
aiocb1.aio_nbytes = BUFSIZE;
aiocb1.aio_offset = next_offset;
aiocb1.aio_lio_opcode = LIO_READ;

...

bzero( (char *)list, sizeof(list) );
list[0] = &aiocb1;
list[1] = &aiocb2;

ret = lio_listio( LIO_WAIT, list, MAX_LIST, NULL );

The read operation is noted in the aio_lio_opcode field with LIO_READ. For a write operation, LIO_WRITE is used, but LIO_NOP is also valid for no operation.

AIO notifications

Now that you've seen the AIO functions that are available, this section digs into the methods that you can use for asynchronous notification. I'll explore asynchronous notification through signals and function callbacks.

Asynchronous notification with signals

The use of signals for interprocess communication (IPC) is a traditional mechanism in UNIX and is also supported by AIO. In this paradigm, the application defines a signal handler that is invoked when a specified signal occurs. The application then specifies that an asynchronous request will raise a signal when the request has completed. As part of the signal context, the particular aiocb request is provided to keep track of multiple potentially outstanding requests. Listing 5 demonstrates this notification method.

Listing 5. Using signals as notification for AIO requests

void setup_io( ... )
{
  int fd;
  struct sigaction sig_act;
  struct aiocb my_aiocb;

  ...

  /* Set up the signal handler */
  sigemptyset(&sig_act.sa_mask);
  sig_act.sa_flags = SA_SIGINFO;
  sig_act.sa_sigaction = aio_completion_handler;  /* the Signal Handler */

  my_aiocb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
  my_aiocb.aio_sigevent.sigev_signo = SIGIO;
  my_aiocb.aio_sigevent.sigev_value.sival_ptr = &my_aiocb;

  /* Map the Signal to the Signal Handler */
  ret = sigaction( SIGIO, &sig_act, NULL );

  ...
  ret = aio_read( &my_aiocb );
}

void aio_completion_handler( int signo, siginfo_t *info, void *context )
{
  struct aiocb *req;

  /* Ensure it's our signal */
  if (info->si_signo == SIGIO) {

    req = (struct aiocb *)info->si_value.sival_ptr;

    /* Did the request complete? */
    if (aio_error( req ) == 0) {

      /* Request completed successfully, get the return status */
      ret = aio_return( req );

    }
  }

  return;
}

In Listing 5, you set up your signal handler to catch the SIGIO signal in the aio_completion_handler function. You then initialize the aio_sigevent structure to raise SIGIO for notification (which is specified via the SIGEV_SIGNAL definition in sigev_notify). When your read completes, your signal handler extracts the particular aiocb from the signal's si_value structure and checks the error status and return status to determine I/O completion.

For performance, the completion handler is an ideal spot to continue the I/O by requesting the next asynchronous transfer. In this way, when completion of one transfer has completed, you immediately start the next.

Asynchronous notification with callbacks

An alternative notification mechanism is the system callback. Instead of raising a signal for notification, this mechanism calls a function in user-space for notification. You initialize the aiocb reference into the sigevent structure to uniquely identify the particular request being completed; see Listing 6.

Listing 6. Using thread callback notification for AIO requests

void setup_io( ... )
{
  int fd;
  struct aiocb my_aiocb;

  ...

  /* a thread callback */
  my_aiocb.aio_sigevent.sigev_notify = SIGEV_THREAD;
  my_aiocb.aio_sigevent.sigev_notify_function = aio_completion_handler;
  my_aiocb.aio_sigevent.sigev_notify_attributes = NULL;
  my_aiocb.aio_sigevent.sigev_value.sival_ptr = &my_aiocb;

  ...

  ret = aio_read( &my_aiocb );
}

void aio_completion_handler( sigval_t sigval )
{
  struct aiocb *req;

  req = (struct aiocb *)sigval.sival_ptr;

  /* Did the request complete?
 */
  if (aio_error( req ) == 0) {

    /* Request completed successfully, get the return status */
    ret = aio_return( req );

  }

  return;
}

In Listing 6, after creating your aiocb request, you request a thread callback using SIGEV_THREAD for the notification method. You then specify the particular notification handler and load the context to be passed to the handler (in this case, a reference to the aiocb request itself). In the handler, you simply cast the incoming sigval pointer and use the AIO functions to validate the completion of the request.

System tuning for AIO

The proc file system contains two virtual files that can be tuned for asynchronous I/O performance:

- The /proc/sys/fs/aio-nr file provides the current number of system-wide asynchronous I/O requests.
- The /proc/sys/fs/aio-max-nr file is the maximum number of allowable concurrent requests. The maximum is commonly 64KB, which is adequate for most applications.

Summary

Using asynchronous I/O can help you build faster and more efficient I/O applications. If your application can overlap processing and I/O, then AIO can help you build an application that more efficiently uses the CPU resources available to you. While this I/O model differs from the traditional blocking patterns found in most Linux applications, the asynchronous notification model is conceptually simple and can simplify your design.

Resources

Learn

- The POSIX.1b implementation explains the internal details of AIO from the GNU Library perspective.
- Realtime Support in Linux explains more about AIO and a number of real-time extensions, from scheduling and POSIX I/O to POSIX threads and high resolution timers (HRT).
- In the Design Notes for the 2.5 integration, learn about the design and implementation of AIO in Linux.
http://www.ibm.com/developerworks/linux/library/l-async/
NAME
     prnio - generic printer interface

SYNOPSIS
     #include <sys/prnio.h>

DESCRIPTION
     The prnio generic printer interface defines ioctl commands and data
     structures for printer device drivers. prnio defines and provides
     facilities for five basic phases of the printing process:

     o Identification - Retrieve device information/attributes
     o Setup - Set device attributes
     o Transfer - Transfer data to or from the device
     o Cleanup - Transfer phase conclusion
     o Abort - Transfer phase interruption

     PRNIOC_GET_IFCAP
          The application can retrieve printer interface capabilities using
          this command. The ioctl(2) argument is a pointer to uint_t, a bit
          field representing a set of properties and services provided by a
          printer driver. A set bit means a supported capability. The
          following values are defined:

          PRN_BIDI - When this bit is set, the interface operates in a
          bidirectional mode, instead of forward-only mode.

          PRN_HOTPLUG - If this bit is set, the interface allows device
          hot-plugging.

          PRN_TIMEOUTS - The peripheral may stall during the transfer phase
          and the driver can time out and return from write(2) and read(2),
          returning the number of bytes that have been transferred. If
          PRN_TIMEOUTS is set, the driver supports this functionality and
          the timeout values can be retrieved and modified via the
          PRNIOC_GET_TIMEOUTS and PRNIOC_SET_TIMEOUTS ioctls. Otherwise,
          applications can implement their own timeouts and abort phase.

          PRN_STREAMS - This bit impacts the application abort phase
          behaviour. If the device claimed the PRN_STREAMS capability, the
          application must issue an I_FLUSH ioctl(2) before close(2) to
          dismiss the untransferred data. Only STREAMS drivers can support
          this capability.

     PRNIOC_SET_IFCAP
          This ioctl can be used to change interface capabilities. The
          argument is a pointer to a uint_t bit field that is described in
          detail in the PRNIOC_GET_IFCAP section. Capabilities should be
          set one at a time; otherwise the command will return EINVAL. To
          check the resulting capabilities, this command should be followed
          by PRNIOC_GET_IFCAP.

     PRNIOC_GET_IFINFO
          This command can be used to retrieve the printer interface info
          string, which is an arbitrary format string. If the buffer is too
          small to fit the string, the application may then repeat the
          command with a bigger buffer. Although prnio does not limit the
          contents of the interface info string, some values are recommended
          and defined in <sys/prnio.h> by the following macros:

          PRN_PARALLEL - Centronics or IEEE 1284 compatible devices
          PRN_SERIAL - EIA-232/EIA-485 serial ports
          PRN_USB - Universal Serial Bus printers
          PRN_1394 - IEEE 1394 peripherals

          The printer interface info string is for information only: no
          implications should be made from its value.

     PRNIOC_RESET

     PRNIOC_GET_1284_DEVID
          This command can be used to retrieve the printer device ID as
          defined by IEEE 1284-1994. The ioctl(2) argument.

     PRNIOC_GET_STATUS
          This command can be used by applications to retrieve the current
          device status. The argument is a pointer to uint_t, where the
          status word is returned. Status is a combination of the following
          bits:

          PRN_ONLINE - For devices that support the PRN_HOTPLUG capability,
          this bit is set when the device is online; otherwise the device
          is offline. Devices without PRN_HOTPLUG support should always
          have this bit set.

          PRN_READY - This bit indicates if the device is ready to
          receive/send data. Applications may use this bit for outbound
          flow control.

     PRNIOC_GET_1284_STATUS
          Devices that support the PRN_1284_STATUS capability accept this
          ioctl to retrieve the device status lines defined in IEEE 1284
          for use in Compatibility mode. The following bits may be set by
          the driver:

          PRN_1284_NOFAULT - Device is not in error state
          PRN_1284_SELECT - Device is selected
          PRN_1284_PE - Paper error
          PRN_1284_BUSY - Device is busy

     PRNIOC_GET_TIMEOUTS

     PRNIOC_SET_TIMEOUTS

ATTRIBUTES
     ATTRIBUTE TYPE          ATTRIBUTE VALUE
     Architecture            SPARC, IA
     Interface Stability     Evolving

SEE ALSO
     close(2), ioctl(2), read(2), write(2), attributes(5), ecpp(7D),
     usbprn(7D), lp(7D)

     IEEE Std 1284-1994
http://man.eitan.ac.il/cgi-bin/man.cgi?section=7I&topic=prnio
Music.Theory.Z12.Drape_1999 Description Haskell implementations of pct operations. See. Synopsis - cf :: Integral n => [n] -> [[a]] -> [[a]] - cgg :: [[a]] -> [[a]] - cg :: [a] -> [[a]] - cg_r :: Integral n => n -> [a] -> [[a]] - ciseg :: [Z12] -> [Z12] - cmpl :: [Z12] -> [Z12] - cyc :: [a] -> [a] - d_nm :: Integral a => [a] -> Maybe Char - dim :: [Z12] -> [(Z12, [Z12])] - dim_nm :: [Z12] -> [(Z12, Char)] - dis :: Integral t => [Int] -> [t] - doi :: Int -> [Z12] -> [Z12] -> [[Z12]] - fn :: [Z12] -> String - has_ess :: [Z12] -> [Z12] -> Bool - ess :: [Z12] -> [Z12] -> [[Z12]] - has_sc_pf :: Integral a => ([a] -> [a]) -> [a] -> [a] -> Bool - has_sc :: [Z12] -> [Z12] -> Bool - icf :: (Num a, Eq a) => [[a]] -> [[a]] - ici :: Num t => [Int] -> [[t]] - ici_c :: [Int] -> [[Int]] - icseg :: [Z12] -> [Z12] - iseg :: [Z12] -> [Z12] - imb :: Integral n => [n] -> [a] -> [[a]] - issb :: [Z12] -> [Z12] -> [String] - mxs :: [Z12] -> [Z12] -> [[Z12]] - nrm :: Ord a => [a] -> [a] - nrm_r :: Ord a => [a] -> [a] - pci :: [Z12] -> [Z12] -> [[Z12]] - rs :: [Z12] -> [Z12] -> [(SRO, [Z12])] - rsg :: [Z12] -> [Z12] -> [SRO] - sb :: [[Z12]] -> [[Z12]] - spsc :: [[Z12]] -> [String] Documentation cf :: Integral n => [n] -> [[a]] -> [[a]]Source Cardinality filter cf [0,3] (cg [1..4]) == [[1,2,3],[1,2,4],[1,3,4],[2,3,4],[]] cgg :: [[a]] -> [[a]]Source Combinatorial sets formed by considering each set as possible values for slot. cgg [[0,1],[5,7],[3]] == [[0,5,3],[0,7,3],[1,5,3],[1,7,3]] cg_r :: Integral n => n -> [a] -> [[a]]Source Powerset filtered by cardinality. >>> cg -r3 0159015 019 059 159 cg_r 3 [0,1,5,9] == [[0,1,5],[0,1,9],[0,5,9],[1,5,9]] cmpl :: [Z12] -> [Z12]Source Synonynm for complement. >>> cmpl 02468t13579B cmpl [0,2,4,6,8,10] == [1,3,5,7,9,11] d_nm :: Integral a => [a] -> Maybe CharSource Diatonic set name. d for diatonic set, m for melodic minor set, o for octotonic set. 
dim_nm :: [Z12] -> [(Z12, Char)]Source dis :: Integral t => [Int] -> [t]Source Diatonic interval set to interval set. >>> dis 241256 dis [2,4] == [1,2,5,6] doi :: Int -> [Z12] -> [Z12] -> [[Z12]]Source Degree of intersection. >>> echo 024579e | doi 6 | sort -u024579A 024679B let p = [0,2,4,5,7,9,11] in doi 6 p p == [[0,2,4,5,7,9,10],[0,2,4,6,7,9,11]] >>> echo 01234 | doi 2 7-35 | sort -u13568AB doi 2 (sc "7-35") [0,1,2,3,4] == [[1,3,5,6,8,10,11]] ess :: [Z12] -> [Z12] -> [[Z12]]Source Embedded segment search. >>> echo 23a | ess 01643252B013A9 923507A ess [2,3,10] [0,1,6,4,3,2,5] == [[9,2,3,5,0,7,10],[2,11,0,1,3,10,9]] has_sc_pf :: Integral a => ([a] -> [a]) -> [a] -> [a] -> BoolSource Can the set-class q (under prime form algorithm pf) be drawn from the pcset p. icf :: (Num a, Eq a) => [[a]] -> [[a]]Source Interval cycle filter. >>> echo 22341 | icf22341 icf [[2,2,3,4,1]] == [[2,2,3,4,1]] ici :: Num t => [Int] -> [[t]]Source Interval class set to interval sets. >>> ici -c 123123 129 1A3 1A9 ici_c [1,2,3] == [[1,2,3],[1,2,9],[1,10,3],[1,10,9]] ici_c :: [Int] -> [[Int]]Source Interval class set to interval sets, concise variant. ici_c [1,2,3] == [[1,2,3],[1,2,9],[1,10,3],[1,10,9]] icseg :: [Z12] -> [Z12]Source Interval-class segment. >>> icseg 013265e497t812141655232 icseg [0,1,3,2,6,5,11,4,9,7,10,8] == [1,2,1,4,1,6,5,5,2,3,2] issb :: [Z12] -> [Z12] -> [String]Source mxs :: [Z12] -> [Z12] -> [[Z12]]Source Matrix search. >>> mxs 024579 642 | sort -u6421B9 B97642 S.set (mxs [0,2,4,5,7,9] [6,4,2]) == [[6,4,2,1,11,9],[11,9,7,6,4,2]] nrm :: Ord a => [a] -> [a]Source Normalize. >>> nrm 01234565432100123456 nrm [0,1,2,3,4,5,6,5,4,3,2,1,0] == [0,1,2,3,4,5,6] pci :: [Z12] -> [Z12] -> [[Z12]]Source Pitch-class invariances (called pi at pct). >>> pi 0236 120236 6320 532B B235 pci [0,2,3,6] [1,2] == [[0,2,3,6],[5,3,2,11],[6,3,2,0],[11,2,3,5]] rs :: [Z12] -> [Z12] -> [(SRO, [Z12])]Source Relate sets. 
    >>> rs 0123 641e
    T1M

    import Music.Theory.Z12.Morris_1987.Parse
    rs [0,1,2,3] [6,4,1,11] == [(rnrtnmi "T1M",[1,6,11,4])
                               ,(rnrtnmi "T4MI",[4,11,6,1])]

rsg :: [Z12] -> [Z12] -> [SRO]  Source

Relate segments.

    >>> rsg 156 3BA
    T4I

    rsg [1,5,6] [3,11,10] == [rnrtnmi "T4I",rnrtnmi "r1RT4MI"]

    >>> rsg 0123 05t3
    T0M

    rsg [0,1,2,3] [0,5,10,3] == [rnrtnmi "T0M",rnrtnmi "RT3MI"]

    >>> rsg 0123 4e61
    RT1M

    rsg [0,1,2,3] [4,11,6,1] == [rnrtnmi "T4MI",rnrtnmi "RT1M"]

    >>> echo e614 | rsg 0123
    r3RT1M

    rsg [0,1,2,3] [11,6,1,4] == [rnrtnmi "r1T4MI",rnrtnmi "r1RT1M"]
http://hackage.haskell.org/package/hmt-0.14/docs/Music-Theory-Z12-Drape_1999.html
package org.apache.commons.io.output;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.commons.io.IOUtils;


/**
 * An output stream which will retain data in memory until a specified
 * threshold is reached, and only then commit it to disk. If the stream is
 * closed before the threshold is reached, the data will not be written to
 * disk at all.
 * <p>
 * This class originated in FileUpload processing. In this use case, you do
 * not know in advance the size of the file being uploaded. If the file is small
 * you want to store it in memory (for speed), but if the file is large you want
 * to store it to file (to avoid memory issues).
 *
 * @version $Id: DeferredFileOutputStream.java 1307462 2012-03-30 15:13:11Z ggregory $
 */
public class DeferredFileOutputStream
    extends ThresholdingOutputStream
{

    // ----------------------------------------------------------- Data members


    /**
     * The output stream to which data will be written prior to the threshold
     * being reached.
     */
    private ByteArrayOutputStream memoryOutputStream;


    /**
     * The output stream to which data will be written at any given time. This
     * will always be one of <code>memoryOutputStream</code> or
     * <code>diskOutputStream</code>.
     */
    private OutputStream currentOutputStream;


    /**
     * The file to which output will be directed if the threshold is exceeded.
     */
    private File outputFile;

    /**
     * The temporary file prefix.
     */
    private final String prefix;

    /**
     * The temporary file suffix.
     */
    private final String suffix;

    /**
     * The directory to use for temporary files.
     */
    private final File directory;


    /**
     * True when close() has been called successfully.
     */
    private boolean closed = false;

    // ----------------------------------------------------------- Constructors


    /**
     * Constructs an instance of this class which will trigger an event at the
     * specified threshold, and save data to a file beyond that point.
     *
     * @param threshold  The number of bytes at which to trigger an event.
     * @param outputFile The file to which data is saved beyond the threshold.
     */
    public DeferredFileOutputStream(int threshold, File outputFile)
    {
        this(threshold, outputFile, null, null, null);
    }


    /**
     * Constructs an instance of this class which will trigger an event at the
     * specified threshold, and save data to a temporary file beyond that point.
     *
     * @param threshold The number of bytes at which to trigger an event.
     * @param prefix    Prefix to use for the temporary file.
     * @param suffix    Suffix to use for the temporary file.
     * @param directory Temporary file directory.
     *
     * @since 1.4
     */
    public DeferredFileOutputStream(int threshold, String prefix, String suffix, File directory)
    {
        this(threshold, null, prefix, suffix, directory);
        if (prefix == null) {
            throw new IllegalArgumentException("Temporary file prefix is missing");
        }
    }

    /**
     * Constructs an instance of this class which will trigger an event at the
     * specified threshold, and save data to a file beyond that point.
     *
     * @param threshold  The number of bytes at which to trigger an event.
     * @param outputFile The file to which data is saved beyond the threshold.
     * @param prefix     Prefix to use for the temporary file.
     * @param suffix     Suffix to use for the temporary file.
     * @param directory  Temporary file directory.
     */
    private DeferredFileOutputStream(int threshold, File outputFile, String prefix, String suffix, File directory) {
        super(threshold);
        this.outputFile = outputFile;

        memoryOutputStream = new ByteArrayOutputStream();
        currentOutputStream = memoryOutputStream;
        this.prefix = prefix;
        this.suffix = suffix;
        this.directory = directory;
    }


    // --------------------------------------- ThresholdingOutputStream methods


    /**
     * Returns the current output stream. This may be memory based or disk
     * based, depending on the current state with respect to the threshold.
     *
     * @return The underlying output stream.
     *
     * @exception IOException if an error occurs.
     */
    @Override
    protected OutputStream getStream() throws IOException
    {
        return currentOutputStream;
    }


    /**
     * Switches the underlying output stream from a memory based stream to one
     * that is backed by disk. This is the point at which we realise that too
     * much data is being written to keep in memory, so we elect to switch to
     * disk-based storage.
     *
     * @exception IOException if an error occurs.
     */
    @Override
    protected void thresholdReached() throws IOException
    {
        if (prefix != null) {
            outputFile = File.createTempFile(prefix, suffix, directory);
        }
        FileOutputStream fos = new FileOutputStream(outputFile);
        memoryOutputStream.writeTo(fos);
        currentOutputStream = fos;
        memoryOutputStream = null;
    }


    // --------------------------------------------------------- Public methods


    /**
     * Determines whether or not the data for this output stream has been
     * retained in memory.
     *
     * @return {@code true} if the data is available in memory;
     *         {@code false} otherwise.
     */
    public boolean isInMemory()
    {
        return !isThresholdExceeded();
    }


    /**
     * Returns the data for this output stream as an array of bytes, assuming
     * that the data has been retained in memory. If the data was written to
     * disk, this method returns {@code null}.
     *
     * @return The data for this output stream, or {@code null} if no such
     *         data is available.
     */
    public byte[] getData()
    {
        if (memoryOutputStream != null)
        {
            return memoryOutputStream.toByteArray();
        }
        return null;
    }


    /**
     * Returns either the output file specified in the constructor or
     * the temporary file created, or null.
     * <p>
     * If the constructor specifying the file is used then it returns that
     * same output file, even when the threshold has not been reached.
     * <p>
     * If the constructor specifying a temporary file prefix/suffix is used
     * then the temporary file created once the threshold is reached is returned.
     * If the threshold was not reached then {@code null} is returned.
     *
     * @return The file for this output stream, or {@code null} if no such
     *         file exists.
     */
    public File getFile()
    {
        return outputFile;
    }


    /**
     * Closes the underlying output stream, and marks this stream as closed.
     *
     * @exception IOException if an error occurs.
     */
    @Override
    public void close() throws IOException
    {
        super.close();
        closed = true;
    }


    /**
     * Writes the data from this output stream to the specified output stream,
     * after it has been closed.
     *
     * @param out output stream to write to.
     * @exception IOException if this stream is not yet closed or an error occurs.
     */
    public void writeTo(OutputStream out) throws IOException
    {
        // we may only need to check if this is closed if we are working with a file
        // but we should force the habit of closing whether we are working with
        // a file or memory.
        if (!closed)
        {
            throw new IOException("Stream not closed");
        }

        if (isInMemory())
        {
            memoryOutputStream.writeTo(out);
        }
        else
        {
            FileInputStream fis = new FileInputStream(outputFile);
            try {
                IOUtils.copy(fis, out);
            } finally {
                IOUtils.closeQuietly(fis);
            }
        }
    }
}
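The memory-then-disk pattern above can be demonstrated without commons-io. The sketch below is a minimal, self-contained re-implementation of the same idea using only the standard library; the class and method names (ThresholdBuffer, isInMemory) are illustrative and not part of the commons-io API, and unlike the real class it switches to disk only when a write would carry the total past the threshold.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;

// Minimal sketch: buffer in memory, spill to a file once the threshold
// would be exceeded (mirroring thresholdReached() above).
class ThresholdBuffer extends OutputStream {
    private final int threshold;
    private final File file;
    private ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream current;
    private long written = 0;

    ThresholdBuffer(int threshold, File file) {
        this.threshold = threshold;
        this.file = file;
        this.current = memory;
    }

    boolean isInMemory() {
        return memory != null;
    }

    @Override
    public void write(int b) throws IOException {
        // Switch to the file just before the threshold is exceeded,
        // copying everything buffered so far.
        if (memory != null && written + 1 > threshold) {
            FileOutputStream fos = new FileOutputStream(file);
            memory.writeTo(fos);
            current = fos;
            memory = null;
        }
        current.write(b);
        written++;
    }

    @Override
    public void close() throws IOException {
        current.close();
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".bin");
        tmp.deleteOnExit();
        ThresholdBuffer out = new ThresholdBuffer(4, tmp);
        out.write(new byte[] {1, 2, 3});   // still under threshold
        System.out.println("in memory: " + out.isInMemory());
        out.write(new byte[] {4, 5});      // crosses the threshold
        System.out.println("in memory: " + out.isInMemory());
        out.close();
        System.out.println("on disk: " + Files.size(tmp.toPath()) + " bytes");
    }
}
```

The real class keeps the same invariant: `memoryOutputStream` is set to `null` as soon as the data has been committed to disk, so `isInMemory()` doubles as a "has the threshold been crossed" check.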
http://commons.apache.org/io/api-release/src-html/org/apache/commons/io/output/DeferredFileOutputStream.html#line.44
This is your resource to discuss support topics with your peers, and learn from each other.

09-04-2009 05:09 AM

Hi,

I have a sample application for testing whether a given email ID is valid or not. In this application I have a BasicEditField with the style BasicEditField.FILTER_EMAIL. While clicking on a "Validate" menu item, I check whether the edit field's isDataValid() method returns true or not. My issue is that isDataValid() always returns true if the edit field contains any string, even if it is not a valid email ID. How can I validate whether the user-entered email ID is valid?

09-04-2009 05:45 PM

Use an EmailAddressEditField rather than a BasicEditField. It is a very useful field because it helps the user type email addresses in - most BlackBerry users are used to this field's actions and routinely hit the space bar rather than the '.' or @.

If you want to use a BasicEditField, then I think the EmailAddressTextFilter is a good filter to use, for example:

    BasicEditField bef = new BasicEditField("Email: ", null);
    bef.setFilter(new EmailAddressTextFilter());

I find using filters in this way gives better results than using what I think should be the equivalent style. But use EmailAddressEditField if you can.

09-09-2009 06:16 AM

09-09-2009 07:09 AM

Use EmailAddressEditField.

Press the kudos button to thank the user who helped you. If your problem got solved then please mark the thread as "Accepted solution".

09-09-2009 08:32 AM

09-09-2009 09:00 AM

If you are talking about spaces, then I read somewhere that it is in fact valid to have a space in an email address.

Press the kudos button to thank the user who helped you. If your problem got solved then please mark the thread as "Accepted solution".

09-09-2009 01:18 PM

"i used EmailAddressEditField but it didn't work"

It would be useful to know exactly what did not work. Can you tell us that?

09-09-2009 06:02 PM

There are two kinds of validation: whether the string "looks like" an email address and whether it is an actual address.

The latter is extremely difficult, and about the only reliable way is to send an email to that address and check that it gets delivered. I assume that you're only trying to accomplish the first kind of validation.

Like others have said here, EmailAddressEditField is a good place to start. That only does a limited amount of checking, though--you can assume that the field only has characters that are valid in an email address. However, EmailAddressEditField doesn't override isDataValid(). You can do that yourself by extending EmailAddressEditField, although be aware that deciding whether an address looks valid is itself non-trivial. Here's a very simple attempt:

    public class ValidatingEmailAddressEditField extends EmailAddressEditField {
        // ...

        /**
         * Validates an email address. Checks that there is an "@"
         * in the field and that the address ends with a host that
         * has a "." with at least two other characters after it and
         * no ".." in it. More complex logic might do better.
         */
        public boolean isDataValid() {
            String address = getText();
            int at = address.indexOf("@");
            int len = address.length();
            if (at <= 0 || at > len - 6)
                return false;
            String host = address.substring(at + 1, len);
            len = host.length();
            if (host.indexOf("..") >= 0)
                return false;
            int dot = host.lastIndexOf(".");
            return (dot > 0 && dot <= len - 3);
        }
    }
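The heuristic in that last post can be exercised outside the BlackBerry field classes by extracting it into a plain static method. The sketch below does exactly that; the class name EmailCheck is illustrative, and the logic is copied from the forum post's isDataValid() unchanged.

```java
// The forum post's "looks like an email" heuristic as a plain static
// method, so it can be tested without EmailAddressEditField.
public class EmailCheck {

    /**
     * Returns true when: "@" is present and not first, the part after
     * "@" is long enough, the host contains no "..", and the host's
     * last "." has at least two characters after it.
     */
    public static boolean looksValid(String address) {
        int at = address.indexOf("@");
        int len = address.length();
        if (at <= 0 || at > len - 6)
            return false;
        String host = address.substring(at + 1);
        if (host.indexOf("..") >= 0)
            return false;
        int dot = host.lastIndexOf(".");
        return dot > 0 && dot <= host.length() - 3;
    }

    public static void main(String[] args) {
        System.out.println(looksValid("user@example.com")); // true
        System.out.println(looksValid("user@example"));     // false: no dot in host
        System.out.println(looksValid("user@ex..com"));     // false: ".." in host
    }
}
```

As the post warns, this accepts only a rough approximation of valid addresses (it rejects short hosts like "e.co", for instance); the only definitive check is delivering a message to the address.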
http://supportforums.blackberry.com/t5/Java-Development/How-to-Validate-an-Email-Address-in-BlackBerry-Applications/m-p/328533/highlight/true
This program I have made measures the area of the room. Now I want to create two classes. This is the original code I made that works perfectly. But here I put everything in one class:

    import javax.swing.JOptionPane;

    public class RoomOne {

        public static void main(String[] args) throws Exception {
            // Declaration of strings
            String firstNumber;
            String secondNumber;

            int numberOne = 10;
            int numberTwo = 20;
            int finalAnswer = numberOne * numberTwo;

            JOptionPane.showMessageDialog(null,
                "The area of the room is: " + finalAnswer + " square feet",
                "Room Measurement", JOptionPane.INFORMATION_MESSAGE);
            {
                System.exit(0);
            }
        }
    }

I want to have two classes: the main class and a secondary class. The secondary class should contain the entire coding. I want to be able to call the secondary class in the main class. How do I do that?
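One common way to split a program like the one above is to move the calculation into a second class and keep only main() in the first. The sketch below is one possibility, not the only correct layout; the name RoomArea is made up, and console output is used instead of JOptionPane so the example runs headlessly (swap System.out.println back for showMessageDialog to keep the dialog).

```java
// Secondary class: holds the data and the calculation.
class RoomArea {
    private final int length;
    private final int width;

    RoomArea(int length, int width) {
        this.length = length;
        this.width = width;
    }

    int area() {
        return length * width;
    }
}

// Main class: creates an instance of the secondary class and uses it.
public class RoomOne {
    public static void main(String[] args) {
        RoomArea room = new RoomArea(10, 20);  // calling the secondary class
        System.out.println("The area of the room is: "
                + room.area() + " square feet");
    }
}
```

The key point is that main() "calls" the secondary class by constructing an instance of it (`new RoomArea(10, 20)`) and then invoking its methods (`room.area()`).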
http://www.javaprogrammingforums.com/whats-wrong-my-code/8563-how-do-i-call-class-main-class.html
Closed Bug 903245. Opened 7 years ago, closed 6 years ago.

[Gaia] Support sharing of Contact via NFC

Categories: Firefox OS Graveyard :: Gaia::Contacts, defect, P1
Tracking: feature-b2g:2.0, tracking-b2g:backlog
People: Reporter: frlee, Assigned: mbudzynski
Whiteboard: [1.3:p2, ft:Comms]
Attachments: 2 files, 5 obsolete files

For NFC, the contact application needs to be able to create an NDEF message.

Summary: [Gaia] Contact application: create NDEF message → [Gaia] Support sharing of Contact
blocking-b2g: --- → 1.3?

Joe, could you please put this into your backlog and evaluate if your team can help or not? Thank you!
Flags: needinfo?(jcheng)

Whiteboard: [FT:Comms]

Will be decided in 1.3 triages.
Flags: needinfo?(jcheng)
Component: NFC → Gaia::Contacts
blocking-b2g: 1.3? → 1.3+
Whiteboard: [FT:Comms] → [FT:Comms, 1.3:p1]
Summary: [Gaia] Support sharing of Contact → [Gaia] Support sharing of Contact via NFC

Joe, can we have an assignee for this bug?
Flags: needinfo?(jcheng)

What's the status of the gecko support? Can it be linked to this bug? Thanks.
Flags: needinfo?(jcheng)

(In reply to Joe Cheng [:jcheng] from comment #5)
> what's the status of the gecko support? can it be linked to this bug? thanks

Sure

1.3 targeted
remove 1.3+
Blocks: comms_1.3_targeted
blocking-b2g: 1.3+ → ---
Whiteboard: [FT:Comms, 1.3:p1] → [FT:Comms, 1.3:p2]

(In reply to Joe Cheng [:jcheng] from comment #7)
> 1.3 targeted
> remove 1.3+

Hi Joe, does it mean that the Comms team won't focus on this in v1.3? Just want to know if this will slip to v1.4 for sure. Thanks!
Flags: needinfo?(jcheng)
blocking-b2g: --- → 1.4?

We won't have time to address this in v1.3 - we still need to work on the v1.2 open bugs plus committed features.
Flags: needinfo?(jcheng)

(In reply to Wilfred Mathanaraj [:WDM] from comment #9)
> We wont have time to address this in v1.3 - we still need to work on the
> v1.2 open bugs plus committed features.

Thanks for the feedback, Wilfred.
Small update: in the not yet landed NFC Manager (Bug 860910), one of the recent changes was to remove the need to parse the multiple possible vCard formats inside NFC Manager. It was done this way:

    formatVCardRecord: function nm_formatVCardRecord(record) {
      var vcardBlob = new Blob([NfcUtil.toUTF8(record.payload)],
                               {'type': 'text/vcard'});
      var activityText = {
        name: 'import',
        data: {
          type: 'text/vcard',
          blob: vcardBlob
        }
      };
      return activityText;
    },

There's a confirmation screen on the other end in the contacts app, so it seems reasonable enough until more UX design is done for NFC P2P communication.

If contacts wants to know the source of it, the current way is to register to receive "nfc-ndef-discovered" messages, and check if it's from a P2P source.

> If contacts wants to know the source of it, the current way is to register
> to receive "nfc-ndef-discovered" messages, and check if it's from a P2P
> source.

The best way is to register for P2P callbacks (when the code lands...). Something like this:

    navigator.mozNfc.onpeerready = function foo(nfcPeerObject) {
      // do something...
      nfcPeerObject.sendNDEF(Contacts_NDEF_Message_Vcard_to_Other_Device);
    };

blocking-b2g: 1.4? → ---

Hi Garner, I integrated nfc_manager.js. I tested with an Android device by transferring contact info.

1. It reached nfc_manager.js in formatMimeMedia: function nm_formatMimeMedia(record).
2. There is a check that decides whether to execute formatVCardRecord: function nm_formatVCardRecord(record).
3. This condition fails, so the contact information is not stored locally.
4. I enabled a log on record.type; it gives "text/x-vCard" instead of "text/vcard": [116,101,120,116,47,120,45,118,67,97,114,100]

I changed the condition like this:

    if ((NfcUtil.equalArrays(record.type, NfcUtil.fromUTF8('text/x-vCard'))) ||
        (NfcUtil.equalArrays(record.type, NfcUtil.fromUTF8('text/vcard')))) {
      ....
    }

The above condition works, and the contact information from the other NFC device can be stored.
Please confirm it.

Relevant section:

So, we'd need to ask the contacts team to see if they want to support that mime-type. Probably yes, but it should probably be a deliberate decision for compatibility reasons.

Hi Ken, any plans for official alternate vCard format support? Alternate mimetypes for vCards are pretty easy to add on the nfc_manager end.
Flags: needinfo?(kchang)

Hi Sandip, can you please answer this question?
Flags: needinfo?(kchang)
Flags: needinfo?(skamat)

Currently we want to stick to the one single vCard format. For v1.4 consideration it's important we stay with the MVP implementation. But for future releases this should definitely be one of the items to be considered for enhancements.
Flags: needinfo?(skamat)

Update whiteboard tag to follow format [ucid:{id}, {release}:p{1,2}, ft:{team-id}]
Whiteboard: [FT:Comms, 1.3:p2] → [1.3:p2, FT:Comms]
Whiteboard: [1.3:p2, FT:Comms] → [1.3:p2, ft:Comms]

Sami, are you going to take this bug? If not, I want to check if anyone can take it.
Flags: needinfo?(s.x.veerapandian)

Hi Ken, yes I will take the bug. Please assign it to my name.

Pull request: can you please review and provide feedback? I tested on a Nexus 4 device. Receiving devices: Android Nexus 4, Lumia 920, Samsung S3.
Attachment #8346379 - Flags: review?(francisco.jordano)
Flags: needinfo?(s.x.veerapandian)

Comment on attachment 8346379 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

Review of attachment 8346379 [details] [diff] [review]:
-----------------------------------------------------------------

In general I see the following problems:

- We already have a component to transform the contact to vcard.
- In the contact detail, check if the device has NFC support or not.
- Add a method to listen to NFC events, but also remove it when you leave the detail view.

Cheers!
F.

::: apps/communications/contacts/js/contacts.js
@@ +478,4 @@
>       showForm();
>     };
>
> +   var getCurrentContact = function c_getCurrentContact(){

I think Ben already commented on this, but we have a component in shared/js/contact2vcard.js that transforms a mozContact object into a vcard (format 4.0).

Also the name of the function is not clear to me, as |getCurrentContact| is quite generic; I would use |getCurrentContactAsVcard|.

As well, I don't think we need this function in the 'contacts.js' file itself.

::: apps/communications/contacts/js/views/details.js
@@ +68,5 @@
>     '#details-back': handleDetailsBack,
>     '#edit-contact-button': showEditContact
>   });
>
> +  window.navigator.mozNfc.onpeerready = function(event) {

I understand from this code that no action is needed by the user when sharing a contact via NFC; being in the contact detail is the only requirement.

Could we wrap this within a method that we call in the initialization? And also remove the listener when we are outside of the detail view?

As well, what will happen on devices that don't support NFC? Will they have the mozNfc object? In that case we will need to check whether the device supports NFC or not.
Attachment #8346379 - Flags: review?(francisco.jordano) → review-

Comment on attachment 8346379 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

I'm going to defer to Francisco on the review here as I'm quite buried with other bugs at the moment. Thanks Francisco!
Assignee: nobody → s.x.veerapandian

(In reply to saminath from comment #20)
> Hi Ken, yes I will take the bug. Please assign it to my name.

Thanks and assigned.

Attachment #8346379 - Attachment is obsolete: true
Attachment #8348011 - Flags: review?(bkelly)

Pull request: tested against Nexus 4 and Samsung S4 Android devices.

Hi, please take the new PR from the link below.
Pull request: tested against Nexus 4 and Samsung S4 Android devices.

Is there a way of trying with unagi phones?
If I remember correctly, those devices came with NFC support; I'm wondering if I will be able to try with them.

Hi Francisco, sorry, I do not have an unagi phone. Today I checked with a Samsung Note II (Android 4.1.2). With vCard version 4.0 it receives the data, but the contact is not saved. Dimi, could you please help out?

Sami, where is the code that receives the contact and calls mozContact.save()?

(In reply to Francisco Jordano [:arcturus] from comment #29)
> Is there a way of trying with unagi phones?

There is no NFC chip in the unagi device... :-( We use the Nexus 4 as a reference development phone.

> Sami, where is the code that receives the contact and calls mozContact.save()?

Hi Ben, in Firefox the contact save is working. It is implemented in gaia/apps/system/nfc_manager.js: formatMimeMedia() fills the action with the vcard record and passes it to MozActivity(). Please check:

Comment on attachment 8348011 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

Review of attachment 8348011 [details] [diff] [review]:
-----------------------------------------------------------------

Sami, this is getting better and will be a great feature to have. I still have some concerns, though, so I need to r- for right now. Please ask me to re-review after you've taken a look at the comments.

Also, I agree with Jose that we should probably try to isolate/modularize this code a bit more. Perhaps this could be moved to an nfc.js file within contacts. The contacts.js code would then just be responsible for initializing nfc.js and telling it when details are shown/hidden.

::: apps/communications/contacts/js/contacts.js
@@ +250,5 @@
> +   var contains = function contains(vcard) {
> +     var str;
> +     str = vcard;
> +     return ( (vcard.toLowerCase().indexOf(str.toLowerCase())) !== -1 );
> +   };

As far as I can tell this function compares the vcard to itself. It will always return true, won't it? Should vcard and str be passed in separately?

@@ +253,5 @@
> +     return ( (vcard.toLowerCase().indexOf(str.toLowerCase())) !== -1 );
> +   };
> +
> +   var nfcPeerHandler = function nfcPeerHandler (event) {
> +     var payload ;

Nit: Please use 2-space indents throughout.

@@ +260,5 @@
> +     var count = 0;
> +     var res;
> +
> +     var nfcdom = window.navigator.mozNfc;
> +     var nfcPeer = nfcdom.getNFCPeer(event.detail);

Error check if nfcPeer is invalid here before proceeding?

@@ +261,5 @@
> +     var res;
> +
> +     var nfcdom = window.navigator.mozNfc;
> +     var nfcPeer = nfcdom.getNFCPeer(event.detail);
> +     var records = new Array();

Nit: I think we prefer the literal |[]| instead of |new Array()|.

@@ +262,5 @@
> +
> +     var nfcdom = window.navigator.mozNfc;
> +     var nfcPeer = nfcdom.getNFCPeer(event.detail);
> +     var records = new Array();
> +     var tnf = 0x02;

What is this magic value?

@@ +266,5 @@
> +     var tnf = 0x02;
> +     var type = 'text/x-vCard';
> +     var id = '';
> +
> +     contact= currentContact ;

Please handle the condition when |currentContact| is |null|. It seems we should be able to short-circuit here.

@@ +280,5 @@
> +         payload = str;
> +       }
> +       else{
> +         return;
> +       }

This |else { return; }| is not needed since you're in a callback.

@@ +284,5 @@
> +       }
> +     });
> +   }
> +   );
> +   if (payload) {

Doesn't this block of code need to be up in the |ContactToVcard()| success callback? There is no guarantee that |ContactToVcard()| will run synchronously, so you may not have payload set correctly here.

@@ +293,5 @@
> +       payload);
> +     records.push(record);
> +     res = nfcPeer.sendNDEF(records);
> +     res.onsuccess = (function() {
> +       debug('contact data transfer successfully');

Should the success or failure of the transfer be reported back to the user on the screen?

@@ +328,5 @@
> +
> +   if(window.navigator.mozNfc) {
> +     window.navigator.mozNfc.onpeerready = nfcPeerHandler;
> +   }
> + };

This will enable the NFC peer handler (and if I understand, automatic transfer to a ready peer) even if we fail to enter the details view. This should probably be moved up into the |getContactById()| callback to ensure it's only run if we actually navigate to details.

::: apps/communications/contacts/js/views/details.js
@@ +86,5 @@
>       }
>     }
> +   if(window.navigator.mozNfc) {
> +     window.navigator.mozNfc.onpeerready = null;
> +   }

I think the code should be refactored in some way so that the |onpeerready| is set and cleared from the same code module. This helps ensure that the state is kept consistent. So, can you move this to |contacts.js| in some way? Perhaps with a callback indicating that details have closed, if something like that doesn't already exist?

::: shared/js/contact2vcard.js
@@ +26,4 @@
>   /** Field list to be skipped when converting to vCard */
>   var VCARD_SKIP_FIELD = ['fb_profile_photo'];
>
> +  var VCARD_VERSION = '2.1';

This is going to change all of our vcard exports to 2.1. Do we really want to do that? We should probably make this code support either 2.1 or 4.0 via a parameter.
Attachment #8348011 - Flags: review?(bkelly) → review-

And thank you for working on this!

Comment on attachment 8348015 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

This time I'm deferring the review to Ben's. We are getting closer; once you address the comments that Ben made we will be almost ready. Thanks a lot for working on this feature!

(In reply to saminath from comment #33)

Hi Saminath, we highly appreciate your help in taking this bug. Thank you very much! Sorry that I need to move the ownership to Michal due to its high priority. Would you like to own other NFC feature work or bugs? Or would you like to own other Firefox OS parts? Thank you.
Assignee: s.x.veerapandian → mbudzynski

Hi Ben, I addressed all your comments and will be sending a PR.
Attachment #8348011 - Attachment is obsolete: true
Attachment #8348015 - Attachment is obsolete: true
Attachment #8350028 - Attachment is obsolete: true
Attachment #8350031 - Flags: review?(francisco.jordano)
Attachment #8350031 - Flags: review?(bkelly)

Comment on attachment 8350031 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

Review of attachment 8350031 [details] [diff] [review]:
-----------------------------------------------------------------

It's a bit weird: this patch is not against the original master branch. It seems to be against another previous patch, so I cannot evaluate it correctly. Thanks again!
Attachment #8350031 - Flags: review?(francisco.jordano) → review-

Hi Ben/Francisco, herewith I attach the patch, with the previous comments addressed. Also find the PR. This bug is assigned to Michal; thanks for your support.
Attachment #8350031 - Attachment is obsolete: true
Attachment #8350031 - Flags: review?(bkelly)
Attachment #8351126 - Flags: review?(francisco.jordano)
Attachment #8351126 - Flags: review?(bkelly)

(In reply to Kevin Hu [:khu] from comment #37)

Hi Kevin, I submitted whatever I completed. I presume it could be useful. Thanks for your support.

(In reply to saminath from comment #42)
> (In reply to Kevin Hu [:khu] from comment #37)
> Hi Kevin, I submitted whatever I completed. Presume that it could be useful.
> Thanks for your support.

Wow, thank you very much, saminath.

Hi Michal, thanks for your great help. Are you working on this bug? When do you plan to finish it? We would like to have an NFC demo during the NFC work week.
Flags: needinfo?(mbudzynski)

Hi folks, any idea about the ownership of this bug, and whether we should review or wait till Michal gives the current patch a look?

Cheers,
F.

Comment on attachment 8351126 [details] [diff] [review]
0001-Bug-903245-Support-sharing-of-Contact-via-NFC-r-revi.patch

I spoke with Francisco and we decided to drop the review flags until Michal has had a chance to look at the patch.

Sami, we greatly appreciate your work here! I'm sorry for the confusion on this bug. The lack of hardware made it difficult for me and Francisco to test patches. Also, the fact that it's on our release roadmap created some time pressure not present on other bugs. I hope our delays reviewing and the change of ownership do not discourage you from continuing to contribute.
Attachment #8351126 - Flags: review?(francisco.jordano)
Attachment #8351126 - Flags: review?(bkelly)

(In reply to Francisco Jordano [:arcturus] from comment #45)
> Hi folks,
>
> any idea about the ownership of this bug and if we should review or wait
> till Michal gives the current patch a look?
>
> Cheers,
> F.

I saw this conclusion in email: Michal is the owner of this bug.

Is it possible to have this feature done during the NFC work week?
blocking-b2g: --- → 1.4+
Priority: -- → P1
Target Milestone: --- → 1.3 C3/1.4 S3(31jan)

No longer blocks 1.3 committed, but blocks the 1.4 committed feature.
Blocks: 1.4-comms-committed
No longer blocks: comms_1.3_targeted
Target Milestone: 1.3 C3/1.4 S3(31jan) → 1.4 S1 (14feb)

Remove the 1.4 flag as this is feature work. Let's use a user story or metabug to keep tracking.
blocking-b2g: 1.4+ → backlog

I've attached the work-in-progress patch. It's working but without tests; I'll add them later during the weekend.
Target Milestone: 1.4 S1 (14feb) → 1.4 S2 (28feb)

Comment on attachment 8376634 [details] [review]
Patch

Tests added, ready to r.
Attachment #8376634 - Attachment description: [WIP] patch → Patch
Attachment #8376634 - Flags: review?(arcturus)

Can we get this reviewed at the earliest, please?
(Making the builds for an upcoming conference) Already sent an email to Francisco to double confirm his availability. I believe he will do it as his earliest convenience. Comment on attachment 8376634 [details] [review] Patch Great job Michal! I finished doing the first round, some comments to address in the PR. Will love to see a second round for r+plusing. Specially in the comments related to not enable or load features in the phones with no NFC. I have also another question, mainly for product, we have several ways to accessing a contact detail, for example web activities, (we could be seeing a contact from a sms), right now we could share no matter what as long as we are in that view, is that the expected behavior? Also, I don't have a NFC capable device, could someone from the Paris office give it a try once we do the code review? Cheers! Thank you Francisco, I fixed or commented most of your concerns. Could you pls r again? Sure, let's take a look :) Comment on attachment 8376634 [details] [review] Patch Looking good to me, I tried on a non nfc phone, working great and ask Anthony to check the solution in nfc devices and he gave a positive feedback. r+ here, great job, please squash all the commits and merge! Attachment #8376634 - Flags: review?(francisco.jordano) → review+ Michal, I wonder if it was possible to land this by today? Flags: needinfo?(mbudzynski) It is ready to land since Friday, I get r+ yesterday but couldn't land it because the tree is closed. I'll land as soon as I will be able to. Flags: needinfo?(mbudzynski) Hi Michal, Thanks for the feature. I was trying out the code for the MWC demo, and found the onpeerready callback appears (from behavior only, not from looking at the code) to only be set on an existing user's details screen/view. It may be more useful if that callback is set on the possibly empty contacts list view itself, instead of the details of an unrelated contact? 
Also, there's no explicit nfc-ndef-discovered activity handler for vcards. This is somewhat fine as there is an explicit case for text/vcard in nfc_manager (but not the rest like text/x-vCard, etc). Flags: needinfo?(mbudzynski) ^^ and Saminath too. :-) But according to the spec we can only share one contact at a time, so I don't understand why do we want to set the callback also on the list. Could you pls be a little bit more clear there? NFC manager sent 'import' activity when we send text/vcard file, so Contacts app use importing action to receive sent contact. I don't see a point in duplicating this part of code to achieve the same thing. Flags: needinfo?(mbudzynski) And according to the new landing rules this patch cannot land till we will branch 1.4 (In reply to Michal Budzynski (:michalbe) from comment #64) > But according to the spec we can only share one contact at a time, so I > don't understand why do we want to set the callback also on the list. Could > you pls be a little bit more clear there? Actually, yes, we shouldn't send the whole contacts book :) So, no code changes are needed here, my comment is not relevant. To clarify what I was thinking: what might be nice, UX wise, is that each user can get a visual feedback on whether the devices are paired, separately from whether a particular user's phone is even able to send anything. It's not exactly obvious, on the ever larger phones these days, where the NFC antenna on the back of the device is. If the 2 phones are touching, it's not clear which phone vibrated either, so a visual "Phones are paired, but receiving only" is nice. This doesn't happen on Android, but we can potentially clear up this use case on the FirefoxOS side. Yes, it sounds good, but I think this should be handled by the system app, not by all the apps itself. remove target milestone. NFC sharing is not key focus in 1.4, so we should defer it to 1.5. Target Milestone: 1.4 S2 (28feb) → --- I wonder if we could restart this bug now? 
Flags: needinfo?(mbudzynski) Wesley and Joe, I wonder if this feature is in Comms team's backlog. Flags: needinfo?(whuang) Flags: needinfo?(jcheng) NFC is one of 1.5 features. Wilfred, can u pls add to comms team 1.5 list. thx Flags: needinfo?(wmathanaraj) Flags: needinfo?(whuang) its comms team component and its "backlog" in blocking flag so yes it will show up in comms team backlog Flags: needinfo?(jcheng) Quick question, was this merged to 1.3T? In that case how do we check that the mighty merge that will happen between 1.3T and master will work for this? (In reply to Francisco Jordano [:arcturus] from comment #74) > Quick question, was this merged to 1.3T? > > In that case how do we check that the mighty merge that will happen between > 1.3T and master will work for this? No, it isn't in 1.3T yet, and we don't have a plan to merge it to 1.3T. Hi Michal, As the patch was there weeks ago, is it possible to land it to master soon? Not sure if the rebase takes huge effort. I know 1.5 has not started yet, but early landing gives us more time to test. The work was done in v1.4 but the feature was not landed in v1.4 due to the new guidelines for v1.4. We can land this for v1.5 and I don't see any issue landing it in v1.5. I am leaving NI on Michal to see if there is any other work pending other than landing the patch. Flags: needinfo?(wmathanaraj) Wilfred, The patch has been r+ for some time, but it still somehow makes travis red; I'll investigate it today and land asap. Flags: needinfo?(mbudzynski) Patch merged [1], thanks everyone! [1] Status: NEW → RESOLVED Closed: 6 years ago Resolution: --- → FIXED feature-b2g: --- → 2.0 blocking-b2g: backlog → --- tracking-b2g: --- → backlog
https://bugzilla.mozilla.org/show_bug.cgi?id=903245
lapse in the specification of the bidirectional aspects of a given HTML feature. And some of these challenges could be greatly simplified by adding a few strategically placed new HTML features. The following document proposes fixes for some of the most repetitive pain points.

Preliminaries

- LTR: left-to-right
- RTL: right-to-left
- All examples in this document are in "fake bidi", i.e. use uppercase English to represent RTL characters and lowercase English for LTR characters. They will usually first give the characters in the order in which they are stored in memory, and then in the visual order in which they appear when displayed.
- Much of this proposal deals with determining and declaring the base direction of text. This is because text displayed in the wrong direction is often garbled. For example, "10 main st." is displayed in RTL as .main st 10 and "MAKE html WORK FOR YOU" is displayed in LTR as EKAM html UOY ROF KROW instead of the intended UOY ROF KROW html EKAM and is quite unreadable.

Examples of such entities are legion: the title of an article, the name of an author, a description, etc. As long as the entire document and all the entities it contains are of uniform direction, there is no problem. Arbitrary-direction entities also don't cause a problem when they are displayed as a separate block element (which is treated as a separate "paragraph" in UBA terms). However, when an inline entity is allowed to contain text of arbitrary direction, bad things start happening, and existing HTML mark-up is powerless to stop it.

Example 1:

PURPLE PIZZA - 3 reviews

The entity here is the RTL name of a restaurant, being displayed in an LTR context. The intent is to have it appear as

AZZIP ELPRUP - 3 reviews

However, it is actually displayed as

3 - AZZIP ELPRUP reviews

and is effectively unreadable. This happens because according to the UBA, a number "sticks" to the strong-directional run preceding it.
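Many of the fixes below depend on knowing a string's base direction in the first place. When no declaration is available, applications commonly estimate it with a "first-strong" scan of the kind discussed later in this document. A minimal page-script sketch — the function name is illustrative, and the character classes cover only the main Hebrew and Arabic blocks rather than the full Unicode tables:

```javascript
// Guess the base direction of a string from its first strongly directional
// character ("first-strong"). Illustrative only: a real implementation needs
// the full Unicode strong-LTR/strong-RTL tables.
function firstStrongDir(text) {
  var RTL = /[\u0590-\u07BF\uFB1D-\uFDFD\uFE70-\uFEFC]/; // Hebrew, Arabic blocks
  var LTR = /[A-Za-z]/;                                  // basic Latin letters
  for (var i = 0; i < text.length; i++) {
    var ch = text.charAt(i);
    if (RTL.test(ch)) return "rtl";
    if (LTR.test(ch)) return "ltr";
  }
  return "ltr"; // no strong character found: fall back to LTR
}
```

With the right base direction declared, a string like "10 main st." stays readable; with the wrong one it garbles as shown above.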
Example 2:

<span dir="rtl">PURPLE PIZZA</span> - 3 reviews

This is a common first attempt at fixing Example 1. In fact, wrapping opposite-direction text in mark-up indicating its direction is generally a good idea, and is in many cases essential. Here, however, it makes no difference at all - the result is exactly the same as in Example 1. That the "fix" does not work is, in fact, to be expected: the <span dir="rtl"> only explicitly states the direction of the text inside it, and does not say anything at all about what surrounds it. In fact, the currently recommended way to fix our purple pizza is not to use mark-up at all, but to insert an LRM character (U+200E) after the PURPLE PIZZA. This prevents the RTL text from "sticking" to the number that happens to follow it. If the context had been RTL and the entity LTR, the same magic would be worked by the RLM character (U+200F). The same technique is supposed to be applied to Examples 3 and 4 below. Unfortunately, using LRM/RLM marks like this is less than ideal, for reasons we will discuss below.

Example 3:

USE css (<span dir="ltr">position:relative</span>).

The entity here is a code snippet ("position:relative"), marked with a span and its LTR direction, to be displayed in an RTL context. Despite the RTL context, it is preceded by the LTR word "css" because technical terms and brand names often appear in their original Latin script in RTL text. The intent is to have it appear as

.(position:relative) css ESU

However, it is actually displayed as

.(css (position:relative ESU

This happens because the LTR word "css" before the entity "sticks" to the LTR entity according to normal UBA rules.

Example 4: a breadcrumb trail of folder names, the last two of them RTL, displayed in an LTR context. The intent is to have it appear as

documents > LEVON TSRIF YM > 1 RETPAHC

However, it is actually displayed as

documents > 1 RETPAHC < LEVON TSRIF YM

i.e. with the RTL folder names visually in the wrong order (and the arrow between them reversed).
This happens because according to the UBA, the two RTL entities "stick" together, whether or not they are wrapped in <span>s as shown here.

Example 5:

joe hacker RLO: overdrawn

The entity here is the name of a user, as chosen by a malicious user to include the invisible RLO character (U+202E), followed by a status string. Obviously, the user's name is "HTML-escaped" when displayed, but this does not do anything to the RLO character. The outcome is that this is displayed as

joe hacker: nwardrevo

where the entity influenced the display of what follows it, reversing its characters. This has security implications and has surfaced on blogs. On the other hand, it does not even have to be due to malicious use, only to the inadvertently bad trimming of an overly-long string.

Currently, there is no reliable way to deal with examples 1 - 4 using mark-up, except by redundantly marking an entity's surroundings with the base direction, which is counterintuitive and painful to implement. The usual way to deal with 1 - 4 is to surround an entity in either LRM or RLM characters - LRM in an LTR context, and RLM in an RTL context. This prevents the entity from "sticking" to what precedes or follows it. However, using the LRM/RLM technique has several disadvantages, particularly in a web application:

- The LRM or RLM is being used to address a layout issue that reflects the structure of the document, i.e. to indicate the boundary of an entity. There should be a way to express it in mark-up, not magic Unicode characters. In fact, the entity is typically already surrounded by an element that either gives it style or indicates its direction; why can't the element itself be used to indicate an entity?
- In a web application, having to add logic to choose between an LRM and an RLM is a pain, especially when the existing code layer does not happen to have easy access to the context's direction.
- Not all search engines (e.g.
the browser's own CTRL-F) is smart enough to ignore invisible Unicode characters such as LRM and RLM. This makes a document using such characters less searchable: the user searches for "A B", but does not find "A B" because there is an invisible character between them. Or, conversely, the user copy-pastes text - accidentally including the LRM or RLM character - from the page into some search box, and does not get hits in any other documents because they do not contain the LRM/RLM. In a manually-authored HTML document using a few judiciously placed LRMs/RLMs, such problems do not amount to much. In a web application, however, the simplest way to use this technique is to do it wholesale, around every inserted entity. This results in very real searchability problems. Avoiding them requires implementing quite complicated logic to decide whether the LRM or RLM is really necessary.

Furthermore, LRMs and RLMs do not help in example 5. Nor is there any mark-up to solve it. The only current way to deal with it is for the application to either remove any LRE, RLE, LRO, RLO, or PDF characters in it, or to remove any extra PDFs and then add any missing ones at the end. This is a rarely-implemented pain in the neck.

Many web applications serve a multilingual user base and accept as input both LTR and RTL data. Furthermore, the application often does not know and can not control the direction of the data. For example, an online book store that carries books in many languages needs to display the original book titles regardless of the language of the user interface. Thus, a Hebrew or Arabic book title may appear in an English interface, and vice-versa. The direction of the title may be available as a separate attribute, but more likely it isn't, and needs to be guessed. The safest guess is on the basis of the characters making up the title. If this site also allows user comments or reviews, it is unreasonable to limit these to one language.
For example, for an English book listed in an Arabic or Hebrew interface, it is perfectly reasonable to get comments both in English and in the book's language. The application does not know what the user will type until the user types it. Unless opposite-direction data is explicitly declared as such, it is often displayed garbled as shown above. Perhaps even worse, the user experience of typing opposite-direction data is quite awkward due to the cursor and punctuation jumping around during data entry and difficulty in selecting text. Currently, avoiding such problems requires that the application implement logic to estimate the data's direction - and use it in the many places where it is needed. Such logic is not easy to implement, since it requires using long tables of strong-RTL and/or strong-LTR characters, and becomes non-obvious when a string contains both. For an input element, where the direction must be automatically set as the user types the text, there is no choice but to implement the estimation logic in page scripts, thus requiring even more advanced programming skills. As a result, few applications wind up doing direction estimation, and a poor user experience is quite common for web pages mixing LTR data in an RTL interface or vice-versa.

Not The Problem

The issue at hand is with text data that is basically compatible with the UBA. That is, given the correct base direction, applying the UBA will display the text intelligibly. The only problem is that we don't know the correct base direction. This is distinct from a different, harder issue: text mixing LTR and RTL without using the formatting characters necessary to display it intelligibly using standard UBA rules. Whichever base direction is applied, the text will not be displayed as intended. Examples of such data are not as rare as one might think.
Such text does not include the Unicode formatting characters that could fix its display either because it must conform to a syntax that would misinterpret such characters, or simply because it was created by a human user that does not know such characters exist, much less how to enter or use them. Given the text's syntax, or at least a set of patterns for the problematic parts, the text could, in theory, be parsed into its constituent parts, and formatting characters added to make the text display correctly. Although this is a painful real-world problem, it is unrelated to HTML per se and currently lacks a mature solution. We are not proposing one here.

Different approaches have been preferred in different contexts: first-strong for search boxes, any-RTL for advertisements, and word-count for longer texts like e-mails. Nevertheless, it is worth pointing out that the choice of the precise algorithm is an optimization. For most real-world data strings, all these estimation algorithms will give the same correct result. In addition to the basic algorithm choice, there are also more complex considerations. The more complex the element's structure, the higher the chances that it mixes LTR and RTL content, and the lower the chances that an estimation algorithm will succeed in displaying the contents intelligibly. It is meaningless to use an estimation algorithm on content mixed to the extent that it is unintelligible in both LTR and RTL (when displayed by standard UBA rules).

In many applications, it is necessary to allow the user to enter text of either direction into a given <input type="text"> or <textarea> element, regardless of the page's direction. Although algorithms for estimating the direction of a string exist (and hopefully will be exposed by the browser as described in 1.2 above), they remain heuristic for mixed-script strings. As a result, all major browsers provide some way for the user to explicitly set the direction of an <input type="text"> or <textarea> element, e.g.
via keyboard shortcuts, so the text being entered by the user is displayed correctly.

The Problem

Once the text entered by the user has been submitted to the server, the direction in which it was displayed in the page is lost, unless explicitly added to the form as an invisible input by page scripts. However, scripts are not available in all environments, e.g. e-mail forms. As a result, in such an environment, the application is forced to guess at the direction of a string submitted by the user, will sometimes get it wrong, and as a result display it incorrectly in subsequent pages.

Although most images, e.g. photos, are equally applicable to LTR and RTL pages, some images are inherently and primarily "handed" or "directional", and need to appear in a mirror image in an RTL page. Common examples include various arrow and "connector" images. A less obvious example might be star rating images: the "full" half of a half-star needs to be on the left in LTR and on the right in RTL.

The Problem

Currently, the author of a page to be localized into both LTR and RTL languages is forced to create two separate versions of each "handed" image, stored in two separate files, and use one or the other depending on the page language by changing the src attribute of the <img>. This process is monotonous and error-prone.

In the UBA, whitespace provides almost no separation against either kind of bidi influence. On the other hand, the UBA's sections 3.3.1 and 3.3.2 require that the bidi state be completely reset at a "paragraph break". This means that strong-directional text (e.g. letters) and explicit bidi formatting characters (e.g. RLE and RLO) in one paragraph have no effect on the formatting of the text in the next paragraph and vice-versa. This is a very high level of bidi separation. In plain text, "newline" characters like the line feed (U+000A) and carriage return (U+000D) are commonly used both to end paragraphs and simply to wrap logical lines.
The former usage needs a UBA paragraph break, while the latter usage wants no more bidi separation than other kinds of whitespace. The UBA resolves this ambiguity in favor of the paragraph break because of its importance. All common UBA implementations for plain text treat newline characters as a UBA paragraph break, in accordance with the UBA specification. The UBA leaves the definition of a "paragraph" in higher-level protocols like HTML up to the protocol. It is well-accepted that HTML block elements like <div> and <p> form UBA paragraphs, and this is implemented by all major browsers. Thus, whatever happens inside a block element has no effect on the bidirectional rendering of the text before it or after it.

The Problem

The HTML 4 standard explicitly specifies that <br> is to be treated for bidi purposes as whitespace, and not as a UBA paragraph break. The arguments for this decision seem to be that:

- <br> is defined as an inline element.
- The preferred way to demarcate a paragraph in HTML is as a <p> or some other block element.

Firefox and Opera follow this specification and treat <br> as whitespace for UBA purposes. In actual usage, however, <br> is a very popular element and is used to form paragraphs at least as often as <p>, just like newlines in plain text. In fact, unlike newlines in plain text, it is almost always used for that purpose, as opposed to just wrapping a line to fit in a limited amount of space, simply because HTML normally takes care of line wrapping by itself. As a result, Firefox's implementation of <br> as UBA whitespace, despite being in accordance with the current HTML specification, is regularly reported as a bug. It results in innocent-looking HTML like

1. his name is JOHN.<br>
2. SUSAN is a friend of his.

being rendered as

1. his name is .NHOJ
NASUS .2 is a friend of his.

Because the "JOHN.<br>2. SUSAN" forms a single RTL run despite the <br>, the "2" goes to the right of SUSAN.
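The IE/WebKit treatment of <br> as a paragraph break also shows up when page scripts estimate direction: each <br>-separated segment should then get its own base direction. A sketch — the function names are illustrative, and the first-strong scan is abbreviated to the main Hebrew and Arabic ranges:

```javascript
// Estimate a base direction per <br>-separated segment, mirroring the
// "each line is its own UBA paragraph" behavior described above.
function segmentDirections(html) {
  var RTL = /[\u0590-\u07BF\uFB1D-\uFDFD\uFE70-\uFEFC]/;
  var LTR = /[A-Za-z]/;
  function firstStrong(text) {
    for (var i = 0; i < text.length; i++) {
      var ch = text.charAt(i);
      if (RTL.test(ch)) return "rtl";
      if (LTR.test(ch)) return "ltr";
    }
    return "ltr"; // no strong character: fall back to LTR
  }
  // Split on <br>, <br/> and <br /> and classify each segment separately.
  return html.split(/<br\s*\/?>/i).map(firstStrong);
}
```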
(Please note that wrapping the "JOHN" and "SUSAN" in separate right-to-left <span>s, i.e.

1. his name is <span dir="rtl">JOHN</span>.<br>2. <span dir="rtl">SUSAN</span>

does not make any difference.) Although this LTR example is somewhat contrived, the RTL equivalent is quite realistic because it is common for LTR brand names, acronyms, etc. to be used in RTL text:

1. IT IS IMPORTANT TO LEARN html.<br>
2. css IS IMPORTANT TOO.

which is rendered in Firefox and Opera as

html. NRAEL OT TNATROPMI SI TI .1
.OOT TNATROPMI SI 2. css

As a result, IE and WebKit treat <br> as a UBA paragraph break. Although this is not in conformance with the HTML 4 spec, the bidi separation it provides does seem to follow most users' expectations. If IE and WebKit were to change their <br> behavior to conform to the current standard, many existing RTL HTML documents would be broken, especially given that they tend to be authored mostly with IE in mind.

While the bidi separation provided by treating <br> as a UBA paragraph separator is useful, the very strong nature of this separation (closing all open embedding levels) also creates problems. Being an inline element, <br> can be nested within an arbitrary number of other inline elements. If these inline ancestors have explicit dir attribute values of their own, should the <br> terminate their effects as UBA's definition of a paragraph separator says it should? That is what a newline in plain text does when it comes between an LRE or RLE and its matching PDF. So, should the second line in

<div dir="rtl"><span dir="ltr">1. hello!<br>2. goodbye!</span></div>

be displayed as RTL? That would conform to the definition of a UBA paragraph break, but would go against the spirit of HTML. This is, in fact, what WebKit currently does (although it is now being treated as a bug).
To avoid this problem, IE apparently re-opens the directional embedding levels specified on ancestor elements via mark-up (dir attribute, <bdo> element) or CSS up to the closest ancestor block element after closing them at a <br> paragraph break. On the other hand, it does not reopen the directional embedding levels stemming from surrounding LRE/RLE/LRO/RLO and PDF characters.

There is no standard definition of whether a block element serves as a UBA break between the text preceding and following it, i.e. whether the text preceding a <div></div> or an <hr> (defined to be a block element) should behave as if it were in the same UBA paragraph as the text following it. For short, we will call block elements with text on both sides "embedded". Different browsers treat embedded block elements differently. Just as with <br>, in Firefox and Opera, an embedded block element provides no bidi separation between the text preceding and following it, while IE and WebKit treat it as a UBA paragraph break.

Since a value displayed in the wrong direction can come out garbled, pages wind up having to wrap their RTL dialog text in RLE + PDF characters for correct display on LTR systems. (This is not a concern in the case of RTL dialog text, since a system that does not have RTL script support will not display RTL text correctly anyway.)

One would expect that the page's direction set using <html dir=...> would apply to the page's <title>. Unfortunately, however, this is not the case in any major browser. The directional context all major browsers use for <title> is either the OS or the browser chrome's default direction, which neither the server nor page scripts can even determine, let alone control. Nor does setting the dir attribute directly on the <title> element have any effect in any major browser. Since a value displayed using the wrong direction can come out garbled, pages wind up having to wrap their RTL <title> in RLE + PDF.
This has the same problems as with script dialog text, see 2.4. above.

Proposed Solution

The HTML specification should explicitly state that the <title>'s text will be displayed in the <title>'s computed direction. It is easy enough for a browser to implement this, since it knows the default directional context in which the text will be displayed. If and only if this differs from the desired direction, the browser needs to wrap the title text in RLE + PDF when RTL is desired and LRE + PDF when LTR is desired. In principle, this could break existing RTL documents that count on their title being displayed in LTR, as is usually the case today. The change should be made despite this, because:

- Such documents can't really count on the current behavior anyway: on an RTL OS / browser the title is already displayed RTL.
- In many cases, RTL documents work around the problem by having a title that looks the same whether displayed in LTR or RTL.
- This will fix more documents than it will break.
- Forcing backward compatibility will perpetuate an ugly exception.

2.6. title and alt attribute text should be displayed in the element's direction

Background

As in 2.4 above.

The Problem

Currently all major browsers (IE, FF, Chrome, Safari, Opera) display the tooltips specified by a title or alt element attribute in the direction of the element to which it belongs, but this does not appear to be formally specified anywhere. Furthermore, this consensus seems fragile because in principle, the direction of an element and the text of its tooltip do not have to coincide. Here is a reasonable counterexample: an RTL web page displays an LTR address (e.g. for a location in Europe), with a tooltip on the address element saying "ADDRESS" in the page's language. The tooltip thus needs to be RTL while the element needs to be LTR. Until recently, Chrome displayed tooltips in the OS / browser's default direction.
When fixing this bug, the initial inclination was to apply only the page's direction, not the element's, due to the "in principle" consideration. Apparently not trusting browser behavior, the W3C suggests that tooltip direction may have to be set using LRE | RLE + PDF. This is actually quite difficult to do properly, since wrapping an LTR tooltip in LRE + PDF just in case the browser winds up displaying it in an RTL context will result in the LRE and PDF displaying as rectangles on LTR OS's without RTL support enabled, i.e. the vast majority of computers.

In a single <select>, the values of different options may have different directions. Currently, however, out of all major browsers, only FF supports the dir attribute on <option>, and does so poorly: once the value is chosen, it is displayed in the <select>'s direction. IE and Opera display all options in the <select>'s direction. Safari automatically estimates the direction of each option and displays it as such both in the dropdown and after it has been chosen regardless of the <select>'s direction (which is only used to place the down-arrow button and to align the values). This is all very nice, but direction estimation algorithms do make mistakes, so it would be good to be able to specify the actual dir value for a given <option> - and Safari does not support that. Chrome does not support the dir attribute on <option> and is on its way to doing what Safari does. As a result, the only practical way to specify <option> value direction is using LRE | RLE + PDF, which is cumbersome.

Proposed Solution

The HTML specification should state that an <option> element's computed direction will take its dir attribute into account, and will be used to display the option's text in both the dropdown and after being chosen. The HTML specification should also state that setting an <option> element's alignment via CSS or the align attribute will affect its display accordingly in both the dropdown and after being chosen.

2.8.
<input type="text"> and <textarea> should support compatible "set direction" functionality

Background

Garbling by incorrect direction also applies to text being entered by the user in an input control. In fact, entering text of direction opposite to the input's declared direction is an unpleasant experience even if the full text does not wind up being garbled, due to the cursor and punctuation jumping around during data entry and difficulty in selecting text. All major browsers thus provide some way for the user to set the direction of each <input type="text"> and <textarea> element.

The Problem

Unfortunately, the way "set direction" functionality interacts with page scripts varies significantly between browsers, which makes it difficult to write scripts that are informed of the user's choice.

IE: Direction is set using keyboard shortcuts (the Windows convention of CTRL + LEFT SHIFT for LTR and CTRL + RIGHT SHIFT for RTL). They trigger the onpropertychange event, at which time the dir value is already changed. They also trigger onkeyup, but before the dir value has been changed, so setTimeout(0) has to be used to get the updated dir value. They do not trigger onkeypress.

FF: Direction is set using the CTRL + SHIFT + X keyboard shortcut, which cycles through LTR and RTL. It does not set the value of the element's dir attribute, and is thus invisible to scripts.

Opera: same keyboard shortcuts as IE. They do not set the value of the element's dir attribute, and are thus invisible to scripts.

Chrome: same keyboard shortcuts as IE. They set the value of the element's dir attribute, which is then available to scripts. They trigger the onkeyup event, at which time the dir value is already changed. They do not trigger onkeypress or oninput. They also do not trigger onpropertychange, since this event exists only in IE.

Safari: Right-click on the <input> or <textarea> provides a "Set paragraph direction" submenu. Using "Set paragraph direction" sets the value of the element's dir attribute, which is then available to scripts.
However, it does not trigger onkeyup, onkeypress, or oninput. It also doesn't trigger onpropertychange, since this event exists only in IE.

Proposed Solution

Furthermore, it should be recommended that on an OS that has a widespread convention for setting direction (such as CTRL + LEFT SHIFT for LTR and CTRL + RIGHT SHIFT for RTL on Windows), the user agent will support that convention (although it may provide other methods too).

2.9. when an input value is remembered, its direction should be remembered too

Background

Some browsers implement auto-completion, a feature whereby values previously entered into an element like <input type="text"> are remembered and under certain conditions presented to the user in a dropdown. When the user selects one of the items in the dropdown, this value is assigned to the element. At different times, the user may enter values of different direction for the same input. The direction of a value is set either directly by the user through a "set direction" command exposed by the browser (e.g. via keyboard shortcuts, see 2.8 above) or by letting page scripts automatically set the input's dir attribute after estimating the direction of the value on the fly.

The Problem

Browsers do not remember the direction of previously-entered values. Some display them in the dropdown in the OS or browser default direction. Some display them in the input's current direction. Finally, some display each value in its own estimated direction. Each of these will result in some values being displayed incorrectly; even the last approach will sometimes fail because estimation algorithms do make mistakes, and this may not have been the direction originally set by the user or page scripts. After the user chooses a value from the dropdown, the value is usually displayed in the input's current direction, which may or may not be correct for it.
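Given how inconsistently the browsers above surface a direction change — some update the dir attribute, some fire no usable event at all — a page script that wants to track the direction a value was displayed in is often reduced to re-checking the attribute from its own hooks. A sketch; the callback shape and polling arrangement are assumptions, not any browser API:

```javascript
// Report changes in an element's dir value. FF and Opera (see above) change
// the rendered direction without touching the dir attribute, so even this
// cannot observe every user action; it covers the IE/Chrome/Safari cases.
function makeDirWatcher(getDir, onChange) {
  var last = getDir();
  return function poll() {
    var now = getDir();
    if (now !== last) {
      last = now;
      onChange(now);
    }
  };
}

// In a page, the poller would be driven from a timer and from key events:
//   var poll = makeDirWatcher(function () { return input.dir || "ltr"; },
//                             function (dir) { /* react to the change */ });
//   setInterval(poll, 200);
//   input.onkeyup = function () { setTimeout(poll, 0); }; // after dir updates
```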
Proposed Solution

The HTML specification should state that whenever a user agent stores a user-provided <input type="text"> or <textarea> value for later use (such as auto-completion), it should also store the nominal direction value the element had when displaying this value. This may be the original direction of the element, or may have been set by the user for that value via keyboard shortcuts, or may have been set for that value by page scripts. If the user agent later displays the value in an auto-completion dropdown, it should be displayed in its stored direction. If the value is assigned to an element, the element's dir value should be set to its stored direction.

In our opinion, not only is browser behavior unacceptably incompatible and inconsistent, but none of the above provides a usable display of opposite-direction list items.

In a browser open on a given page, the UI is made up of two parts: the chrome of the browser itself (e.g. its menus and toolbars), and the page being displayed in the browser. The two parts can be and often are in two different languages and thus directions. It is unclear which of the two is the principal part of the UI. Certainly the page takes up most of the window and is presumably the user's focus of attention. As a result, it seems natural that the vertical scrollbar should be on the "end" edge relative to the page's (i.e. the <body> element's) overall direction - and not the browser's chrome direction. However, this usually results in a usability issue when surfing: the scrollbar moves from side to side when going from an LTR page to an RTL page or vice-versa, confusing the user and making the scrollbar surprisingly difficult to find visually and click on physically. It is also arguable that the overall scrollbar is a part of the browser chrome, not the page, so it has no business being dependent on the page direction.
As a result, Firefox, Chrome and Safari place the scrollbar on the "end" edge relative to the browser's chrome direction. Furthermore, this is the behavior required by the "dir on html, vertical scrollbar alignment" and "dir on body, vertical scrollbar alignment" tests in the i18n test suite being developed by the W3C. However, IE and Opera continue to put the scrollbar relative to the page direction.

Proposed Solution

The HTML specification.)

2.12. the vertical scrollbar of an element below <body> should be on the "end" side relative to the element's direction

Background

As in 2.1 above.

The Problem

Users expect the vertical scrollbar of a "widget" inside the page to be on an LTR widget's right side, and on an RTL widget's left side. The rationales for making the browser chrome direction determine the location of the vertical scrollbar for the <body> element in 2.11 were specific to the <body> element:

- Only the <body>'s scrollbars could conceivably be in the same window location across all pages.
- Only the <body>'s scrollbars can be conceived of as being part of the browser chrome.

However, due to the usability problem with the page's overall vertical scrollbar described.
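Several of the sections above refer to page scripts that "estimate the direction of the value on the fly." The most common such heuristic, first-strong detection, is easy to sketch (this illustration is mine, not part of the proposal) using Unicode bidirectional categories:

```python
import unicodedata

def first_strong_direction(text, default="ltr"):
    """Return 'ltr' or 'rtl' based on the first strongly directional
    character, the heuristic page scripts typically use to set dir."""
    for ch in text:
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":
            return "ltr"
        if bidi in ("R", "AL"):  # Hebrew-type and Arabic-type strong RTL
            return "rtl"
    return default  # no strong character: fall back to the element's dir

print(first_strong_direction("hello"))  # ltr
print(first_strong_direction("שלום"))   # rtl
print(first_strong_direction("123!"))   # ltr (weak characters only)
```

As the proposal notes, such estimation can still guess wrong (for example, an RTL phrase that happens to start with a Latin brand name), which is why remembering the direction actually used is proposed instead of re-estimating.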
http://www.w3.org/International/wiki/BidiProposal
As some of you have read, I've been working with JNI these days. I've found that JNA may be easier to manage and also allows some other things. I have this very simple program.

The .cpp:

#include <stdio.h>
#include "PruebasDeJNA.h"

int suma(int a, int b)
{
    return a + b;
}

The .h:

#include <stdio.h>

int suma(int, int);

All it does is add two numbers and return them. Nothing more. In my .java program:

package hola;

import com.sun.jna.*;

public class Mundo {
    public interface PruebasDeJNA extends Library {
        PruebasDeJNA INSTANCE = (PruebasDeJNA) Native.loadLibrary(
                (Platform.isWindows() ? "PruebasDeJNA" : "c"), PruebasDeJNA.class);

        int suma(int a, int b);
    }

    public static void main(String[] args) {
        System.out.println(PruebasDeJNA.INSTANCE.suma(5, 7));
    }
}

The error it gives me is that it cannot find the function "suma" in the library. As you can see, avoiding porting the code from C++ to Java is necessary, as there are too many lines to port over and the program is a lot of math, so one miscalculation could throw away a lot of work. Thanks for the help
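A frequent cause of this "cannot find function" error is C++ name mangling: when the .cpp above is compiled as C++, suma is exported under a mangled symbol name unless it is declared extern "C", so a lookup by the plain name fails. The same lookup behavior can be demonstrated from Python's ctypes, which, like JNA, resolves symbols by their exact exported names (a standalone illustration of mine, assuming a Linux libc):

```python
import ctypes
import ctypes.util

# Load the C library; symbol lookup works like JNA's:
# by the plain, unmangled symbol name.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# "strlen" is exported under its plain C name, so lookup succeeds.
strlen = libc.strlen
strlen.argtypes = [ctypes.c_char_p]
strlen.restype = ctypes.c_size_t
print(strlen(b"hola"))  # 4

# A symbol that is not exported under the requested name fails,
# just like JNA failing to find "suma" in a C++-compiled library.
try:
    libc.suma
except AttributeError as err:
    print("lookup failed:", err)
```

The usual fix on the C++ side is to wrap the declaration in `extern "C"` so the exported symbol keeps its unmangled name.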
https://www.daniweb.com/programming/software-development/threads/397324/some-help-with-jna
Thanks to all. The image is dynamically created by user action, not by creating an image file. I implemented an action for the src and it is working fine. Thanks for the help.

-Yoga

-----Original Message-----
From: Craig R. McClanahan [mailto:craigmcc@apache.org]
Sent: Tuesday, October 28, 2003 7:47 AM
To: Struts Users Mailing List
Subject: Re: specifying image source as jpg stream

Max Cooper wrote:

>You may want to write a separate servlet to serve the image data. That
>allows you to implement getLastModified() and allow proper browser-caching
>support, which can significantly increase the speed of your pages if the
>user is likely to view the images more than once. We did this with an Action
>first and since we had caching turned off, it reloaded the images every
>time. Switching to a separate servlet where we implemented getLastModified()
>was perceptably faster.
>
>Perhaps Struts should allow Action-implementers to implement some kind of
>getLastModified() method for this reason. Or at least to turn caching on and
>off at the Action (or action-mapping) level. getLastModified() is really
>useful if you have the image data (or document data, etc.) stored in a db.
>

Controlling this stuff at the per-Action level is a nice idea. If you're using an Action to create dynamic output already (such as when you directly stream the binary output and then return null), it's quite easy to do today -- your Action will be able to see the "If-Modified-Since" header that the browser sends, and then can decide to return a status 304 (NOT MODIFIED) if your current database stuff is not more recent. Something along the lines of this in your Action.execute() method should do the trick:

// When was our database data last modified?
long dataModifiedDate = ... timestamp when database last modified ...

// Have we sent to this user previously?
long modifiedSince = request.getDateHeader("If-Modified-Since");
if (modifiedSince > -1) { // i.e.
it was actually specified
    if (dataModifiedDate <= modifiedSince) {
        response.sendError(HttpServletResponse.SC_NOT_MODIFIED);
        return (null);
    }
}

// Set the timestamp so the browser can send back If-Modified-Since
response.setDateHeader("Date", dataModifiedDate);

// Now write the actual content type and data
response.setContentType("image/jpg");
ServletOutputStream stream = response.getOutputStream();
... write out the bytes ...

// Return null to tell Struts the response is complete
return (null);

>-Max
>
>
>
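The conditional-request logic in Craig's snippet is not Struts-specific; the same decision can be sketched in a few lines of Python (my own rendering, with invented names, purely to show the control flow):

```python
NOT_MODIFIED = 304
OK = 200

def conditional_get(data_modified, if_modified_since):
    """Decide how to answer a GET given the resource's last-modified
    timestamp and the If-Modified-Since value from the request
    (-1 meaning the header was absent), mirroring the Java snippet."""
    if if_modified_since > -1 and data_modified <= if_modified_since:
        # The client's cached copy is still current: send 304, no body.
        return NOT_MODIFIED, None
    # Otherwise send the full response with a timestamp header
    # so the client can revalidate next time.
    return OK, {"Last-Modified": data_modified}

print(conditional_get(1000, 1500))  # (304, None)
print(conditional_get(2000, 1500))  # (200, {'Last-Modified': 2000})
print(conditional_get(2000, -1))    # (200, {'Last-Modified': 2000})
```

The payoff, as the thread notes, is that revalidation requests that hit the first branch skip regenerating and re-sending the image bytes entirely.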
http://mail-archives.us.apache.org/mod_mbox/struts-user/200311.mbox/%3C78BE7700906EE7438678F9C9152DDEA1017A7D19@bkorex01.corp.mphasis.com%3E
Sometimes some of us want to narrow the encoding of an output XML document while preserving data fidelity. E.g. you transform some XML source with arbitrary Unicode text into another format and you need the resulting XML document to be ASCII encoded (don't ask me why). Here is a fast and simple solution for that problem.

One could ask - btw, can an ASCII-encoded XML document contain arbitrary Unicode characters? While it sounds a bit contradictory, the answer is of course - sure it can, because the encoding of an XML document (the one which you can see in the XML declaration) is a sort of "transfer encoding", while the character range for any XML document is always the whole set of legal characters of Unicode and ISO/IEC 10646 (more strict definition). XML syntax allows any (but legal in XML) character to be written as a numeric character reference, e.g. &#1489; (Hebrew letter BET). So the solution for the given problem of narrowing the encoding is to encode all characters that don't fit into the target encoding as numeric character references. Basically that's what the XSLT 1.0 spec requires from XSLT processors.

Well, unfortunately the native .NET 1.X XSLT processor - the XslTransform class - doesn't support that yet. So let's see how we can get this done. The first solution that comes to my mind is a simple custom XmlWriter, which filters output text and encodes all non-ASCII characters as numeric character references. Just like a SAX filter, for those SAX minded.
Here is the implementation:

public sealed class ASCIIXmlTextWriter : XmlTextWriter
{
    //Constructors - add more as needed
    public ASCIIXmlTextWriter(string url) : base(url, Encoding.ASCII) {}

    public override void WriteString(string text)
    {
        StringBuilder sb = new StringBuilder(text.Length);
        foreach (char c in text)
        {
            if (c >= 0x0080)
            {
                sb.Append("&#");
                sb.Append((int)c);
                sb.Append(';');
            }
            else
            {
                sb.Append(c);
            }
        }
        base.WriteRaw(sb.ToString());
    }
}

A sample source document (foo.xml):

<?xml version="1.0" encoding="UTF-8"?>
<message>English, Русский (Russian), עברית (Hebrew)</message>

A trivial stylesheet (foo.xsl) that copies the document through:

<xsl:stylesheet version="1.0" xmlns:
  <xsl:template
    <xsl:copy-of
  </xsl:template>
</xsl:stylesheet>

And the transformation itself:

XPathDocument doc = new XPathDocument("foo.xml");
XslTransform xslt = new XslTransform();
xslt.Load("foo.xsl");
ASCIIXmlTextWriter writer = new ASCIIXmlTextWriter("out.xml");
xslt.Transform(doc, null, writer, null);
writer.Close();

The resulting out.xml:

<message>English, &#1056;&#1091;&#1089;&#1089;&#1082;&#1080;&#1081; (Russian), &#1506;&#1489;&#1512;&#1497;&#1514; (Hebrew)</message>

The above simple solution handles equally well both text and attribute values, but not comments and PIs (but that's easy as well). It's neither optimized nor tested, but you've got the idea.

Comments:

As for the & ampersand issue, simply call HTTPUtility.HTMLEncode in the WriteString method as well.

Nice... But it started to change & to normal ampersand and I cannot live with that. Well keep trying.

This works great. Thanks so much! For others trying this, note that the class uses System.Xml and System.Text.
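The escaping loop above is trivial to port to other languages; for instance, a rough Python equivalent of the same idea (mine, not from the post):

```python
def ncr_escape(text, limit=0x0080):
    """Replace every character at or above `limit` with an XML numeric
    character reference, mirroring ASCIIXmlTextWriter.WriteString."""
    out = []
    for ch in text:
        if ord(ch) >= limit:
            out.append("&#{0};".format(ord(ch)))
        else:
            out.append(ch)
    return "".join(out)

print(ncr_escape("English, עברית"))
# English, &#1506;&#1489;&#1512;&#1497;&#1514;
```

As in the C# version, everything below the limit passes through untouched, so the output stays pure ASCII while remaining a faithful representation of the Unicode text.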
http://www.tkachenko.com/blog/archives/000266.html
On Sat, Mar 03, 2001 at 12:00:19PM +0000, Philip Blundell wrote:
> It's certainly very strange. Perhaps you could try disassembling the
> gzip that works for you, and comparing it to one that doesn't. If
> that doesn't yield any clues then I think the only thing for it is
> for you to dig into one of the crashing binaries with the debugger
> and find out what it is that makes it go wrong.

I've looked at it some more, but haven't found the cause. It seems like either argv[0] is corrupted somewhere, or parameters get corrupted e.g. when the program calls glibc. The generated code for gzip's main() is very different, no idea what's happening in either version of the binary. The code for basename() is identical.

I guess I will have to try with gdb. <sigh> I hardly know anything about what happens before a program's main() is executed under Linux. Where's the best place to look? Also, where can I get information about ELF? It would be nice to have a crashing version of gzip with debugging symbols. (I compiled gzip on rameau yesterday, but that binary works. Still have to try debussy.) Where did the gzip in question (=current potato version) get compiled?

This is what happens: When gzip is invoked, it calls one of its functions, basename(), to strip e.g. '/usr/bin/' from argv[0]:

int main (argc, argv)
    int argc;
    char **argv;
{
    int file_count;     /* number of files to process */
    int proglen;        /* length of progname */
    int optc;           /* current option */

    progname = basename(argv[0]);
    ...
----------------------------------------------------------------------
basename() calls glibc's strrchr():

char *basename(fname)
    const char *fname;
{
    char *p;

    if ((p = strrchr(fname, PATH_SEP)) != NULL) fname = p+1;
    if ('A' == 'a') strlwr(fname); /* optimized away */
    return fname;
}

2009230:  e1a0c00d      mov     ip, sp
2009234:  e92dd810      stmdb   sp!, {r4, fp, ip, lr, pc}
2009238:  e24cb004      sub     fp, ip, #4      ; 0x4
200923c:  e1a04000      mov     r4, r0
2009240:  e3a0102f      mov     r1, #47         ; 0x2f
2009244:  ebffde6d [5]  bl      0x2000c00       ; <== strrchr()
2009248:  e3500000      cmp     r0, #0          ; 0x0
200924c:  12800001      addne   r0, r0, #1      ; 0x1
2009250:  01a00004      moveq   r0, r4
2009254:  e91ba810      ldmdb   fp, {r4, fp, sp, pc}
----------------------------------------------------------------------
In glibc, strrchr makes a call to strchr(), which is weak-aliased to index():

char * strrchr (const char *s, int c)
{
    register const char *found, *p;

    c = (unsigned char) c;

    /* Since strchr is fast, we use it rather than the obvious loop. */
    if (c == '\0')
        return strchr (s, '\0');

    found = NULL;
    while ((p = strchr (s, c)) != NULL)
    {
        found = p;
        s = p + 1;
    }
    return (char *) found;
}

strrchr:
7b30c:  e1a0c00d      mov     ip, sp
7b310:  e92dd830      stmdb   sp!, {r4, r5, fp, ip, lr, pc}
7b314:  e24cb004      sub     fp, ip, #4      ; 0x4
7b318:  e1a04001      mov     r4, r1
7b31c:  e21440ff      ands    r4, r4, #255    ; 0xff
7b320:  1a000002      bne     0x7b330
7b324:  e1a01004      mov     r1, r4
7b328:  ebfe8bb3      bl      0x1e1fc
7b32c:  e91ba830      ldmdb   fp, {r4, r5, fp, sp, pc}
7b330:  e3a05000      mov     r5, #0          ; found = NULL
7b334:  ea000001 [4]  b       0x7b340         ; while
7b338:  e1a05000      mov     r5, r0
7b33c:  e2850001      add     r0, r5, #1      ; 0x1
7b340:  e1a01004 [2]  mov     r1, r4
7b344:  ebfe9190      bl      0x1f98c         ; strchr(s, c)
7b348:  e3500000      cmp     r0, #0          ;
7b34c:  1afffff9 [3]  bne     0x7b338
----------------------------------------------------------------------
Finally, the crash occurs in index():

char *
strchr (s, c_in)
    const char *s;
    int c_in;
{
    const unsigned char *char_ptr;
    unsigned char c;

    c = (unsigned char) c_in;

    for (char_ptr = s;
         ((unsigned long int) char_ptr & (sizeof (longword) - 1)) != 0;
         ++char_ptr)
        if (*char_ptr == c)
            return (void *)
char_ptr;
        else if (*char_ptr == '\0')
            return NULL;
    ...

index/strchr:
79e30:  e1a0c00d [1]  mov     ip, sp
79e34:  e92dd810      stmdb   sp!, {r4, fp, ip, lr, pc}
79e38:  e24cb004      sub     fp, ip, #4
79e3c:  e1a03000      mov     r3, r0          ; char_ptr = s
79e40:  e3130003      tst     r3, #3          ; 0x3
79e44:  e20110ff      and     r1, r1, #255    ; c = (uchar) c_in;
79e48:  0a000007      beq     0x79e6c
79e4c:  e5d30000      ldrb    r0, [r3]        ; <== CRASH HERE
----------------------------------------------------------------------
*** Segmentation fault
Register dump:
R0: 00000001   R1: 0000002f   R2: bffffb9c   R3: 00000001
R4: 0000002f   R5: 00000000   R6: 02000a18   R7: 400207a4
R8: 00000001   R9: 02000fe4   SL: 4013c2c8   FP: bffffb0c
IP: bffffb10   SP: bffffafc   LR: 400a7348   PC: 400a5e4c
CPSR: 20000010
Trap: 0000000e   Error: 00000002   OldMask: 00000000
Backtrace:
/lib/libc.so.6(index+0x1c)[0x400a5e4c]
/lib/libc.so.6(strrchr+0x3c)[0x400a7348]
./gzip.crashes(basename+0x18)[0x2009248]
./gzip.crashes(strcpy+0x304)[0x2001004]
/lib/libc.so.6(__libc_start_main+0x108)[0x4004be50]
./gzip.crashes(strcpy+0x34)[0x2000d34]

The register dump implies that at [1] strchr() was called with R0=1. Thus, R0 must also have been 1 at [2]. [2] could only be reached via the branch at [4]; R0 would have had to be 0 at [3], but in that case that branch is not taken. Thus, R0 was 1 on entry to strrchr(), and also when strrchr() was called at [5].

Confused,

Richard

--
  __   _
  |_) /|  Richard Atterer      |  CS student at the Technische  |  GPG key:
  | \/¯|  |  Universität München, Germany  |  888354F7
  ¯ ´` ¯
https://lists.debian.org/debian-arm/2001/03/msg00015.html
Question: I'm trying to make a GTK application in Python where I can just draw a loaded image onto the screen where I click on it. The way I am trying to do this is by loading the image into a pixbuf, and then drawing that pixbuf onto a drawing area. The main line of code is here:

def drawing_refresh(self, widget, event):
    # clear the screen
    widget.window.draw_rectangle(widget.get_style().white_gc,
                                 True, 0, 0, 400, 400)
    for n in self.nodes:
        widget.window.draw_pixbuf(widget.get_style().fg_gc[gtk.STATE_NORMAL],
                                  self.node_image, 0, 0, 0, 0)

This should just draw the pixbuf onto the image in the top left corner, but nothing shows but the white image. I have tested that the pixbuf loads by putting it into a gtk image. What am I doing wrong here?

Solution 1: I found out I just need to get the function to call another expose event with widget.queue_draw() at the end of the function. The function was only being called once at the start, and there were no nodes available at this point, so nothing was being drawn.

Solution 2: You can make use of cairo to do this. First, create a gtk.DrawingArea based class, and connect the expose-event to your expose func.

import gtk
import cairo

class draw(gtk.DrawingArea):
    def __init__(self):
        gtk.DrawingArea.__init__(self)
        self.connect('expose-event', self._do_expose)
        self.pixbuf = gtk.gdk.pixbuf_new_from_file(PATH_TO_THE_FILE)

    def _do_expose(self, widget, event):
        cr = self.window.cairo_create()
        cr.set_operator(cairo.OPERATOR_SOURCE)
        cr.set_source_rgb(1, 1, 1)
        cr.paint()
        cr.set_source_pixbuf(self.pixbuf, 0, 0)
        cr.paint()

This will draw the image every time the expose-event is emitted.
http://www.toontricks.com/2018/05/tutorial-drawing-pixbuf-onto-drawing.html
Easy handling of vagrant hosts within fabric

Project description

Easy handling of vagrant hosts within fabric

Installation

Easy as py(pi):

$ pip install --upgrade fabrant

From git source:

$ git clone
$ cd fabrant
$ python setup.py install

Usage

Using fabrant within a fabric script is easy:

from fabric.api import run
from fabrant import vagrant

# Specify path for the vagrant dir. Start the box if it is not already running.
# Halt the box when the context is closed.
with vagrant("path/to/dir", up=True, halt=True):
    run("ls /vagrant")  # prints contents of the usually enabled share

Contribute

Report any issues you find here or create pull requests.
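Under the hood, a helper like this is essentially a context manager wrapped around `vagrant up` / `vagrant halt`. The shape of that pattern can be sketched with the shell calls stubbed out as an injectable function, so nothing real is required (this is my sketch, not fabrant's actual source):

```python
from contextlib import contextmanager

@contextmanager
def vagrant_box(path, up, halt, run_cmd):
    """Sketch of fabrant-style up/halt bookkeeping. `run_cmd` stands in
    for whatever actually shells out to the vagrant binary."""
    if up:
        run_cmd("vagrant up", cwd=path)
    try:
        yield path
    finally:
        if halt:
            # Halt even if the body raised, mirroring context-manager cleanup.
            run_cmd("vagrant halt", cwd=path)

# Record calls instead of touching a real VM.
calls = []
def fake_run(cmd, cwd):
    calls.append((cmd, cwd))

with vagrant_box("path/to/dir", up=True, halt=True, run_cmd=fake_run):
    pass

print(calls)  # [('vagrant up', 'path/to/dir'), ('vagrant halt', 'path/to/dir')]
```

The `try/finally` is the important design choice: the box is halted even when the fabric tasks inside the block fail.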
https://pypi.org/project/fabrant/
Approximating diffraction patterns of rectangular apertures with the FFT

A very common problem encountered by people learning Fourier optics or diffraction theory is the determination of the far field diffraction pattern from a rectangular aperture. This problem has a clean, analytical solution. Those who are familiar with MATLAB or a similar language may try to write computer code that calculates this diffraction pattern based on fast Fourier transform (FFT) implementations. Unfortunately, the output of this code is not, strictly speaking, the same as predicted by the analytical solution for a few reasons:

- The analytical solution is a function of continuous variables in space, whereas the simulated solution requires taking discrete samples of the relevant fields.
- The analytical solution is not band-limited. Therefore, aliasing will distort the computed solution.

In my last post I showed how to do basic pupil function simulations but did not go into many details about the FFT and possible errors associated with it. In this post I will dig a bit deeper into the topic of the FFT and its use in computational wave optics. The purpose of this post is to investigate differences between the analytical theory and the simulated solution of the diffraction pattern from a rectangular aperture by investigating the simpler problem of diffraction from a 1D slit.

%pylab
%matplotlib inline
plt.style.use('dark_background')
plt.rcParams['image.cmap'] = 'plasma'

from scipy.fftpack import fft
from scipy.fftpack import fftshift, ifftshift

Using matplotlib backend: Qt4Agg
Populating the interactive namespace from numpy and matplotlib

Scalar diffraction theory for a 1D slit

Goodman and many others have shown that the far-field (also known as Fraunhofer) solution to the diffracted electric field from a rectangular aperture is proportional to the Fourier transform of the field distribution in the aperture.
This fact follows directly from applying the Fraunhofer approximation to the diffraction integral developed by Huygens and Fresnel (Goodman). For both convenience and numerical efficiency, I will focus on the one-dimensional analog to this problem: diffraction from an infinitely-long slit with width $ a $. If there is an incident monochromatic plane wave with amplitude $E_0$ traveling in the z-direction perpendicular to a slit in the $z=0$ plane, then the Fraunhofer diffraction pattern for the field is expressed by the equation

$$U \left(x', z \right) = \frac{e^{jkz} e^{ jk\frac{x'^2}{2z} }}{\sqrt{j \lambda z}} \int_{-\infty}^{\infty} u \left( x, z = 0 \right) \exp \left( -j \frac{2 \pi}{\lambda z} xx'\right) dx $$

with $u \left( x, z = 0 \right) = E_0 $ denoting the incident field, $ k = \frac{2 \pi}{\lambda} $ representing the free-space wavenumber, $ x = 0 $ and $ x' = 0 $ the center planes of the slit and diffraction patterns, respectively, and $ j $ the imaginary number. The factor $ \sqrt{j \lambda z} $ comes from separating the solution to the 2D Huygens-Fresnel integral into two, 1D solutions. The irradiance in the z-planes after the slit is proportional to the absolute square of the field,

$$I \left( x', z \right) \propto \left| U \left( x', z \right) \right|^2$$

Taking the absolute square eliminates the complex exponentials preceding the integral so that the irradiance profile becomes

$$I \left( x', z \right) \propto \frac{1}{\lambda z} \left| \int_{-\infty}^{\infty} u \left( x \right) \exp \left( -j \frac{2 \pi}{\lambda z} xx'\right) dx \right|^2 = \frac{1}{\lambda z} \left| \mathcal{F} \left[ u \left( x, z \right) \right] \left( \frac{x'}{\lambda z}\right) \right|^2 $$

This expression means that the irradiance describing the Fraunhofer diffraction pattern from a slit is proportional to the absolute square of the Fourier transform of the field at the aperture, evaluated at spatial frequencies $ f_{x} = x' / \lambda z$.
To arrive at the analytical solution, we perform the integration over a region where the slit is not opaque, $ \left[ -a/2, a/2 \right] $. Inside this region, the field is simply $ u \left( x \right) = E_0 $ because we have a monochromatic plane wave with this amplitude.

$$ \mathcal{F}\left[ u \left( x, z \right) \right] = \int_{-a/2}^{a/2} E_0 \exp \left( -j \frac{2 \pi}{\lambda z} xx'\right) dx $$

The analytical solution to this integral is

$$ \mathcal{F}\left[ u \left( x, z \right) \right] = a E_0 \text{sinc} \left( \frac{ax'}{\lambda z} \right) $$

where $ \text{sinc} \left( x \right) = \frac{\sin{\pi x}}{\pi x}$. The far-field irradiance is therefore the absolute square of this quantity divided by the product of the wavelength and the propagation distance. Let's plot the analytically-determined irradiance profile of the diffraction pattern:

# Create a sinc function to operate on numpy arrays
def sinc(x):
    if (x != 0): # Prevent divide-by-zero
        return np.sin(np.pi * x) / (np.pi * x)
    else:
        return 1
sinc = np.vectorize(sinc)

amplitude    = 1     # Volt / sqrt(micron)
slitWidth    = 5     # microns
wavelength   = 0.532 # microns
propDistance = 10000 # microns (= 10 mm)

x = np.arange(-10000, 10000, 1)
F = sinc(slitWidth * x / wavelength / propDistance)
I = amplitude / (wavelength * propDistance) * (slitWidth * F)**2

plt.plot(x, I, linewidth = 2)
plt.xlim((-5000, 5000))
plt.xlabel(r'Position in observation plane, $\mu m$')
plt.ylabel('Power density, $V^2 / \mu m$')
plt.grid(True)
plt.show()

We can also verify a few key properties of the Fourier transform and ensure that our units are correct. The most important property is the conservation of energy, which in signal processing is often known as Parseval's theorem.
In our case, conservation of energy means that the integral of the field over the slit must equal the integral of the field in any arbitrary z-plane.

$$\int_{-\infty}^{\infty} \left| u \left( x, z = 0 \right) \right|^2 dx = \int_{-\infty}^{\infty} \left| U \left( x', z \right) \right|^2 dx'$$

where the spatial frequency is $ f_{x} = \frac{x'}{\lambda z} $. Because the incident field is a plane wave, the integral over the slit is just the square of the plane wave amplitude multiplied by the slit width:

$$\int_{-a/2}^{a/2} E_0^2 dx = aE_0^2$$

We need the square integral to be proportional to units of power, which will be true if the field has units of $ \frac{\text{Volts}}{\sqrt{\mu m}} $ and not the more familiar $ \frac{\text{Volts}}{\mu m} $. This peculiarity stems from the fact that we're looking at a somewhat synthetic 1D case. With this in mind, the square integral has units of $ \text{Volts}^2 $ and is proportional to the power delivered by the plane wave. The absolute square of the diffracted field is

$$ \left| U \left( x', z \right) \right|^2 = \frac{a^2 E_0^2}{\lambda z} \text{sinc}^2 \left( \frac{a x'}{\lambda z} \right)$$

The integral of $ \text{sinc}^2 \left( x \right) $ with my definition of $ \text{sinc} \left( x \right) $ over the real number line is just 1, so

$$ \int_{-\infty}^{\infty} \left| U \left( x', z \right) \right|^2 dx' = \frac{a^2 E_0^2}{\lambda z} \times \frac{\lambda z}{a} = aE_0^2$$

All of this is essentially a verification that our units and prefactors have most likely been handled correctly.
Most Fourier optics texts drop a lot of prefactors in their derivations, so it's always good to check that they've done everything correctly :)

from scipy.integrate import simps

# Compute input power
powerIn = slitWidth * amplitude**2

# Compute output power by numerical integration of the sinc
prefactor    = slitWidth**2 * amplitude**2 / wavelength / propDistance
sincIntegral = simps(F, x)
powerOut     = prefactor * sincIntegral

print('The input power is {0:.4f} square Volts. The output power is {1:.4f} square Volts'.format(powerIn, powerOut))

The input power is 5.0000 square Volts. The output power is 5.0373 square Volts
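As an extra cross-check of the analytical result itself (my own addition, not part of the original post), we can integrate the Fraunhofer kernel directly over the slit and compare it with the closed-form $a E_0 \, \text{sinc}(ax'/\lambda z)$:

```python
import numpy as np
from scipy.integrate import quad

a, E0 = 5.0, 1.0        # slit width (microns) and field amplitude
wl, z = 0.532, 10000.0  # wavelength and propagation distance (microns)

def fraunhofer_field(xp):
    """Integrate E0 * exp(-j 2 pi x x' / (wl z)) over the open slit."""
    re, _ = quad(lambda x: E0 * np.cos(2 * np.pi * x * xp / (wl * z)), -a / 2, a / 2)
    im, _ = quad(lambda x: -E0 * np.sin(2 * np.pi * x * xp / (wl * z)), -a / 2, a / 2)
    return re + 1j * im

def sinc_field(xp):
    # np.sinc(u) = sin(pi u) / (pi u), matching the post's definition
    return a * E0 * np.sinc(a * xp / (wl * z))

for xp in (0.0, 250.0, 1000.0, 2500.0):
    assert np.isclose(fraunhofer_field(xp), sinc_field(xp))
print("numerical integral matches a * E0 * sinc(a x' / (wl z))")
```

The imaginary part vanishes because the slit is symmetric about $x = 0$, which is why the analytic result is purely real.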
The FFT outputs an array of numbers from spatial frequency 0 up to $ \left( \frac{N-1}{N} \right) f_S $. By default, the length of the output array $ N $ is the same as the length of the input. In Python, array indexes start from 0, so the relationship between index $ i $ and spatial frequency is$$ f_{x} = \frac{f_S}{N} i $$ (In MATLAB, array indexes start at 1 so $i$ above should be replaced by $i - 1$ .) We also need to do two other things when computing the numerical Fourier transform. First, we need to shift the input field using the fftshift command for arrays with an even number of elements and ifftshift for arrays with an odd number of elements. This will shift the elements of the array so that the point corresponding to x = 0 lies in the first element of the array (index 0). The second thing we must do is multiply the fft results by the grid spacing dx to ensure that the scaling on the y-axis is correct. dx = x[1] - x[0] # Spatial sampling period, microns fS = 1 / dx # Spatial sampling frequency, units are inverse microns f = (fS / x.size) * np.arange(0, x.size, step = 1) # inverse microns diffractedField = dx * fft(fftshift(field)) # The field must be rescaled by dx to get the correct units # Plot the field up to the Nyquist frequency, fS / 2 plt.plot(f[f <= fS / 2], np.abs(diffractedField[f <= fS / 2]), '.', linewidth = 2) plt.xlim((0, fS / 2)) plt.xlabel(r'Spatial frequency, $\mu m^{-1}$') plt.ylabel(r'Field amplitude, $V / \sqrt{\mu m}$') plt.grid(True) plt.show() OK, so far so good. Let's now put the results in a format that is more easily compared to the analytical theory for the diffracted irradiance. We'll shift the FFT results back so that the peak is in the center using fftshift. (We also need to use fftshift here for arrays with an odd number of elements; typically, the order for odd elements is ifftshift -> fft -> fftshift.) We'll also rescale the x-axis by multiplying by $ \lambda z $ to get $x'$ and plot the theoretical result on top. 
xPrime = np.hstack((f[-(f.size // 2):] - fS, f[0:f.size // 2])) * wavelength * propDistance
IrradTheory = amplitude / (wavelength * propDistance) * \
    (slitWidth * sinc(xPrime * slitWidth / wavelength / propDistance))**2
IrradFFT = fftshift(diffractedField * np.conj(diffractedField)) / wavelength / propDistance

plt.plot(xPrime, np.abs(IrradFFT), '.', label = 'FFT')
plt.plot(xPrime, IrradTheory, label = 'Theory', linewidth = 2)
plt.show()

Not too bad! It looks like the simulation and the FFT agree pretty well. But, if we zoom in to some regions of the curve, you can see some discrepancies:

fig, (ax0, ax1) = plt.subplots(nrows = 1, ncols = 2, sharey = False, figsize = (8, 6))

ax0.plot(xPrime, np.abs(IrradFFT), '.', label = 'FFT')
ax0.plot(xPrime, IrradTheory, label = 'Theory', linewidth = 2)
ax0.set_xlim((-500, 500))
ax0.set_ylim((0.002, 0.005))
ax0.set_xlabel(r'x-position, $\mu m$')
ax0.set_ylabel(r'Power density, $V^2 / \mu m$')
ax0.grid(True)

ax1.plot(xPrime, np.abs(IrradFFT), '.', label = 'FFT')
ax1.plot(xPrime, IrradTheory, label = 'Theory', linewidth = 2)
ax1.set_xlim((-2250, -1750))
ax1.set_ylim((0.000, 0.0004))
ax1.set_xlabel(r'x-position, $\mu m$')
ax1.grid(True)
ax1.legend()

plt.tight_layout()
plt.show()

Considering the discrepancy between the results and the theory, the following question comes to mind: Is the discrepancy between the theoretical and FFT results due to numerical rounding errors, how we defined the slit, or something else?

percentError = np.abs((IrradTheory - IrradFFT) / IrradTheory) * 100

plt.semilogy(xPrime, percentError)
plt.xlabel(r'x-position, $\mu m$')
plt.ylabel('Percent error')
plt.xlim((-20000, 20000))
plt.grid(True)
plt.show()
Furthermore, it appears like the error actually gets slightly worse near the edge of the range, as evidenced by the upwards trend in the positive peaks. Finally, the fact that the percent error appears to have some sort of periodicity suggests that the discrepancies are not due to round-off errors but rather something else. Does aliasing cause the disagreement?¶ Let's first see whether aliasing is causing a problem. The slit is not bandlimited, which means that its Fourier spectrum is infinite in extent. For this reason, we can never sample the slit at high-enough frequencies to and calculate its spectrum without aliasing artifacts. We can however, see if fidelity improves by increasing the sampling rate. # Increase samples to approximately 1 million (2^20) xNew = np.linspace(-50, 50, num = 2**20) fieldNew = np.zeros(xNew.size, dtype='complex128') # Ensure the field is complex fieldNew[np.logical_and(xNew > -slitWidth / 2, xNew <= slitWidth / 2)] = amplitude + 0j plt.plot(xNew, np.abs(fieldNew), '.') plt.xlabel(r'x-position, $\mu m$') plt.ylabel(r'Field amplitude, $V / \sqrt{\mu m}$') plt.ylim((0, 1.5)) plt.grid(True) plt.show() dxNew = xNew[1] - xNew[0] # Spatial sampling period, microns fS = 1 / dxNew # Spatial sampling frequency, units are inverse microns f = (fS / xNew.size) * np.arange(0, xNew.size, step = 1) # inverse microns xPrimeNew = np.hstack((f[-(f.size/2):] - fS, f[0:f.size/2])) * wavelength * propDistance diffractedField = dxNew * fft(fftshift(fieldNew)) # The field must be rescaled by dx to get the correct units IrradTheoryNew = amplitude / (wavelength * propDistance) * \ (slitWidth * sinc(xPrimeNew * slitWidth / wavelength / propDistance))**2 IrradFFTNew = fftshift(diffractedField * np.conj(diffractedField)) / wavelength / propDistance plt.plot(xPrimeNew, np.abs(IrradFFTNew), '.', label = 'FFT') plt.plot(xPrimeNew, IrradTheoryNew,:4: DeprecationWarning: using a non-integer number instead of an integer will result in an error in the future Now the 
agreement looks much better! Here's the new percent error with $ 2^{20} $ samples compared to the old one with $ 2^{10} $.

percentErrorNew = np.abs((IrradTheoryNew - IrradFFTNew) / IrradTheoryNew) * 100

plt.semilogy(xPrime, percentError, label = r'$N = 2^{10}$', linewidth = 2)
plt.semilogy(xPrimeNew, percentErrorNew, label = r'$N = 2^{20}$')
plt.xlabel(r'x-position, $\mu m$')
plt.ylabel('Percent error')
plt.xlim((-20000, 20000))
plt.grid(True)
plt.legend(loc = 'lower right')
plt.show()

Here we see that increasing the sampling rate by nearly three orders of magnitude (1024 samples vs. 1,048,576) improves the relative error at the origin and at the points where the error was already low to begin with. However, it does nothing to improve the error at the peaks in the plot, the first two peaks lying approximately 1000 microns on either side of the origin.

Just aliasing, or something more?

We can begin to better understand the differences between the computational solution and the analytical solution by first underlining these two facts:

- The diffracted far-field is a continuous Fourier transform of the continuous field distribution in the aperture.
- The FFT produces a discrete Fourier transform (DFT) of the sampled field in the aperture.

Since the analytical solution and the FFT solution come from two related but different mathematical operations, should we have expected to get the right answer in the first place? As others have pointed out, the discrete time Fourier transform (DTFT) of a square function is a ratio of two sines, not a sinc. Furthermore, the DFT (and equivalently the FFT result) is simply a sampled version of the DTFT. Therefore, we might expect that sampling the DTFT solution for a square wave will produce the same results as the FFT solution. Let's go ahead and try it.
According to Wikipedia, and changing variables to agree with my notation above, the DTFT solution of the square wave is

$$F \left( f_{x} \right) = \frac{\sin \left[ \pi f_{x} \left( M + 1 \right) \right] }{\sin \left( \pi f_{x} \right) } e^{ -j \pi f_{x} M }, \, M \in \mathbb{Z} $$

where $ M $ is an integer and $ f_{x} $ is the spatial frequency. An important property of the DTFT is that it is periodic between 0 and 1. If I had used the angular spatial frequency $ k_{x} = 2 \pi f_{x} $ the periodicity would be between 0 and $ 2 \pi $. With this information, we can map our spatial frequency vector onto the appropriate range for the DTFT.

The square function corresponding to this DTFT solution is defined with the left edge of the square starting at the origin:

$$ f \left( x \right) = \text{rect} \left( \frac{n - M/2}{M} \right) , \, M \in \mathbb{Z}, \, n = 0, 1, \ldots, N - 1$$

$ n $ is the index of the samples taken from the square and $ M $ denotes the center of the square. So, to use this expression for the DTFT of the square, we will first need to shift our original input so that the left edge of the square is at the origin.

numSamples = 2**20 # Large numbers of samples needed to make FFT and DTFT agree
x = np.linspace(0, 100, num = numSamples)
field = np.zeros(x.size, dtype='complex128') # Ensure the field is complex
field[x <= slitWidth] = amplitude + 0j
M = np.sum(np.real(field) / amplitude) # Only works if input field is real and ones.

plt.plot(x, np.abs(field), '.')
plt.xlabel(r'x-position, $\mu m$')
plt.ylabel(r'Field amplitude, $V / \sqrt{\mu m}$')
plt.ylim((0, 1.5))
plt.grid(True)
plt.show()

print('M is {0:.0f}.'.format(M))

M is 52429.
def rectDTFT(f, M): # Returns the DTFT of a rect centered at M if (f != 0): return np.sin(f * np.pi * (M + 1)) \ / np.sin(f * np.pi) \ * np.exp(-1j * f * np.pi * M) else: return (M + (1 + 0j)) rectDTFT = np.vectorize(rectDTFT) # Compute the diffracted field with the FFT dx = x[1] - x[0] fS = 1 / dx f = (fS / x.size) * np.arange(0, x.size, step = 1) # inverse microns diffractedField = dx * fft(fftshift(field)) # Sample the DTFT of a rectangular slit dx_DTFT = x[1] - x[0] fS_DTFT = 1 / dx_DTFT fDTFT = np.linspace(0, 1, num = numSamples, endpoint = False) # For plotting the DTFT plt.plot(f[f <= fS / 2], np.abs(diffractedField[f <= fS / 2]), linewidth = 2, label = 'FFT') plt.plot(fDTFT * fS_DTFT, dx_DTFT * np.abs(rectDTFT(fDTFT, M)), '.', label = 'DTFT') plt.xlim(0,5) plt.ylim(0,6) plt.xlabel(r'Spatial frequency, $\mu m^{-1}$') plt.ylabel(r'Field amplitude, $V / \sqrt{ \mu m}$') plt.grid(True) plt.legend() plt.show() I placed the x- and y-axes to display a similar range as the equivalent plot above. We see that the DTFT result, which is the ratio of two sine waves, is also in good agreement with the FFT. So now we have the FFT and the DTFT of the field at the aperture modeling the diffraction pattern reasonably well. How well does the DTFT match the analytical results? Let's look at their percent error and compare it to the one between the analytical theory and the FFT for $ 2^{20} $ samples. IrradDTFT = dx_DTFT**2 * fftshift(rectDTFT(fDTFT, M) * np.conj(rectDTFT(fDTFT, M))) / wavelength / propDistance percentErrorDTFT = np.abs(IrradTheoryNew - IrradDTFT) / IrradTheoryNew * 100 plt.semilogy(xPrimeNew, percentErrorNew, label = r'FFT vs. Theory, $N = 2^{20}$', linewidth = 2) plt.semilogy(xPrimeNew, percentErrorDTFT, label = r'DTFT vs. 
Theory, $N = 2^{20}$')
plt.xlabel(r'x-position, $\mu m$')
plt.ylabel('Percent error')
plt.xlim((-20000, 20000))
plt.grid(True)
plt.legend(loc = 'lower left')
plt.show()

What the above plot shows is that the errors between the analytical theory and the FFT and DTFT predictions are practically the same. This means that the FFT is computing the DTFT of the field at the aperture and not the continuous Fourier transform. The DTFT result is therefore inherently including the aliasing artifacts that prevent accurate computation of the diffracted field with the FFT.

Despite the large relative errors, we still get pretty good results with both the FFT and DTFT predictions around the origin so long as the sampling frequency is large, or, equivalently, the grid spacing is small:

plt.plot(xPrimeNew, IrradTheoryNew, label = 'Theory', linewidth = 2)
plt.plot(xPrimeNew, IrradFFTNew, '.', label = 'FFT')
plt.plot(xPrimeNew, IrradDTFT, '^', label = 'DTFT', markersize = 8, alpha = 0.5)
plt.xlim((-5000, 5000))
plt.grid(True)
plt.legend()
plt.show()

/home/douglass/anaconda3/lib/python3.5/site-packages/numpy/core/numeric.py:474: ComplexWarning: Casting complex values to real discards the imaginary part return array(a, dtype, copy=False, order=order)

Summary¶

This entire exercise effectively showed that it is impossible to compute an exact diffraction pattern from a slit with a straight-forward approach based on the fast Fourier transform. There are two ways of looking at why this is, though they are both effectively based in the same underlying ideas.

- The spectrum of a slit is not band limited, so aliasing will always distort the FFT calculation.
- The FFT is computing the discrete time Fourier transform, not the continuous Fourier transform.

Because the slit is not band limited, frequencies in the spectrum that are larger than the field's sampling frequency are effectively folded back into, and interfere with, the spatial frequencies that are less than the sampling frequency.
This creates differences between the analytical theory and its computed estimate. Similarly, if we pay close attention to the analytical theory, we find that the diffraction pattern results from a continuous Fourier transform. The FFT however computes discrete samples of the discrete time Fourier transform. These are two similar, but fundamentally different things. Importantly, the DTFT result includes the aliasing artifacts by its very nature; it is a periodic summation of the continuous Fourier transform of the field, and this periodic summation results in overlapping tails of the spectra. If anything, I hope this post clarifies a common misconception in Fourier optics, namely that we can compute exact diffraction patterns with the FFT algorithm.
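The post's central claim — that the FFT returns exact samples of the DTFT rather than of the continuous Fourier transform — can be verified numerically in a few lines. The sketch below uses an arbitrary small grid (independent of the notebook's variables) and compares np.fft.fft of a sampled rect against the ratio-of-sines DTFT formula:

```python
import numpy as np

N, M = 64, 15                      # total samples; rect occupies indices 0..M
x = np.zeros(N)
x[:M + 1] = 1.0                    # rect with its left edge at the origin

# DFT of the sampled rect
X_fft = np.fft.fft(x)

# DTFT of the same rect, sampled at the DFT frequencies f_k = k/N
f = np.arange(N) / N
with np.errstate(divide='ignore', invalid='ignore'):
    X_dtft = np.sin(np.pi * f * (M + 1)) / np.sin(np.pi * f) \
             * np.exp(-1j * np.pi * f * M)
X_dtft[0] = M + 1                  # limiting value at f = 0

print(np.allclose(X_fft, X_dtft))  # True: the FFT samples the DTFT exactly
```

The agreement is exact to machine precision, which is just the statement that the DFT is the DTFT evaluated at the frequencies k/N.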
NAME

msync - synchronise memory with physical storage

SYNOPSIS

#include <sys/mman.h>

int msync(void *addr, size_t len, int flags);

DESCRIPTION

The msync() function writes all modified data to permanent storage locations, if any, in those whole pages containing any part of the address space of the process starting at address addr and continuing for len bytes. If msync() causes any write to the file, the file's st_ctime and st_mtime fields are marked for update.

RETURN VALUE

Upon successful completion, msync() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.

ERRORS

The msync() function will fail if:

- [EBUSY] - Some or all of the addresses in the range starting at addr and continuing for len bytes are locked, and MS_INVALIDATE is specified.
- [EINVAL] - The value of flags is invalid.
- [EINVAL] - The value of addr is not a multiple of the page size as returned by sysconf().

The second form of [EINVAL] above is marked EX because it is defined as an optional error in the POSIX Realtime Extension.

APPLICATION USAGE

None.

SEE ALSO

mmap(), sysconf(), <sys/mman.h>.
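For a concrete picture of the call, here is a hedged sketch in Python rather than C: on POSIX systems, Python's mmap.flush() is implemented on top of msync(), so the same "force the mapped pages to storage" behaviour can be demonstrated with a throwaway temporary file:

```python
import mmap
import os
import tempfile

# Create a file one page long and map it into memory
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, mmap.PAGESIZE)
    with mmap.mmap(fd, mmap.PAGESIZE) as m:
        m[:5] = b"hello"
        # flush() invokes msync() on the mapped region, pushing the
        # modified page to permanent storage before we read it back
        m.flush()
    with open(path, "rb") as f:
        data = f.read(5)
finally:
    os.close(fd)
    os.unlink(path)

print(data)  # b'hello'
```

The length passed to mmap here is one page because, as the DESCRIPTION notes, msync() operates on whole pages containing any part of the given address range.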
Happy New Year - Formatting in Compiled JavaFX Script I just want to wish you a Happy New Year, and to show you an example of date formatting in compiled JavaFX Script. First, here are the screenshots of a few invocations of the program, each one taken after having changed to a different locale on my computer. Of course, your computer is probably already set to your desired locale, so when you run this program, you should see the word "Happy", followed by the day of the week in your language, followed by the date in your language (of the first day of 2008). Here's the program, followed by a brief explanation of its formatting aspects: /* * HappyNewYearFormatting.fx - Example of using formatting * in compiled JavaFX Script * * Developed 2007 by James L. Weaver (jim.weaver at lat-inc.com) */ import javafx.ui.*; import javafx.ui.canvas.*; import java.lang.System; import java.text.DateFormat; import java.util.GregorianCalendar; Frame { title: "Happy New Year!" width: 500 height: 100 background: Color.WHITE visible: true content: Canvas { var cal = new GregorianCalendar(2008, 0, 1) var df = DateFormat.getDateInstance(DateFormat.LONG) content: Text { font: Font { face: FontFace.SANSSERIF style: FontStyle.PLAIN size: 24 } stroke: Color.RED fill: Color.RED x: 15 y: 25 content: bind "Happy {%tA cal}, {df.format(cal.getTime())}" } } } Formatting in Compiled JavaFX Script You can control how numbers and dates are converted to character strings by providing an additional formatting prefix in a String expression. This prefix follows the specification of the Java Formatter class. In the example above, I'm using the %tA formatting prefix to convert the date to the day of the week in the default locale. I'm also using the capabilities of the Java DateFormat class to show the month, day and year in the language and order of the default locale. 
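As an aside, the two formatting ideas used here — a weekday conversion like %tA and a locale-aware long date — have counterparts in most languages. A rough Python analogue (not JavaFX; the expected output assumes an English locale, since the result depends on the active locale):

```python
import datetime

new_year = datetime.date(2008, 1, 1)   # the date used in the JavaFX example

# %A plays the role of the %tA formatting prefix: the full weekday name
# in the current locale; "%B %d, %Y" roughly mirrors DateFormat.LONG
greeting = new_year.strftime("Happy %A, %B %d, %Y")
print(greeting)  # Happy Tuesday, January 01, 2008 (in an English locale)
```

Either way, the formatting logic stays in one string and the locale machinery fills in the language-specific names.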
Please have a happy, healthy, and prosperous 2008, Jim Weaver JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications Immediate eBook (PDF) download available at the book's Apress site Rick, The JavaFX compiler creates JVM bytecode in .class files, and the original FX files are not necessary for execution. Jar files are not created as a part of the javafxc process that I've referenced on this list. You may be talking about interpreted JavaFX Script, which is just a working prototype for the compiled version, and should not be necessary too much longer. Posted by: Jim Weaver | February 07, 2008 at 11:48 AM Hello, I was curious. I've noticed that "compiling" JavaFX script simply puts .fx files into a Java archived file (JAR). However, anyone can go into that .jar file and see the code I wrote for the application, which may include database URL, login credentials, etc. Aside from creating separate Java classes to grab this sensitive information, is there any way to "literally" create a jar file with some form of compiled (binary) representation of your JavaFX script files? Any feedback would be greatly appreciated, either here or at my e-mail address. Posted by: Rick | February 07, 2008 at 11:13 AM
A truely simple example to get started with WCF Recently I needed to set up some simple code to demonstrate WCF (as an alternative to some other means of communication in distributed applications). But when I googled around, I could not find a really, really simple WCF example. Sure, there are lots of WCF introductions, but they all explain a lot of stuff I did not really want to know at that time. Also they most often spread the code across many files and even across languages (C# and XML). And that´s what I really, really hate! When I first try out a new API like WCF I want to have everything needed in one (!) place. I want all that´s required to be the minimum and in my favorite programming language. This way I can focus best on what´s really essential without getting distracted by syntax and unnecessary (but cool) "fluff". Finally I resorted to a book I wrote together with co-author Christian Weyer: ".NET 3.0 kompakt" (German). As you can imagine, I did not write the chapter on WCF, though ;-) In our book I found the most simple/gentle introduction to WCF and finally was able to set up my sample program within some 20 minutes. I only had to experiment a little for getting the client code to run without resorting to XML, since Christian did not show imperative code for it in the book. Now, to make life easier for you, I present to you the most simple WCF application I can think of in one piece. Copy the pieces into a VS 2005 console project, reference the System.ServiceModel library from .NET 3.0 and off you go. The code should run right out of the box. (Or download it here.) Although I don´t want to repeat what´s been said in our book or in others like "Programming WCF Services" by Juval Löwy or elsewhere on the web about WCF (e.g. 
here or here or here), I think it´s in order to give you a quick tour through my code:

Service definition

 1 using System;
 2 using System.Collections.Generic;
 3 using System.Text;
 4
 5 using System.ServiceModel;
 6
 7
 8 namespace WCFSimple.Contract
 9 {
10     [ServiceContract]
11     public interface IService
12     {
13         [OperationContract]
14         string Ping(string name);
15     }
16 }

My simple sample service is of course trivial, since I´m not interested in service functionality but getting the WCF infrastructure to run. So my service WCFSimple.Contract.IService just consists of a single method - Ping() - which accepts a name and responds with a greeting. Passing in "Peter" returns "Hello, Peter". Defining a WCF service means defining its functionality in a so-called service contract which is given as an interface here. That this interface is a service contract and which of its methods should be published is signified by the attributes.

Service implementation

19 namespace WCFSimple.Server
20 {
21     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
22     class ServiceImplementation : WCFSimple.Contract.IService
23     {
24         #region IService Members
25
26         public string Ping(string name)
27         {
28             Console.WriteLine("SERVER - Processing Ping('{0}')", name);
29             return "Hello, " + name;
30         }
31
32         #endregion
33     }

Once you´ve defined a service you can implement it. Just define a class that implements the service´s interface (line 22, 26..30). The ServiceBehavior attribute in this case tells the WCF infrastructure to create a fresh instance of the service class for each call to the service´s methods. It´s the same as .NET Remoting´s SingleCall wellknown objects. Of course, though, I could have chosen a different instantiation mode, but I wanted to avoid any side effects due to state issues.
Hosting the service, implementing the server

36     public class Program
37     {
38         private static System.Threading.AutoResetEvent stopFlag = new System.Threading.AutoResetEvent(false);
39
40         public static void Main()
41         {
42             ServiceHost svh = new ServiceHost(typeof(ServiceImplementation));
43             svh.AddServiceEndpoint(
44                 typeof(WCFSimple.Contract.IService),
45                 new NetTcpBinding(),
46                 "net.tcp://localhost:8000");
47             svh.Open();
48
49             Console.WriteLine("SERVER - Running...");
50             stopFlag.WaitOne();
51
52             Console.WriteLine("SERVER - Shutting down...");
53             svh.Close();
54
55             Console.WriteLine("SERVER - Shut down!");
56         }
57
58         public static void Stop()
59         {
60             stopFlag.Set();
61         }
62     }
63 }

I chose to set up the server for my service implementation just on a separate thread in my sample app. But to make it easier for you to move it to a process/project of its own, I let it look like the default class of a console application. In order to host a service, the Main() method creates a ServiceHost to manage any service implementation instances and publish the service on any number of endpoints. I chose to make the service accessible just through one endpoint, though. And this I do explicitly in code - instead of through some App.Config settings. That way I get to know how the WCF parts work together. The endpoint takes as a first argument the service definition (contract), since the class I passed to the ServiceHost could possibly implement several services. Then I specify the "communication medium", the binding. TCP seems to be sufficient for my purposes. Finally I set up the endpoint at a particular address that surely needs to match the binding. For TCP I specify an IP address (or domain name) and a port. That´s it. WCF can be as simple as the ABC: Address+Binding+Contract=Endpoint.

Now I just need to start the ServiceHost with Open() and later on finish it with Close(). However, since the service host is doing its work on a separate thread, I cannot just move on after Open().
I have to wait for a signal telling the server to stop. For that purpose I set up a WaitHandle - stopFlag - which can be signalled from the outside through the Stop() method. If you put the server in a console app of its own, you can remove the WaitHandle stuff and replace the stopFlag.WaitOne() with just a Console.ReadLine(). Warning: The Close() takes at least 10 sec to complete, as it seems. This is to give any still connected clients the chance to disconnect, as far as I understand right now. So if you don´t happen to see the "Shut down!" message when running the sample, don´t worry. Running the server, implementing the client 66 namespace WCFSimple 67 { 68 class Program 69 { 70 static void Main(string[] args) 71 { 72 Console.WriteLine("WCF Simple Demo"); 73 74 // start server 75 System.Threading.Thread thServer = new System.Threading.Thread(WCFSimple.Server.Program.Main); 76 thServer.IsBackground = true; 77 thServer.Start(); 78 System.Threading.Thread.Sleep(1000); // wait for server to start up 79 80 // run client 81 ChannelFactory<WCFSimple.Contract.IService> scf; 82 scf = new ChannelFactory<WCFSimple.Contract.IService>( 83 new NetTcpBinding(), 84 "net.tcp://localhost:8000"); 85 86 WCFSimple.Contract.IService s; 87 s = scf.CreateChannel(); 88 89 while (true) 90 { 91 Console.Write("CLIENT - Name: "); 92 string name = Console.ReadLine(); 93 if (name == "") break; 94 95 string response = s.Ping(name); 96 Console.WriteLine("CLIENT - Response from service: " + response); 97 } 98 (s as ICommunicationObject).Close(); 99 // shutdown server 100 WCFSimple.Server.Program.Stop(); 101 thServer.Join(); 102 } 103 } 104 } The client in this sample has two tasks: 1) start the server, 2) call the server. To run client and server in the same process, the server is started in a background thread of its own (lines 75..77). To give it some time to become ready to serve requests, a delay of 1 sec is added afterwards. 
Also this avoids a mess of messages shown in the console window at start up ;-) Next the client sets up a channel along which the request can flow between it and the server. To create a channel, a channel factory is needed (line 81f), though. It can create channels for a certain contract, a specific binding, and targeting an address (lines 82..84). Again the ABC of WCF. Creating a channel with the channel factory then returns a proxy of the remote server which looks like the implementation - but in fact is a channel, i.e. a WCF object (line 86f). The rest of the code is just a bit of frontend logic: a loop to ask for a name, passing it to the service, and displaying the returned greeting. Just hit enter without a name to end the loop. Once the loop is terminated, the server is stopped. Before that, though, you should close the proxy! Otherwise the shutdown of the server will take some time, because it first waits for the client. Summary That´s it. That´s my very simplest WCF example. It lets you play around with WCF in a single process. Or you can easily distribute the code across several assemblies: one for the server and implementation, one for the client, and one for the contract shared between the two. Just be sure to download .NET 3.0 and reference System.ServiceModel. The rest is just copying the above code parts in order into a console project and hit F5. (Or download the code here.) Enjoy! PS: If you wonder, how I got those pretty listings into my posting, let me point you to CopySourceAsHtml (CSAH):. It´s a great tool for anybody who wants to paste code into a blog at least while using Windows Live Writer. Although I have to switch to HTML view to do that, which is a bit cumbersome, I love both tools.
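For readers coming from outside .NET, the same contract/host/proxy pattern can be sketched with Python's standard-library XML-RPC modules — purely as a structural analogy to the WCF sample, not WCF itself (the service method name mirrors the article; the port is chosen by the OS):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# The "service implementation": a Ping like the WCF sample's IService
def ping(name):
    return "Hello, " + name

# Host the service on a background thread (the ServiceHost's role)
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(ping, "Ping")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client-side proxy plays the part of the channel returned
# by the channel factory
proxy = ServerProxy("http://127.0.0.1:%d" % port)
response = proxy.Ping("Peter")
print(response)  # Hello, Peter

server.shutdown()
```

The moving parts line up one-to-one: a callable contract, a host running on its own thread, and a client proxy that looks like the implementation but is really a communication object.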
http://weblogs.asp.net/ralfw/a-truely-simple-example-to-get-started-with-wcf
CC-MAIN-2014-42
refinedweb
1,596
67.04
This small utility is an experimental implementation of the so-called latent typing for .NET, using C#. In .NET, you can treat types in a unified way only if they implement a common interface or have a common base class. Of course, all .NET types have the common base class System.Object, but the usefulness of treating different types as System.Object is limited if you want to do more than store them.

Sometimes, you want to treat different types in a unified way even though they do not have a meaningful common base class or implement a common interface. For example, you might want to store all objects in your program that have a Font property of type Font in a collection. Of course, there are various types that have a Font property but have not much in common. So there is no way to cast them to a common base type or common interface.

Latent typing offers a solution to this problem. You would simply define an interface like:

interface IHasFont
{
    Font Font { get; set; }
}

None of the types you want to store implement this interface, but they all could implement it since they have the required property accessor methods. Normally, you would just write a wrapper class for each type that implements the required interface and delegates the implementation to the wrapped class. The AutoCaster does this automatically using System.Reflection.Emit.

The issue of latent typing will usually lead to a religious debate between advocates of static and dynamic type systems. This sums up the debate very well. Obviously, I think that there are some cases where latent typing is useful. But you have been warned! The class uses reflection and emits MSIL code, so you need various permissions to run it.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.
CC-MAIN-2016-30
refinedweb
426
65.32
Release Notes

Robotics Developer Studio 4

The RDS 4 release is a new version of RDS. Note that RDS 2008 R3 is still available for download and will remain available up to 6 months after the RDS 4 release date. A key objective of this new release is to bring RDS up to date with other software available from Microsoft, including .NET 4.0, XNA 4.0, Visual Studio 2010 and Windows 7. A new DLL also allows CCR to be used with Silverlight 4.0. Preliminary testing has been done with the Windows 8 Consumer Preview and there are no known issues. However, Windows 8 Consumer Preview and the beta version of Visual Studio 11 are not officially supported at this time because they are both Beta versions. The support matrix for the operating systems that RDS supports is as follows:

RDS 4 supports the Kinect sensor by building on top of the Kinect for Windows SDK V1. The Kinect is a great sensor for robotics that is already changing the way that robots are designed because it provides 3D depth data at a price that cannot be matched by traditional Laser Range Finders. The Kinect services provide access to all the functionality of the Kinect including multiple Kinect sensors and the new Kinect for Windows hardware.

Installing RDS 4

RDS 4 will install side-by-side with previous versions of RDS, including RDS 4 Beta and Beta 2. (However, it is recommended that you uninstall Beta versions before installing the final release version. When you uninstall you must remove both the RDS 4 Beta and the CCR & DSS 4 Beta). RDS 4 does not do an in-place upgrade, so do not attempt to install it into the same folder as a previous version. This is by design. If you have code from a previous version of RDS that you want to keep, copy it across into the new RDS 4 folder after installation, open a DSS Command Prompt window, and run DssProjectMigration on your folders. This will update the projects so that they will compile with RDS 4.
NOTE: You must install Visual Studio 2010 before installing RDS. Previous versions of Visual Studio are no longer supported. You should also install the Kinect for Windows SDK V1 and the Silverlight 4 SDK before RDS. Build All Samples A "Build All Samples" item is available in the Start Menu that builds all of the sample code. This runs a command script that calls MsBuild. You should build the samples immediately after installing RDS. (This applies to the C# samples only - the VPL samples do not have to be rebuilt). If you do not have Kinect for Windows SDK V1 and all of its pre-requisities, including Speech, installed on your PC then you will see some error messages when you run BuildAllSamples. The User versions of the Kinect-related services will not build in this case. There are two separate steps involved for the Kinect services. The first step builds services that depend on the Kinect itself. The second step builds services that use the Speech components in conjunction with the Microphone Array. Note that on a 64-bit system you must install both the 32-bit and 64-bit versions of the Microsoft Speech Platform Runtime in order for the RDS samples to build. (This is the approach recommended for Kinect for Windows in any case). The User samples need to be rebuilt on your PC so that they will all be signed using your key. This avoids versioning conflicts due to Strong Name signing that can result if you mix your own DLLs (signed with your key) with the Microsoft DLLs (which are signed with the Microsoft key). A signing key is created during installation so every installation of RDS has a different key. This is not a problem if you are working only on one PC. However, if you transfer services to another PC you might encounter version mismatches. The simplest solution is to always compile the source on the target machine, or to make sure that you take all of the DLLs for the necessary service(s). The DssDeploy tool can help you with this by finding dependent DLLs. 
Alternatively, if you have to support multiple developers, copy the signing key (samples\mrisamples.snk) from one PC to all of the other PCs that will be used by your developers. This ensures that all DLLs are signed with the same key. "User" Samples In RDS 2008 R2, the samples and tutorials were modified to include "user" in all of the Contract Identifiers, Assembly names, Namespaces, etc. The purpose of this change was to avoid overwriting the Microsoft service DLLs that shipped as part of the package. Instead, all samples and tutorials create separate DLLs. In RDS 4 the "Build All Samples" item in the Start Menu allows you to build all of the samples on your PC thereby ensuring that you have two complete copies of the binaries: Microsoft DLLs and User DLLs. (See the previous section). The RDS 2008 R3 release took this separation one step further and provided both Microsoft and User manifests in the samples\config folder for all samples, and changes all Contract Identifiers in the manifests to User services. The effect of this change is that the User samples should now completely parallel the Microsoft versions and be independent of the Microsoft versions. The VPL diagrams that are shipped with RDS are all written to use the Microsoft service DLLs. If you want to modify a service that is used in a VPL diagram and see the effects of your changes, you will have to remove the service blocks for the Microsoft service DLL(s) and replace them with the corresponding User service DLL(s). If you want to ship code to customers, be sure to use the Microsoft versions (without the ".user") of Contract Identifiers and link against the Microsoft DLLs. All installations of RDS will have these Microsoft DLLs and they will therefore have the same Strong Name signing and there should not be any version conflicts. If you create your code by editing a User sample, you will have to make sure that to change it so that it references the Microsoft service DLLs. 
When you are writing services, you should always create a new Contract Identifier and change the Assembly Name (in the Project Properties). The DssNewService and Visual Studio Wizard will do this automatically. This creates a brand new service. If you copy an existing sample, you must also change the Contract Identifier and the Assembly Name even if the code that it contains is identical to begin with. If you are an advanced programmer, it is still possible to turn a User sample back into the equivalent Microsoft code. This is discouraged because it defeats the purpose of the User samples. Uninstalling RDS When you uninstall RDS it does not uninstall the CCR and DSS Runtime. This is by design. You should uninstall the runtime package after RDS if you want to remove all of RDS from your system. Other software that is installed by RDS includes PhysX, DirectX and XNA. These packages are not removed either. RDS does not remove the installation folder if it contains any modified files. All files and folders that were not installed by the package or have been modified are left behind. This can amount to a significant amount of data. However, it is intended to protect your source code from being lost inadvertently. If you no longer require the files, delete the installation folder after uninstalling to recover the disk space. CCR & DSS There have been many bug fixes to CCR and DSS. These are not listed here. CCR A new CCR for Silverlight 4.0 DLL is included. (Note that this DLL is not for Silverlight 5). This can be used, for example, in Windows Phone 7 applications. A Silverlight sample is provided (under Samples\Silverlight) to show how it can be used. In conjunction with this new DLL, several CCR Extension Methods have been added to CCR that mirror similar methods available in DSS services. DSS Tools DssDeploy now has an /ExcludeHost qualifier (short form /x) that excludes DssHost from the a package when you are creating a new one. 
To remain consistent with the previous behavior, the default is /x- which means do include DssHost. If you do not want DssHost (recommended) use /x+. DssProjectMigration no longer supports VS2005 or VS2008. It will work with Visual Studio 2010 C# Express but you will see an error message that it cannot migrate the projects. If projects are not in VS2010 format, open them in VS2010 Express first, which will update them, and then run DssProjectMigration again. This is due to a limitation of the Express Edition of VS2010. DssProjectMigration does not support migrating projects to the new version (called Visual Studio 11 or Visual Studio 2012) that works with Windows 8 Consumer Preview. DssInfo is deprecated. It might be removed in a future release. DssNewService can only create projects in VS2010 format. It does not add a using statement for Microsoft.Dss.Core which you might want to add depending on which APIs you want to use. HttpReserve has a new qualifier /Prefix. This is related to changes in the security model. For more information see the next section. Security Model In RDS 4, DSS nodes no longer listen on all network interfaces, but on the loopback address only. (This is usually referred to as localhost, or IP address 127.0.0.1). This prevents DSS nodes on different computers from talking to each other and also prevents browsing to a DSS node from another computer. If you want the old behavior, you must adjust the appropriate config file in the bin folder (either DssHost.exe.config or DssHost32.exe.config) so that AllowUnsecuredRemoteAccess is set to true. The same applies to using VPL (with VplHost.exe and VplHost32.exe). If you plan to use multiple computers in a network each running a DSS node, or you want to connect from another computer using a web browser, it is important that you read the section DSS Node Security Model. Sample Services A new service called SerialComService has been added. This can be used to communicate with a serial port. 
It is used by the Reference Platform services. There are several new generic contracts defined in Robotics Common: - InfraredSensor - InfraredSensorArray - SonarSensorArray - ADCPinArray - GpioPinArray - PanTilt The Generic Differential Drive (GDD) contract has been modified to add a new operation: ResetEncoders. Because this is an addition, new services are backwards compatible with old partners. The encoder information is contained within the Wheel State which is in turn included in the GDD State. In many implementations the extra overhead of separate encoder services is not warranted. Therefore the GDD is now free to manage the encoders directly. For backwards compatibility, the GDD contract identifier has not been changed. However, if you write a new service that conforms to the GDD you should handle the ResetEncoders request. Documentation The Start Menu has been modified to include a link to the Documentation folder rather than just a link to the Help file. There are several new documents included in PDF format: - DSS Attributes - Getting Started with RDS - Kinect Services for RDS - Log Analyzer (an advanced topic) - Obstacle Avoidance Drive - RDS Reference Platform Design Spec - Simulated Reference Platform - Structured Logging (an advanced topic) The RDS Help File has been renamed from MsrsUserGuide.chm to RDSUserGuide.chm. The CCR and DSS Class Reference now contains the full set of classes the same as on MSDN instead of a subset. Simulation Apartment Model The Apartment Model has been modified so that all of the doors are open. This allows the robot to roam over a larger area. Most of the objects in the Apartment now have meaningful names. Simulated IR and Sonar Sensors The simulated Infrared and Sonar sensors no longer display "hit points" as red dots to show what object they are hitting. The simulated Sonar is implemented using a depth buffer so that it can "see" glass objects such as the doors in the Apartment model. 
The simulated IR sensors use ray casting and "see" right through glass. This is how real sensors work and is one of the reasons for mixing both Sonar and IR on a robot in order to improve the reliability of obstacle detection. Simulated Kinect The simulated Kinect sensor provides both Depth and RGB data. Unlike the real Kinect, the simulated one can be made to work in any resolution and with any Field of View. This requires detailed knowledge of the simulator and it is not explained here. The Kinect simulation does not attempt to model the noise characteristics of the real Kinect, nor the non-linear nature of the depth range data. Likewise, the Depth and RGB pixels should be well aligned in simulation, which is not the case for a real Kinect. Lastly, the simulated Kinect works in any of the simulated environments because it does not actually work like an infrared device, i.e. it is unaffected by virtual sunlight. In summary, all data in simulation is (near) perfect. There is no simulation of the Microphone Array. You can use a normal microphone attached to your PC with the Speech Recognition service, but there is no way to do beam forming, acoustic echo cancellation, etc. Although the simulator can support multiple Kinect sensors, there are no samples with multiple Kinect sensors. Skeleton Tracking is not supported by the simulated Kinect. (There are no people in the simulator anyway). This is a limitation of the Kinect for Windows SDK which does not provide APIs for converting depth images to skeletons. VPL and DSSME There are no significant new features in VPL or DSSME. However, there have been several bug fixes. In particular it is now possible to "drill down" into service state when creating an initial config where previously you could not specify data for types that were defined in another namespace. Known Issues Build All Samples The "Build All Samples" item in the Start Menu does not upgrade or migrate projects.
It uses MsBuild to directly build the samples and tutorials included with RDS, and this does not require projects to be upgraded, regardless of which Edition of Visual Studio is installed. Visual Studio 2010 You can use DssProjectMigration to upgrade any of your existing projects to VS2010 by using the command-line qualifier: /vs:VS2010. If you run DssProjectMigration on a top-level folder then all of the samples under that folder will be upgraded. Visual Studio C# Express 2010 does not support upgrading a project from the command line so DssProjectMigration cannot be used to upgrade samples using C# Express. (It issues an error message). When you run DssProjectMigration and only have VS2010 Express installed, it displays a message that it cannot find Visual Studio Professional or above, as in the following example.

C:\RDS4\samples\RoboticsTutorials\Tutorial1\CSharp>dssprojectmigration /b- .
* Searching Directory: C:\RDS4\samples\RoboticsTutorials\Tutorial1\CSharp\
* Updating project C:\RDS4\samples\RoboticsTutorials\Tutorial1\CSharp\RoboticsTutorial1.csproj ...
* Updating project C:\RDS4\samples\RoboticsTutorials\Tutorial1\CSharp\RoboticsTutorial1.csproj.user ...
Cannot find Microsoft Visual Studio 2010 Professional or higher. Skipping conversion of projects to VS2010 format.
Already a VS2010 solution: RoboticsTutorial1.sln

However, if you open an old VS2008 sample in Visual Studio C# Express 2010 then it will be upgraded. The upgrade automatically changes the Target Framework to 4.0. You can then compile the sample. Similarly, if you create a new service then you can also compile it with Visual Studio C# Express 2010. NOTE: The Visual Studio 2010 GUI forces you to do an upgrade when you open a VS2008 project. Once you have upgraded to VS2010, the only way to go back to VS2008 is to manually edit the Solution (.sln) file and the Project (.csproj) file and change the relevant fields. There is no supported way to "downgrade" a Visual Studio project.
(Although this is not difficult, it is not documented here due to the risk of rendering the project unusable). If you do not install Visual Studio before installing RDS, then you will not get the Visual Studio Wizards because the necessary template folder does not exist. As noted above, you should install Visual Studio before RDS. If you do not have Visual Studio installed, you might see errors when you try to run DssHost or other command-line tools saying that a pre-requisite is missing: .NET 4.0. This occurs because RDS requires the full .NET profile, not just the Client profile that comes with Windows 7. As noted above, you must install Visual Studio before installing RDS. Service Pack 1 for Visual Studio has been released. You can install SP1 if you wish but it is not required. Internet Explorer 9 Internet Explorer 9 (IE9) is more strict on several web standards, including XSLT. As a result some pages in RDS services might not refresh correctly even though they display correctly the first time. The symptom is that no formatting is applied and you see raw data. To work around this problem you can turn on Compatibility Mode in IE9. Although this usually works, this does not guarantee that the problem will be fixed in all cases. HTML 5 Not Supported When you are coding a service to use an XSLT page you cannot use HTML 5 tags. This limitation arises because MasterPage.xslt has an explicit version of 4.01. Kinect Support As from the release date of the Kinect for Windows SDK V1, you can now use Kinect for Windows for both commercial and non-commercial use. Therefore the license for RDS 4 has been updated to once again allow both commercial and non-commercial use. It is now possible to serialize Skeleton data for transmission between DSS nodes. There is some overhead in doing this. Note that this does not affect messages sent between services running in the same DSS node, i.e. on a single computer, because there is an optimization step that does an in-memory copy.
In RDS 4, it is now possible to have multiple Kinect sensors attached to a single computer. However, the Kinect requires a lot of USB bandwidth so you should attach each Kinect to a separate USB controller, i.e. not on the same USB hub. In addition, it is not possible to get Skeleton data from multiple Kinect sensors. This is a limitation of the Kinect for Windows SDK. Simulated Kinect The simulated Kinect has been improved so that it better matches the behavior of a real Kinect. However, note that it does not have the same granularity and the data returned is linear, unlike the real Kinect. The most noticeable effect of the changes is that values greater than the maximum range now return zero. This means that the background in the depth camera view will often be black whereas previously it was white. The simulated Kinect services (Kinect, Depth Camera, Webcam) have been restructured so that they more closely parallel the services for the real Kinect. This might cause breaking changes if you had previously written services to use the simulated Kinect under RDS 4 Beta. Infrared Sensors The Sharp Infrared Sensors commonly used on a Reference Platform robot are analog devices, but their voltage to distance curve is non-linear and becomes ambiguous below a certain range. The calculations currently performed to convert from voltage to distance give incorrect (very high) values under certain circumstances. This usually happens if the IR sensor is exposed to a source of infrared. One example is a television set, which generates radiation across a whole spectrum including both visible light and IR. Therefore you might see unusual values reported in the Robot Dashboard. A similar problem can occur if the IR sensor is faulty. This will be fixed in a future version of RDS but in the meantime you should simply cap the range values at the maximum range of the sensor.
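The capping workaround suggested above takes only a few lines of code. The sketch below is a language-neutral illustration in Python (RDS services themselves are written in C#), and the conversion constant and maximum range are made-up values for illustration, not taken from any real sensor datasheet or RDS service:

```python
# Illustrative sketch only: converts a Sharp-style IR sensor voltage to a
# distance and caps the result at the sensor's maximum range, as the text
# recommends. The constants are hypothetical.

MAX_RANGE_M = 0.80      # assumed maximum reliable range of the sensor
K = 0.27                # assumed fit constant for distance ~ K / voltage

def ir_distance(voltage):
    """Return an estimated distance in meters, capped at MAX_RANGE_M."""
    if voltage <= 0:
        return MAX_RANGE_M          # no usable reading: report maximum range
    distance = K / voltage          # simple inverse-voltage approximation
    return min(distance, MAX_RANGE_M)

print(ir_distance(1.8))   # a near object: K / 1.8 = 0.15 m
print(ir_distance(0.1))   # a weak signal would give 2.7 m; capped to 0.80 m
```

A real implementation would use the calibration curve for the specific sensor, but the final `min()` against the maximum range is the whole of the workaround described in the text.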
Obscure Deadlock in CCR when using an Interleave A deadlock can occur when an Interleave is activated (for the first time) whilst ports under its control are being posted to. This can only happen during a narrow time window. The workaround is to either make sure that the Interleave is set up prior to posting messages to ports that it is guarding, or create an empty interleave and use .CombineWith() to fold in the receivers for the ports. Exception in the Simulator when using the Editor When you use the entity properties window in the Simulator Editor (for example for setting angular velocity), the Simulation Engine deletes the entity and inserts a new entity in its place with the updated properties. Services for simulation entities need to support this explicitly if they hold a reference to the entity, otherwise they will get an Unhandled Exception (Access Violation) when they attempt to update the entity after you leave Edit Mode. The Simulated Reference Platform service, along with the Simulated IR and Simulated Sonar do not support this concept. Many entity services are likely to be affected. To work around this problem you will need to set the properties appropriately either in your code or in the saved simulation state file. The correct fix is to listen for Entity Inserts and Deletions. On an insert, update the handle to the entity. On a Delete, set the handle to null and add code throughout the service to check for a null reference before accessing the entity so that exceptions will not occur. 
Speech Recognizer and Text To Speech Configs When the SpeechRecognizerGUI sample and the TextToSpeech sample are launched for the first time, the following error might appear:

* Service started [02/14/2012 19:31:14][]
* Service started [02/14/2012 19:31:14][]
* Service started [02/14/2012 19:31:14][]
** Common Create Handler Exception System.InvalidOperationException: Service not found:
at Microsoft.Dss.Services.Serializer.DataCache.LoadServiceAssemblies(ServiceInfoType createRequest)
at Microsoft.Dss.Services.Constructor.ConstructorService.CommonCreateHandler(DsspOperation create) [02/14/2012 19:31:14][]
* Rebuilding contract directory cache. This will take a few moments ... [02/14/2012 19:31:14][]
* Contract directory cache refresh complete [02/14/2012 19:31:16][]
* Service started [02/14/2012 19:31:17][]

However the error will not occur when the service is launched the second or subsequent times. Speech Recognizer Sometimes Fails to Load Grammar In the LoadGrammar method of SpeechRecognizer.cs in the MicArraySpeechRecognizer project, there is no wait after the following async call: state.Recognizer.RecognizeAsyncCancel(); This sometimes results in an error message similar to "cannot change grammar while speech recognition is running". One way to handle this is shown at: SpeechRecognitionEngine RecognizeAsyncCancel Method This problem is timing related, so you might not see it at all. If it does happen, you might be able to just restart DSS. If it happens frequently, you should modify the sample service according to the MSDN web page. Contract Identifier URIs DSSProxy generates a warning message if the Contract Identifier for a service uses the prefix. This prefix is generated by the DSS Service Wizard as a placeholder. You should replace it with a URI that is appropriate to your own organization. Note that this does not have to be a real URL. The URI format was chosen as something that is easy to understand but can easily be made unique.
It is not required that you have a web page at this address (although it would be a good way to document your services). Contract Identifiers are case-sensitive. Therefore it is recommended that all Contract Identifiers should be in lowercase. Contract Identifiers in manifests are always forced to lowercase by the Manifest Loader, so a Contract Identifier that contains uppercase characters will not match and the service will not be found. This will result in an error similar to: Service creation failure most common reasons: - Service contract identifier in manifest or Create request does not match Contract.Identifier - Service references a different version of runtime assemblies Additional information can be found in the system debugger log. DssHost and 64-bit Windows If DssHost (as opposed to DssHost32) is used to launch a DSS Node on 64-bit Windows, all of the services and any dependent DLLs must be compiled for 64-bit. Some packages, such as XNA, Speech, etc. only work in 32-bit so a service that links to any of these DLLs will fail to load in 64-bit. If you run a service that only operates in 32-bit you might see multiple errors. One of them will be similar to: System.BadImageFormatException: Could not load file or assembly The solution to this problem is simply to start the DSS Node again, but this time use DssHost32.exe. Note that when you create a new service, the command line in the Project settings uses DssHost.exe. If you run the project using the Debugger, e.g. by pressing F5, then this problem might occur. You can change the Project Settings in the Debug tab to use DssHost32. Robotics Introduction: New and Changed Features
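Returning to the Contract Identifier rules above, the lowercase requirement can be illustrated in a few lines. This is a hedged, language-neutral sketch in Python (DSS itself is .NET), and the identifiers are hypothetical examples, not real service contracts:

```python
# Sketch of the case-sensitivity pitfall described above. The Manifest Loader
# forces manifest contract identifiers to lowercase, but the lookup against
# the service's Contract.Identifier is an exact, case-sensitive comparison.

def load_manifest_contract(identifier):
    # Models the Manifest Loader forcing the identifier to lowercase.
    return identifier.lower()

service_contract = "http://example.org/2012/02/MyRobot.html"   # mixed case in code
manifest_contract = load_manifest_contract(
    "http://example.org/2012/02/MyRobot.html")

# Case-sensitive comparison fails -> "service not found" at startup:
print(manifest_contract == service_contract)           # False

# With an all-lowercase Contract.Identifier the match succeeds:
print(manifest_contract == service_contract.lower())   # True
```

This is why the text recommends writing all Contract Identifiers in lowercase from the start.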
http://msdn.microsoft.com/en-us/library/dd146032
Agenda See also: IRC log <scribe> scribe: Felix RESOLUTION: minutes approved ACTION-189? <trackbot> ACTION-189 -- Tobias Bürger to postponed: Work on the the subproperties -- due 2010-12-31 -- OPEN <trackbot> <tmichel> Vero, you are dialing the wrong FR number please try +33.4.26.46.79.03 tobias not here ACTION-214? <trackbot> ACTION-214 -- WonSuk Lee to request review from MXM during last working draft call -- due 2010-03-02 -- OPEN <trackbot> wonsuk: implemented, did not receive a comment from MXM daniel: keep the action until we get a response from MSM ACTION-223? <trackbot> ACTION-223 -- Soohong Daniel Park to check for overlap between html5 and mawg issues -- due 2010-03-16 -- OPEN <trackbot> Daniel: keep open ACTION-234? <trackbot> ACTION-234 -- Joakim Söderberg to contact Google -- due 2010-05-03 -- OPEN <trackbot> Daniel: keep open ACTION-264? <trackbot> ACTION-264 -- Florian Stegmaier to edit the spec with the 3 changes proposed above. -- due 2010-06-22 -- OPEN <trackbot> Daniel: keep open vero: 264 and 299 can be closed, see Florian's mail daniel: close the two actions then close ACTION-264 <trackbot> ACTION-264 Edit the spec with the 3 changes proposed above. closed close ACTION-299 <trackbot> ACTION-299 Change the API interface about Date closed vero: agenda needs to have the right zakim number for dialing in daniel: chair will take care of that ACTION-273? <trackbot> ACTION-273 -- Thierry Michel to prepare the mapping table for the media container formats -- due 2010-09-15 -- OPEN <trackbot> thierry: in work, keep open ACTION-275? <trackbot> ACTION-275 -- Joakim Söderberg to and Wonsuk to deal with with the editorial comments of the LC comment 2405 -- due 2010-09-15 -- OPEN <trackbot> wonsuk: will ping joakim and finish that ASAP ACTION-277? 
<trackbot> ACTION-277 -- Chris Poppe to add a line for the "broadcastDate" property attribute -- due 2010-09-15 -- OPEN <trackbot> vero: this was edited in the doc close ACTION-277 <trackbot> ACTION-277 Add a line for the "broadcastDate" property attribute closed ACTION-280? <trackbot> ACTION-280 -- Joakim Söderberg to check the normative parts of the document according to comment 2418 -- due 2010-09-15 -- OPEN <trackbot> not done yet ACTION-282? <trackbot> ACTION-282 -- WonSuk Lee to follow up the comment on the namespaces -- due 2010-09-15 -- OPEN <trackbot> close ACTION-282 <trackbot> ACTION-282 Follow up the comment on the namespaces closed ACTION-283? <trackbot> ACTION-283 -- Soohong Daniel Park to send mail to Geolocation WG for an example, or controlled voca that can be used for the ma:location property -- due 2010-09-15 -- OPEN <trackbot> Daniel: keep open thierry: about geo location WG, please send the mail to me, I will send it to them daniel: ok, will do that after the meeting thierry: copy the MAWG so that we can track it ACTION-287? <trackbot> ACTION-287 -- Felix Sasaki to include information to the introduction of 5.2.2 to describe the tables and columns -- due 2010-09-15 -- OPEN <trackbot> close ACTION-287 <trackbot> ACTION-287 Include information to the introduction of 5.2.2 to describe the tables and columns closed ACTION-288? <trackbot> ACTION-288 -- Felix Sasaki to unify the XPATH expressions in the mapping tables according to the last substantial comment of Robin -- due 2010-09-15 -- OPEN <trackbot> felix: I will edit the tables after the call today to resolve that (not the main document) ACTION-289? <trackbot> ACTION-289 -- Joakim Söderberg to deal with the last editorial comment of Robin -- due 2010-09-15 -- OPEN <trackbot> <tmichel> thierry added the intro from Felix ACTION-291? 
<trackbot> ACTION-291 -- Thierry Michel to put agreed version of ontology to url pointed to by namespace -- due 2010-09-16 -- OPEN <trackbot> thierry: working on that with the webmaster, ongoing ACTION-293? <trackbot> ACTION-293 -- Véronique Malaisé to work with chris and raphael to update the properties of the ontology doc -- due 2010-09-16 -- OPEN <trackbot> vero: left the f2f early, not sure what happened to that daniel: keep that open then ACTION-294? <trackbot> ACTION-294 -- Véronique Malaisé to check together with other table editors option for clarifying type/role attributes -- due 2010-09-16 -- OPEN <trackbot> ACTION-295? <trackbot> ACTION-295 -- Chris Poppe to chanage title to plural in api doc -- due 2010-09-16 -- CLOSED <trackbot> daniel: will send a mail to chris and raphael about ACTION-293 and ACTION-294, to check the current status ACTION-297? <trackbot> ACTION-297 -- Thierry Michel to send mail to HTML WG chairs -- due 2010-09-16 -- OPEN <trackbot> thierry: tried to ping the chairs and the team contact, no response daniel: close the AI close ACTION-297 <trackbot> ACTION-297 Send mail to HTML WG chairs closed ACTION-298? <trackbot> ACTION-298 -- Joakim Söderberg to contact WebIDL for requesting information about the planned direction for the webservices -- due 2010-09-16 -- OPEN <trackbot> Daniel: keep open ACTION-300? <trackbot> ACTION-300 -- Joakim Söderberg to contact the Geolocation WG and see if they'd be OK to make the accuracy optional -- due 2010-09-16 -- OPEN <trackbot> thierry: that was probably done, but wait with closing until Joakim reports back ACTION-301? <trackbot> ACTION-301 -- Chris Poppe to check out the compression property of the Ontology document -- due 2010-09-16 -- OPEN <trackbot> to Chris, same as ACTION-302 also ACTION-303 keep open close ACTION-305 <trackbot> ACTION-305 Invite doug to telecon to discuss his lc comments closed ACTION-309?
<trackbot> ACTION-309 -- Thierry Michel to improve markup of nomative/informative, probably with specific class to visualise -- due 2010-09-16 -- OPEN <trackbot> keep open keep open ACTION-310 and ACTION-311 close ACTION-312 <trackbot> ACTION-312 Send to the group the list of LC Comments that needs to be reviewed for moving on closed close ACTION-313 <trackbot> ACTION-313 Update definition of ma:policy according to comments from PLING closed close ACTION-314 <trackbot> ACTION-314 Update the description of the ma:format in the ontology doc using the input from Dave Singer (see Werner email) closed ACTION-315? <trackbot> ACTION-315 -- Joakim Söderberg to update the group web site to reflect the plural in the title and to make working his cvs -- due 2010-09-28 -- OPEN <trackbot> thierry: will take that action over ... wonsuk, can you check on the API doc for the plural of "resources"? wonsuk: will do <wonsuk> the plural is ok in the API doc <daniel> discussing various lc comments now thierry: I contacted James, asked him to send us information ... waiting for his response changing status to proposal thierry is updating the status of the comments directly changing to proposal changing to proposal thierry: was reviewed by werner changing to proposal thierry updating the proposal, all agreeing with it daniel: moving on with API document <wonsuk> going through all over API LC comments daniel: have now gone through all comments coming from the first LC. We agreed with all proposed resolutions ... after the teleconf, we will send out the resolutions to commenters ... and will wait for their final response thierry: explaining how it works: see e.g. ... I will mark the status as "response drafted" ... there is a box "resolution implemented". for that we need to review the editing in the spec ... for each reply sent I will check "reply sent to commentor" ... 
I will be tracking then whether we have responses from the commentors, and check that accordingly vero: and thierry will send the replies? thierry: yes, in the name of the working group ... media fragment WG closed their LC draft on 27th of August vero: there is an action on tobias to respond to mail from media fragment guys thierry: I will ping tobias again <scribe> ACTION: Thierry to ping tobias about reply to media fragments guys comments [recorded in] <trackbot> Created ACTION-317 - Ping tobias about reply to media fragments guys comments [on Thierry Michel - due 2010-10-05]. <tmichel> <tmichel> this is the schedule above thierry: I will send out resolutions to all commentors. I will give them one week for reply and check after that what happens daniel: before October 10th we have to finish all document updating thierry: correct vero: I have a proposal of a revised version, should I send it to the mailing list? thierry: sure ... the schedule is very tight ... I drafted the schedule so that we can review the LC comments during TPAC ... I don't want to release a second LC which is unfinished ... that could lead to a 3rd LC ... so we could follow that schedule, or forget about it and deal with different things at TPAC ... and use TPAC for editng felix: would like to use TPAC for editing so that we have enough time and don't run into the danger of a 3rd LC daniel: will talk about this offline adjourned <pchampin> bye thierry: don't forget to register for TPAC This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/states/status/ Succeeded: s/thiierry /thierry / No ScribeNick specified. Guessing ScribeNick: fsasaki Found Scribe: Felix Present: daniel fsasaki pchampin tmichel vero wonsuk Regrets: tobias florian chris john Agenda: Got date from IRC log name: 28 Sep 2010 Guessing minutes URL: People with action items: thierry[End of scribe.perl diagnostic output]
http://www.w3.org/2010/09/28-mediaann-minutes.html
It's a daily digest of the issues on ironpython.codeplex.com, not of this mailing list. If it's not of interest we can put it on a smaller dev list. ~Jimmy On Jun 27, 2011, at 9:21 AM, David Fraser <davidf at sjsoft.com> wrote: > Yes, me too... > > ----- Original Message ----- > From: "Larry Jones" <Larry.Jones at aspentech.com> > To: ironpython-users at python.org > Sent: Monday, June 27, 2011 3:01:36 PM > Subject: [Ironpython-users] FW: IronPython, Daily Digest 6/23/2011 > > > > > > Hi y’all, > > > > Is anyone else getting a daily digest in addition to individual emails since moving to the new mailing list? > > > > > --- > > Larry Jones > ||| Senior Level Development Engineer > Aspen Technology, Inc. ||| +1 281-504-3324 ||| fax: 281-584-1062 ||| > > > > 30th_logo_RGB_ppt.png > > > > > > From: ironpython-users-bounces+larry.jones=aspentech.com at python.org [mailto:ironpython-users-bounces+larry.jones=aspentech.com at python.org] On Behalf Of no_reply at codeplex.com > Sent: Friday, June 24, 2011 8:38 AM > To: ironpython-users at python.org > Subject: [Ironpython-users] IronPython, Daily Digest 6/23/2011 > > > > > Hi ironpython, > > Here's your Daily Digest of new issues for project " IronPython ". > > In today's digest: ISSUES > > > > > 1. [New issue] Cannot create PythonType for generic parameter which has interface constraints ↓ ISSUES > > > > > 1. [New issue] Cannot create PythonType for generic parameter which has interface constraints view online > > User dinov has proposed the issue: > > " > > public class GenericType<T> where T : IEnumerable { > public T ReturnsGenericParam() { > return default(T); > } > } > > Import the type and do: > > clr.GetPythonType(clr.GetClrType(GenericType).GetMethod('ReturnsGenericParam').ReturnType) > > And this throws: > > SystemError: Method must be called on a Type for which Type.IsGenericParameter is false." > > > You are receiving this email because you subscribed to notifications on CodePlex. 
> > To report a bug, request a feature, or add a comment, visit IronPython Issue Tracker . You can unsubscribe or change your issue notification settings on CodePlex. > > [Text File:ATT336236.txt] > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org > > _______________________________________________ > Ironpython-users mailing list > Ironpython-users at python.org >
https://mail.python.org/pipermail/ironpython-users/2011-June/015003.html
XML - Managing Data Exchange/SVG

Contents

- 1 What is SVG?
- 2 SVG vs. Macromedia Flash
- 3 Why use SVG?
- 4 SVG Viewer
- 5 Creating SVG files
- 6 Validation
- 7 Optimisation
- 8 Including SVG in HTML
- 9 Summary
- 10 Demos
- 11 Exercises

What is SVG?

Based on XML, Scalable Vector Graphics (SVG) is an open-standard vector graphics file format and Web development language created by the W3C, and has been designed to be compatible with other W3C standards such as DOM, CSS, XML, XSLT, XSL, SMIL, HTML, and XHTML. SVG enables the creation of dynamically generated, high-quality graphics from real-time data. SVG allows you to design high-resolution graphics that can include elements such as gradients, embedded fonts, transparency, animation, and filter effects.

SVG files are different from raster or bitmap formats, such as GIF and JPEG, which have to include every pixel needed to display a graphic. Because of this, GIF and JPEG files tend to be bulky, limited to a single resolution, and consume large amounts of bandwidth. SVG files are significantly smaller than their raster counterparts. Additionally, the use of vectors means SVG graphics retain their resolution at any zoom level. SVG allows you to scale your graphics, use any font, and print your designs, all without compromising resolution.

SVG is XML-based and written in plain text, meaning SVG code can be edited with any text editor. Additionally, SVG offers important advantages over bitmap or raster formats such as:

- Zooming: Users can magnify their view of an image without negatively affecting the resolution.
- Text stays text: Text remains editable and searchable. Additionally, any font may be used.
- Small file size: SVG files are typically smaller than other Web-graphic formats and can be downloaded more quickly.
- Display independence: SVG images always appear crisp on your screen, no matter the resolution. You will never experience "pixelated" images.
- Superior color control: SVG offers a palette of 16 million colors.
- Interactivity and intelligence: Since SVG is XML-based, it offers dynamic interactivity that can respond to user actions.

Data-driven graphics

Because it is written in XML, SVG content can be linked to back-end business processes, databases, and other sources of information. SVG documents use existing standards such as Cascading Stylesheets (CSS) and Extensible Stylesheet Language (XSL), enabling graphics to be easily customized. This results in:

- Reduced maintenance costs: Because SVG allows image attributes to be changed dynamically, it eliminates the need for numerous image files. SVG allows you to specify rollover states and behaviors via scriptable attributes. Complex navigation buttons, for example, can be created using only one SVG file where normally this would require multiple raster files.
- Reduced development time: SVG separates the three elements of traditional Web workflow – content (data), presentation (graphics), and application logic (scripting). With raster files, entire graphics must be completely recreated if changes are made to content.
- Scalable server solutions: Both the client and the server can render SVG graphics. Because the "client" can be utilized to render the graphic, SVG can reduce server loads. Client-side rendering can enhance the user-experience by allowing users to "zoom in" on an SVG graphic. Additionally, the server can be used to render the graphic if the client has limited processing resources, such as a PDA or cell phone. Either way the file is rendered, the source content is the same.
- Easily updated: SVG separates design from content, allowing easy updates to either.

Interactive graphics

SVG allows you to create Web-based applications, tools, or user interfaces. Additionally, you can incorporate scripting and programming languages such as JavaScript, Java, and Visual Basic.
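Because SVG is plain XML text, "dynamically generated" graphics need nothing more than a script that writes markup. The sketch below uses Python purely as an illustration (the chapter's point applies equally to ASP, JSP, or any server-side language), and the data values and sizes are invented for the example:

```python
# Generate a minimal SVG bar chart from a list of (label, value) pairs.
# Purely illustrative: the data, colors, and dimensions are made up.

data = [("A", 40), ("B", 75), ("C", 55)]

bars = []
for i, (label, value) in enumerate(data):
    x = 10 + i * 30
    # SVG's y axis grows downward, so taller bars start higher up.
    bars.append('<rect x="%d" y="%d" width="20" height="%d" fill="navy"/>'
                % (x, 100 - value, value))

svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="120" height="110">\n'
       + "\n".join(bars) + "\n</svg>")

print(svg)  # paste the output into any text editor or browser to view it
```

Swap the hard-coded list for a database query and the same few lines become the "linked to back-end business processes" scenario the section describes.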
Any SVG element can be used to modify or control any other SVG or HTML element. Because SVG is text based, the text inside graphics can be translated for other languages quickly, which simplifies localization efforts. Additionally, if there is a connection to a database, SVG allows drill-down functionality for charts and graphs. This results in:

- Improved end user experience: Users can input their own data, modify data, or even generate new graphics from two or more data sources.
- In SVG, text is text: As mentioned previously, SVG treats text as text. This makes SVG-based graphics searchable by search engines.
- SVG can create SVG: Enterprise applications such as an online help feature can be developed.

Personalized graphics

SVG can be targeted to people to overcome issues of culture, accessibility, and aesthetics, and can be customized for many audiences and demographic groups. SVG can also be dynamically generated using information gathered from databases or user interaction. The overall goal is to have one source file, which transforms seamlessly in a wide variety of situations. This results in:

- One source, customized appearances: SVG makes it possible to change color and other properties based on aesthetics, culture, and accessibility issues. SVG can use stylesheets to customize its appearance for different situations.
- Internationalization, localization: SVG supports Unicode characters in order to effectively display text in many languages and fashions – vertically, horizontally, and bi-directionally.
- Utilizing existing standards: SVG works seamlessly with stylesheets in order to control presentation. Cascading Stylesheets (CSS) can be used for typical font characteristics as well as for other SVG graphic elements. For example, you can control the stroke color, fill color, and fill opacity of an element from an external stylesheet.

SVG vs.
Macromedia Flash[edit] Macromedia has been the dominant force behind vector-based graphics on the web for the last 10 years. It is apparent, however, that SVG provides alternatives to many of the functions of Flash and incorporates many others. The creation of vector-based graphical elements is the base structure of both SVG and Flash. Much like Flash, SVG also includes the ability to create time-based animations for each element and allows scripting of elements via DOM, JavaScript, or any other scripting language that the SVG viewer supports. Many basic elements are available to the developer, including elements for creating circles, rectangles, lines, ellipses, polygons, and text. Much like HTML, elements are styled with Cascading Stylesheets (CSS2) using a style element or directly on a particular graphical element via the style attribute. Styling properties may also be specified with presentation attributes. For each CSS property applicable to an element, an XML attribute specifying the same styling property can also be used. There is an on going debate about whether Flash or SVG is better for web development There are advantages to both, it usually comes down to the situation. Flash Advantages: - Use Flash if you want to make a Flash-like website – replicating the same effect using SVG is hard. - Use Flash if you want complex animations, or complex games (SVG's built in SMIL animation engine is extremely processor intensive). - Use Flash if your users will not be so computer literate, for instance a children's site, or a site appealing to a wide audience. - Use Flash if sound is important – SVG/SMIL supports sound, but it's pretty basic. - Use Flash if you prefer WYSIWYG to script. SVG advantages: - It's fully scriptable, using a DOM1 interface and JavaScript. That means you can start with an empty SVG image, and build it up using JavaScript. - SVG can easily be created by ASP, PHP, Perl, etc and extracted from a database. 
- It has a built-in ECMAScript (JavaScript) engine, so you don't have to code per browser, and you don't need to learn Flash's ActionScript.
- SVG is XML, meaning it can be read by anything that can read XML. Flash can use XML, but needs to convert it before use.
- This also allows SVG to be transformed through an XSLT stylesheet/parser.
- SVG supports standard CSS1 stylesheets.
- Text used in SVG remains selectable and searchable.
- You only need a text editor to create SVG, as opposed to buying Flash.
- SVG is a real web standard (not just "de facto"), supported by various different programs, some of which are free software (and thus available for most free computer operating systems).

Why use SVG?

SVG is emerging through the efforts of the W3C and its members. It is an open standard and as such does not require the use of proprietary languages and development tools as does Macromedia Flash. Because it is XML-based, it looks familiar to developers and allows them to use existing skills. SVG is text based and can be learned by leveraging the work (or code) of others, which significantly reduces the overall learning curve. Additionally, because SVG can incorporate JavaScript, the DOM, and other technologies, developers familiar with these languages can create graphics in much the same way. SVG is also highly compatible because it works with HTML, GIF, JPEG, PNG, SMIL, ASP, JSP, and JavaScript. Finally, graphics created in SVG are scalable and do not result in loss of quality across platforms and devices. SVG can therefore be used for the Web, in print, as well as on portable devices while retaining full quality.

SVG Viewer

The Adobe SVG Viewer

The Adobe SVG Viewer is available as a downloadable plug-in that allows SVG to be viewed on Windows, Linux and Mac operating systems in all major browsers, including Internet Explorer (versions 4.x, 5.x, 6.x), Netscape (versions 4.x, 6.x), and Opera.
The Adobe SVG Viewer is the most widely deployed SVG viewer and it supports almost all of the SVG specification, including support for the SVG DOM, animation and scripting.

Features of the Adobe SVG Viewer

Click the right mouse button (Ctrl key + mouse click on Mac) over your SVG image to get a context menu. The context menu gives you several options, which can all be accessed using the menu itself or hotkeys:

Table 1: Features of the Adobe SVG Viewer

SMIL

SMIL can be used with XML to enable video and sound when viewing an SVG.

Attention Microsoft Windows Mozilla users!

The Seamonkey and Mozilla Firefox browsers have SVG support enabled natively. If desired, the Adobe SVG Viewer plugin will work with Mozilla Firefox or the Seamonkey browser. [1] WebKit-based browsers also have some SVG support natively.

Native SVG (Firefox)

rsvg-view

The rsvg-view program is part of the librsvg package[1]. It may be used as the default SVG opener. It can resize SVGs and export them to PNG, which is often the only thing one needs to do with an SVG file.[2]

Creating SVG files

How to do it

One can use four groups of programs:

- general text editors, like Notepad++ (with XML syntax highlighting)
- specialized SVG editors
- programs that can export SVG (like gnuplot or Maxima CAS)
- your own programs that create SVG files directly through string concatenation

SVG editors

As you can see from the previous example of a path definition, SVG files are written in an extremely abbreviated format to help minimize file size. However, they can be very difficult to write depending on the complexity of your image. There are SVG editor tools that can help make this task easier. Some of these tools are:

Table 3: SVG Editors

C

Here is an example in C:

/* C console program based on C++ code by Claudio Rocchini.
   The produced document "circle.svg" was successfully checked as SVG 1.1,
   meaning that the resource identified itself as "SVG 1.1" and a formal
   validation was successfully performed with an XML parser. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

const double PI = 3.1415926535897932384626433832795;
const int iXmax = 1000, iYmax = 1000, radius = 100, cx = 200, cy = 200;
const char *white = "#FFFFFF", *black = "#000000";
const char *comment = "circle drawn in SVG";
const char *filename = "circle.svg";

void draw_circle(FILE *fp, int radius, int cx, int cy)
{
    fprintf(fp,
        "<circle cx=\"%d\" cy=\"%d\" r=\"%d\" fill=\"%s\" stroke=\"%s\" stroke-width=\"2\"/>\n",
        cx, cy, radius, white, black);
}

void beginSVG(FILE *fp)
{
    fprintf(fp, "<?xml version=\"1.0\" standalone=\"no\"?>\n");
    fprintf(fp,
        "<!-- %s --><svg width=\"%d\" height=\"%d\" version=\"1.1\" "
        "xmlns=\"http://www.w3.org/2000/svg\">\n",
        comment, iXmax, iYmax);
}

int main(void)
{
    FILE *fp = fopen(filename, "w");
    if (fp == NULL) return 1;
    beginSVG(fp);
    draw_circle(fp, radius, cx, cy);
    fprintf(fp, "</svg>\n");
    fclose(fp);
    printf(" file %s saved \n", filename);
    getchar();
    return 0;
}

Haskell

Lavaurs' algorithm in Haskell with SVG output, by Claude Heiland-Allen.

JavaScript

Matlab

Based on code by Guillaume JACQUENOT:[3]

filename = [filename '.svg'];
fid = fopen(filename,'w');
fprintf(fid,'<?xml version="1.0" standalone="no"?>\n');
fprintf(fid,'<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n');
fprintf(fid,'<svg width="620" height="620" version="1.1"\n');
fprintf(fid,'xmlns="http://www.w3.org/2000/svg">\n');
fprintf(fid,'<circle cx="100" cy="100" r="10" stroke="black" stroke-width="2"/>\n');
fprintf(fid,'</svg>\n');
fclose(fid);

Lisp

One can use the cl-svg library or your own procedure.
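A similar generator can be written in Python using only the standard library. This is a rough sketch rather than code from the original page; the function names are illustrative, and the 1000×1000 canvas and circle geometry simply mirror the C example above.

```python
# Hypothetical stand-alone script: writes circle.svg containing one circle,
# in the same spirit as the C example above.

def svg_circle(cx, cy, r, fill="white", stroke="black", stroke_width=2):
    """Return the markup for a single <circle> element."""
    return (f'<circle cx="{cx}" cy="{cy}" r="{r}" '
            f'fill="{fill}" stroke="{stroke}" stroke-width="{stroke_width}"/>')

def svg_document(width, height, body):
    """Wrap element markup in a minimal SVG 1.1 document."""
    return (
        '<?xml version="1.0" encoding="UTF-8" standalone="no"?>\n'
        f'<svg width="{width}" height="{height}" version="1.1" '
        'xmlns="http://www.w3.org/2000/svg">\n'
        f'{body}\n'
        '</svg>\n'
    )

if __name__ == "__main__":
    doc = svg_document(1000, 1000, svg_circle(200, 200, 100))
    with open("circle.svg", "w", encoding="utf-8") as f:
        f.write(doc)
```

Running the script saves circle.svg, which any SVG-capable browser renders as a single outlined circle.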
Maxima

BeginSVG(file_name,cm_width,cm_height,i_width,i_height):=
block(
  destination : openw(file_name),
  printf(destination, "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>~%"),
  printf(destination, "<svg width=\"~d cm\" height=\"~d cm\" viewBox=\"0 0 ~d ~d\" xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\">~%",
         cm_width,cm_height,i_width,i_height),
  return(destination)
);

CircleSVG(dest,center_x,center_y,_radius):=
  printf(dest,"<circle cx=\"~d\" cy=\"~d\" r=\"~d\" fill=\"white\" stroke=\"black\" stroke-width=\"2\"/>~%",
         center_x,center_y,_radius);

CloseSVG(destination):=
(
  printf(destination,"</svg>~%"),
  close(destination)
);

/* ---------------------------------------------------- */
cmWidth:10;
cmHeight:10;
iWidth:800;
iHeight:600;
radius:200;
centerX:400;
centerY:300;
f_name:"b.svg";
/* ------------------------------------------------------*/
f:BeginSVG(f_name,cmWidth,cmHeight,iWidth,iHeight);
CircleSVG(f,centerX,centerY,radius);
CloseSVG(f);

Python

Getting started

Because it is based on XML, SVG follows standard XML conventions. Every SVG file is contained within an <svg> tag as its parent element. SVG can be embedded within a parent document or used independently. For example, the following shows an independent SVG document:

Exhibit 1: Creating an SVG

<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" version="1.1" xmlns="http://www.w3.org/2000/svg">
...
</svg>

The first line declares that the code that follows is XML. Note the "standalone" attribute. This denotes that this particular file does not contain enough processing instructions to function alone. In order to attain the required functionality it needs to display a particular image, the SVG file must reference an external document. The second line provides a reference to the Document Type Definition, or DTD. As mentioned in Chapter 7: XML Schemas, the DTD is an alternate way to define the data contained within an XML instance document.
Developers familiar with HTML will notice the DTD declaration is similar to that of an HTML document, but it is specific to SVG. For more information about DTDs, visit:

Hint: Many IDEs (e.g. NetBeans) do not have SVG "templates" built into the tool. Therefore, it may be easier to use a simple text editor when creating SVG documents. Once you have an SVG viewer installed, you should then be able to open and view your SVG document with any browser.

When creating your SVG documents, remember to:

- Declare your document as an XML file.
- Make sure your SVG document elements are between <svg> element tags, including the SVG namespace declaration.
- Save your file with a .svg file extension.
- It is not necessary to include a DOCTYPE statement, which contains information identifying this as an SVG document (since SVG 1.2 no DOCTYPE is defined any more).[4][5][6]

The <svg> element defines the SVG document, and can specify, among other things, the user coordinate system, and various CSS unit specifiers. Just like with XHTML documents, the document element must include a namespace declaration to declare the element as being a member of the relevant namespace (in this case, the SVG namespace). Within the <svg> element, there can be three types of drawing elements: text, shapes, and paths.

Text

The following is an example of the text element:

Exhibit 2: Using text with SVG

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg width="5.5in" height=".5in" xml:
  <text y="15" fill="red">This is SVG.</text>
</svg>

The <svg> element specifies: 1) that white space within text elements will be retained (xml:space="preserve"), 2) the width and height of the SVG document — particularly important for specifying print output size. In this example, the text is positioned in a 5.5 inches wide by .5 inches tall image area. The "y" attribute on the text element declares that the text element's baseline is 15 pixels down from the top of the SVG document.
An omitted "x" attribute on a text element implies an x coordinate of 0. Because SVG documents use a W3C DTD, you can use the W3C Validator to validate your document. Notice that the "fill" presentation attribute is used to describe the color of the text element; the text could equivalently have been colored red with the style attribute (style="fill:red").

Shapes

SVG contains the following basic shape elements:

- Rectangles
- Circles
- Ellipses
- Lines
- Polylines
- Polygons

These basic shapes, along with "paths" which are covered later in the chapter, constitute the graphic shapes of SVG. Because this is an introduction to SVG, we will only cover rectangle and circle shapes here. For more information on all shapes, please visit:

Rectangles

The <rect> element defines a rectangle which is axis-aligned with the current user coordinate system, the coordinate system that is currently active and which is used to define how coordinates and lengths are located and computed on the current canvas. Rounded rectangles can be created by setting values for the rx and ry attributes. The following example produces a blue rectangle with its top left corner aligning with the top left corner of the image area. This utilizes the default value of "0" for the x and y attributes.

Exhibit 3: Creating a rectangle in SVG

<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" width="5.5in" height="2in">
  <rect fill="blue" width="250" height="200"/>
</svg>

It will produce this result:

Circles

A circle element requires three attributes: cx, cy, and r. The 'cx' and 'cy' values specify the location of the center of the circle while the 'r' value specifies the radius. If the 'cx' and 'cy' attributes are not specified then the circle's center point is assumed to be (0, 0). If the 'r' attribute is set to zero then the circle will not appear. Unlike 'cx' and 'cy', the 'r' attribute is not optional and must be specified. In addition, the keyword stroke creates an outline of the image.
Both the width and the color can be changed.

Exhibit 4: Creating a circle in SVG

<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" width="350" height="300">
  <circle cx="100" cy="50" r="40" stroke="darkslategrey" stroke-width="2"/>
</svg>

It will produce this result:

Polygons

A polygon is any geometric shape consisting of three or more sides. The 'points' attribute describes the (x,y) coordinates that specify the corner points of the polygon. For this specific example, there are three points, which indicates that a triangle will be produced.

Exhibit 5: Creating a polygon in SVG

<?xml version="1.0" standalone="no"?>
<svg width="100%" height="100%" version="1.1" xmlns="http://www.w3.org/2000/svg">
  <polygon points="220,100 300,210 170,250" style="fill:blue;stroke:red;stroke-width:2"/>
</svg>

It will produce this result:

Paths

Paths are used to draw your own shapes in SVG, and are described using the following data attributes:

Table 2: SVG Paths

The following example produces the shape of a triangle. The "M" indicates a "moveto", which sets the first point. The "L" indicates a "lineto", drawing a line from "M" to the "L" coordinates. The "Z" indicates a "closepath", which draws a line from the last set of L coordinates back to the M starting point.
Exhibit 6: Creating paths in SVG

<?xml version="1.0"?>
<svg xmlns="http://www.w3.org/2000/svg" width="5.5in" height="2in">
  <path d="M 50 10 L 350 10 L 200 120 z"/>
</svg>

It produces this result:

Embedding SVG in HTML

Embed

The syntax is as follows:

Exhibit 7: Embedding SVG into HTML using keyword embed

<embed src="canvas.svg" width="350" height="176" type="image/svg+xml" name="emap">

An additional attribute, "pluginspage", can be set to the URL where the plug-in can be downloaded:

pluginspage=""

Object

The syntax is as follows and conforms to the HTML 4 Strict specification:

Exhibit 8: Embedding SVG into HTML using keyword object

<object type="image/svg+xml" name="omap" data="canvas_norelief.svg" width="350" height="176"></object>

Between the opening and the closing <object> tags, information for browsers that do not support objects can be added:

<object ...>You should update your browser</object>

Unfortunately some browsers such as Netscape Navigator 4 do not show this alternative content if the type attribute has been set to something other than text/html.

Iframe

The syntax is as follows and conforms to the HTML 4 Transitional specification:

Exhibit 9: Embedding SVG into HTML using keyword iframe

<iframe src="canvas_norelief.svg" width="350" height="176" name="imap"></iframe>

Between the opening and the closing <iframe> tags, information for browsers that do not support iframes can be added:

<iframe ...>You should update your browser</iframe>

Creating 3D SVG images

Section by Charles Gunti, UGA Master of Internet Technology Program, Class of 2007

Sometimes we may want to view an SVG image in three dimensions. For this we will need to change the viewpoint of the graphic. So far we have created two-dimensional graphics, such as circles and squares. Those exist on a simple x, y plane. If we want to look at something in three dimensions we have to add the z coordinate plane.
The z plane is already there, but we are looking at it straight on, so if data is changed on z it doesn't look any different to the viewer. We need to add another parameter to the data file, the z parameter.

<?xml version="1.0"?>
<data>
  <subject x_axis="90" y_axis="118" z_axis="0" color="red" />
  <subject x_axis="113" y_axis="45" z_axis="75" color="purple" />
  <subject x_axis="-30" y_axis="-59" z_axis="110" color="blue" />
  <subject x_axis="60" y_axis="-50" z_axis="-25" color="yellow" />
</data>

Once we have the data we will use XSLT to create the SVG file. The SVG stylesheet is the same as other stylesheets, but we need to ensure an SVG file is created during the transformation. We call the SVG namespace with this line in the declarations:

xmlns="http://www.w3.org/2000/svg"

Another change we should make from previous examples is to change the origin of (0, 0). We change the origin in this example because some of our data is negative. The default origin is at the upper left corner of the SVG graphic. Negative values are not displayed because, unlike traditional coordinate planes, negative y values lie above the origin. To move the origin we simply add a line of code to the stylesheet. Before going over that line, let's look at the g element.

The container element, g, is used for grouping related graphics elements. Here, we'll use g to group together our graphical elements and then we can apply the transform. Here is how we declare g and change the origin to a point 300 pixels to the right and 300 pixels down:

<g transform="translate(300,300)">graphical elements</g>

SVG transformations are pretty simple, until it comes to changing the viewpoint. SVG has features such as rotating and skewing the image in two dimensions, but it cannot rotate the coordinate system in three dimensions. For that we will need to use some math and a little Java. When rotating in three dimensions two rotations need to be made, one around the y axis, and another around the x axis.
The first rotation will be around the y axis, and the formula will look like this:

x' = x*cos(Az) − z*sin(Az)
z' = x*sin(Az) + z*cos(Az)
y' = y

Az is the angle the z axis will be rotated; y will not change because we are rotating around the y axis.

The second rotation will be around the x axis. Keep in mind that one rotation has already been made, so instead of using x, y, and z values we need to use x', y', and z' (x-prime, y-prime and z-prime) found in the last rotation. The formula will look like this:

z" = z'*cos(Ay) − y'*sin(Ay)
y" = z'*sin(Ay) + y'*cos(Ay)
x" = x'

Ay is the angle of rotation on the y axis. Remember we are rotating around the x axis, so x does not change.

Remember from trig class the old acronym SOH CAH TOA? This means

Sin = Opposite/Hypotenuse
Cos = Adjacent/Hypotenuse
Tan = Opposite/Adjacent

and we use those functions to find the angles needed for our rotations. Based on the previous two formulas we can make the following statements about Az and Ay:

tan(Az) = Xv/Zv
sin(Ay) = Yv/sqrt(Xv^2 + Yv^2 + Zv^2)

With so many steps to take to make the rotation we should drop all of this information into a Java class, then call the class in the stylesheet. The Java class should have methods for doing all of the calculations for determining where the new data points will go once the rotation is made. Creating that Java class is beyond the scope of this section, but for this example I'll call it ViewCalc.class.

Now that we can rotate the image, we need to integrate that capability into the transformation. We will use parameters to pass viewpoints into the stylesheet during the transformation. The default viewpoint will be (0, 0, 0) and is specified on the stylesheet like so:

Exhibit 10: 3D images with SVG

<?xml version="1.0" ?>
<xsl:stylesheet
  <!-- default viewpoint in case they are not specified -->
  <!-- from the command line -->
  <xsl:param name="Xv">0</xsl:param>
  <xsl:param name="Yv">0</xsl:param>
  <xsl:param name="Zv">0</xsl:param>
  <xsl:template …

Java now needs to be added to the stylesheet so the processor will know what methods to call.
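Before wiring ViewCalc into the stylesheet, the two rotations above are easy to sanity-check outside Java. The sketch below restates them in Python; the helper name and the atan2/asin calls are not from the original page, and the signs of the first (y-axis) rotation are an assumption, since that formula was lost from this copy.

```python
import math

def rotate_to_viewpoint(x, y, z, xv, yv, zv):
    """Apply the two viewpoint rotations described above to the point (x, y, z)."""
    az = math.atan2(xv, zv)                                # tan(Az) = Xv/Zv
    ay = math.asin(yv / math.sqrt(xv*xv + yv*yv + zv*zv))  # sin(Ay) = Yv/|V|
    # First rotation, about the y axis (y is unchanged):
    x1 = x * math.cos(az) - z * math.sin(az)
    z1 = x * math.sin(az) + z * math.cos(az)
    y1 = y
    # Second rotation, about the x axis (x is unchanged):
    z2 = z1 * math.cos(ay) - y1 * math.sin(ay)
    y2 = z1 * math.sin(ay) + y1 * math.cos(ay)
    x2 = x1
    return x2, y2, z2
```

A quick check: with the viewpoint on the z axis, both angles are zero and every point comes back unchanged, and a viewpoint lying in the xz plane is carried onto the z axis by the first rotation.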
Two lines are added to the namespace declarations:

<?xml version="1.0" ?>
<xsl:stylesheet version="1.0"
  xmlns="http://www.w3.org/2000/svg"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:java="ViewCalc"
  exclude-result-prefixes="java">

Notice the exclude-result-prefixes="java" line. That line is added so things in the stylesheet with the java: prefix will be processed, not output. Be sure to have the ViewCalc class in the CLASSPATH or the transformation will not run.

The final step is to call the methods in the ViewCalc class from the stylesheet. For example:

<xsl:template …>
  <xsl:for-each …>
    <xsl:variable … />
    <xsl:variable … />
    <xsl:variable … />
    <xsl:variable … />
    <xsl:variable … />
  </xsl:for-each>
</xsl:template>

Finally we pass new parameters and run the XSL transformation to create the SVG file with a different viewpoint.

Summary

Demos

The following table provides a sampling of SVG documents that demonstrate varying degrees of functionality and complexity:

Table 5: SVG Demos

The Basic demo demonstrates the effects of zooming, panning, and anti-aliasing (high quality). The Fills demo demonstrates the effects of colors and transparency. The black circle is draggable. Simply click and drag the circle within the square to see the changes.

The HTML, JS, Java Servlet demo describes an interactive, database-driven seating diagram, where chairs represent available seats for a performance. If the user moves the mouse pointer over a seat, it changes color, and the seat detail (section, row, and seat number) and pricing are displayed. On the client side of the application, SVG renders the seating diagram and works with JavaScript to provide user interactivity. The SVG application is integrated with a server-side database, which maintains ticket and event availability information and processes ticket purchases. The Java Servlet handles form submission and updates the database with seat purchases.

The HTML, JS, DOM demo shows how SVG manages and displays data, generating SVG code from data on the fly.
Although this kind of application can be written in a variety of different ways, SVG provides client-side processing to maintain and display the data, reducing the load on the server as well as overall latency. Using the DOM, developers can build documents, navigate their structure, and add, modify, or delete elements and content.

The PHP, MySQL demo shows the use of database-driven SVG generation utilizing MySQL. It randomly generates a map of a European country. Each time you reload the page you will see a different country.

Exercises

- Download and install the Adobe SVG Viewer. Once the Adobe SVG Viewer has been installed, go to this page to test that the install was successful:
- If your primary browser is Internet Explorer, you can download version 3.0, which is fully supported by Adobe and can be accessed at
- If your primary browser is Mozilla-based, you must download the 6.0 version at
- After it has been installed you must copy the NPSVG6.dll and NPSVG6.zip files to your browser's plug-ins folder. These files are normally located in C:\Program Files\Common Files\Adobe\SVG Viewer 6.0\Plugins\.
- Create your own stand-alone SVG file to produce an image containing a circle within a rectangle.
- Create your own stand-alone SVG file. Use 3 circles and 1 path element to create a yellow smiley face with black eyes and a black mouth. Use a text element so that the message "Have a nice day!" appears below the smiley face.
- Hint: Because <path> elements can be difficult to write, here is a sample path you can utilize:
- <path d="M 100, 120 C 100,120 140, 140 180,120" style="fill:none;stroke:black;stroke-width:1"/>

References

- ↑ librsvg - free, open source SVG rendering library
- ↑ Barry Kauler's blog
- ↑ 2D Apollonian gasket with four identical circles, by Guillaume JACQUENOT
- ↑ W3C SVG 1.1 Recommendation: SVG Namespace, Public Identifier and System Identifier; see also W3C SVG 2 Editor's Draft: SVG namespace and DTD (06 February 2013)
- ↑ SVG authoring and web server configuration guidelines, Jonathan Watt: Don't include a DOCTYPE declaration
- ↑ Mozilla on SVG: Namespaces Crash Course, Getting Started
- ↑ validator.w3.org

External links

- Collection of free SVG vectors
- The Adobe SVG Zone
- Cartographers on the Net
- Digital Web Magazine – Tutorial: SVG: The New Flash
- Learn SVG
- SVG authoring and web server configuration guidelines
- Mozilla Plugin Support on Microsoft Windows
- W3C SVG Tutorial
- W3C Document Structure
- IBM developerWorks
- W3C Synchronized Multimedia
- Mozilla SVG Project
- svg.startpagina.nl
A few other notes on how to do some things in C#. First of all, you don't have to do the string.ToLower in the comparison yourself. You could create a Dictionary which uses a case-insensitive string comparer. In the Add function, you don't have to check whether a key exists before adding it. If your objective is to either add a new item or replace an existing item, just assign the value via the indexer. You also don't have to search dictionaries manually as you are doing in the Create method. As ChaosEngine said, just use TryGetValue. Here is an example showing the changes I would make.

public class ObjectHandler
{
    // Case-insensitive keys: no need to call ToLower yourself.
    private Dictionary<string, Item> objects =
        new Dictionary<string, Item>(StringComparer.InvariantCultureIgnoreCase);

    public void Add(Item item)
    {
        // The indexer adds a new entry or replaces an existing one.
        objects[item.Name] = item;
    }

    public Item Create(string Name)
    {
        Item item;
        if (objects.TryGetValue(Name, out item))
        {
            return item.Clone();
        }
        return null;
    }
}

I would also strongly consider changing the name from Create to CreateCopy.
It is quite easy to convert XML into RTF, or at least into fairly simple RTF output. The RTF format is pretty straightforward, and this makes it easy to write the result into the RichTextBox control in .NET to display your XML in a nicely formatted way. With .NET 2.0 it is possible to add a WebBrowser component instead and convert your XML into HTML.

One thing to be careful of: the transform will only deal with valid XML, so if you are doing a transform on a segment of the document, turn the whole thing into a new document with an XML declaration at the top of the file, or it will fail to convert at all, while also not telling you what went wrong.

Here is an example of an XSLT stylesheet I used for converting an XML file into RTF output:

<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:ToolManTools="urn:ToolManTools">
  <xsl:output method="text"/>
  <xsl:template match="FreeFormData">
    <xsl:text>{\rtf1</xsl:text>
    <xsl:text>{\info{\title Frog data}{\author David Bennett}}</xsl:text>
    <xsl:text>\par\b Created: \b0\tab </xsl:text>
    <xsl:value-of select="ToolManTools:GetDateFromTicks(./Created)"/>
    <xsl:text>\par </xsl:text>
    <xsl:for-each select="Data/Node">
      <xsl:text>\b </xsl:text>
      <xsl:value-of select="Key" />
      <xsl:text>\b0\tab\tab </xsl:text>
      <xsl:value-of select="Value" />
      <xsl:text>\line </xsl:text>
    </xsl:for-each>
    <xsl:text>\par </xsl:text>
    <xsl:text>}</xsl:text>
  </xsl:template>
</xsl:stylesheet>

From this output it is possible to see that to start with you just write out a small RTF header with {\rtf1. Then I write out some information about the file, giving it a title and an author. \b turns on bold and \b0 turns it off; \i turns on italic and \i0 turns it off. The \tab command inserts a tab character and causes the output to be lined up. The \line command puts in a line break at the specified location, and \par starts a new paragraph.
I also use an external object in this stylesheet to convert the ticks stored in the XML file into a useful string output. The object gets passed in using the XsltArgumentList object and is mapped to a specific namespace with the xmlns declaration in the xsl:stylesheet element. It is pretty easy to add in; all XML extension functions are assumed to take in a string and return a string.

XslTransform transform = new XslTransform();
XsltArgumentList args = new XsltArgumentList();
StringWriter str = new StringWriter();

transform.Load(dir + "\\" + this.incidentEvent.Name + ".xslt");

// Put this in, just in case it needs it...
doc.AppendChild(doc.CreateXmlDeclaration("1.0", null, null));
doc.AppendChild(this.incidentEvent.ToXml(doc));

args.AddExtensionObject("urn:ToolManTools", new ToolManTools());
transform.Transform(doc, args, str, null);

tmp = str.ToString();
DataTextBox.Rtf = tmp;

You can see that I add in the extension object there and that I use a StringWriter to get the RTF before putting it into the RichTextBox for viewing.
I have a list of records in a ListView that I want the user to be able to re-sort using a drag-and-drop method. I have seen this implemented in other apps, but I have not found a tutorial for it. It must be something that others need as well. Can anyone point me to some code for doing this?

Solution 1

I have been working on this for some time now. Tough to get right, and I don't claim I do, but I'm happy with it so far. My code and several demos can be found at

Its use is very similar to the TouchInterceptor (on which the code is based), although significant implementation changes have been made. DragSortListView has smooth and predictable scrolling while dragging and shuffling items. Item shuffles are much more consistent with the position of the dragging/floating item. Heterogeneous-height list items are supported. Drag-scrolling is customizable (I demonstrate rapid drag-scrolling through a long list---not that an application comes to mind). Headers/footers are respected. Etc. Take a look.

Solution 2

Now it's pretty easy to implement for RecyclerView with ItemTouchHelper. Just override the onMove method from ItemTouchHelper.Callback:

@Override
public boolean onMove(RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder,
        RecyclerView.ViewHolder target) {
    mMovieAdapter.swap(viewHolder.getAdapterPosition(), target.getAdapterPosition());
    return true;
}

A pretty good tutorial on this can be found at medium.com: Drag and Swipe with RecyclerView.

Solution 3

I am adding this answer for those who find this through Google. There was an episode of DevBytes (ListView Cell Dragging and Rearranging) recently which explains how to do this. You can find it here; the sample code is available here as well.

What this code basically does is create a dynamic ListView by extending ListView so that it supports cell dragging and swapping. So you can use the DynamicListView instead of your basic ListView, and that's it: you have implemented a ListView with drag and drop.
Solution 4

The DragListView lib does this really neatly, with very nice support for custom animations such as elevation animations. It is also still maintained and updated on a regular basis. Here is how you use it:

1: Add the lib to gradle first

dependencies {
    compile 'com.github.woxthebox:draglistview:1.2.1'
}

2: Add the list from xml

<com.woxthebox.draglistview.DragListView
    android:

3: Set the drag listener

mDragListView.setDragListListener(new DragListView.DragListListener() {
    @Override
    public void onItemDragStarted(int position) {
    }

    @Override
    public void onItemDragEnded(int fromPosition, int toPosition) {
    }
});

4: Create an adapter overridden from DragItemAdapter

public class ItemAdapter extends DragItemAdapter<Pair<Long, String>, ItemAdapter.ViewHolder> {

    public ItemAdapter(ArrayList<Pair<Long, String>> list, int layoutId, int grabHandleId,
            boolean dragOnLongPress) {
        super(dragOnLongPress);
        mLayoutId = layoutId;
        mGrabHandleId = grabHandleId;
        setHasStableIds(true);
        setItemList(list);
    }
}

5: Implement a ViewHolder that extends from DragItemAdapter.ViewHolder

public class ViewHolder extends DragItemAdapter.ViewHolder {
    public TextView mText;

    public ViewHolder(final View itemView) {
        super(itemView, mGrabHandleId);
        mText = (TextView) itemView.findViewById(R.id.text);
    }

    public void onItemClicked(View view) {
    }

    public boolean onItemLongClicked(View view) {
        return true;
    }
}

For more detailed info go to
    <item type="id" name="drag_handle" />
</resources>

Create a layout for a list item that includes your favorite drag handle image, and assign its ID to the ID you created in step 2 (e.g. drag_handle).

Create a DragSortListView layout, something like this:

<com.mobeta.android.dslv.DragSortListView
    xmlns:

Set an ArrayAdapter derivative with a getView override that renders your list item view:

final ArrayAdapter<MyItem> itemAdapter = new ArrayAdapter<MyItem>(this, R.layout.my_item, R.id.my_item_name, items) {
    // The third parameter works around ugly Android legacy.
    public View getView(int position, View convertView, ViewGroup parent) {
        View view = super.getView(position, convertView, parent);
        MyItem item = getItem(position);
        ((TextView) view.findViewById(R.id.my_item_name)).setText(item.getName());
        // ... Fill in other views ...
        return view;
    }
};
dragSortListView.setAdapter(itemAdapter);

Set a drop listener that rearranges the items as they are dropped:

dragSortListView.setDropListener(new DragSortListView.DropListener() {
    @Override
    public void drop(int from, int to) {
        MyItem movedItem = items.get(from);
        items.remove(from);
        items.add(to, movedItem);
        itemAdapter.notifyDataSetChanged();
    }
});

Solution 6

I recently stumbled upon this great Gist that gives a working implementation of a drag-sort ListView, with no external dependencies needed. Basically, it consists of creating your custom adapter extending ArrayAdapter as an inner class of the activity containing your ListView. On this adapter one then sets an onTouchListener on your list items that will signal the start of the drag. In that Gist they set the listener on a specific part of the layout of the list item (the "handle" of the item), so one does not accidentally move it by pressing any other part of it. Personally, I preferred to go with an onLongClickListener instead, but that is up to you to decide.
Here is an excerpt of that part:

public class MyArrayAdapter extends ArrayAdapter<String> {

    private ArrayList<String> mStrings = new ArrayList<String>();
    private LayoutInflater mInflater;
    private int mLayout;

    // constructor, clear, remove, add, insert...

    public View getView(final int position, View convertView, ViewGroup parent) {
        ViewHolder holder;
        View view = convertView;
        // inflate, etc...
        final String string = mStrings.get(position);
        holder.title.setText(string);
        // Here the listener is set specifically to the handle of the layout
        holder.handle.setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View view, MotionEvent motionEvent) {
                if (motionEvent.getAction() == MotionEvent.ACTION_DOWN) {
                    startDrag(string);
                    return true;
                }
                return false;
            }
        });
        // change color on dragging item and other things...
        return view;
    }
}

This also involves adding an onTouchListener to the ListView, which checks whether an item is being dragged, handles the swapping and invalidation, and stops the drag state.
An excerpt of that part:

mListView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View view, MotionEvent event) {
        if (!mSortable) {
            return false;
        }
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN: {
                break;
            }
            case MotionEvent.ACTION_MOVE: {
                // get positions
                int position = mListView.pointToPosition((int) event.getX(), (int) event.getY());
                if (position < 0) {
                    break;
                }
                // check if it's time to swap
                if (position != mPosition) {
                    mPosition = position;
                    mAdapter.remove(mDragString);
                    mAdapter.insert(mDragString, mPosition);
                }
                return true;
            }
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
            case MotionEvent.ACTION_OUTSIDE: {
                // stop drag state
                stopDrag();
                return true;
            }
        }
        return false;
    }
});

Finally, here is how the stopDrag and startDrag methods look, which handle enabling and disabling the drag process:

public void startDrag(String string) {
    mPosition = -1;
    mSortable = true;
    mDragString = string;
    mAdapter.notifyDataSetChanged();
}

public void stopDrag() {
    mPosition = -1;
    mSortable = false;
    mDragString = null;
    mAdapter.notifyDataSetChanged();
}

Solution 7

I built a new library based on RecyclerView; check it out.
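The core of both Solution 5's drop listener and Solution 6's ACTION_MOVE handling is the same remove-then-reinsert on a plain list, which needs no index adjustment in either drag direction. Here is a small model of Solution 6's state machine with no Android classes; the field names mirror the excerpts above, but this class itself is an illustrative sketch, not part of the Gist:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DragState {
    private final List<String> items;
    private boolean sortable = false;  // mSortable in the excerpt
    private String dragItem = null;    // mDragString in the excerpt
    private int position = -1;         // mPosition in the excerpt

    public DragState(List<String> items) {
        this.items = items;
    }

    // Corresponds to ACTION_DOWN on the row's drag handle.
    public void startDrag(String item) {
        position = -1;
        sortable = true;
        dragItem = item;
    }

    // Corresponds to ACTION_MOVE while the finger is over row `newPosition`.
    public void move(int newPosition) {
        if (!sortable || newPosition < 0 || newPosition == position) {
            return;
        }
        position = newPosition;
        items.remove(dragItem);          // pull the dragged item out...
        items.add(position, dragItem);   // ...and reinsert it at the new row
    }

    // Corresponds to ACTION_UP / ACTION_CANCEL.
    public void stopDrag() {
        position = -1;
        sortable = false;
        dragItem = null;
    }

    public List<String> items() {
        return items;
    }

    public static void main(String[] args) {
        DragState state = new DragState(new ArrayList<>(Arrays.asList("A", "B", "C")));
        state.startDrag("A");
        state.move(1); // finger now over row 1
        state.move(2); // finger now over row 2
        state.stopDrag();
        System.out.println(state.items()); // [B, C, A]
    }
}
```

In the real implementation, each move would also be followed by notifyDataSetChanged() (or notifyItemMoved() on a RecyclerView adapter) to redraw the list.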
https://solutionschecker.com/questions/android-list-view-drag-and-drop-sort/
CC-MAIN-2022-40
refinedweb
1,139
50.02
Saving images in WPF. Saving images in Silverlight 3:

- ICSharpCode.SharpZipLib.Silverlight
- ImageTools
- ImageTools.IO
- ImageTools.IO.Png (only if you want .png support)
- ImageTools.IO.Bmp (only if you want .bmp support)
- ImageTools.Utils

Comments:

- Please document the .NET image encoder properties. The current API does not document the supported forms of TIFF encoding, JPEG encoding, or PNG encoding: TIFF (group 3, group 4, PackBits, LZW, horizontal tile size, vertical tile size, background color, image bounding box).
- canvas.ToImage() does not compile: "does not contain a definition for ToImage()". Please help.
- I'm having the same issue; any help?
- For those with problems with the extension method: add this to the top of your page: using ImageTools;
- I saved the image, but only what is visible in the view; if my image is big and scroll bars appear, that part is not saved. Also, images larger than 300*3000 cannot be created. If this were solved it would be perfect to use. My email is muruganas81@gmail.com; please reply if you need more info.
- How can we save the image in isolated storage and display it in an Image control with Silverlight? Please help.
- I can show the .ashx image in a Silverlight app, but when its XAP is used in a Windows VB app the image does not show. What is the issue?
- How can I download an image like "" and save it in a local folder in Silverlight, so that the image displays in a .NET Windows application? I have tried many ways but couldn't find a working solution.
- If I have an Image instance, not files provided by OpenFileDialog, do you have solutions to save this Image instance to server drives?
- Nice one. Cheers.
https://blogs.msdn.microsoft.com/kirillosenkov/2009/10/12/saving-images-bmp-png-etc-in-wpfsilverlight/
Here are some of the things I found. Suppose at 14:00:00 you have:

cpu 4698 591 262 8953 916 449 531

total_jiffies_1 = (sum of all values) = 16400
work_jiffies_1 = (sum of user, nice, system = the first 3 values) = 5551

When you see the line that starts with intr, you know to stop parsing. I still did not run the code, but I just want to make sure whether we can monitor any process or not.

I would like to extend the program to get CPU usage for all cores individually. Here is the code to do it:

private void button1_Click(object sender, EventArgs e)
{
    selectedServer = "JS000943";
    listBox1.Items.Add(GetProcessorIdleTime(selectedServer).ToString());
}

private static int GetProcessorIdleTime(string selectedServer)
{
    try
    {
        var searcher = ManagementObjectSearcher

Some quick tips: instead of using DateTime.Now, .UtcNow would be better, as it's both faster and less dependent on user settings.

/proc/pid/stat is an odd-looking file consisting of a single line; for example:

19340 (whatever) S 19115 19115 3084 34816 19115 4202752 118200 607 0 0 770 384 2 7 20 0

Task: display the current CPU utilization, as a percentage, calculated from /proc/stat.

I used the following method to set the process priority to Idle (note: despite the method name, this sets the priority class, not the affinity):

public static void setCurrentProgAffinity(String proc)
{
    foreach (Process myCurrentProcess in Process.GetProcessesByName(proc))
    {
        myCurrentProcess.PriorityClass = System.Diagnostics.ProcessPriorityClass.Idle;
    }
}

Deekshit, February
Calculate CPU usage from /proc/pid/stat

In Linux, you can actually just use clock().

totalPhysMem *= memInfo.mem_unit;

Physical memory currently used: same code as in "Total Virtual Memory", and then:

long long physMemUsed = memInfo.totalram - memInfo.freeram;
// Multiply in next statement to avoid int overflow

Relevant documentation: man getloadavg and man 5 proc. N.B. On OS X and Linux the formatting is slightly different, but on both systems it is the line below the load, making it easy to filter out. (–Amoss, Jul 24 '14)

From this information we can, with a little effort, determine the current level of CPU utilization, as a percent of time spent in any state other than idle. Cached memory probably would not require flushing.
Yeah, it looks like a copy from that link, so a link. (–Mark At Ramp51, Mar 3 '11)

It is still possible for cached pages to be reactivated; free pages are completely free and ready to be used.

This looks like some attempt to game the reputation system. (–Amoss, Sep 25 '10) What reputation do I gain for answering my own question? (–user191776, Sep 25)

This time is measured in Linux "jiffies", which are 1/100 of a second each.

Forgot to mention: the delay argument for top is also useless to me. (–Meltea, Jun 10) Is there a way, from code or by parsing some command's output, to get the CPU utilization stats?

Hi, try taking a look at the source file for the top command, machine/m_linux.c (the get_system_info function). (williamhbell, Tue Dec 31, 2013)
Each process spends some time in kernel mode and some time in user mode.

Ben (post author), July 15, 2012: I think the most likely reason it shows 0% is because it really is very low.

Determining CPU utilization: is there a command, or any other way, to get the current CPU utilization?

CPU=(`sed -n 's/^cpu\s//p' /proc/stat`)
IDLE=${CPU[3]}  # Just the idle CPU time.
# Calculate the total CPU time. If the system was idle, it would divide by zero.

Erkki Salonen, February 2, 2015: This is a good example!
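Putting the jiffies arithmetic above together: utilization over an interval is the change in work jiffies divided by the change in total jiffies between two samples of the cpu line. Below is a self-contained sketch in Java; the helper names are my own, and on a real system the second sample would come from re-reading /proc/stat after a delay. It uses the sample line from above (cpu 4698 591 262 8953 916 449 531) as the first sample and a hypothetical later sample:

```java
public class CpuUtilization {
    // Parse a "cpu ..." line from /proc/stat into jiffy counts.
    static long[] parse(String cpuLine) {
        String[] fields = cpuLine.trim().split("\\s+");
        long[] jiffies = new long[fields.length - 1]; // skip the "cpu" label
        for (int i = 1; i < fields.length; i++) {
            jiffies[i - 1] = Long.parseLong(fields[i]);
        }
        return jiffies;
    }

    // Sum of all fields: total_jiffies in the text above.
    static long total(long[] jiffies) {
        long sum = 0;
        for (long j : jiffies) sum += j;
        return sum;
    }

    // user + nice + system (the first three fields): work_jiffies above.
    static long work(long[] jiffies) {
        return jiffies[0] + jiffies[1] + jiffies[2];
    }

    // Percent CPU used between two samples of the "cpu" line.
    static double utilization(long[] sample1, long[] sample2) {
        long workDelta = work(sample2) - work(sample1);
        long totalDelta = total(sample2) - total(sample1);
        return 100.0 * workDelta / totalDelta;
    }

    public static void main(String[] args) {
        long[] s1 = parse("cpu 4698 591 262 8953 916 449 531"); // from the text
        long[] s2 = parse("cpu 4748 591 282 9023 916 449 531"); // hypothetical later sample
        System.out.println(total(s1)); // 16400, matching total_jiffies_1 above
        System.out.println(work(s1));  // 5551
        System.out.printf("%.1f%%%n", utilization(s1, s2)); // 50.0%
    }
}
```

Note that the divide-by-zero warning in the shell snippet above applies here too: if no jiffies elapse between the two samples, totalDelta is zero, so in practice you sleep between samples.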
http://juicecoms.com/cpu-usage/get-cpu-usage-c.html
Lightning @wire adapter deep dive

@wire in Lightning Web Components is a great feature. The most common use case is to connect the result of a backend call to a property or a method. The other key feature is that, using the '$recordId' syntax, the input properties of the wired method are dynamically pushed to the backend:

@wire(getRecord, { recordId: '$recordId', fields: FIELDS })

But what is a decorator?

The Salesforce documentation calls @wire an adapter, and systematically uses the phrase "decorate a property or a function". However, the LWC open-source framework docs call @wire a decorator. So is it a decorator? Well, it is hard to say. The problem is that there are no decorators in the JavaScript language, or at least not at the moment. The decorator idea itself is in the proposal phase and, to make things worse, there is an old-experimental and a new-experimental version of the feature. The proposal is very detailed, so the best way to understand what a decorator should be is the relatively straightforward TypeScript description of the decorator feature. Long story short, a decorator is a wrapper around a property or a method which adds additional functionality (for example, @wire automatically pushes some parameters to the backend and sets the result on the decorated property).

But LWC is NOT TypeScript. There are decorators in the LWC framework because it internally uses the Babel transpiler to make available all of the new syntactic sugar like decorators (and it also makes sure that your code runs in old browsers, too).

@wire anything

The Salesforce documentation doesn't mention it, but in fact you can wire any class to a property or to a method. According to the LWC documentation, all you need to do is make your class implement the WireAdapter interface. You say there are no interfaces in JavaScript? You are right, but who cares at this point? ;) See this minimalistic example.
Any time you increase num by calling inc(), isq is automatically set to the inverse square root of num. You could do it with a getter too, but here you outsource all the logic to the InvSqrt class. And this is the InvSqrt class:

The methods and the constructor follow the WireAdapter interface definition. Any time a property changes on the parameters of the wired class, the update method is called. The important thing here is that you decide when to call the callback, so you can write asynchronous logic as well. See the docs here.

Apex and caching

It is worth mentioning that Apex returns a data structure which has error and data properties. They behave the same way as on a "simple" Apex call. In order to make an Apex controller method @wire-able, the method should be marked cacheable: @AuraEnabled(cacheable=true). So if your @wire decorator parameter is set to a previous value, the backend is not called, because the return value is cached. This saves some bandwidth; however, if your data changed on the server, you can get in trouble. In order to re-push a previous, cached @wire parameter value and get the most recent data, you need to use the refreshApex() method. In contrast, client-side wired methods are not cached.

You can monitor the caching if you open Chrome DevTools and check the XHR requests on the Network tab. If a value comes from the cache, there is no communication between the server and the browser.

Any difference between wired Apex and client-side classes?

If you look into the call stack and find your generated code, you can see something like this:

Here InvSqrt is a JavaScript class, while getTime() is an Apex backend method. Those classes are imported very differently:

import { InvSqrt } from 'c/invSqrt';
import getTime from '@salesforce/apex/myApp.getTime';

So internally, for @wire, those two classes are similar. invSqrt.InvSqrt is a JS class import, while getTime__default['default'] is an internal wrapper for the Apex callout.
They behave the same way from the @wire adapter's perspective.

Final thoughts

It is hard to find in the code where exactly the client-side caching happens. From the call stack it looks like the LWC components are handled internally by the Aura framework, which is not open-sourced and is subject to change. So is it safe to use @wire with anything other than the examples from the Salesforce documentation? As the feature is documented in the LWC open-source framework, and it behaves exactly the same way on Salesforce, probably yes.
https://tempflip.medium.com/lightning-wire-adapter-deep-dive-b16b9b01c962?source=post_internal_links---------4----------------------------